SP:71d80c38253a4fd86d8e076b2dee8d0c47da4911
[ "To encourage disentanglement in the latent space of a variational autoencoder (VAE), the authors propose to learn two sets of latent z and w: the dimensions of w are independent of each other and each dimension w_i maps to a known ground truth generating factor y_i. Latent z captures all the other factors. The well studied Total Correlation regularisation is used to enforce the independence of z and w, and the same is used to enforce the independence of the dimensions of w. Each dimension is learned to predict a corresponding ground truth factor. The key difference from the previous approach is the use of invertible and Lipschitz smooth mapping to learn monotonic mappings from w to y." ]
Deep generative models have made important progress towards modeling complex, high-dimensional data. Their usefulness is nevertheless often limited by a lack of control over the generative process or a poor understanding of the latent representation. To overcome these issues, attention is now focused on discovering latent variables correlated with the data properties and on manipulating these properties. This paper presents the Property-controllable VAE (PCVAE), where a new Bayesian model is proposed to inductively bias the latent representation using explicit data properties via novel group-wise and property-wise disentanglement terms. Each data property corresponds seamlessly to a latent variable, by enforcing invertible mutual dependence between them. This allows us to move along the learned latent dimensions to control specific properties of the generated data with great precision. Quantitative and qualitative evaluations confirm that PCVAE outperforms existing models by up to 28% in capturing and 65% in manipulating the desired properties. The code for the proposed PCVAE is available at: https://github.com/xguo7/PCVAE.
[ { "affiliations": [], "name": "CODER VIA" }, { "affiliations": [], "name": "INVERTIBLE MUTUAL DEPENDENCE" }, { "affiliations": [], "name": "Xiaojie Guo" }, { "affiliations": [], "name": "Yuanqi Du" }, { "affiliations": [], "name": "Liang Zhao" } ]
[ { "authors": [ "Alexander A Alemi", "Ian Fischer", "Joshua V Dillon", "Kevin Murphy" ], "title": "Deep variational information bottleneck", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Jens Behrmann", "Will Grathwohl", "Ricky TQ Chen", "David Duvenaud", "Jörn-Henrik Jacobsen" ], "title": "Invertible residual networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Samuel Bowman", "Luke Vilnis", "Oriol Vinyals", "Andrew Dai", "Rafal Jozefowicz", "Samy Bengio" ], "title": "Generating sentences from a continuous space", "venue": "In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning,", "year": 2016 }, { "authors": [ "Russel E Caflisch" ], "title": "Monte carlo and quasi-monte carlo methods", "venue": "Acta numerica,", "year": 1998 }, { "authors": [ "Ricky TQ Chen", "Xuechen Li", "Roger B Grosse", "David K Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Antonia Creswell", "Anil A Bharath", "Biswa Sengupta" ], "title": "Conditional autoencoders with adversarial information", "venue": "factorization. space,", "year": 2017 }, { "authors": [ "Harrison Edwards", "Amos Storkey" ], "title": "Censoring representations with an adversary", "venue": "arXiv preprint arXiv:1511.05897,", "year": 2015 }, { "authors": [ "Yaroslav Ganin", "Evgeniya Ustinova", "Hana Ajakan", "Pascal Germain", "Hugo Larochelle", "François Laviolette", "Mario Marchand", "Victor Lempitsky" ], "title": "Domain-adversarial training of neural networks", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Xiaojie Guo", "Liang Zhao", "Cameron Nowzari", "Setareh Rafatirad", "Houman Homayoun", "Sai Manoj Pudukotai Dinakarrao" ], "title": "Deep multi-attributed graph translation with node-edge coevolution", "venue": "IEEE International Conference on Data Mining (ICDM),", "year": 2019 }, { "authors": [ "Xiaojie Guo", "Liang Zhao", "Zhao Qin", "Lingfei Wu", "Amarda Shehu", "Yanfang Ye" ], "title": "Interpretable deep graph generation with node-edge co-disentanglement", "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2020 }, { "authors": [ "Prashnna Kumar Gyawali", "B Milan Horacek", "John L Sapp", "Linwei Wang" ], "title": "Sequential factorized autoencoder for localizing the origin of ventricular activation from 12-lead electrocardiograms", "venue": "IEEE Transactions on Biomedical Engineering,", "year": 2019 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Artur Kadurin", "Sergey Nikolenko", "Kuzma Khrabrov", "Alex Aliper", "Alex Zhavoronkov" ], "title": "drugan: an advanced generative adversarial autoencoder model for de novo generation of new molecules with desired molecular properties in silico", "venue": "Molecular pharmaceutics,", 
"year": 2017 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Durk P Kingma", "Shakir Mohamed", "Danilo Jimenez Rezende", "Max Welling" ], "title": "Semi-supervised learning with deep generative models", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Jack Klys", "Jake Snell", "Richard Zemel" ], "title": "Learning latent subspaces in variational autoencoders", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tejas D Kulkarni", "William F Whitney", "Pushmeet Kohli", "Josh Tenenbaum" ], "title": "Deep convolutional inverse graphics network", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Abhishek Kumar", "Prasanna Sattigeri", "Avinash Balakrishnan" ], "title": "Variational inference of disentangled latent concepts from unlabeled observations", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Matt J Kusner", "Brooks Paige", "José Miguel Hernández-Lobato" ], "title": "Grammar variational autoencoder", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 1945 }, { "authors": [ "Guillaume Lample", "Neil Zeghidour", "Nicolas Usunier", "Antoine Bordes", "Ludovic Denoyer", "Marc’Aurelio Ranzato" ], "title": "Fader networks: Manipulating images by sliding attributes", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Yujia Li", "Oriol Vinyals", "Chris Dyer", "Razvan Pascanu", "Peter W. Battaglia" ], "title": "Learning deep generative models of graphs. CoRR, abs/1803.03324, 2018", "venue": null, "year": 2018 }, { "authors": [ "Qi Liu", "Miltiadis Allamanis", "Marc Brockschmidt", "Alexander Gaunt" ], "title": "Constrained graph variational autoencoders for molecule design", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Qi Liu", "Miltiadis Allamanis", "Marc Brockschmidt", "Alexander L. 
Gaunt" ], "title": "Constrained graph variational autoencoders for molecule design", "venue": "The Thirty-second Conference on Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Francesco Locatello", "Stefan Bauer", "Mario Lucic", "Gunnar Raetsch", "Sylvain Gelly", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": "In international conference on machine learning,", "year": 2019 }, { "authors": [ "Francesco Locatello", "Michael Tschannen", "Stefan Bauer", "Gunnar Rätsch", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Disentangling factors of variations using few labels", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Jianxin Ma", "Peng Cui", "Kun Kuang", "Xin Wang", "Wenwu Zhu" ], "title": "Disentangled graph convolutional networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Michael F Mathieu", "Junbo Jake Zhao", "Junbo Zhao", "Aditya Ramesh", "Pablo Sprechmann", "Yann LeCun" ], "title": "Disentangling factors of variation in deep representation using adversarial training", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Loic Matthey", "Irina Higgins", "Demis Hassabis", "Alexander Lerchner" ], "title": "dsprites: Disentanglement testing sprites dataset", "venue": "URL https://github. com/deepmind/dsprites-dataset/.[Accessed on:", "year": 2017 }, { "authors": [ "Yunchen Pu", "Zhe Gan", "Ricardo Henao", "Xin Yuan", "Chunyuan Li", "Andrew Stevens", "Lawrence Carin" ], "title": "Variational autoencoder for deep learning of images, labels and captions", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Raghunathan Ramakrishnan", "Pavlo O Dral", "Matthias Rupp", "O Anatole Von Lilienfeld" ], "title": "Quantum chemistry structures and properties of 134 kilo molecules", "venue": "Scientific data,", "year": 2014 }, { "authors": [ "Taihong Xiao", "Jiapeng Hong", "Jinwen Ma" ], "title": "Dna-gan: Learning disentangled representations from multi-attribute images", "venue": "Workshop of International conference on machine learning,", "year": 2017 }, { "authors": [ "Taihong Xiao", "Jiapeng Hong", "Jinwen Ma" ], "title": "Elegant: Exchanging latent encodings with gan for transferring multiple face attributes", "venue": "In Proceedings of the European conference on computer vision (ECCV),", "year": 2018 }, { "authors": [ "Liming Zhang", "Liang Zhao", "Shan Qin", "Dieter Pfoser" ], "title": "Tg-gan: Deep generative models for continuously-time temporal graph generation", "venue": "arXiv preprint arXiv:2005.08323,", "year": 2020 }, { "authors": [ "Liang Zhao" ], "title": "Event prediction in big data era: A systematic survey", "venue": "arXiv preprint arXiv:2007.09815,", "year": 2020 }, { "authors": [ "Shuchang Zhou", "Taihong Xiao", "Yi Yang", "Dieqiao Feng", "Qinyao He", "Weiran He" ], "title": "Genegan: Learning object transfiguration and attribute subspace from unpaired data", "venue": "arXiv preprint arXiv:1705.04932,", "year": 2017 }, { "authors": [ "Liu" ], "title": "A molecule is represented as a graph G(X,A)", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Important progress has been made towards learning the underlying low-dimensional representation and generative process of complex high dimensional data such as images (Pu et al., 2016), natural languages (Bowman et al., 2016), chemical molecules (Kadurin et al., 2017; Guo et al., 2019) and geo-spatial data (Zhao, 2020) via deep generative models. In recent years, a surge of research has developed new ways to further enhance the disentanglement and independence of the latent dimensions, creating models with better robustness, improved interpretability, and greater generalizability with inductive bias (see Figures 1(a) and 1(b)) (Kingma et al., 2014; Kulkarni et al., 2015; Creswell et al., 2017) or without any bias (Higgins et al., 2017; Chen et al., 2018; Kumar et al., 2018). Although it is generally assumed that the complex data is generated from the latent representations, their latent dimensions are typically not associated with physical meaning and hence cannot reflect real data generation mechanisms such as the relationships between structural and functional characteristics. A critical problem that remains unsolved is how to best identify and enforce the correspondence between the learned latent dimensions and key aspects of the data, such as the bio-physical properties of a molecule. Knowing such properties is crucial for many applications that depend on being able to interpret and control the data generation process with the desired properties.\nIn an effort to achieve this, several researchers (Klys et al., 2018; Locatello et al., 2019b) have suggested methods that enforce a subset of latent dimensions correspond to targeted categorical properties, as shown in Figure 1(c). Though the initial results have been encouraging, critical challenges remain unsolved such as: (1) Difficulty in handling continuous-valued properties. The control imposed on data generation limits existing techniques to categorical (typically binary) properties, to enable tractable model inference and sufficient coverage of the data. However, continuous-valued properties (e.g., the scale and light level of images) are also common in real world data, while their model inference usually can be easily intractable. Also, many cases require to generate data\n∗Corresponding author: liang.zhao@emory.edu\nwith properties of which the values are unseen during training process. This cannot be achieved by conventional techniques such as conditional models without making strong assumption on the model distributions. (2) Difficulty in efficiently enhancing mutual independence among latent variables relevant and irrelevant to the properties. This problem requires to ensure that each property is only correlated to its corresponding latent variable(s) and independent of all the others. Directly enforcing such mutual independence inherently between all pairs of latent variables incurs quadratic number of optimization efforts. Hence an efficient way is imperative. (3) Difficulty in capturing and controlling correlated properties. It is feasible that several independent latent variables can capture multiple independent properties. But when the properties are correlated, they cannot be “one-on-one” mapped to corresponding independent latent variables anymore. 
However, correlated properties are commonly found in real-world data.
To solve the above challenges, we propose a new model, the Property-controllable VAE (PCVAE), in which a new Bayesian model inductively biases the latent representation using explicit data properties via novel group-wise and property-wise disentanglement terms. Each data property is seamlessly linked to its corresponding latent variable by enforcing an invertible mutual dependence between them, as shown in Figure 1(d). Hence, when generating data, the corresponding latent variables are manipulated to simultaneously control multiple desired properties without influencing the others. We have also further extended our model to handle inter-correlated properties. Our key contributions are summarized as follows: • A new Bayesian model that inductively biases the latent representation using explicit real data properties is proposed. A variational inference strategy and inference model have been customized to ensure effective Bayesian inference.
• Group-wise and property-wise disentanglement terms are proposed to enhance the mutual independence among property-relevant and property-irrelevant latent variables.
• Invertible mutual dependence between each property-latent variable pair is achieved by enforcing an invertibility constraint over a residual-based decoder.
• The quantitative and qualitative evaluation performed for this study reveals that our PCVAE outperforms existing methods by up to 28% in capturing and 65% in manipulating the desired properties." }, { "heading": "2 RELATED WORKS", "text": "Disentanglement Representation Learning. An important relevant area of research is disentangled representation learning (Alemi et al., 2017; Chen et al., 2018; Higgins et al., 2017; Kim & Mnih, 2018), which structures the latent space by minimizing the mutual information between all pairs of latent variables. The goal here is to learn representations that separate out the underlying explanatory factors responsible for variations in the data, as these have been shown to be relatively resilient with respect to the complex variants involved (Bengio et al., 2013; Ma et al., 2019; Guo et al., 2020), and thus can be used to enhance generalizability as well as improve robustness against adversarial attack. As noted by Locatello et al. (2019a), it is impossible for disentangled representation learning to capture the desired properties without supervision and inductive biases.
Learning latent representations via supervision. This line of work ensures that the latent variables capture the desired properties through supervision, generally by directly defining properties as latent variables in the model (Locatello et al., 2019b). Unfortunately, apart from providing an explicit variable for the labelled property, this yields no other easily interpretable structure, such as discovering latent variables that are correlated with the properties, as the model proposed in the current study does. This is also an issue with other methods of structuring the latent space that have been explored, such as batching data according to labels (Kulkarni et al., 2015; Zhang et al., 2020) or using a discriminator network in a non-generative model (Lample et al., 2017). Some researchers addressed this problem by introducing an architectural bias through a two-way factored autoencoder and realizing the supervision with a pair-wise contrastive loss (Gyawali et al., 2019).
Other researchers addressed this problem by linking latent variables with observed labels through adversarial learning (Creswell et al., 2017; Edwards & Storkey, 2015; Ganin et al., 2016; Mathieu et al., 2016). The most relevant work for our purpose is CSVAE (Klys et al., 2018), where a subset of latent variables is correlated with binary properties via adversarial learning. None of the above works can handle multiple continuous-valued properties, due to their strict assumptions on the distribution of properties.
Data manipulation and generation. Here, trained machine learning models are utilized to manipulate and generate data in a controllable way with the desired properties, which is especially useful for applications in the image domain. Several works have specifically considered transferring attributes in images, which is the same goal as that of CSVAE. These earlier works (Zhou et al., 2017; Xiao et al., 2017; 2018) all transfer attributes from a source image onto a target image. Such models can only perform categorical attribute transformation between images (e.g., "splice the beard style of image A onto image B"), and only through interpolation between existing images. Once trained, our proposed model can instead generate an object with any value of a certain property (either observed or unobserved during training) that can be encoded in the subset of latent variables." }, { "heading": "3 PROPERTY CONTROLLABLE VAE", "text": "" }, { "heading": "3.1 PROBLEM FORMULATION", "text": "Suppose we are given a dataset D where each data instance is (x, y) with x ∈ R^n and $y = \{y_k \in \mathbb{R}\}_{k=1}^{K}$ representing K properties of interest of x. For example, if x is a molecule, then we may have properties of interest such as cLogP and cLogS. We assume that the data (x, y) are generated by some random process from continuous latent random variables (z, w). Each variable in w controls one of the properties of interest in y, while the variables in z control all the other aspects of x.
Our goal is to learn such a generative model involving (x, y) and (z, w), where the subset of variables z is disentangled from the subset w, and the variables inside w are disentangled from each other. Once this model has been learned, we can expect different elements of w to control different properties of interest, which is a highly desirable goal for many downstream data generation tasks. For example, we may want to decrease the value of a specific property (e.g., protein energy) by changing the value of the corresponding element in w. It is also possible to directly set a desired property value (e.g., the mass of a molecule) and then generate the corresponding x with this target value (i.e., a molecule with the target mass value)." },
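As a minimal sketch of this assumed generative process (our illustration in PyTorch, not the authors' released code; `decoder` and the heads `f` are stand-ins for the networks introduced in Sections 3.2-3.3, and the latent sizes follow the dSprites setup in Appendix B.1):

```python
import torch

# Hypothetical stand-in networks; only shapes matter for this sketch.
n_z, n_w, K = 3, 3, 3                           # latent sizes used for dSprites
decoder = torch.nn.Linear(n_z + n_w, 64 * 64)   # stand-in for p_theta(x | z, w)
f = [torch.nn.Linear(1, 1) for _ in range(K)]   # stand-ins for f_k(w_k; gamma)

z = torch.randn(n_z)                  # z ~ p(z): property-irrelevant factors
w = torch.randn(n_w)                  # w ~ p(w): one latent per property
x = decoder(torch.cat([z, w]))        # x ~ p_theta(x | z, w)
y = torch.stack([f[k](w[k:k + 1]).squeeze()     # mean of y_k; y_k ~ N(f_k(w_k), sigma_k)
                 for k in range(K)])
```

The point of the sketch is the factorization: each property mean depends only on its own latent w_k, while the data x depends on all latents jointly.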
{ "heading": "3.2 OVERALL OBJECTIVE", "text": "In this section, we first introduce the Bayesian variational inference of PCVAE. Then we introduce the group-wise and property-wise disentanglement terms as part of the overall objective. Following this, an invertibility constraint is introduced to enforce mutual dependence between each property-latent variable pair. Finally, PCVAE is extended to capture and control multiple correlated properties." }, { "heading": "3.2.1 BAYESIAN VARIATIONAL INFERENCE OF PCVAE", "text": "The goal in Section 3.1 requires us to not only model the dependence between x and (w, z) for latent representation learning and data generation, but also model the dependence between y and w for property manipulation. We propose to achieve this by maximizing a form of variational lower bound on the joint log-likelihood p(x, y) of our model. Given an approximate posterior q(z, w|x, y), we can use Jensen's inequality to obtain the variational lower bound of p(x, y) as:
$$\log p(x,y) = \log \mathbb{E}_{q(z,w|x,y)}\big[p(x,y,w,z)/q(z,w|x,y)\big] \ge \mathbb{E}_{q(z,w|x,y)}\big[\log p(x,y,w,z)/q(z,w|x,y)\big]. \quad (1)$$
The joint likelihood log p(x, y, w, z) can be decomposed as log p(x, y|z, w) + log p(z, w). We make two assumptions: (1) w only encodes the information from y; namely, x and y are conditionally independent given w (i.e., x ⊥ y|w); (2) z is independent of w and y, namely z ⊥ w and z ⊥ y, which is equivalent to y ⊥ z|w (see derivation in Appendix A.3). First, based on the two assumptions, we can get x ⊥ y|(z, w) (see derivation in Appendix A.4). Thus, we have log p(x, y|z, w) = log p(x|z, w) + log p(y|z, w). Then, based on the assumption y ⊥ z|w, we have log p(y|z, w) = log p(y|w), and hence log p(x, y|z, w) = log p(x|z, w) + log p(y|w). To explicitly represent the dependence between x and (z, w) as well as the dependence between y and w, we parameterize the joint log-likelihood as log p_{θ,γ}(x, y, w, z) with θ and γ as:
$$\log p_{\theta,\gamma}(x,y,w,z) = \log p_\theta(x|z,w) + \log p(z,w) + \log p_\gamma(y|w). \quad (2)$$
Given that a parameterized q_φ(z, w|x, y) = q_φ(z, w|x) = q_φ(z|x) q_φ(w|x) (since the information on y is included in x), by substituting Eq. 2 into the variational lower bound in Eq. 1, we obtain the negative part as an upper bound on −log p_{θ,γ}(x, y) (as shown in the right sub-figure of Figure 1(d)):
$$\mathcal{L}_1 = -\mathbb{E}_{q_\phi(z,w|x)}\big[\log p_\theta(x|z,w)\big] - \mathbb{E}_{q_\phi(w|x)}\big[\log p_\gamma(y|w)\big] + D_{KL}\big(q_\phi(z,w|x)\,\|\,p(z,w)\big) \quad (3)$$
This gives us the proposed Bayesian variational inference of PCVAE. The detailed derivation of Eq. 3 can be found in Appendix A.1. As there are K properties of interest in y which are assigned to and disentangled by the latent variables in w, the second term in Eq. 3 can be detailed as $\sum_{k}^{K} \mathbb{E}_{q_\phi(w|x)}[\log p_\gamma(y_k|w_k)]$." },
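To make Eq. 3 concrete, below is a minimal PyTorch-style sketch of one evaluation of L1 under Gaussian encoders and Gaussian property heads (a sketch under assumed shapes; `enc`, `dec` and `f` are hypothetical modules, not the authors' released implementation):

```python
import torch
import torch.nn.functional as F

def elbo_loss_L1(x, y, enc, dec, f, sigma_y=0.1):
    """Sketch of Eq. 3: reconstruction + property likelihood + KL to N(0, I).

    Assumes enc(x) -> (mu, logvar) over the concatenated latents [z, w],
    dec(z, w) -> reconstructed x, and f[k](w_k) -> predicted mean of y_k.
    """
    mu, logvar = enc(x)
    latents = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
    z, w = latents[:, :-y.shape[1]], latents[:, -y.shape[1]:]   # split [z, w]

    recon = F.mse_loss(dec(z, w), x, reduction="none").sum(-1)  # -log p(x|z,w)
    y_hat = torch.cat([f[k](w[:, k:k + 1]) for k in range(y.shape[1])], dim=1)
    prop = ((y - y_hat) ** 2 / (2 * sigma_y ** 2)).sum(-1)      # -log p(y|w), Gaussian
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(-1)   # KL(q || N(0, I))
    return (recon + prop + kl).mean()
```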
{ "heading": "3.2.2 GROUP-WISE AND PROPERTY-WISE DISENTANGLEMENT", "text": "The above derivation is conditioned on two requirements: (1) z is independent of w and y, and (2) the variables in w are independent of each other; in practice, however, minimizing the objective L1 does not imply that our model will satisfy these conditions. We therefore propose to further penalize novel group-wise and property-wise disentanglement terms.
We first decompose the KL (Kullback-Leibler) divergence term in Eq. 3 as:
$$\mathbb{E}_{p(x)}\big[D_{KL}(q_\phi(z,w|x)\,\|\,p(z,w))\big] = D_{KL}\big(q_\phi(z,w,x)\,\|\,q(z,w)p(x)\big) + \underbrace{D_{KL}\big(q(z,w)\,\|\,\textstyle\prod_{i,j} q(z_i)q(w_j)\big)}_{\text{total correlation term}} + \sum_i D_{KL}\big(q(z_i)\,\|\,p(z_i)\big) + \sum_j D_{KL}\big(q(w_j)\,\|\,p(w_j)\big) \quad (4)$$
The second term on the right of the above equation is referred to as the total correlation (TC), as defined by Chen et al. (2018), which is one of many generalizations of mutual information to more than two random variables. The detailed derivation of this decomposition can be found in Appendix A.1. The penalty on this TC term forces the model to find statistically independent factors in the data distribution. A heavier penalty on this term induces a more disentangled representation among all variables in both z and w, but as stated in our problem formulation, we only require that (1) the variables in w are disentangled so as to capture different properties, and (2) although z is disentangled from w, the latent variables inside z do not need to be disentangled from each other. Naively enforcing the disentanglement between all pairs of latent variables in w and z, as done by the existing TC term, incurs at least a quadratic number of redundant optimization terms and can lead to poor convergence. Thus, we further decompose and analyze the TC term as:
$$D_{KL}\big(q(z,w)\,\|\,\textstyle\prod_{i,j} q(z_i)q(w_j)\big) = \underbrace{D_{KL}\big(q(z,w)\,\|\,q(z)q(w)\big)}_{\text{group-wise disentanglement}} + \underbrace{D_{KL}\big(q(w)\,\|\,\textstyle\prod_i q(w_i)\big)}_{\text{property-wise disentanglement}} + D_{KL}\big(q(z)\,\|\,\textstyle\prod_i q(z_i)\big). \quad (5)$$
The first term on the right of the above decomposition enforces the independence between the two subsets of latent variables z and w, which we term group-wise disentanglement. The second term enforces the independence of the variables inside w, ensuring that each latent variable can only capture the information of the single property assigned to it. We term this property-wise disentanglement. Imposing a heavy penalty on these two terms satisfies the two requirements mentioned above. We can now obtain the second part of the objective of PCVAE by introducing the coefficient ρ as:
$$\mathcal{L}_2 = D_{KL}\big(q(z,w)\,\|\,q(z)q(w)\big) + \rho\, D_{KL}\big(q(w)\,\|\,\textstyle\prod_i q(w_i)\big) \quad (6)$$" },
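In practice both divergences in Eq. 6 are estimated from minibatch samples. The following is a minimal sketch of how L2 could be assembled from sample-based log-density estimates (our illustration only; `log_q_joint`, `log_q_z`, `log_q_w` and `log_q_wi` are hypothetical helpers implementing the minibatch estimator described in Appendix B.3):

```python
def disentanglement_loss_L2(z, w, log_q_joint, log_q_z, log_q_w, log_q_wi, rho=1.0):
    """Sketch of Eq. 6 via sample-based estimates of the two KL terms.

    Each helper returns a per-sample log-density estimate computed from the
    minibatch; log_q_wi(w) is assumed to return per-dimension estimates of
    shape (batch, dim_w).
    """
    # Group-wise term: E[log q(z, w) - log q(z) - log q(w)]
    group_tc = (log_q_joint(z, w) - log_q_z(z) - log_q_w(w)).mean()
    # Property-wise term: E[log q(w) - sum_i log q(w_i)]
    prop_tc = (log_q_w(w) - log_q_wi(w).sum(dim=1)).mean()
    return group_tc + rho * prop_tc
```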
{ "heading": "3.2.3 INVERTIBLE CONSTRAINT FOR PROPERTY CONTROL", "text": "As stated in the problem formulation, an important goal for our new model is to generate a data point x that attains a given value of a property y_k with great precision. More importantly, there should be no strict parametric assumptions on p(y_k) and q(w_k|y_k). The most straightforward way to do this is to model both directions of the mutual dependence between y_k and its relevant latent variable w_k, namely q(w_k|y_k) and p(y_k|w_k); this, however, can compound errors across the two-way mapping. To address this, we instead propose an invertible function that mathematically ensures the exact recovery of w_k given y_k, based on the following deduction.
In the above objective, we only explicitly model the conditional distribution p_γ(y_k|w_k). Hence, to achieve precise control of a property via z and w, which is necessary to generate x with a certain property value y_k = m, we need to maximize the probability that y_k = m as follows:
$$x \sim p_\theta(x|z,w), \quad z, w \leftarrow \arg\max_{z \sim p(z),\, w \sim p(w)} p_\gamma(y_k = m|z,w), \quad (7)$$
which is equal to:
$$x \sim p_\theta(x|z,w), \quad z \sim p(z), \quad w_{j,\, j \neq k} \sim p(w_j), \quad w_k \leftarrow \arg\max_{w_k \sim p(w_k)} p_\gamma(y_k = m|w_k) \quad (8)$$
where w_k can be determined as follows (N below denotes a Gaussian distribution):
$$w_k \leftarrow \arg\max_{w_k \sim p(w_k)} p_\gamma(y_k = m|w_k) = \arg\max_{w_k \sim p(w_k)} \log \mathcal{N}\big(y_k = m \,\big|\, f_k(w_k;\gamma), \sigma_k\big) = \arg\max_{w_k \sim p(w_k)} -\big(m - f_k(w_k;\gamma)\big)^2 = f_k^{-1}(m) \quad (9)$$
Therefore, by learning an invertible function f_k(w_k; γ) from w_k to the expectation of y_k to model p_γ(y_k|w_k), we can easily achieve the desired precise control of the property. The above derivation assumes that y is continuous-valued; it can also be extended to the situation where y is a discrete property, as detailed in Appendix A.2. To learn an invertible function f_k(w_k; γ), we propose to leverage an invertible neural network. Inspired by the invertible ResNet (Behrmann et al., 2019), we decompose the function f_k(w_k; γ) as f_k(w_k; γ) = f̄_k(w_k; γ) + w_k. As proved by Behrmann et al. (2019), a sufficient condition for f_k(w_k; γ) to be invertible is that Lip(f̄_k) < 1, where Lip(f̄_k) is the Lipschitz constant of f̄_k(w_k; γ). Thus, the overall objective of the proposed PCVAE is finally formalized as:
$$\min_{\theta,\phi,\gamma} \; \mathcal{L}_1 + \alpha \mathcal{L}_2 \quad \text{subject to } \mathrm{Lip}(\bar{f}_k) < 1 \text{ for all } k \in \{1,\dots,K\},$$
where α is a coefficient parameter.
Remark (Monotonic relationship of each property-latent variable pair). Given that f_k(w_k; γ) is invertible and continuous (Lip(f̄_k) is less than 1), f_k(w_k; γ) is a monotonic function. This is very important for increasing (or decreasing) the value of property y_k by increasing (or decreasing) w_k, especially when the desired value of the property is not available." },
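As an illustration of this construction, below is a sketch of one property head f_k as a spectrally normalized residual MLP together with fixed-point inversion (our sketch under assumed hyper-parameters, not the released code; note that `spectral_norm` only normalizes each weight to unit spectral norm, so the extra `coeff < 1` scaling is our addition to keep the residual branch strictly contractive, in the spirit of Behrmann et al. (2019)):

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class InvertiblePropertyHead(nn.Module):
    """f_k(w_k) = coeff * fbar_k(w_k) + w_k, with Lip(coeff * fbar_k) < 1."""

    def __init__(self, hidden=32, coeff=0.9):
        super().__init__()
        self.coeff = coeff  # scale < 1 keeps the residual branch contractive
        self.fbar = nn.Sequential(
            spectral_norm(nn.Linear(1, hidden)), nn.ELU(),  # contractive nonlinearity
            spectral_norm(nn.Linear(hidden, 1)))

    def forward(self, w_k):                  # predicts E[y_k | w_k]
        return self.coeff * self.fbar(w_k) + w_k

    @torch.no_grad()
    def invert(self, y_k, n_iters=50):       # fixed-point iteration (cf. Section 3.4)
        w_k = y_k.clone()                    # w^0 = y_k
        for _ in range(n_iters):
            w_k = y_k - self.coeff * self.fbar(w_k)
        return w_k
```

Because the residual branch is contractive, the fixed-point iteration in `invert` converges to the unique w_k with f_k(w_k) = y_k.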
{ "heading": "3.2.4 GENERALIZATION OF HANDLING CORRELATED PROPERTIES", "text": "As stated in the third challenge in Section 1, there are usually several groups of properties involved in describing the data x, and each group contains several properties. These different groups are independent, but the properties within the same group are correlated. Thus, we can further generalize the above objective framework to handle the correlated properties inside the same group.
The notation for y_k is extended to $y_k = \{y_{j,k} \in \mathbb{R}\}_{j=1}^{M_k}$, signifying that there are M_k correlated properties inside the k-th property group. The properties inside the same property set y_k are correlated, while different property sets (e.g., y_p and y_k) are independent. Similarly, the notation for w_k is extended to a group of latent variables that controls the corresponding property set. For the properties inside the same group, we assume all depend on the same group of latent variables w_k as
$$p(y_{j,k}|w_k) = \mathcal{N}\big(y_{j,k}\,\big|\,f_k(w_k;\gamma)[j],\; \sigma_{j,k}\big), \quad (10)$$
where f_k(w_k; γ)[j] denotes the j-th element of the output of f_k(w_k; γ). Thus, the second term in Eq. 3 can be generalized as $\sum_{k}^{K} \sum_{j}^{M_k} \mathbb{E}_{q_\phi(w|x)}[\log p_\gamma(y_{j,k}|w_k)]$." }, { "heading": "3.3 NEURAL NETWORK ARCHITECTURE OF PCVAE", "text": "As shown in Figure 1(d), there is an encoder (left-hand side of Figure 1(d)) that models the distribution q(z, w|x) and two decoders (right-hand side of Figure 1(d)) that model the distributions p(y|w) and p(x|z, w). To implement the encoder and decoders in the first objective (i.e., L1), we use multi-layer perceptrons (MLPs), convolutional neural networks (CNNs) or graph neural networks (GNNs) to represent the distributions over the relevant random variables.
To implement the second part L2, it is necessary to calculate the group-wise and property-wise disentanglement terms. Note that calculating the densities q(z), q(w) and q(w_i) in these terms depends on the entire data space, so they cannot be computed exactly during training. Thus, following the same operation as Chen et al. (2018), we utilize a naïve Monte Carlo approximation based on a mini-batch of samples to estimate q(z), q(w) and q(w_k); the detailed operation is described in Appendix B.3. To implement the invertible constraint and model the distribution p_γ(y_k|w_k), we utilize MLPs to model the function f̄_k(⋅). Since the function f̄_k(⋅) modeled by MLPs is a composition of contractive nonlinearities (e.g., ReLU, ELU, tanh) and linear mappings, by the definition of the Lipschitz constant we have Lip(f̄_k) < 1 if ‖W_l‖₂ < 1 for all l ∈ {1, …, L}, where W_l refers to the weights of the l-th layer in f̄_k, ‖⋅‖₂ denotes the spectral norm, and L refers to the number of layers in the MLPs. To realize the above constraint on the weights of the neural networks, we use spectral normalization for each layer of the MLPs, as introduced by Behrmann et al. (2019)." }, { "heading": "3.4 PRECISELY PROPERTY CONTROLLABLE GENERATION", "text": "Our proposed model can be applied to an important downstream task, namely precisely property-controllable generation. Given the value of a required property y_k, the goal of property-controllable generation is to generate a data point x which holds exactly this desired property value. To achieve this, three steps are conducted: (1) infer the value of w_k based on the well-trained neural network f̄_k(⋅) and the given property y_k via fixed-point iteration, following $w_k^{i+1} := y_k - \bar{f}_k(w_k^{i};\gamma)$, where $w_k^{i}$ is the updated latent variable at the i-th iteration step and $w_k^{0} = y_k$; (2) randomly sample the values of z and the remaining variables in w from their prior distributions (i.e., Gaussian distributions) to obtain all the latent variables; and (3) generate a data point x using the decoder based on the latent variables inferred in the previous two steps." },
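The three-step generation procedure might look like the following sketch (hypothetical code; `heads` are per-property modules like the InvertiblePropertyHead sketched above, `dec` is a trained decoder, and the latent sizes are assumptions):

```python
import torch

@torch.no_grad()
def controlled_generate(y_target, k, heads, dec, n_z=3, n_w=3):
    """Generate x whose k-th property equals y_target (Section 3.4 sketch).

    heads: list of invertible property-head modules (one per property);
    dec(z, w): trained decoder modeling p_theta(x | z, w).
    """
    w = torch.randn(1, n_w)                               # step 2: sample w_j, j != k
    w[:, k:k + 1] = heads[k].invert(torch.tensor([[y_target]]))  # step 1: w_k = f_k^{-1}(y_k)
    z = torch.randn(1, n_z)                               # step 2: z ~ p(z)
    return dec(z, w)                                      # step 3: decode x
```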
{ "heading": "4 EXPERIMENT", "text": "This section reports the results of the qualitative and quantitative evaluations carried out to test the performance of the proposed model on datasets from two domains, namely images and molecules. All experiments were conducted on a 64-bit machine with an NVIDIA GPU (GTX 1080 Ti, 11016 MHz, 11 GB GDDR5). The architectures and hyper-parameters can be found in Appendix B. The code for the proposed PCVAE is available at: https://github.com/xguo7/PCVAE." }, { "heading": "4.1 EXPERIMENT SETUP", "text": "" }, { "heading": "4.1.1 DATASETS", "text": "The dSprites dataset (Matthey et al., 2017) consists of 2D shapes procedurally generated from ground-truth independent semantic factors. The factors explored as data properties in this experiment are the scale and the x and y positions (referred to as x pos and y pos) of a sprite. All possible combinations of these semantic factors are used to generate a total of 730k images, with a 580k/146k training/testing split. The 3Dshapes dataset (Burgess & Kim, 2018) consists of 3D shapes procedurally generated from ground-truth independent semantic factors. The factors explored as data properties in this experiment are wall hue, floor hue and scale. All possible combinations of these semantic factors are used to generate a total of 480k images, with a 390k/90k training/testing split. The QM9 dataset (Ramakrishnan et al., 2014) consists of 134k stable small organic molecules, with a 120k/20k training/testing split." }, { "heading": "4.1.2 COMPARISON METHODS", "text": "In order to validate the superiority of our proposed model in capturing and manipulating properties during generation, we compare the performance of PCVAE with that of comparison models most relevant to our problem: (1) semi-VAE (Locatello et al., 2019b) is a semi-supervised model that enforces the value of each latent variable to be equal to the value of each property; here we utilize all the labels for supervision for fairness. (2) CSVAE (Klys et al., 2018) is a VAE-based model that utilizes mutual information minimization to learn latent dimensions associated with only binary properties; here we adjust this model to handle continuous properties by assuming a Gaussian distribution. (3) PCVAE_tc is a baseline model with the same inference model and property-controlling strategy as PCVAE, in which the proposed group-wise and property-wise disentanglement terms are replaced with the TC term, namely $D_{KL}(q(z,w)\,\|\,\prod_{i,j} q(z_i)q(w_j))$, as proposed in β-TCVAE (Chen et al., 2018); it serves as an ablation study to validate the effectiveness of the proposed group-wise and property-wise disentanglement. (4) PCVAE_nsp is a baseline model with the same architecture and disentanglement terms as PCVAE except for the spectral normalization; it serves as an ablation study to validate the effectiveness of spectral normalization." }, { "heading": "4.2 EVALUATION FOR DISENTANGLED LATENT VARIABLES", "text": "In this section, we explore (1) whether each variable w_k successfully captures the information of the property assigned to it through supervision, and (2) whether the subset z of latent variables is independent of the properties.
Quantitative evaluation. We calculate the normalized mutual information between each encoded latent variable w_k and the property y_k assigned to it, as well as the average mutual information between the latent z and each y_k. Figure 2 shows the mutual information heat map for each model on dSprites (results on the QM9 dataset can be found in Appendix C). The element in the row z_avg denotes the average of all the mutual information between z and each property. In addition, we utilize the metric avgMI¹ proposed by Locatello et al. (2019b) to provide an overall quantitative comparison between the different methods, as shown in Table 1. The proposed PCVAE achieves the lowest avgMI of 0.257, which demonstrates its strength in enforcing the relationship between w_k and y_k. These results also validate the effectiveness and necessity of the proposed group-wise and property-wise disentanglement terms, as PCVAE outperforms the baseline model PCVAE_tc on avgMI by around 28%. Though CSVAE shows good performance in disentangling z and w, its latent variables in w turn in a poor performance in capturing the properties. Similar conclusions can be drawn from the results on the 3Dshapes and QM9 datasets. For example, PCVAE outperforms the comparison models by about 16% in capturing the two independent properties cLogP and Molweight.
Qualitative evaluation. We also qualitatively evaluate the dependence of each latent variable on its relevant property by visualizing the variation of the properties when traversing the prior of each latent variable. As shown in Figure 3, as the values of w_1, w_2 and w_3 change within (−0.5, 0.5), the continuous variations of the assigned properties of scale, x position and y position of the generated images are clearly visible (highlighted in red rectangles). The variables in z = {z_1, z_2, z_3} have almost no influence on these properties, which validates the effectiveness of the group-wise disentanglement term. More evaluation results on the other two datasets can be found in Appendix C.
¹ $\mathrm{avgMI} = \| I(w, y) - E(k) \|_F^2$, where k is the number of properties, I(w, y) is the mutual information matrix, and E(k) is the k × k identity matrix (the optimal MI matrix). The details can be found in Appendix B.4." },
{ "heading": "4.3 EVALUATION FOR PROPERTY CONTROLLABLE GENERATION", "text": "In this section, we validate the performance of property-controllable generation. Specifically, given a predefined value of property y_k, the aim is to explore whether the proposed model can generate a data point x with a property y'_k that is the same as y_k.
Qualitative evaluation. For the dSprites and 3Dshapes datasets, since we have no ground-truth method with which to calculate the property y'_k of the generated images, we directly visualize the images generated given different values of property y_k. In Figure 4, each column contains four images generated given the same value of y_k (here y_k refers to the x position property) but different values of the other two properties. The objects in the generated images in the same column clearly share the same x position. Similar results can be observed in Figure 5: the wall hue or floor hue of the four images in the same column is the same given the same desired property. More visualizations for the other properties can be found in Appendix C.
Quantitative evaluation. For the QM9 dataset, since the properties cannot be visualized directly from the molecule, we quantitatively measure the property-controllable performance in terms of the MSE (mean squared error) between the actual property y'_k of the generated molecule and the desired property y_k, as shown in Table 2. The proposed PCVAE outperforms the other comparison models in successfully controlling cLogP and Molweight in molecule generation, with an MSE smaller than that of the comparison methods by around 65.1% and 40.5% on average, respectively. This demonstrates the superiority of PCVAE in precisely controlling continuous-valued properties, thanks to the effective invertible property prediction network. In addition, the clear advantage of PCVAE over PCVAE_tc demonstrates the effectiveness and necessity of the proposed group-wise and property-wise disentanglement terms in precisely property-controllable generation." }, { "heading": "4.4 EVALUATION FOR HANDLING CORRELATED PROPERTIES", "text": "In this section, we assess the ability of the proposed model to capture and control correlated properties. The performance is tested on the QM9 molecule set for two tasks: property prediction and property-controllable generation. Three properties are selected: Molweight, cLogP and cLogS. Molweight is independent of cLogP and cLogS, while cLogP and cLogS are inter-correlated.
First, we evaluate whether the subset of latent variables w can power property prediction, which is a very important task for new compound design in drug discovery. Given an input molecule, the trained encoder (inference model) is used to obtain the relevant latent variable w_k, and the invertible function of p(y_k|w_k) is then utilized to predict the property y_k. Table 3 compares the performance on the property prediction task of PCVAE and the comparison models for uncorrelated properties, as well as the extended model (denoted PCVAE(cor)) for correlated properties. Though PCVAE(cor) deals with a more difficult case than PCVAE, where an additional property cLogS that is correlated with cLogP is included, PCVAE(cor) still successfully captures the information of the added property cLogS with negligible influence on the prediction of cLogP and Molweight.
Specifically, as shown in Table 3, regarding the prediction of independent properties, PCVAE outperforms semi-VAE and CSVAE with a smaller MSE of 33.33 in terms of Molweight. It can also be observed that the prediction results of PCVAE_nsp are better than those of PCVAE, which shows that enforcing the dependence in both directions (i.e., p(w|y) as well as p(y|w) via spectral normalization) can introduce more errors than modelling only the dependence p(y|w). Next, we further explore the performance of PCVAE(cor) in controlling the generation of correlated properties. As shown in Table 2, PCVAE(cor) achieves the smallest MSE on all the properties. This demonstrates that adding the supervision of the correlated property cLogS does not influence the control of the property cLogP, and also demonstrates the effectiveness of the invertible function in handling multi-input and multi-output data. To test the necessity of the proposed PCVAE(cor), we also evaluate the performance of PCVAE and the comparison models in dealing with correlated properties; the results can be found in Appendix C.4." }, { "heading": "5 CONCLUSION", "text": "In this paper, we have proposed PCVAE and its extended model, which learn a latent space that separates information correlated with the properties into a predefined subset of latent variables. To accomplish this, we first propose a novel Bayesian variational inference for PCVAE to jointly learn the distribution of data and properties, followed by novel group-wise and property-wise disentanglement terms to deal with the complex dependencies among subsets of latent variables. Then, we propose to enforce an invertible mutual dependence to allow precise property-controllable generation. Finally, we demonstrate through quantitative and qualitative evaluations from three aspects that our proposed model achieves better performance than existing and baseline models. In future work, we plan to extend PCVAE to a semi-supervised setting, where some of the property labels are missing." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported by the National Science Foundation (NSF) Grant No. 1755850, No. 1841520, No. 2007716, No. 2007976, No. 1942594, No. 1907805, a Jeffress Memorial Trust Award, Amazon Research Award, NVIDIA GPU Grant, and Design Knowledge Company (subcontract number: 10827.002.120.04)." }, { "heading": "A ADDITIONAL DERIVATIONS ABOUT METHODOLOGY", "text": "" }, { "heading": "A.1 DETAILED DERIVATION OF BAYESIAN VARIATIONAL INFERENCE OF PCVAE", "text": "Given an approximate posterior q(z, w|x, y), we can use Jensen's inequality to obtain the variational lower bound of p(x, y) as:
$$\log p(x,y) \ge \mathbb{E}_{q(z,w|x,y)}\big[\log p(x,y,w,z)/q(z,w|x,y)\big].$$
We make two assumptions: (1) w only encodes the information from y; namely, x and y are conditionally independent given w (i.e., x ⊥ y|w); (2) z is independent of w and y, namely z ⊥ w and z ⊥ y, which is equivalent to y ⊥ z|w (see derivation in Appendix A.3). First, based on the two assumptions, we can get x ⊥ y|(z, w) (see derivation in Appendix A.4). Thus, we have log p(x, y|z, w) = log p(x|z, w) + log p(y|z, w). Then, based on the assumption y ⊥ z|w, we have log p(y|z, w) = log p(y|w), and hence log p(x, y|z, w) = log p(x|z, w) + log p(y|w). To explicitly represent the dependence between x and (z, w) as well as the dependence between y and w, we parameterize the joint log-likelihood as log p_{θ,γ}(x, y, w, z) with θ and γ as:
$$\log p_{\theta,\gamma}(x,y,w,z) = \log p_\theta(x|z,w) + \log p(z,w) + \log p_\gamma(y|w). \quad (11)$$
Given the condition that a parameterized q_φ(z, w|x, y) = q_φ(z, w|x) = q_φ(z|x) q_φ(w|x) (since the information on y is included in x), by substituting this into the above variational lower bound, we obtain the negative part as an upper bound on −log p_{θ,γ}(x, y) as:
$$\begin{aligned} \mathcal{L}_1 &= \mathbb{E}_{q(z,w|x,y)}\big[-\log p_\theta(x|z,w) - \log p_\gamma(y|w) + \log q(z,w|x,y) - \log p(z,w)\big] \\ &= -\mathbb{E}_{q(z,w|x)}\big[\log p_\theta(x|z,w)\big] - \mathbb{E}_{q(w|x)}\big[\log p_\gamma(y|w)\big] + \mathbb{E}_{q(z,w|x)}\Big[\log \frac{q(z,w|x)}{p(z,w)}\Big] \\ &= -\mathbb{E}_{q_\phi(z,w|x)}\big[\log p_\theta(x|z,w)\big] - \mathbb{E}_{q_\phi(w|x)}\big[\log p_\gamma(y|w)\big] + D_{KL}\big(q_\phi(z,w|x)\,\|\,p(z,w)\big) \end{aligned} \quad (12)$$
Based on the above derivation of $D_{KL}(q_\phi(z,w|x)\,\|\,p(z,w))$, we can further decompose it as:
$$D_{KL}\big(q_\phi(z,w|x)\,\|\,p(z,w)\big) = \mathbb{E}_{q(z,w|x)}\Big[\log \frac{q(z,w|x)}{q(z,w)} + \log \frac{q(z,w)}{\prod_{i,j} q(z_i)q(w_j)} + \log \frac{\prod_i q(z_i)}{\prod_i p(z_i)} + \log \frac{\prod_i q(w_i)}{\prod_i p(w_i)}\Big].$$
Considering that q(z, w) = E_{p(x)} q(z, w|x), we obtain:
$$\begin{aligned} \mathbb{E}_{p(x)}\big[D_{KL}(q_\phi(z,w|x)\,\|\,p(z,w))\big] &= \mathbb{E}_{q(z,w,x)}\Big[\log \frac{q(z,w,x)}{q(z,w)p(x)}\Big] + \mathbb{E}_{q(z,w)}\Big[\log \frac{q(z,w)}{\prod_{i,j} q(z_i)q(w_j)}\Big] + \mathbb{E}_{q(z)}\Big[\log \frac{\prod_i q(z_i)}{\prod_i p(z_i)}\Big] + \mathbb{E}_{q(w)}\Big[\log \frac{\prod_i q(w_i)}{\prod_i p(w_i)}\Big] \quad (13) \\ &= D_{KL}\big(q_\phi(z,w,x)\,\|\,q(z,w)p(x)\big) + D_{KL}\big(q(z,w)\,\|\,\textstyle\prod_{i,j} q(z_i)q(w_j)\big) + \sum_i D_{KL}\big(q(z_i)\,\|\,p(z_i)\big) + \sum_j D_{KL}\big(q(w_j)\,\|\,p(w_j)\big) \quad (14) \end{aligned}$$" }, { "heading": "A.2 EXTENSION OF THE INVERTIBLE FUNCTION TO DISCRETE-VALUED PROPERTIES", "text": "Here we consider the situation where the property y is discrete-valued, and we denote y_k ∈ {0, 1}^C as a one-hot vector representing its category, where C is the number of categories. In the overall objective, we only explicitly model the conditional distribution p_γ(y_k|w_k); hence, to achieve precise control of the property via z and w, which is necessary to generate x with a certain property y_k belonging to the m-th category, we need to maximize the probability that y_k = M as follows:
$$x \sim p_\theta(x|z,w), \quad z, w \leftarrow \arg\max_{z \sim p(z),\, w \sim p(w)} p_\gamma(y_k = M|z,w), \quad (15)$$
where M is a one-hot vector with M[m] = 1. The above equation is then equal to:
$$x \sim p_\theta(x|z,w), \quad z \sim p(z), \quad w_{j,\, j \neq k} \sim p(w_j), \quad w_k \leftarrow \arg\max_{w_k \sim p(w_k)} p_\gamma(y_k = M|w_k) \quad (16)$$
where w_k can be determined as follows based on the cross-entropy objective (Cat below denotes a categorical distribution):
$$w_k \leftarrow \arg\max_{w_k \sim p(w_k)} p_\gamma(y_k = M|w_k) = \arg\max_{w_k \sim p(w_k)} \log \mathrm{Cat}\big(y_k = M \,\big|\, \nu_k(w_k;\gamma)\big) = \arg\max_{w_k \sim p(w_k)} \log \frac{\exp(\nu_k(w_k;\gamma)[m])}{\sum_{j}^{C} \exp(\nu_k(w_k;\gamma)[j])} = \nu_k^{-1}(M) \quad (17)$$" }, { "heading": "A.3 DERIVATION PROCESS EXPLANATION 1", "text": "In this section, we prove that if z ⊥ w and z ⊥ y, then y ⊥ z|w. First, based on Bayes' theorem, we have p(y, z|w) = p(z|y, w) p(y|w) = p(y|z, w) p(z|w), namely,
$$p(z|y,w)\,p(y|w) = p(y|z,w)\,p(z|w). \quad (18)$$
Then, considering that z ⊥ w and z ⊥ y, we have p(z|w) = p(z) and also p(z|y, w) = p(z). The right and left sides of Equation 18 can then be rewritten as p(z) p(y|w) = p(y|z, w) p(z), and thus p(y|w) = p(y|z, w). Given p(y|w) = p(y|z, w), multiplying both sides by p(z|w) gives p(z|w) p(y|w) = p(y|z, w) p(z|w) = p(y, z|w). Thus, y ⊥ z|w." }, { "heading": "A.4 DERIVATION PROCESS EXPLANATION 2", "text": "In this section, we prove that if x ⊥ y|w, y ⊥ z, and z ⊥ w, then x ⊥ y|(w, z).
First, based on Bayes' theorem, we have p(x, y|w, z) = p(y|x, z, w) p(x|z, w) = p(x|y, z, w) p(y|z, w), namely,
$$p(y|x,z,w)\,p(x|z,w) = p(x|y,z,w)\,p(y|z,w) \quad (19)$$
Then, considering that y ⊥ z and z ⊥ w (namely y ⊥ z|w, as proved in Section A.3), as well as y ⊥ x|w, we have p(y|x, z, w) = p(y|w) and also p(y|z, w) = p(y|w). The right and left sides of Equation 19 can then be rewritten as p(y|w) p(x|z, w) = p(x|z, y, w) p(y|w), and thus p(x|z, w) = p(x|y, z, w). Thus, we get x ⊥ y|(w, z)." }, { "heading": "B ARCHITECTURE AND HYPER-PARAMETERS", "text": "" }, { "heading": "B.1 ARCHITECTURE AND HYPER-PARAMETERS FOR THE DSPRITES AND 3DSHAPES DATASETS", "text": "Based on the description of the implementation of the proposed objective, there are three components, namely encoder 1 to model q_φ(z, w|x), decoder 1 to model p_θ(x|z, w) and decoder 2 to model p_γ(y|w). When evaluating the dSprites data, the number of latent dimensions in z is 3 and the number of latent dimensions in w is also 3. The detailed architectures are shown in Table 4. The hyper-parameters used for training are detailed in Table 6." }, { "heading": "B.2 ARCHITECTURE AND HYPER-PARAMETERS FOR THE MOLECULE QM9 DATASET", "text": "The architecture used for evaluation on the QM9 dataset is borrowed directly from the work of Liu et al. (2018b). A molecule is represented as a graph G(X, A), where each atom is a node and X refers to the features of all nodes; each bond is an edge, and A denotes the adjacency matrix of the graph. As this is not the focus of this paper, we briefly introduce the model and provide its architecture parameters in Table 5. We refer the reader to Liu et al. (2018b) for more details.
Molecule Encoder and Decoder. Encoder 1 is constructed to model q_φ(z, w|x) based on a gated graph neural network (GGNN). By sampling from the modelled distribution, (z, w) are obtained as variables containing the graph representation vectors. The molecule decoder 1 models the distribution p_θ(x|z, w) to generate the molecule graph G. The molecule decoder 2 models the distribution p_γ(y|w) to predict the properties y. The process proceeds in an auto-regressive style: in each step a focus node is chosen to be visited, and then the edges related to this focus node are generated. The nodes are ordered using breadth-first traversal. The molecule decoder mainly involves three steps, namely node initialization, node update, and edge selection and labelling.
Node Initialization. We first define N as an upper bound on the number of nodes in the final generated graph. An initial state $h_i^{(t=0)}$ is assigned to each node v_i in a set of initially unconnected nodes. Specifically, $h_i^{(t=0)}$ is the concatenation [(z, w), τ_i], where τ_i is a one-hot vector indicating the atom type. τ_i is derived from (z, w) by sampling from the softmax output of a learned mapping, τ_i ∼ f(Z_i). From these node-level states, we can calculate global representations $H^{(t)}$, the average representation of the nodes in the connected component at generation step t. In addition to the N working nodes, a special "stop node" is initialized to a learned representation h_end for managing algorithm termination, as detailed below.
Edge Selection and Labeling. At each step t, a focus node v_i is picked from the queue of nodes. Then an edge e_{i,j} from node v_i to node v_j is selected with label E_{i,j}. Specifically, for each non-focus node v_j, we construct a feature vector $\eta_{i,j}^{(t)} = [h_i^{(t)}, h_j^{(t)}, d_{i,j}, H^{(t)}, H^{(0)}]$, where d_{i,j} is the graph distance (i.e., path length) between the two nodes v_i and v_j. We use these representations to produce a distribution over candidate edges as $p(e_{i,j}, E_{i,j}|\eta_{i,j}^{(t)}) = p(E_{i,j}|\eta_{i,j}^{(t)}, e_{i,j}) \cdot p(e_{i,j}|\eta_{i,j}^{(t)})$. The parameters of the distribution are calculated as softmax outputs from neural networks. New edges are sampled one by one from the above learned distributions. Any nodes that are connected to the graph for the first time during this edge selection are added to the node queue.
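The factorized edge distribution above might be sketched as follows (a hypothetical PyTorch illustration of the two softmax heads, not the GGNN decoder of Liu et al. (2018b) itself; `edge_scorer` and `label_scorer` are stand-in networks over the per-pair features η):

```python
import torch

# Stand-in networks scoring candidate edges and edge labels from pair features eta.
feat_dim, num_labels = 128, 4
edge_scorer = torch.nn.Linear(feat_dim, 1)            # logits for p(e_ij | eta)
label_scorer = torch.nn.Linear(feat_dim, num_labels)  # logits for p(E_ij | eta, e_ij)

def sample_edge(eta):
    """eta: (num_candidates, feat_dim) pair features for one focus node."""
    p_edge = torch.softmax(edge_scorer(eta).squeeze(-1), dim=0)  # over candidates
    j = torch.multinomial(p_edge, 1).item()                      # pick target node
    p_label = torch.softmax(label_scorer(eta[j]), dim=0)         # over bond labels
    label = torch.multinomial(p_label, 1).item()
    return j, label
```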
Node Update. Whenever we obtain a new graph $G^{(t+1)}$ at step t, the previous node states $h_i^{(t)}$ are discarded and new node representations $h_i^{(t+1)}$ are calculated for each node by taking their (possibly changed) neighborhood into account. To this end, a standard gated graph neural network (GGNN) is run for S steps, defined as a recurrent operation over messages $r_i^{(s)}$.
Termination. In the edge generation process for each node, edges to a node v_i keep being added until an edge to the stop node is selected. We then move the focus away from node v_i and regard v_i as a "closed" node. The next focus node is then selected from the focus queue. In this way, a single connected component is grown in a breadth-first manner. Node and edge generation continues until the queue of nodes is empty." }, { "heading": "B.3 ESTIMATION OF GROUP-WISE AND PROPERTY-WISE DISENTANGLEMENT TERMS", "text": "To evaluate the densities q(z), q(w) and q(w_i) in the second loss L2, a naïve Monte Carlo approximation (Caflisch, 1998) is utilized for the estimation. We describe the operation by taking q(z) as an example. A naïve Monte Carlo approximation based on a minibatch of samples from p(n) (n is the data sample index) is likely to underestimate q(z). As stated by Chen et al. (2018), this can be intuitively seen by viewing q(z) as a mixture distribution where the data index n indicates the mixture component. With a randomly sampled component, q(z|n) is close to 0, whereas q(z|n) would be large if n is the component that z came from; so it is much better to sample this component and weight the probability appropriately. Thus, we use a weighted version for estimating the function log q(z) during training. When provided with a minibatch of samples {n_1, ..., n_M}, we can use the estimator
$$\mathbb{E}_{q(z)}[\log q(z)] \approx \frac{1}{M}\sum_{i=1}^{M}\Big[\log \frac{1}{MN}\sum_{j=1}^{M} q\big(z(n_i)\,\big|\,n_j\big)\Big], \quad (20)$$
where z(n_i) is a sample from q(z|n_i), N is the total number of samples in the dataset, and M is the number of samples in a minibatch." },
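As a sketch, the estimator in Eq. 20 might be implemented as follows (our illustration in PyTorch, assuming diagonal-Gaussian posteriors q(z|n) parameterized by the encoder; all names are ours):

```python
import math
import torch

def minibatch_log_qz(z, mu, logvar, dataset_size):
    """Weighted Monte Carlo estimate of E_q(z)[log q(z)] from a minibatch (Eq. 20).

    z: (M, d) latent samples; mu, logvar: (M, d) posterior parameters, so that
    z[i] ~ q(z | n_i) = N(mu[i], diag(exp(logvar[i]))). dataset_size is N.
    """
    M = z.shape[0]
    # log q(z[i] | n_j) for all pairs (i, j): shape (M, M)
    diff = z.unsqueeze(1) - mu.unsqueeze(0)                       # (M, M, d)
    log_q_pair = (-0.5 * (diff ** 2 / logvar.exp().unsqueeze(0)
                          + logvar.unsqueeze(0)
                          + math.log(2 * math.pi))).sum(-1)
    # log [ (1 / (M * N)) * sum_j q(z[i] | n_j) ], computed stably
    log_qz = torch.logsumexp(log_q_pair, dim=1) - math.log(M * dataset_size)
    return log_qz.mean()
```

The same estimator applies to q(w) and to each marginal q(w_i) by restricting to the relevant latent dimensions.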
{ "heading": "B.4 DETAILED DESCRIPTION OF AVGMI", "text": "To evaluate the performance of the disentangled representation learning of the inference model, we adopt the metric avgMI proposed by Locatello et al. (2019b). The goal of avgMI is to evaluate whether each latent variable w_k captures only the information of its relevant property y_k and has no correlation with the other properties. We utilize the MI matrix (i.e., the matrix of pairwise mutual information between w and y) to represent the overall disentanglement performance. The optimal MI matrix should be an identity matrix with diagonal entries all 1 and other entries all 0, where the mutual information between each w_k and y_k is 1 and the MI between w_k and any other property y_j is 0. The avgMI score is then calculated as the distance between the actual MI matrix and the optimal MI matrix. The smaller the avgMI, the better the performance.
Each entry of the mutual information matrix I(w, y), namely the mutual information MI(w_i, y_j), is calculated as: $\mathrm{MI}(w_i, y_j) = \sum_{w_i}\sum_{y_j} p(w_i, y_j) \log \frac{p(w_i, y_j)}{p(w_i)p(y_j)}$. Therefore, to empirically estimate p(w_i), p(y_j) and p(w_i, y_j), we need both w and y in the experiments. As we know, we have observations of x and y, and w is generated from x by the encoder." }, { "heading": "C ADDITIONAL EXPERIMENT RESULTS", "text": "" }, { "heading": "C.1 EVALUATION OF THE QUALITY OF GENERATION ON QM9", "text": "We evaluate the quality of the generated molecules on the QM9 dataset by three metrics: Novelty measures the fraction of generated molecules that are not in the training dataset; Uniqueness measures the fraction of generated molecules remaining after removing duplicates; Validity measures the fraction of generated molecules that are chemically valid. The results of the evaluation are shown in Table 7. As shown there, the proposed PCVAE still achieves 100% validity and 99.5% novelty, which is desirable for controllable generation. We also found that the proposed PCVAE can have an influence on the uniqueness of the generated data, which may be explained by the supervision in the model. However, considering that our focus is on data generation given a desired property, this is not a critical issue, whereas the validity and novelty remain very high." }, { "heading": "C.2 EVALUATION OF DISENTANGLED LATENT VARIABLES", "text": "Evaluation results on dSprites. We provide the qualitative evaluation of the comparison experiments when traversing the values of the latent variables in Fig. 6. As shown there, the latent variables w learned by the baseline model PCVAE_tc successfully capture the three properties, which validates the effectiveness of the proposed overall inference model. The latent variables w_2 and w_3 learned by CSVAE can capture the x pos and y pos properties, while w_1 fails to capture the scale property. Semi-VAE can capture the three properties, but the quality of the generated images is very poor and biased.
Evaluation results on 3Dshapes. We provide the qualitative evaluation on 3Dshapes when traversing the values of each latent variable in w in Fig. 6. As shown there, the latent variables w learned by the proposed PCVAE successfully capture the three properties, object scale, wall hue and floor hue, in the images, which validates the effectiveness of the proposed overall inference model.
Evaluation results on the QM9 dataset. We calculate the mutual information between each encoded latent variable w_k and the property y_k assigned to it, as well as the average mutual information between the latent z and each y_k, as shown in Figure 8 for the molecule QM9 dataset. For this difficult task in the molecule domain, which involves implicit properties, the proposed PCVAE still shows a significant advantage in capturing the Molweight and cLogP properties. We also qualitatively evaluate the relationship between each latent variable and its relevant property by visualizing the variation of the properties on the QM9 dataset when traversing the prior of each latent variable, as shown in Figure 9. The variables w_1 and w_2 successfully capture the properties Molweight and cLogP." }, { "heading": "C.3 EVALUATION FOR PRECISE PROPERTY CONTROL", "text": "Evaluation on the dSprites dataset.
For the dSprites dataset, since we have no ground-truth method to calculate the properties y′_k of an image, we directly visualize the images generated given different properties y_k for the three comparison models. As shown in Figure 10, each column shows four images generated given the same value of y_k but different values of the other properties. It can easily be observed that the objects in the generated images in the same column have the same value of y′_k. We provide the visualization for y pos in Fig. 10." }, { "heading": "C.4 EVALUATION ON THE NECESSITY OF PCVAE(COR) FOR DEALING WITH CORRELATED PROPERTIES", "text": "To validate the necessity of PCVAE (cor) for dealing with correlated properties, we evaluated the performance of PCVAE and the comparison models on correlated properties. As shown in Table 8, for the generation task, the proposed PCVAE (cor) achieved much smaller MSEs than those achieved by CSVAE and PCVAE, by around 72.9%, 52.5%, and 58.0% on average for the control of Molweight, cLogP, and cLogS, respectively. This validates that traditional disentanglement-based VAE models cannot handle controllable generation for correlated properties. For the prediction task, the proposed PCVAE (cor) also achieved much smaller MSEs than those achieved by CSVAE and PCVAE, by around 53.12%, 13.2%, and 22.1% on average for the prediction of Molweight, cLogP, and cLogS, respectively. The poor performance of PCVAE and CSVAE on correlated properties is caused by the conflict between enforcing independence among the latent variables w and enforcing the dependence relationship between w and the correlated properties y, which largely deteriorates the optimization of the whole model." } ]
2021
PROPERTY CONTROLLABLE VARIATIONAL AUTOENCODER VIA INVERTIBLE MUTUAL DEPENDENCE
SP:b2e0fd72f2a599a9cb288618decfcd709d712f38
[ "In the RL setting, this paper tackles the case where an agent may have access to large amounts of offline experience data. The objective of the work is to find an effective way to leverage this data for finding temporally extended primitive behaviors. The paper provides results that show how performing offline primitive learning can be leveraged for improving few-shot imitation learning as well as exploration and transfer on a variety of benchmark domains." ]
Reinforcement learning (RL) has achieved impressive performance in a variety of online settings in which an agent’s ability to query the environment for transitions and rewards is effectively unlimited. However, in many practical applications, the situation is reversed: an agent may have access to large amounts of undirected offline experience data, while access to the online environment is severely limited. In this work, we focus on this offline setting. Our main insight is that, when presented with offline data composed of a variety of behaviors, an effective way to leverage this data is to extract a continuous space of recurring and temporally extended primitive behaviors before using these primitives for downstream task learning. Primitives extracted in this way serve two purposes: they delineate the behaviors that are supported by the data from those that are not, making them useful for avoiding distributional shift in offline RL; and they provide a degree of temporal abstraction, which reduces the effective horizon, yielding better learning in theory and improved offline RL in practice. In addition to benefiting offline policy optimization, we show that performing offline primitive learning in this way can also be leveraged for improving few-shot imitation learning as well as exploration and transfer in online RL on a variety of benchmark domains. Visualizations and code are available at https://sites.google.com/view/opal-iclr
[ { "affiliations": [], "name": "Anurag Ajay" }, { "affiliations": [], "name": "Aviral Kumar" }, { "affiliations": [], "name": "Pulkit Agrawal" }, { "affiliations": [], "name": "Sergey Levine" }, { "affiliations": [], "name": "Ofir Nachum" } ]
[ { "authors": [ "Joshua Achiam", "David Held", "Aviv Tamar", "Pieter Abbeel" ], "title": "Constrained policy optimization", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Christopher G Atkeson", "Benzun P Wisely Babu", "Nandan Banerjee", "Dmitry Berenson", "Christoper P Bove", "Xiongyi Cui", "Mathew DeDonato", "Ruixiang Du", "Siyuan Feng", "Perry Franklin" ], "title": "No falls, no resets: Reliable humanoid behavior in the darpa robotics challenge", "venue": "In 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids),", "year": 2015 }, { "authors": [ "Pierre-Luc Bacon", "Jean Harb", "Doina Precup" ], "title": "The option-critic architecture", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Noam Brown", "Tuomas Sandholm" ], "title": "Superhuman ai for multiplayer poker", "venue": "doi: 10.1126/science.aay2400. URL https://science", "year": 2019 }, { "authors": [ "Sudeep Dasari", "Frederik Ebert", "Stephen Tian", "Suraj Nair", "Bernadette Bucher", "Karl Schmeckpeper", "Siddharth Singh", "Sergey Levine", "Chelsea Finn" ], "title": "Robonet: Large-scale multi-robot learning", "venue": null, "year": 1910 }, { "authors": [ "Gabriel Dulac-Arnold", "Daniel Mankowitz", "Todd Hester" ], "title": "Challenges of real-world reinforcement learning", "venue": "arXiv preprint arXiv:1904.12901,", "year": 2019 }, { "authors": [ "Benjamin Eysenbach", "Abhishek Gupta", "Julian Ibarz", "Sergey Levine" ], "title": "Diversity is all you need: Learning skills without a reward function", "venue": "arXiv preprint arXiv:1802.06070,", "year": 2018 }, { "authors": [ "J. Fu", "A. Kumar", "O. Nachum", "G. Tucker", "S. Levine" ], "title": "D4rl: Datasets for deep data-driven reinforcement learning", "venue": "In arXiv,", "year": 2020 }, { "authors": [ "Scott Fujimoto", "David Meger", "Doina Precup" ], "title": "Off-policy deep reinforcement learning without exploration", "venue": "arXiv preprint arXiv:1812.02900,", "year": 2018 }, { "authors": [ "Seyed Kamyar Seyed Ghasemipour", "Dale Schuurmans", "Shixiang Shane Gu" ], "title": "Emaq: Expectedmax q-learning operator for simple yet effective offline and online rl", "venue": "arXiv preprint arXiv:2007.11091,", "year": 2020 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Offpolicy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "arXiv preprint arXiv:1801.01290,", "year": 2018 }, { "authors": [ "K. Hausman", "J.T. Springenberg", "Z. Wang", "N. Heess", "M. 
Riedmiller" ], "title": "Learning an embedding space for transferable robot skills", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": null, "year": 2016 }, { "authors": [ "Allan Jabri", "Kyle Hsu", "Abhishek Gupta", "Ben Eysenbach", "Sergey Levine", "Chelsea Finn" ], "title": "Unsupervised curricula for visual meta-reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Natasha Jaques", "Asma Ghandeharioun", "Judy Hanwen Shen", "Craig Ferguson", "Agata Lapedriza", "Noah Jones", "Shixiang Gu", "Rosalind Picard" ], "title": "Way off-policy batch deep reinforcement learning of implicit human preferences in dialog", "venue": "arXiv preprint arXiv:1907.00456,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Sanjay Krishnan", "Roy Fox", "Ion Stoica", "Ken Goldberg" ], "title": "Ddco: Discovery of deep continuous options for robot learning from demonstrations", "venue": "arXiv preprint arXiv:1710.05421,", "year": 2017 }, { "authors": [ "Ashish Kumar", "Saurabh Gupta", "Jitendra Malik" ], "title": "Learning navigation subroutines from egocentric videos", "venue": "In Conference on Robot Learning,", "year": 2020 }, { "authors": [ "Aviral Kumar", "Justin Fu", "Matthew Soh", "George Tucker", "Sergey Levine" ], "title": "Stabilizing off-policy qlearning via bootstrapping error reduction", "venue": "In Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Aviral Kumar", "Aurick Zhou", "George Tucker", "Sergey Levine" ], "title": "Conservative q-learning for offline reinforcement learning", "venue": "arXiv preprint arXiv:2006.04779,", "year": 2020 }, { "authors": [ "Sergey Levine", "Chelsea Finn", "Trevor Darrell", "Pieter Abbeel" ], "title": "End-to-end training of deep visuomotor policies", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Sergey Levine", "Aviral Kumar", "George Tucker", "Justin Fu" ], "title": "Offline reinforcement learning: Tutorial, review, and perspectives on open problems", "venue": "arXiv preprint arXiv:2005.01643,", "year": 2020 }, { "authors": [ "Corey Lynch", "Mohi Khansari", "Ted Xiao", "Vikash Kumar", "Jonathan Tompson", "Sergey Levine", "Pierre Sermanet" ], "title": "Learning latent plans from play", "venue": "In Conference on Robot Learning,", "year": 2020 }, { "authors": [ "Tatsuya Matsushima", "Hiroki Furuta", "Yutaka Matsuo", "Ofir Nachum", "Shixiang Gu" ], "title": "Deploymentefficient reinforcement learning via model-based offline optimization", "venue": "arXiv preprint arXiv:2006.03647,", "year": 2020 }, { "authors": [ "Josh Merel", "Leonard Hasenclever", "Alexandre Galashov", "Arun Ahuja", "Vu Pham", "Greg Wayne", "Yee Whye Teh", "Nicolas Heess" ], "title": "Neural probabilistic motor primitives for humanoid control", "venue": "arXiv preprint arXiv:1811.11711,", "year": 2018 }, { "authors": [ "Ofir Nachum", "Shixiang Gu", "Honglak Lee", "Sergey Levine" ], "title": "Near-optimal representation learning for hierarchical reinforcement learning", "venue": "arXiv preprint 
arXiv:1810.01257,", "year": 2018 }, { "authors": [ "Ofir Nachum", "Shixiang Shane Gu", "Honglak Lee", "Sergey Levine" ], "title": "Data-efficient hierarchical reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ofir Nachum", "Michael Ahn", "Hugo Ponte", "Shixiang Gu", "Vikash Kumar" ], "title": "Multi-agent manipulation via locomotion using hierarchical sim2real", "venue": "arXiv preprint arXiv:1908.05224,", "year": 2019 }, { "authors": [ "Ofir Nachum", "Haoran Tang", "Xingyu Lu", "Shixiang Gu", "Honglak Lee", "Sergey Levine" ], "title": "Why does hierarchy (sometimes) work so well in reinforcement learning", "venue": "arXiv preprint arXiv:1909.10618,", "year": 2019 }, { "authors": [ "Paavo Parmas", "Carl Edward Rasmussen", "Jan Peters", "Kenji Doya" ], "title": "Pipps: Flexible model-based policy search robust to the curse of chaos", "venue": null, "year": 1902 }, { "authors": [ "Xue Bin Peng", "Michael Chang", "Grace Zhang", "Pieter Abbeel", "Sergey Levine" ], "title": "Mcp: Learning composable hierarchical control with multiplicative compositional policies", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Jan Peters", "Katharina Mulling", "Yasemin Altun" ], "title": "Relative entropy policy search", "venue": "In Twenty-Fourth AAAI Conference on Artificial Intelligence,", "year": 2010 }, { "authors": [ "Lerrel Pinto", "Abhinav Gupta" ], "title": "Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours", "venue": "In 2016 IEEE international conference on robotics and automation (ICRA),", "year": 2016 }, { "authors": [ "Martin L Puterman" ], "title": "Markov decision processes: Discrete stochastic dynamic programming", "venue": null, "year": 1994 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Tanmay Shankar", "Abhinav Gupta" ], "title": "Learning robot skills with temporal variational inference", "venue": "arXiv preprint arXiv:2006.16232,", "year": 2020 }, { "authors": [ "Archit Sharma", "Shixiang Gu", "Sergey Levine", "Vikash Kumar", "Karol Hausman" ], "title": "Dynamics-aware unsupervised discovery of skills", "venue": "arXiv preprint arXiv:1907.01657,", "year": 2019 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. nature,", "year": 2016 }, { "authors": [ "Martin Stolle", "Doina Precup" ], "title": "Learning options in reinforcement learning. volume 2371", "venue": "pp. 212–223,", "year": 2002 }, { "authors": [ "G. Tesauro" ], "title": "Temporal difference learning and td-gammon", "venue": "J. Int. Comput. 
Games Assoc.,", "year": 1995 }, { "authors": [ "Hado Van Hasselt", "Arthur Guez", "David Silver" ], "title": "Deep reinforcement learning with double q-learning", "venue": "arXiv preprint arXiv:1509.06461,", "year": 2015 }, { "authors": [ "Alexander Sasha Vezhnevets", "Simon Osindero", "Tom Schaul", "Nicolas Heess", "Max Jaderberg", "David Silver", "Koray Kavukcuoglu" ], "title": "Feudal networks for hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1703.01161,", "year": 2017 }, { "authors": [ "Ziyu Wang", "Josh S Merel", "Scott E Reed", "Nando de Freitas", "Gregory Wayne", "Nicolas Heess" ], "title": "Robust imitation of diverse behaviors", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yifan Wu", "George Tucker", "Ofir Nachum" ], "title": "Behavior regularized offline reinforcement learning", "venue": "arXiv preprint arXiv:1911.11361,", "year": 2019 }, { "authors": [ "Tianhe Yu", "Deirdre Quillen", "Zhanpeng He", "Ryan Julian", "Karol Hausman", "Chelsea Finn", "Sergey Levine" ], "title": "Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning", "venue": "In Conference on Robot Learning,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement Learning (RL) systems have achieved impressive performance in a variety of online settings such as games (Silver et al., 2016; Tesauro, 1995; Brown & Sandholm, 2019) and robotics (Levine et al., 2016; Dasari et al., 2019; Peters et al., 2010; Parmas et al., 2019; Pinto & Gupta, 2016; Nachum et al., 2019a), where the agent can act in the environment and sample as many transitions and rewards as needed. However, in many practical applications the agent’s ability to continuously act in the environment may be severely limited due to practical concerns (DulacArnold et al., 2019). For example, a robot learning through trial and error in the real world requires costly human supervision, safety checks, and resets (Atkeson et al., 2015), rendering many standard online RL algorithms inapplicable (Matsushima et al., 2020). However, in such settings we might instead have access to large amounts of previously logged data, which could be logged from a baseline hand-engineered policy or even from other related tasks. For example, in self-driving applications, one may have access to large amounts of human driving behavior; in robotic applications, one might have data of either humans or robots performing similar tasks. While these offline datasets are often undirected (generic human driving data on various routes in various cities may not be directly relevant to navigation of a specific route within a specific city) and unlabelled (generic human driving data is often not labelled with the human’s intended route or destination), this data is still useful in that it can inform the algorithm about what is possible to do in the real world, without the need for active exploration.\nIn this paper, we study how, in this offline setting, an effective strategy to leveraging unlabeled and undirected past data is to utilize unsupervised learning to extract potentially useful and temporally extended primitive skills to learn what types of behaviors are possible. For example, consider a dataset of an agent performing undirected navigation in a maze environment (Figure 1). While the dataset does not provide demonstrations of exclusively one specific point-to-point navigation task,\n∗Work done during an internship at Google Brain\nit nevertheless presents clear indications of which temporally extended behaviors are useful and natural in this environment (e.g., moving forward, left, right, and backward), and our unsupervised learning objective aims to distill these behaviors into temporally extended primitives. Once these locomotive primitive behaviors are extracted, we can use them as a compact constrained temporallyextended action space for learning a task policy with offline RL, which only needs to focus on task relevant navigation, thereby making task learning easier. For example, once a specific point-to-point navigation is commanded, the agent can leverage the learned primitives for locomotion and only focus on the task of navigation, as opposed to learning locomotion and navigation from scratch.\nWe refer to our proposed unsupervised learning method as Offline Primitives for Accelerating offline reinforcement Learning (OPAL), and apply this basic paradigm to offline RL, where the agent is given a single offline dataset to use for both the initial unsupervised learning phase and then a subsequent task-directed offline policy optimization phase. 
Despite the fact that no additional data is used, we find that our proposed unsupervised learning technique can dramatically improve offline policy optimization compared to performing offline policy optimization on the raw dataset directly. To the best of our knowledge, ours is the first work to theoretically justify and experimentally verify the benefits of primitive learning in offline RL settings, showing that hierarchies can provide temporal abstraction that allows us to reduce the compounding of errors in offline RL. These theoretical and empirical results are notably in contrast to previous related work in online hierarchical RL (Nachum et al., 2019b), which found that improved exploration is the main benefit afforded by hierarchically learned primitives. We instead show significant benefits in the offline RL setting, where exploration is irrelevant.
Beyond offline RL, and although this is not the main focus of this work, we also show the applicability of our method to accelerating RL by incorporating OPAL as a preprocessing step for standard online RL, few-shot imitation learning, and multi-task transfer learning. In all settings, we demonstrate that the use of OPAL can improve the speed and quality of downstream task learning." }, { "heading": "2 RELATED WORK", "text": "Offline RL. Offline RL presents the problem of learning a policy from a fixed prior dataset of transitions and rewards. Recent works in offline RL (Kumar et al., 2019; Levine et al., 2020; Wu et al., 2019; Ghasemipour et al., 2020; Jaques et al., 2019; Fujimoto et al., 2018) constrain the policy to be close to the data distribution to avoid the use of out-of-distribution actions (Kumar et al., 2019; Levine et al., 2020). To constrain the policy, some methods use distributional penalties, as measured by KL divergence (Levine et al., 2020; Jaques et al., 2019), MMD (Kumar et al., 2019), or Wasserstein distance (Wu et al., 2019). Other methods first sample actions from the behavior policy and then either clip the maximum deviation from those actions (Fujimoto et al., 2018) or just use those actions (Ghasemipour et al., 2020) during the value backup to stay within the support of the offline data. In contrast to these works, OPAL uses an offline dataset for unsupervised learning of a continuous space of primitives. The use of these primitives for downstream tasks implicitly constrains a learned primitive-directing policy to stay close to the offline data distribution. As we demonstrate in our experiments, the use of OPAL in conjunction with an off-the-shelf offline RL algorithm in this way can yield significant improvement compared to applying offline RL to the dataset directly.
Online skill discovery. There are a number of recent works (Eysenbach et al., 2018; Nachum et al., 2018a; Sharma et al., 2019) which use unsupervised objectives to discover skills and use the discovered skills for planning (Sharma et al., 2019), few-shot imitation learning, or online RL (Eysenbach et al., 2018; Nachum et al., 2018a). However, these works focus on online settings and assume access to the environment. In contrast, OPAL focuses on settings where a large dataset of diverse behaviors is provided but access to the environment is restricted. It leverages these static offline datasets to discover primitive skills with better state coverage and avoids the exploration issue of learning primitives from scratch.
Hierarchical policy learning. 
Hierarchical policy learning involves learning a hierarchy of policies where a low-level policy acts as a set of primitive skills and a high-level policy directs the low-level policy to solve a task. While some works (Bacon et al., 2017; Stolle & Precup, 2002; Peng et al., 2019) learn a discrete set of lower-level policies, each behaving as a primitive skill, other works (Vezhnevets et al., 2017; Nachum et al., 2018b; 2019a; Hausman et al., 2018) learn a continuous space of primitive skills representing the lower-level policy. These methods have mostly been applied in online settings. However, there are some recent variants of the above works (Lynch et al., 2020; Shankar & Gupta, 2020; Krishnan et al., 2017; Merel et al., 2018) which extract skills from a prior dataset and use them either for performing tasks directly (Lynch et al., 2020) or for learning downstream tasks (Shankar & Gupta, 2020; Krishnan et al., 2017; Merel et al., 2018) with online RL. While OPAL is related to these works, we mainly focus on leveraging the learned primitives to asymptotically improve the performance of offline RL; i.e., both the primitive learning and the downstream task must be solved using a single static dataset. Furthermore, we provide performance bounds for OPAL and enumerate the specific properties an offline dataset should possess to guarantee improved downstream task learning, while such theoretical guarantees are largely absent from existing work." }, { "heading": "3 PRELIMINARIES", "text": "We consider the standard Markov decision process (MDP) setting (Puterman, 1994), specified by a tuple M = (S, A, P, µ, r, γ), where S represents the state space, A represents the action space, P(s′|s, a) represents the transition probability, µ(s) represents the initial state distribution, r(s, a) ∈ (−R_max, R_max) represents the reward function, and γ ∈ (0, 1) represents the discount factor. A policy π in this MDP corresponds to a function S → ∆(A), where ∆(A) is the simplex over A. It induces a discounted future state distribution d^π, defined by d^π(s) = (1 − γ) Σ_{t=0}^∞ γ^t P(s_t = s | π), where P(s_t = s | π) is the probability of reaching the state s at time t by running π on M. For a positive integer k, we use d^π_k(s) = (1 − γ^k) Σ_{t=0}^∞ γ^{tk} P(s_{tk} = s | π) to denote the every-k-step state distribution of π. The return of policy π in MDP M is defined as J_RL(π, M) = (1/(1 − γ)) E_{s∼d^π, a∼π(a|s)}[r(s, a)]. We represent the reward- and discount-agnostic environment as a tuple E = (S, A, P, µ). We aim to use a large, unlabeled, and undirected experience dataset D := {τ_i := (s_t, a_t)_{t=0}^{c−1}}_{i=1}^N associated with E to extract primitives and improve offline RL for downstream task learning. To account for the fact that the dataset D may be generated by a mixture of diverse policies starting at diverse initial states, we assume D is generated by first sampling a behavior policy π ∼ Π along with an initial state s ∼ κ, where Π and κ represent some (unknown) distributions over policies and states, respectively, and then running π on E for c time steps starting at s_0 = s. We define the probability of a sub-trajectory τ := (s_0, a_0, . . . , s_{c−1}, a_{c−1}) in D under a policy π as π(τ) = κ(s_0) Π_{t=1}^{c−1} P(s_t | s_{t−1}, a_{t−1}) Π_{t=0}^{c−1} π(a_t | s_t), and the conditional probability as π(τ | s) = 1[s = s_0] Π_{t=1}^{c−1} P(s_t | s_{t−1}, a_{t−1}) Π_{t=0}^{c−1} π(a_t | s_t). In this work, we will show how to apply unsupervised learning techniques to D to extract a continuous space of primitives π_θ(a|s, z), where z ∈ Z, the latent space inferred by unsupervised learning. 
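To make the construction of D concrete, the following is a minimal sketch of how logged episodes might be chopped into length-c sub-trajectories. This is an illustrative assumption on our part (for instance, the paper does not prescribe whether windows overlap), and all names are hypothetical:

```python
import numpy as np

def make_subtrajectory_dataset(episodes, c=10):
    """Chop logged episodes into length-c sub-trajectories tau = (s_t, a_t)_{t=0}^{c-1}.

    episodes: list of (states, actions) arrays of shape (T, state_dim), (T, action_dim)
    returns: list of ((c, state_dim), (c, action_dim)) pairs forming the dataset D
    """
    dataset = []
    for states, actions in episodes:
        T = len(actions)
        for start in range(0, T - c + 1, c):   # non-overlapping windows, one choice
            dataset.append((states[start:start + c], actions[start:start + c]))
    return dataset
```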
We intend to use the learned π_θ(a|s, z) to asymptotically improve the performance of offline RL for downstream task learning. For offline RL, we assume the existence of a dataset D^r := {τ^r_i := (s_t, a_t, r_t)_{t=0}^{c−1}}_{i=1}^N, corresponding to the same sub-trajectories in D labelled with MDP rewards. Additionally, we can use the extracted primitives for other applications such as few-shot imitation learning, online RL, and online multi-task transfer learning. We review the additional assumptions for these applications in Appendix A." }, { "heading": "4 OFFLINE RL WITH OPAL", "text": "In this section, we elaborate on OPAL, our proposed method for extracting primitives from D and then leveraging these primitives to learn downstream tasks with offline RL. We begin by describing our unsupervised objective, which distills D into a continuous space of latent-conditioned and temporally-extended primitive policies π_θ(a|s, z). For learning downstream tasks with offline RL, we first label D^r with appropriate latents using the OPAL encoder q_φ(z|τ) and then learn a policy π_ψ(z|s) which is trained to sample an appropriate primitive every c steps to optimize a specific task, using any off-the-shelf offline RL algorithm. A graphical overview of offline RL with OPAL is shown in Figure 2. While we mainly focus on offline RL, we briefly discuss how to use the learned primitives for few-shot imitation learning, online RL, and multi-task online transfer learning in Section 5 and provide more details in Appendix A." }, { "heading": "4.1 EXTRACTING TEMPORALLY-EXTENDED PRIMITIVES FROM DATA", "text": "We would like to extract a continuous space of temporally-extended primitives π_θ(a|s, z) from D which we can later use as an action space for learning downstream tasks with offline RL. This reduces the effective task horizon, thereby making the downstream learning easier, and also allows the downstream policy to stay close to the offline data distribution, thereby bringing stability to the downstream learning. We propose the following objective for learning π_θ, incorporating an auto-encoding loss function with a KL constraint to encourage better generalization:

min_{θ,φ,ω} J(θ, φ, ω) = Ê_{τ∼D, z∼q_φ(z|τ)} [ −Σ_{t=0}^{c−1} log π_θ(a_t | s_t, z) ] (1)

s.t. Ê_{τ∼D} [ D_KL(q_φ(z|τ) || ρ_ω(z|s_0)) ] ≤ ε_KL, (2)

where Ê indicates empirical expectation. The learned components of this objective may be interpreted as encoder, decoder, and prior:

Encoder: q_φ(z|τ) encodes the trajectory τ of state-action pairs into a distribution in latent space and outputs the parameters of that distribution. In our case, we represent q_φ with a bidirectional GRU which takes in τ and outputs the parameters of a Gaussian distribution (µ_z^enc, σ_z^enc).

Decoder (aka Primitive Policy): π_θ(a|s, z) is the latent-conditioned policy. It maximizes the conditional log-likelihood of the actions in τ given the state and the latent vector. In our implementation, we parameterize it as a feed-forward neural network which takes in the current state and the latent vector and outputs the parameters of a Gaussian distribution over the action (µ_a, σ_a).

Prior/Primitive Predictor: ρ_ω(z|s_0) tries to predict the encoded distribution of the sub-trajectory τ from its initial state. Our implementation uses a feed-forward neural network which takes in the initial state and outputs the parameters of a Gaussian distribution (µ_z^pr, σ_z^pr).

KL-constraint (Equation 2). 
As an additional component of the algorithm, we enforce consistency in the latent variables predicted by the encoder q_φ(z|τ) and the prior ρ_ω(z|s_0). Since our goal is to obtain a primitive z that captures a temporal sequence of actions for a given sub-trajectory τ = (s_0, a_0, · · · , s_{c−1}, a_{c−1}) (as defined in Section 3), we utilize a regularization that enforces the distribution q_φ(z|τ) to be close to just predicting the primitive or latent variable z given the start state s_0 of this sub-trajectory (i.e., ρ_ω(z|s_0)). This conditioning on the initial state regularizes the distribution q_φ(z|τ) to not overfit to the complete sub-trajectory τ, as the same z should also be predictable given only s_0. The above form of KL constraint is inspired by past works (Lynch et al., 2020; Kumar et al., 2020a). In particular, Lynch et al. (2020) add a KL constraint (Equation 2, “Plan prior matching” in Lynch et al. (2020)) that constrains the distribution over latent variables computed only given the initial state and the goal state to the distribution over latent variables computed using the entire trajectory. Our form in Equation 2 is similar to this prior, except that we do not operate in a goal-conditioned RL setting and hence only condition ρ_ω on the initial state s_0.

In practice, rather than solving the constrained optimization directly, we implement the KL constraint as a penalty, weighted by an appropriately chosen coefficient β. Thus, one may interpret our unsupervised objective as using a sequential β-VAE (Higgins et al., 2016). However, as mentioned above, our prior is conditioned on s_0 and learned as part of the optimization, because the set of primitives active in D depends on s_0. If β = 1, OPAL is equivalent to a conditional VAE maximizing the log-probability of τ conditioned on its initial state s_0; see Appendix D for more details. Despite the similarities between our proposed objective and VAEs, our presentation of OPAL as a constrained auto-encoding objective is deliberate. As we will show in Section 4.3, our theoretical guarantees depend on a well-optimized auto-encoding loss to provide benefits of using the learned primitives π_θ for downstream tasks. In contrast, a VAE loss, which simply maximizes the likelihood of observed data, may not necessarily provide a benefit for downstream tasks. For example, if the data can be generated by a single stationary policy, a VAE-optimal policy π_θ can simply ignore the latent z, thus producing a degenerate space of primitives. In contrast, when the KL constraint in our objective is weak (i.e., ε_KL ≫ 0 or β < 1), the auto-encoding loss is encouraged to find a unique z for each distinct τ to optimize the reconstruction loss." }, { "heading": "4.2 OFFLINE RL WITH PRIMITIVES FOR DOWNSTREAM TASKS", "text": "After distilling the learned primitives from D in terms of an encoder q_φ(z|τ), a latent primitive policy (or decoder) π_θ(a|s, z), and a prior ρ_ω(z|s_0), OPAL then applies these learned models to improve offline RL for downstream tasks.

As shown in Figure 2, our goal is to use a dataset of reward-labeled sub-trajectories D^r = {τ_i := (s^i_t, a^i_t, r^i_t)_{t=0}^{c−1}}_{i=1}^N to learn a behavior policy π that maximizes cumulative reward. With OPAL, we use the learned primitives π_θ(a|s, z) as low-level controllers, and then learn a high-level controller π_ψ(z|s). To do so, we relabel the dataset D^r in terms of temporally extended transitions using the learned encoder q_φ(z|τ). Specifically, we create a dataset D^r_hi = {(s^i_0, z_i, Σ_{t=0}^{c−1} γ^t r^i_t, s^i_c)}_{i=1}^N, where z_i ∼ q_φ(·|τ_i); a code sketch of the objective and this relabeling step is given below. 
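The following is a minimal PyTorch-style sketch of the training objective (Equations 1–2, with the KL constraint implemented as a β-weighted penalty) and of the relabeling step. The encoder, decoder, and prior are assumed to be callables returning torch Normal distributions; all names are illustrative rather than the authors' released implementation:

```python
import torch

def opal_loss(encoder, decoder, prior, s, a, beta=0.1):
    """OPAL objective: autoencoding NLL (Eq. 1) + beta * KL penalty (Eq. 2).

    s: (B, c, s_dim), a: (B, c, a_dim) minibatch of sub-trajectories.
    encoder(s, a) -> Normal over z      (q_phi(z | tau))
    decoder(s, z) -> Normal over a      (pi_theta(a | s, z))
    prior(s0)     -> Normal over z      (rho_omega(z | s0))
    """
    q = encoder(s, a)
    z = q.rsample()                                   # reparameterized sample
    z_rep = z.unsqueeze(1).expand(-1, s.shape[1], -1) # broadcast z over the c steps
    nll = -decoder(s, z_rep).log_prob(a).sum(dim=(1, 2)).mean()       # Eq. (1)
    kl = torch.distributions.kl_divergence(q, prior(s[:, 0])).sum(-1).mean()
    return nll + beta * kl                            # penalty form of Eq. (2)

def relabel(encoder, subtrajs, gamma=0.99):
    """Build D^r_hi = {(s_0, z, sum_t gamma^t r_t, s_c)} from D^r."""
    d_hi = []
    for s, a, r, s_c in subtrajs:          # s:(c,·), a:(c,·), r:(c,), s_c: state after c steps
        z = encoder(s.unsqueeze(0), a.unsqueeze(0)).sample()[0]
        ret = sum(gamma ** t * float(r[t]) for t in range(len(r)))
        d_hi.append((s[0], z, ret, s_c))
    return d_hi
```

Note that the fine-tuning loss of Equation (3) below reuses the same decoder negative log-likelihood, with z fixed to the stored label instead of being re-sampled.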
Given D^r_hi, any off-the-shelf offline RL algorithm can be used to learn π_ψ(z|s) (in our experiments we use CQL (Kumar et al., 2020b)). As a way to ensure that the c-step transitions τ_i := (s^i_t, a^i_t, r^i_t)_{t=0}^{c−1} remain consistent with the labelled latent action z_i, we finetune π_θ(a|s, z) on D^r_lo = {((s^i_t, a^i_t)_{t=0}^{c−1}, z_i)}_{i=1}^N with a simple latent-conditioned behavioral cloning loss:

min_θ Ê_{(τ,z)∼D^r_lo} [ −Σ_{t=0}^{c−1} log π_θ(a_t | s_t, z) ]. (3)" }, { "heading": "4.3 SUBOPTIMALITY AND PERFORMANCE BOUNDS FOR OPAL", "text": "Now, we will analyze OPAL and derive performance bounds for it in the context of offline RL, formally examining the benefit of the temporal abstraction afforded by OPAL as well as studying what properties D should possess so that OPAL can improve downstream task performance. As explained above, when applying OPAL to offline RL, we first learn the primitives π_θ(a|s, z) using D, and then learn a high-level task policy π_ψ(z|s) in the space of the primitives. Let π_ψ*(z|s) be the optimal task policy. Thus the low-level π_θ and high-level π_ψ* together comprise a hierarchical policy, which we denote as π_θ,ψ*. To quantify the performance of policies obtained from OPAL, we define the notion of suboptimality of the learned primitives π_θ(a|s, z) in an MDP M with an associated optimal policy π* as

SubOpt(θ) := |J_RL(π*, M) − J_RL(π_θ,ψ*, M)|. (4)

To relate SubOpt(θ) to some notion of divergence between π* and π_θ,ψ*, we introduce the following performance difference lemma.

Lemma 4.0.1. If π_1 and π_2 are two policies in M, then

|J_RL(π_1, M) − J_RL(π_2, M)| ≤ (2 R_max / ((1 − γ^c)(1 − γ))) E_{s∼d^{π_1}_c} [ D_TV(π_1(τ|s) || π_2(τ|s)) ], (5)

where D_TV(π_1(τ|s) || π_2(τ|s)) denotes the TV divergence over c-length sub-trajectories τ sampled from π_1 vs. π_2 (see Section 3). Furthermore,

SubOpt(θ) ≤ (2 R_max / ((1 − γ^c)(1 − γ))) E_{s∼d^{π*}_c} [ D_TV(π*(τ|s) || π_θ,ψ*(τ|s)) ]. (6)

The proof of the above lemma and all the following results are provided in Appendix B.1.

Through the above lemma, we showed that the suboptimality of the learned primitives can be bounded by the total variation divergence between the optimal policy π* in M and the optimal policy acting through the learned primitives π_θ,ψ*. We now continue to bound the divergence between π* and π_θ,ψ* in terms of how representative D is of π* and how optimal the primitives π_θ are with respect to the auto-encoding objective (Equation 1). We begin with a definition of how often an arbitrary policy appears in Π, the distribution generating D:

Definition 1. We say a policy π in M is ζ-common in Π if E_{π′∼Π, s∼κ}[ D_TV(π(τ|s) || π′(τ|s)) ] ≤ ζ.

Theorem 4.1. Let θ, φ, ω be the outputs of solving Equation 1, such that J(θ, φ, ω) = ε_c. Then, with high probability 1 − δ, for any π that is ζ-common in Π, there exists a distribution H over z such that, for π^H_θ(τ|s) := E_{z∼H}[π_θ(τ|z, s)],

E_{s∼κ}[ D_TV(π(τ|s) || π^H_θ(τ|s)) ] ≤ ζ + sqrt( (1/2)( ε_c + sqrt(S_J/δ) + H_c ) ), (7)

where H_c = E_{π∼Π, τ∼π, s_0∼κ}[ Σ_{t=0}^{c−1} log π(a_t|s_t) ] (i.e., a constant and property of D) and S_J is a positive constant incurred due to sampling error in J(θ, φ, ω), depending on concentration properties of π_θ(a|s, z) and q_φ(z|τ).

Corollary 4.1.1. If the optimal policy π* of M is ζ-common in Π, and ||d^{π*}_c / κ||_∞ ≤ ξ, then, with high probability 1 − δ,

SubOpt(θ) ≤ (2ξ R_max / ((1 − γ^c)(1 − γ))) [ ζ + sqrt( (1/2)( ε_c + sqrt(S_J/δ) + H_c ) ) ]. (8)

As we can see, SubOpt(θ) will reduce as D gets closer to π* (i.e., ζ approaches 0) and as better primitives are learned (i.e., ε_c decreases). While it might be tempting to increase c (i.e., 
the length of the sub-trajectories) to reduce the suboptimality, a larger c will inevitably make it harder in practice to control the auto-encoding loss ε_c, thereby leading to an increase in overall suboptimality and inducing a trade-off in determining the best value of c. In our experiments we treat c as a hyperparameter and set it to c = 10, although more sophisticated ways to determine c are an interesting avenue for future work.

So far, we have argued that there exists some near-optimal task policy π_ψ* if θ is sufficiently well learned and π* is sufficiently well represented in D. Now, we will show how primitive learning can improve downstream learning, by considering the benefits of using OPAL with offline RL. Building on the policy performance analysis from Kumar et al. (2020b), we now present theoretical results bounding the performance of the policy obtained when offline RL is performed with OPAL.

Theorem 4.2. Let π_ψ*(z|s) be the policy obtained by CQL and let π_ψ*,θ(a|s) refer to the policy when π_ψ*(z|s) is used together with π_θ(a|s, z). Let π_β ≡ {π ; π ∼ Π} refer to the policy generating D^r in MDP M, and let z ∼ π^H_β(z|s) be defined by sampling τ ∼ π_β with s_0 = s and then z ∼ q_φ(z|τ). Then, J(π_ψ*,θ, M) ≥ J(π_β, M) − κ with high probability 1 − δ, where

κ = O( (1/((1 − γ^c)(1 − γ))) E_{s∼d^{π_ψ*,θ}_{M̂^H}(s)} [ sqrt( |Z| (D_CQL(π_ψ*, π^H_β)(s) + 1) ) ] ) (9)

− (α/(1 − γ^c)) E_{s∼d^{π_ψ*}_{M^H}(s)} [ D_CQL(π_ψ*, π^H_β)(s) ], (10)

where D_CQL is a measure of the divergence between two policies; see the appendix for a formal statement.

The precise bound, along with a proof, is described in Appendix B.1. Intuitively, this bound suggests that the worst-case deterioration of the learned policy depends on the divergence between the learned latent-space policy and the actual primitive distribution, as measured by D_CQL, which is controlled via any conservative offline RL algorithm (Kumar et al. (2020b) in our experiments), and on the size of the latent space |Z|. Crucially, note that comparing Equation 9 to the performance bound for CQL (Equation 6 in Kumar et al. (2020b)) reveals several benefits pertaining to (1) temporal abstraction – a reduction in the effective horizon by virtue of γ^c – and (2) a reduction in the amount of worst-case error propagation due to a reduced action space |Z| vs. |A|. As a concrete illustration of the first point, with γ = 0.99 and c = 10, the horizon factor 1/((1 − γ^c)(1 − γ)) ≈ 1046, roughly an order of magnitude smaller than the corresponding factor 1/(1 − γ)^2 = 10000 for atomic actions. Thus, as evident from the above bound, the total error induced by a combination of distributional shift and sampling is significantly reduced when OPAL is used, compared to the standard RL counterpart of this bound, which is affected by the size of the entire action space at each and every timestep of the horizon. This formalizes our intuition that OPAL helps to partly mitigate distributional shift and sampling error. One downside of using a latent-space policy is that we incur unsupervised learning error while learning the primitives. However, empirically, this unsupervised learning error is dominated by the other error terms pertaining to offline RL; that is, it is much easier to control the unsupervised loss than the errors arising in offline RL." }, { "heading": "5 EVALUATION", "text": "In this section, we will empirically show that OPAL improves learning of downstream tasks with offline RL, and then briefly show the same for few-shot imitation learning, online RL, and online multi-task transfer learning. Unless otherwise stated, we use c = 10 and dim(Z) = 8. See Appendix C for further implementation and experimental details. 
Visualizations and code are available at https://sites.google.com/view/opal-iclr" }, { "heading": "5.1 OFFLINE RL WITH OPAL", "text": "Description: We use environments and datasets provided in D4RL (Fu et al., 2020). Since the aim of our method is specifically to perform offline RL in settings where the offline data comprises varied and undirected multi-task behavior, we focus on Antmaze medium (diverse dataset), Antmaze large (diverse dataset), and Franka kitchen (mixed and partial datasets). The Antmaze datasets involve a simulated ant robot performing undirected navigation in a maze. The task is to use this undirected dataset to solve a specific point-to-point navigation problem, traversing the maze from one corner to the opposite corner, with only a sparse 0-1 completion reward for reaching the goal. The kitchen datasets involve a Franka robot manipulating multiple objects (microwave, kettle, etc.) either in an undirected manner (mixed dataset) or in a partially task-directed manner (partial dataset). The task is to use the datasets to arrange objects in a desired configuration, with only a sparse 0-1 completion reward for every object that attains the target configuration.
Baseline: We use Behavior cloning (BC), BEAR (Kumar et al., 2019), EMAQ (Ghasemipour et al., 2020), and CQL (Kumar et al., 2020b) as baselines. We compare these to CQL+OPAL, which first uses OPAL to distill primitives from the offline dataset before applying CQL to learn a primitive-directing high-level policy.
Results: As shown in Table 1, CQL+OPAL outperforms nearly all the baselines on antmaze (see Figure 1 and Figure 3 for visualization) and kitchen tasks, with the exception of EMAQ having similar performance on kitchen mixed. To ensure a fair comparison with EMAQ, we use an autoregressive primitive policy. With the exception of EMAQ on kitchen mixed, we are not aware of any existing offline RL algorithms that achieve similarly good performance on these tasks; moreover, we are not aware of any existing online RL algorithms which solve these tasks (see Table 3 for some comparisons), highlighting the benefit of using offline datasets to circumvent exploration challenges. There are two potential reasons for OPAL’s success. First, temporally-extended primitives could make the reward propagation learning problem easier. Second, the primitives may provide a better latent action space than the atomic actions of the environment. To understand the relative importance of these factors, we experimented with an ablation of CQL+OPAL that uses c = 1 to remove temporal abstraction. In this case, we find the method’s performance to be similar to standard CQL. This implies that the temporal abstraction provided by OPAL is one of the main contributing factors to its good performance. This observation also agrees with our theoretical analysis. See Appendix E for detailed discussion." }, { "heading": "5.2 FEW-SHOT IMITATION LEARNING WITH OPAL", "text": "Description: Previously, we assumed that we have access to a task reward function, but only undirected data from other tasks. Now, we will study the opposite case, where we are not provided with a reward function for the new task either, but instead receive a small number of task-specific demonstrations that illustrate optimal behavior. Simply imitating these few demonstrations is insufficient to obtain a good policy, and our experiments evaluate whether OPAL can effectively incorporate the prior data to enable few-shot adaptation in this setting. 
We use the Antmaze environments (diverse datasets) to evaluate our method and use an expert policy for these environments to sample n = 10 successful trajectories.
Baseline and Results: For baselines, we use Behavior cloning (BC) and the model from Wang et al. (2017), which prescribes using a sequential VAE (SVAE) over state trajectories in conjunction with imitation learning. As shown in Table 2, BC+OPAL clearly outperforms the other baselines,
}, { "heading": "Appendices", "text": "" }, { "heading": "A OTHER APPLICATIONS OF OPAL", "text": "" }, { "heading": "A.1 FEW-SHOT IMITATION LEARNING WITH OPAL", "text": "Additional Assumption: In addition to D, we assume access to a small number of expert demonstrations Dexp = {τi := (st, at)T−1t=0 }ni=1 where n N . How to use with OPAL? In imitation learning, the aim is to recover an expert policy given a small number of stochastically sampled expert demonstrations Dexp = {τi := (st, at)T−1t=0 }ni=1. As in the offline RL setting, we use the primitives πθ(a|s, z) learned by OPAL as a low-level controller and learn a high-level policy πψ(z|s). We first partition the expert demonstrations into sub-trajectories Dexppar = {τi,k := (sk+t, ak+t)c−1t=0 for k = 0, . . . , T − c}ni=1 of length c. We then use the learned encoder qφ(z|τ) to label these sub-trajectories with latent actions zi,k ∼ qφ(z|τi,k) and thus create a dataset Dexphi = {(sik+t, zi,k) for k = 0, . . . , T − c}ni=1. We use D exp hi to learn the high-level policy πψ(z|s) using behavioral cloning. As in the offline RL setting, we also finetune πθ(a|s, z) with latent-conditioned behavioral cloning to ensure consistency of the labelled latents.\nEvaluation Description: We receive a small number of task-specific demonstrations that illustrate optimal behavior. Simply imitating these few demonstrations is insufficient to obtain a good policy, and our experiments evaluate whether OPAL can effectively incorporate the prior data to enable fewshot adaptation in this setting. We use the Antmaze environments (diverse datasets) to evaluate our method and use an expert policy for these environments to sample n = 10 successful trajectories.\nBaseline: We evaluate two baselines. First, we test a simple behavioral cloning (BC) baseline, which trains using a max-likelihood loss on the expert data. In order to make the comparison fair to OPAL (which uses the offline dataset D in addition to the expert dataset), we pretrain the BC agent on the undirected dataset using the same max-likelihood loss. As a second baseline and to test the quality of OPAL-extracted primitives, we experiment with an alternative unsupervised objective from Wang et al. (2017), which prescribes using a sequential VAE (SVAE) over state trajectories in conjunction with imitation learning.\nResults: As shown in Table 2, BC+OPAL clearly outperforms other baselines, showing the importance of temporal abstraction and ascertaining the quality of learned primitives. SVAE’s slightly worse performance suggests that decoding the state trajectory directly is more difficult than simply predicting the actions, as OPAL does, and that this degrades downstream task learning." }, { "heading": "A.2 ONLINE RL WITH OPAL", "text": "Additional Assumptions: We assume online access toM; i.e., access to Monte Carlo samples of episodes fromM given an arbitrary policy π. How to use with OPAL? To apply OPAL to standard online RL, we fix the learned primitives πθ(a|s, z) and learn a high-level policy πψ(z|s) in an online fashion using the latents z as temporally-extended actions. Specifically, when interacting the environment, πψ(z|s) chooses an appropriate primitive every c steps, and this primitive acts on the environment directly for c timesteps. Any off-the-shelf online RL algorithm can be used to learn ψ. In our experiments, we use SAC (Haarnoja et al., 2018). 
To ensure that π_ψ(z|s) stays close to the data distribution and to avoid generalization issues associated with the fixed π_θ(a|s, z), we add an additional KL penalty to the reward of the form D_KL(π_ψ(z|s) || ρ_ω(z|s_0)).
Evaluation Description: We use Antmaze medium (diverse dataset) and Antmaze large (diverse dataset) from the D4RL task suite (Fu et al., 2020) to evaluate our method. We evaluate using both a dense distance-based reward −‖g − ant_xy‖ and a sparse success-based reward 1[‖g − ant_xy‖ ≤ 0.5] (the typical default for this task), where ant_xy is the 2D position of the ant in the maze.

Baseline: To solve these tasks through online RL, we need both (i) hierarchy (i.e., learning a policy on top of primitives), which improves exploration (Nachum et al., 2019b), and (ii) an unlabelled (i.e., no task rewards) offline dataset, which allows us to bootstrap the primitives. This informed our choice of the following three baselines. First, to test the role of D in exploration, we use HIRO (Nachum et al., 2018b), a state-of-the-art hierarchical RL method, as a baseline. Second, to test the role of temporal abstraction, we pre-train a flat policy on D using behavioral cloning (BC) and then finetune the policy on downstream tasks with SAC. Third, to test the quality of the extracted primitives for online RL, we extract a discrete set of primitives with Deep Discovery of Continuous Options (DDCO) (Krishnan et al., 2017) and use Double DQN (DDQN) (Van Hasselt et al., 2015) to learn a task policy in the space of the learned discrete primitives.

Results: As shown in Table 3, SAC+OPAL outperforms all the baselines, showing (1) the importance of D in exploration, (2) the role of temporal abstraction, and (3) the good quality of the learned primitives. Except for HIRO on Antmaze large with dense rewards, all other baselines fail to make any progress at all. In contrast, SAC+OPAL only fails to make progress on Antmaze large with sparse rewards." }, { "heading": "A.3 ONLINE MULTI-TASK TRANSFER LEARNING WITH OPAL", "text": "Additional Assumption: We assume the existence of M additional MDPs {M_i = (S_i, A, P_i, r_i, γ)}_{i=1}^M, where the action space A and the discount factor γ are the same as those of M.
How to use it with OPAL? In the multi-task setting, we aim to learn near-optimal behavior policies on the M MDPs {M_i = (S_i, A, P_i, r_i, γ)}_{i=1}^M. As in the previous applications of OPAL, we learn a set of high-level policies π_ψ(z|s, i) which direct the pretrained primitives π_θ(a|s, z) to maximize cumulative rewards. Since the state space in the M MDPs is potentially distinct from that in the offline dataset D, we cannot transfer the state distribution and can only hope to transfer the action sub-trajectory distribution. Therefore, during the unsupervised training phase for learning π_θ, we make the encoder and the decoder blind to the states in the sub-trajectory. Specifically, the encoder becomes (µ_z^enc, σ_z^enc) = q_φ(z_t | s_t, {a_{t+i}}_{i=0}^{c−1}) and is represented by a bidirectional GRU. The decoder becomes π_θ({a_t, . . . , a_{t+c−1}} | z_t), which decodes the entire action sub-trajectory from the latent vector and is represented by a GRU. With these state-agnostic primitives in hand, we then learn a policy π_ψ(z|s, i) using any off-the-shelf online RL method. In our experiments, we use Proximal Policy Optimization (PPO) (Schulman et al., 2017).

Evaluation Description: We use the Metaworld task suite (Yu et al., 2020) to evaluate our method. The dataset D for learning primitives consists of trajectories generated by an expert policy for a goal-conditioned pick-and-place task. 
The pick-and-place task is suitable for unsupervised primitive learning because it contains all the basic operations (e.g., move, grasp, place) required for performing more complex manipulation tasks in Metaworld. Once we have learned the primitives, we learn a policy π_ψ(z|s, i) for the MT10 and MT50 benchmarks, where MT10 and MT50 contain 10 and 50 robotic manipulation tasks, respectively, which need to be solved simultaneously. In these experiments we use c = 5 and dim(Z) = 8.
Baseline: We use SAC (Haarnoja et al., 2018) and PPO (Schulman et al., 2017) as baselines.

Results: As shown in Table 4, PPO+OPAL clearly outperforms both PPO and SAC, showing the importance of temporal abstraction in online multi-task transfer." }, { "heading": "B PROOF OF THEOREMS", "text": "" }, { "heading": "B.1 BOUNDING THE SUBOPTIMALITY OF THE LEARNED PRIMITIVES", "text": "We will begin by proving the following lemma, which bounds the sampling error incurred by J(θ, φ, ω).

Lemma B.0.1. With high probability 1 − δ,

| J(θ, φ, ω) − E_{π∼Π, τ∼π, z∼q_φ(z|τ)} [ −Σ_{t=0}^{c−1} log π_θ(a_t|s_t, z) ] | ≤ sqrt(S_J/δ), (11)

where S_J is a constant dependent on concentration properties of π_θ(a|s, z) and q_φ(z|τ).

Proof. To be concise, let us denote the sampling error in J(θ, φ, ω) by

∆_J = | J(θ, φ, ω) − E_{π∼Π, τ∼π, z∼q_φ(z|τ)} [ −Σ_{t=0}^{c−1} log π_θ(a_t|s_t, z) ] |. (12)

Applying Chebyshev's inequality to ∆_J, we get that, with high probability 1 − δ,

∆_J ≤ sqrt( Var_{π∼Π, τ∼π, z∼q_φ(z|τ)}( −Σ_{t=0}^{c−1} log π_θ(a_t|s_t, z) ) / δ ) = sqrt(S_J/δ). (13)

Therefore, combining the equations above, we have

E_{π∼Π, τ∼π, z∼q_φ(z|τ)} [ −Σ_{t=0}^{c−1} log π_θ(a_t|s_t, z) ] ≤ J(θ, φ, ω) + sqrt(S_J/δ). (14)

We present a general performance difference lemma that will help in our proof of Lemma 4.0.1.

Lemma B.0.2. If π_1 and π_2 are two policies in M, then

|J_RL(π_1, M) − J_RL(π_2, M)| ≤ (2/(1 − γ)^2) R_max E_{s∼d^{π_1}} [ D_TV(π_1(a|s) || π_2(a|s)) ]. (15)

Proof. Following the derivations in Achiam et al. (2017) and Nachum et al. (2018a), we express the performance of a policy π in M in terms of linear operators:

J_RL(π, M) = (1 − γ)^{−1} R^T (I − γ Π_π P)^{−1} Π_π µ, (16)

where R is a vector representation of the rewards of M, Π_π is a linear operator mapping state distributions to state-action distributions according to π, and µ is a vector representation of the initial state distribution of M. Accordingly, we express the performance difference of π_1, π_2 as

|J_RL(π_1, M) − J_RL(π_2, M)| = |R^T ((I − γ Π_1 P)^{−1} Π_1 µ − (I − γ Π_2 P)^{−1} Π_2 µ)| (17)
≤ R_max |(I − γ Π_1 P)^{−1} Π_1 µ − (I − γ Π_2 P)^{−1} Π_2 µ|. (18)

By the triangle inequality, we may bound Equation 18 by

R_max ( |(I − γ Π_1 P)^{−1} Π_1 µ − (I − γ Π_2 P)^{−1} Π_1 µ| + |(I − γ Π_2 P)^{−1} Π_1 µ − (I − γ Π_2 P)^{−1} Π_2 µ| ). (19)

We begin by approaching the first term inside the parentheses of Equation 19. That first term may be expressed as

|(I − γ Π_2 P)^{−1} (I − γ Π_2 P − (I − γ Π_1 P)) (I − γ Π_1 P)^{−1} Π_1 µ| (20)
= |γ (I − γ Π_2 P)^{−1} (Π_1 − Π_2) P (I − γ Π_1 P)^{−1} Π_1 µ| (21)
≤ (γ/(1 − γ)) |(Π_1 − Π_2) P (I − γ Π_1 P)^{−1} Π_1 µ| (22)
= (2γ/(1 − γ)^2) E_{s∼(1−γ) P (I − γ Π_1 P)^{−1} Π_1 µ} [ D_TV(π_1(a|s) || π_2(a|s)) ]. (23)

Now we continue to the second term inside the parentheses of Equation 19. This second term may be expressed as

|(I − γ Π_2 P)^{−1} (Π_1 − Π_2) µ| ≤ (1/(1 − γ)) |(Π_1 − Π_2) µ| (24)
= (2/(1 − γ)) E_{s∼µ} [ D_TV(π_1(a|s) || π_2(a|s)) ]. (25)

To combine Equations 23 and 25, we note that

d^{π_1} = γ · (1 − γ) P (I − γ Π_1 P)^{−1} Π_1 µ + (1 − γ) · µ. (26)

Thus, we have

|J_RL(π_1, M) − J_RL(π_2, M)| ≤ (2/(1 − γ)^2) R_max E_{s∼d^{π_1}} [ D_TV(π_1(a|s) || π_2(a|s)) ], (27)

as desired.

Lemma 4.0.1. 
If π_1 and π_2 are two policies in M, then

|J_RL(π_1, M) − J_RL(π_2, M)| ≤ (2 R_max/((1 − γ^c)(1 − γ))) E_{s∼d^{π_1}_c} [ D_TV(π_1(τ|s) || π_2(τ|s)) ], (28)

where D_TV(π_1(τ|s) || π_2(τ|s)) denotes the TV divergence over c-length sub-trajectories τ sampled from π_1 vs. π_2 (see Section 3). Furthermore,

SubOpt(θ) ≤ (2 R_max/((1 − γ^c)(1 − γ))) E_{s∼d^{π*}_c} [ D_TV(π*(τ|s) || π_θ,ψ*(τ|s)) ]. (29)

Proof. We focus on proving Equation 28, as the subsequent derivation of Equation 29 is straightforward by the definition of SubOpt.

To derive Equation 28, we may simply consider π_1 and π_2 acting in an "every-c-steps" version of M, where the action space is now τ and rewards are accumulated over c steps using γ-discounting. Note that in this abstracted version of M, the max reward is ((1 − γ^c)/(1 − γ)) R_max and the MDP discount is γ^c. Plugging this into the result of Lemma B.0.2 immediately yields the desired claim.

Theorem 4.1. Let θ, φ, ω be the outputs of solving Equation 1, such that J(θ, φ, ω) = ε_c. Then, with high probability 1 − δ, for any π that is ζ-common in Π, there exists a distribution H over z such that, for π^H_θ(τ|s) := E_{z∼H}[π_θ(τ|z, s)],

E_{s∼κ}[ D_TV(π(τ|s) || π^H_θ(τ|s)) ] ≤ ζ + sqrt( (1/2)( ε_c + sqrt(S_J/δ) + H_c ) ), (30)

where H_c = E_{π∼Π, s∼κ, τ∼π}[ Σ_{t=0}^{c−1} log π(a_t|s_t) ] (i.e., a constant and property of D) and S_J is a positive constant incurred due to sampling error in J(θ, φ, ω), depending on concentration properties of π_θ(a|s, z) and q_φ(z|τ).

Proof. We start with an application of the triangle inequality, for π′ ∼ Π:

D_TV(π(τ|s) || π^H_θ(τ|s)) ≤ D_TV(π(τ|s) || π′(τ|s)) + D_TV(π′(τ|s) || π^H_θ(τ|s)). (31)

Taking the expectation with respect to π′ ∼ Π, s ∼ κ on both sides, we get

E_{s∼κ}[D_TV(π(τ|s) || π^H_θ(τ|s))] ≤ E_{π′∼Π, s∼κ}[D_TV(π(τ|s) || π′(τ|s))] + E_{π′∼Π, s∼κ}[D_TV(π′(τ|s) || π^H_θ(τ|s))] (32)
≤ E_{π′∼Π, s∼κ}[D_TV(π(τ|s) || π′(τ|s))] (33)
+ E_{π′∼Π, s∼κ}[ sqrt( (1/2) D_KL(π′(τ|s) || π^H_θ(τ|s)) ) ] (34)
≤ ζ + sqrt( (1/2) E_{π′∼Π, τ∼π′, s∼κ}[ log π′(τ|s) − log E_{z∼H}[π_θ(τ|z, s)] ] ) (35)
≤ ζ + sqrt( (1/2) E_{π′∼Π, s∼κ, τ∼π′, z∼H}[ log π′(τ|s) − log π_θ(τ|z, s) ] ). (36)

The last two inequalities used Jensen's inequality. Let H(z) = E_{π′∼Π, τ∼π′}[q_φ(z|τ)]. Cancelling out the dynamics and using Equation 14, we get

E_{s∼κ}[D_TV(π(τ|s) || π^H_θ(τ|s))]
≤ ζ + sqrt( (1/2) E_{π′∼Π, s∼κ, τ∼π′, z∼q_φ(z|τ)}[ Σ_{t=0}^{c−1} ( log π′(a_t|s_t) − log π_θ(a_t|s_t, z) ) ] )
≤ ζ + sqrt( (1/2)( H_c + sqrt(S_J/δ) + J(θ, φ, ω) ) )
= ζ + sqrt( (1/2)( H_c + ε_c + sqrt(S_J/δ) ) ).

Corollary 4.1.1. If the optimal policy π* of M is ζ-common in Π, and ||d^{π*}_c/κ||_∞ ≤ ξ, then, with high probability 1 − δ,

SubOpt(θ) ≤ (2ξ R_max/((1 − γ^c)(1 − γ))) [ ζ + sqrt( (1/2)( ε_c + sqrt(S_J/δ) + H_c ) ) ]. (37)

Proof. Expanding Lemma 4.0.1 using the above assumption, we have

SubOpt(θ) ≤ |J_RL(π*, M) − J_RL(π^H_θ, M)| (38)
≤ (2 R_max/((1 − γ^c)(1 − γ))) E_{s∼d^{π*}_c} [ D_TV(π*(τ|s) || π^H_θ(τ|s)) ] (39)
≤ (2 R_max/((1 − γ^c)(1 − γ))) ||d^{π*}_c/κ||_∞ E_{s∼κ}[ D_TV(π*(τ|s) || π^H_θ(τ|s)) ] (40)
≤ (2ξ R_max/((1 − γ^c)(1 − γ))) E_{s∼κ}[ D_TV(π*(τ|s) || π^H_θ(τ|s)) ]. (41)

Now, we can use Theorem 4.1 to prove the corollary." }, { "heading": "B.2 PERFORMANCE BOUNDS FOR OPAL", "text": "Theorem 4.2. Let π_ψ*(z|s) be the policy obtained by CQL and let π_ψ*,θ(a|s) refer to the policy when π_ψ*(z|s) is used together with π_θ(a|s, z). Let π_β ≡ {π ; π ∼ Π} refer to the policy generating D^r in MDP M, and let z ∼ π^H_β(z|s) be defined by sampling τ ∼ π_β with s_0 = s and then z ∼ q_φ(z|τ). Then, J(π_ψ*,θ, M) ≥ J(π_β, M) − κ with high probability 1 − δ, where

κ = O( (1/((1 − γ^c)(1 − γ))) E_{s∼d^{π_ψ*,θ}_{M̂^H}(s)} [ sqrt( |Z| (D_CQL(π_ψ*, π^H_β)(s) + 1) ) ] ) (42)
− (α/(1 − γ^c)) E_{s∼d^{π_ψ*}_{M^H}(s)} [ D_CQL(π_ψ*, π^H_β)(s) ]. (43)

Proof. We assume that the variational posterior q_φ(z|τ) obtained after learning OPAL from D is the same (or nearly the same) as the true posterior p(z|τ). q_φ can be used to define π^H_β(z|s) as z ∼ π^H_β ≡ τ ∼ π_β, z ∼ q_φ(z|τ). 
This induces an MDP $\mathcal{M}^H = (S, \mathcal{Z}, \mathcal{P}_z, r_z, \gamma^c)$ where $\mathcal{Z}$ is the inferred latent space for choosing primitives, $\mathcal{P}_z$ and $r_z$ are the latent dynamics and reward function such that $s_{t+c} \sim \mathcal{P}_z(s_{t+c}|s_t, z_t) \equiv s_{t+i+1} \sim \mathcal{P}(s_{t+i+1}|s_{t+i}, a_{t+i}),\ a_{t+i} \sim \pi(a_{t+i}|s_{t+i}, z_t)\ \forall i \in \{0, 1, \ldots, c-1\}$ and $r_z(s_t, z_t) = \sum_{i=0}^{c-1} \gamma^i r(s_{t+i}, a_{t+i})$, and $\gamma^c$ is the new discount factor, effectively reducing the task horizon by a factor of $c$. Here, $\pi(a|s, z)$ is the primitive induced by $q_\phi$ and $\pi_\beta$. Since $q_\phi$ captures the true posterior, $\pi(a|s, z)$ is the optimal primitive one can learn, and its autoencoding loss, under true expectation, is $\epsilon_c^* = \mathbb{E}_{\pi \sim \Pi, \tau \sim \pi, z \sim q_\phi(z|\tau)}\left[ -\sum_{t=0}^{c-1} \log \pi(a_t|s_t, z) \right]$. Therefore, $\tau \sim \pi_\beta \equiv z \sim \pi_\beta^H,\ \tau \sim \pi(\cdot|\cdot, z)$. $\pi_\beta$ is used to collect the data $\mathcal{D}^r$, which induces an empirical MDP. We refer to the empirical MDP induced by $\mathcal{D}^r$ as $\hat{\mathcal{M}} = (S, A, \hat{\mathcal{P}}, \hat{\mu}, r, \gamma)$ where $\hat{\mathcal{P}}(\hat{s}'|\hat{s}, \hat{a}) = \frac{\sum_{(s,a,s') \sim \mathcal{D}} \mathbb{1}[s = \hat{s}, a = \hat{a}, s' = \hat{s}']}{\sum_{(s,a) \sim \mathcal{D}} \mathbb{1}[s = \hat{s}, a = \hat{a}]}$ and $\hat{\mu}(\hat{s}) = \frac{\sum_{s_0 \sim \mathcal{D}} \mathbb{1}[s_0 = \hat{s}]}{N}$. We use $q_\phi$ to get $\mathcal{D}^r_{hi}$ from $\mathcal{D}^r$, which induces another empirical MDP $\hat{\mathcal{M}}^H$. Using these definitions, we will bound $|J(\pi_{\psi^*,\theta}, \mathcal{M}) - J(\pi_\beta, \mathcal{M})|$. Let us break $|J(\pi_{\psi^*,\theta}, \mathcal{M}) - J(\pi_\beta, \mathcal{M})|$ into\n$$|J(\pi_{\psi^*,\theta}, \mathcal{M}) - J(\pi_\beta, \mathcal{M})| \le |J(\pi_{\psi^*,\theta}, \mathcal{M}) - J(\pi_{\psi^*}, \mathcal{M}^H)| \quad (44)$$\n$$+ |J(\pi_{\psi^*}, \mathcal{M}^H) - J(\pi_\beta^H, \mathcal{M}^H)| \quad (45)$$\n$$+ |J(\pi_\beta^H, \mathcal{M}^H) - J(\pi_\beta, \mathcal{M})|. \quad (46)$$\nSince $q_\phi$ captures the true variational posterior, $\tau \sim \pi_\beta \equiv z \sim \pi_\beta^H,\ \tau \sim \pi(\cdot|\cdot, z)$, and therefore $|J(\pi_\beta^H, \mathcal{M}^H) - J(\pi_\beta, \mathcal{M})| = 0$. For bounding $|J(\pi_{\psi^*}, \mathcal{M}^H) - J(\pi_\beta^H, \mathcal{M}^H)|$, we use Theorem 3.6 from Kumar et al. (2020b) and apply it to $\mathcal{M}^H$ to get\n$$|J(\pi_{\psi^*}, \mathcal{M}^H) - J(\pi_\beta^H, \mathcal{M}^H)| \le 2\left( \frac{C_{r,\delta}}{1-\gamma^c} + \frac{\gamma^c R_{\max} C_{P,\delta}}{(1-\gamma^c)(1-\gamma)} \right) \mathbb{E}_{s \sim d^{\pi_{\psi^*,\theta}}_{\hat{\mathcal{M}}^H}(s)}\left[ \sqrt{\frac{|\mathcal{Z}|}{|\mathcal{D}(s)|}\left(D_{CQL}(\pi_{\psi^*}, \pi_\beta^H)(s) + 1\right)} \right] - \frac{\alpha}{1-\gamma^c}\, \mathbb{E}_{s \sim d^{\pi_{\psi^*}}_{\mathcal{M}^H}(s)}\left[ D_{CQL}(\pi_{\psi^*,\theta}, \pi_\beta^H)(s) \right] =: \kappa_2. \quad (47\text{–}49)$$\nNow we bound $|J(\pi_{\psi^*,\theta}, \mathcal{M}) - J(\pi_{\psi^*}, \mathcal{M}^H)|$. The only difference between the two is that the primitive $\pi_\theta(a|s, z)$ is used in $\mathcal{M}$, while the primitive $\pi(a|s, z)$ is used in $\mathcal{M}^H$. Therefore, we can write the above quantity as $|J(\pi_{\psi^*,\theta}, \mathcal{M}) - J(\pi_{\psi^*, \pi(\cdot|\cdot,z)}, \mathcal{M})|$. Using Lemma 4.0.1, we get\n$$|J(\pi_{\psi^*,\theta}, \mathcal{M}) - J(\pi_{\psi^*, \pi(\cdot|\cdot,z)}, \mathcal{M})| \le \frac{2 R_{\max}}{(1-\gamma^c)(1-\gamma)}\, \mathbb{E}_{s \sim d_c^{\pi_{\psi^*, \pi(\cdot|\cdot,z)}}}[D_{TV}(\pi_{\psi^*, \pi(\cdot|\cdot,z)}(\tau|s) \| \pi_{\psi^*,\theta}(\tau|s))] \quad (50\text{–}51)$$\n$$\le \frac{2 R_{\max}}{(1-\gamma^c)(1-\gamma)}\, \mathbb{E}_{s \sim d_c^{\pi_{\psi^*, \pi(\cdot|\cdot,z)}}}\left[ \sqrt{\tfrac{1}{2}\, D_{KL}(\pi_{\psi^*, \pi(\cdot|\cdot,z)}(\tau|s) \| \pi_{\psi^*,\theta}(\tau|s))} \right]. \quad (52)$$\nNext we bound $D_{KL}(\pi_{\psi^*, \pi(\cdot|\cdot,z)}(\tau|s) \| \pi_{\psi^*,\theta}(\tau|s))$. We have\n$$\mathbb{E}_{z \sim \pi_{\psi^*}(z|s), \tau \sim \pi(\tau|s,z)}\left[ \log \frac{\pi_{\psi^*}(z|s) \prod_{t=1}^{c-1} \mathcal{P}(s_t|s_{t-1}, a_{t-1}) \prod_{t=0}^{c-1} \pi(a_t|s_t, z)}{\pi_{\psi^*}(z|s) \prod_{t=1}^{c-1} \mathcal{P}(s_t|s_{t-1}, a_{t-1}) \prod_{t=0}^{c-1} \pi_\theta(a_t|s_t, z)} \right] \quad (53)$$\n$$= \mathbb{E}_{z \sim \pi_{\psi^*}(z|s), \tau \sim \pi(\tau|s,z)}\left[ \sum_{t=0}^{c-1} \log \pi(a_t|s_t, z) - \log \pi_\theta(a_t|s_t, z) \right] \quad (54)$$\n$$= \mathbb{E}_{z \sim \pi_\beta^H(z|s), \tau \sim \pi(\tau|s,z)}\left[ \left( \frac{\pi_{\psi^*}(z|s)}{\pi_\beta^H(z|s)} \right) \sum_{t=0}^{c-1} \log \pi(a_t|s_t, z) - \log \pi_\theta(a_t|s_t, z) \right] \quad (55)$$\n$$\le \left\| \frac{\pi_{\psi^*}(z|s)}{\pi_\beta^H(z|s)} \right\|_\infty \mathbb{E}_{z \sim \pi_\beta^H(z|s), \tau \sim \pi(\tau|s,z)}\left[ \sum_{t=0}^{c-1} \log \pi(a_t|s_t, z) - \log \pi_\theta(a_t|s_t, z) \right] \quad (56)$$\n$$\le \left\| \frac{\pi_{\psi^*}(z|s)}{\pi_\beta^H(z|s)} \right\|_\infty \left( \epsilon_c - \epsilon_c^* + \sqrt{\frac{S_J}{\delta}} \right). \quad (57)$$\nThe last inequality follows from the definition of $\epsilon_c^*$ above and equation 14. We now bound $\left\| \frac{\pi_{\psi^*}(z|s)}{\pi_\beta^H(z|s)} \right\|_\infty$ using $D_{CQL}(\pi_{\psi^*}, \pi_\beta^H)(s)$. By the definition of $D_{CQL}(\pi_{\psi^*}, \pi_\beta^H)(s)$, we have\n$$D_{CQL}(\pi_{\psi^*}, \pi_\beta^H)(s) = \sum_z \pi_{\psi^*}(z|s)\left( \frac{\pi_{\psi^*}(z|s)}{\pi_\beta^H(z|s)} - 1 \right) \quad (58)$$\n$$\Rightarrow\ D_{CQL}(\pi_{\psi^*}, \pi_\beta^H)(s) + 1 = \sum_z \pi_\beta^H(z|s)\left( \frac{\pi_{\psi^*}(z|s)}{\pi_\beta^H(z|s)} \right)^2 \ge \left\| \frac{\pi_{\psi^*}(z|s)}{\pi_\beta^H(z|s)} \right\|_\infty^2 \pi_\beta^H(\bar{z}|s), \quad (59)$$\nwhere $\bar{z} = \arg\max_z \left( \frac{\pi_{\psi^*}(z|s)}{\pi_\beta^H(z|s)} \right)$ (note that keeping only the $\bar{z}$ term of the sum yields the stated lower bound).
To be concise, let\n$$\Delta_c = \left( \epsilon_c - \epsilon_c^* + \sqrt{\frac{S_J}{\delta}} \right). \quad (60)$$\nCombining the above equations, we have\n$$D_{KL}(\pi_{\psi^*}(z|s)\pi(\tau|s,z) \| \pi_{\psi^*}(z|s)\pi_\theta(\tau|s,z)) \le \Delta_c \sqrt{\frac{D_{CQL}(\pi_{\psi^*}, \pi_\beta^H)(s) + 1}{\pi_\beta^H(\bar{z}|s)}}. \quad (61)$$\nUsing this to bound the returns, we get\n$$|J(\pi_{\psi^*,\theta}, \mathcal{M}) - J(\pi_{\psi^*, \pi(\cdot|\cdot,z)}, \mathcal{M})| \le \frac{2 R_{\max}}{(1-\gamma^c)(1-\gamma)}\, \mathbb{E}_{s \sim d_c^{\pi_{\psi^*, \pi(\cdot|\cdot,z)}}}\left[ \left( \frac{D_{CQL}(\pi_{\psi^*}, \pi_\beta^H)(s) + 1}{\pi_\beta^H(\bar{z}|s)} \right)^{\frac{1}{4}} \sqrt{\tfrac{1}{2}\Delta_c} \right] =: \kappa_1. \quad (62\text{–}64)$$\nWe get $\kappa = \kappa_1 + \kappa_2$. We apply $O(\cdot)$ to obtain the notation used in the theorem." }, { "heading": "C EXPERIMENT DETAILS", "text": "" }, { "heading": "C.1 OPAL EXPERIMENT DETAILS", "text": "Encoder The encoder $q_\phi(z|\tau)$ takes in a state-action trajectory $\tau$ of length $c$. It first passes the individual states through a fully connected network with 2 hidden layers of size $H$ and ReLU activations. Then it concatenates the processed states with the actions and passes them through a bidirectional GRU with hidden dimension $H$ and 4 GRU layers. It projects the output of the GRU to the mean and log standard deviation of the latent vector through linear layers.\nPrior The prior $\rho_\omega(z|s)$ takes in the current state $s$ and passes it through a fully connected network with 2 hidden layers of size $H$ and ReLU activations. It then projects the output of the hidden layers to the mean and log standard deviation of the latent vector through linear layers.\nPrimitive Policy The primitive (i.e., decoder) $\pi_\theta(a|s, z)$ has the same architecture as the prior, but it takes in the state and latent vector and produces the mean and log standard deviation of the action. For kitchen environments, we use an autoregressive primitive policy with the same architecture as used by EMAQ (Ghasemipour et al., 2020).\nWe use $H = 200$ for antmaze environments and $H = 256$ for kitchen environments. In both cases, OPAL was trained for 100 epochs with a fixed learning rate of 1e-3, $\beta = 0.1$ (Lynch et al., 2020), the Adam optimizer (Kingma & Ba, 2014), and a batch size of 50." }, { "heading": "C.2 TASK POLICY ARCHITECTURE", "text": "In all environments, for the task policy, we used a fully connected network with 3 hidden layers of size 256 and ReLU activations. It then projects the output of the hidden layers to the mean and log standard deviation of the latent vector through linear layers." }, { "heading": "C.3 SAC HYPERPARAMETERS", "text": "We used SAC (Haarnoja et al., 2018) for online RL experiments when learning a task policy either in the action space $A$ or the latent space $\mathcal{Z}$. For the discrete primitives extracted from DDCO (Krishnan et al., 2017), we used Double DQN (Van Hasselt et al., 2015). We used the standard hyperparameters for SAC and Double DQN as provided in the rlkit code base (https://github.com/vitchyr/rlkit), with both the policy learning rate and the Q-value learning rate set to 3e-4." }, { "heading": "C.4 CQL HYPERPARAMETERS", "text": "We used CQL (Kumar et al., 2020b) for offline RL experiments when learning a task policy either in the action space $A$ or the latent space $\mathcal{Z}$. We used the standard hyperparameters, as mentioned in Kumar et al. (2020b), with minor differences. We used a policy learning rate of 3e-5, a Q-value learning rate of 3e-4, and a primitive learning rate of 3e-4. For antmaze tasks, we used the CQL($\mathcal{H}$) variant with $\tau = 5$ and learned $\alpha$. For kitchen tasks, we used the CQL($\rho$) variant with fixed $\alpha = 10$. In both cases, we ensured $\alpha$ never dropped below 0.001." }
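To make the components in C.1 and the objective they are trained with (the β-weighted autoencoding loss of equation 70 in Appendix D) concrete, here is a minimal, illustration-only PyTorch sketch. All dimensions and names (`state_dim`, `action_dim`, `latent_dim`, etc.) are assumptions of the example; this is not the authors' implementation.

```python
# Illustrative-only sketch of the OPAL encoder/prior/primitive from C.1 and the
# beta-weighted autoencoding loss of equation 70; all sizes are assumed.
import torch
import torch.nn as nn

state_dim, action_dim, latent_dim, H, c = 29, 8, 8, 200, 10

def mlp(inp, out):  # 2 hidden layers of size H with ReLU, as described in C.1
    return nn.Sequential(nn.Linear(inp, H), nn.ReLU(),
                         nn.Linear(H, H), nn.ReLU(), nn.Linear(H, out))

class Encoder(nn.Module):  # q_phi(z|tau): state net + bidirectional GRU
    def __init__(self):
        super().__init__()
        self.state_net = nn.Sequential(nn.Linear(state_dim, H), nn.ReLU(),
                                       nn.Linear(H, H), nn.ReLU())
        self.gru = nn.GRU(H + action_dim, H, num_layers=4,
                          bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * H, 2 * latent_dim)  # mean and log-std of z
    def forward(self, states, actions):  # (B, c, state_dim), (B, c, action_dim)
        h, _ = self.gru(torch.cat([self.state_net(states), actions], dim=-1))
        return self.head(h[:, -1]).chunk(2, dim=-1)

encoder = Encoder()
prior = mlp(state_dim, 2 * latent_dim)                    # rho_omega(z | s_0)
primitive = mlp(state_dim + latent_dim, 2 * action_dim)   # pi_theta(a | s, z)

def opal_loss(states, actions, beta=0.1):
    mu_q, ls_q = encoder(states, actions)
    z = mu_q + ls_q.exp() * torch.randn_like(mu_q)        # reparameterized sample
    inp = torch.cat([states, z.unsqueeze(1).expand(-1, c, -1)], dim=-1)
    mu_a, ls_a = primitive(inp).chunk(2, dim=-1)
    # Gaussian negative log-likelihood of actions (up to an additive constant)
    nll = ((actions - mu_a) ** 2 / (2 * (2 * ls_a).exp()) + ls_a).sum(-1).sum(-1).mean()
    mu_p, ls_p = prior(states[:, 0]).chunk(2, dim=-1)     # KL(q || rho), diagonal Gaussians
    kl = (ls_p - ls_q + ((2 * ls_q).exp() + (mu_q - mu_p) ** 2)
          / (2 * (2 * ls_p).exp()) - 0.5).sum(-1).mean()
    return nll + beta * kl

states, actions = torch.randn(4, c, state_dim), torch.randn(4, c, action_dim)
print(opal_loss(states, actions))  # scalar training loss for one minibatch
```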
, { "heading": "D CONNECTION BETWEEN OPAL AND VAE OBJECTIVES", "text": "We are given an undirected, unlabelled, and diverse dataset $\mathcal{D}$ of sub-trajectories of length $c$. We would like to fit a sequential VAE model to $\mathcal{D}$ which maximizes\n$$\max_\theta\ \mathbb{E}_{\tau \sim \mathcal{D}}[\log p_\theta(\tau|s_0)], \quad (65)$$\nwhere $s_0$ is the initial state of the sub-trajectory. Consider\n$$\log p_\theta(\tau|s_0) = \log \int p_\theta(\tau, z|s_0)\,dz = \log \int p_\theta(\tau, z|s_0) \frac{q_\phi(z|\tau)}{q_\phi(z|\tau)}\,dz \quad (66)$$\n(using Jensen's inequality)\n$$\ge \int q_\phi(z|\tau)[\log p_\theta(\tau, z|s_0) - \log q_\phi(z|\tau)]\,dz = \mathbb{E}_{z \sim q_\phi(z|\tau)}\left[ \log p_\theta(\tau|z, s_0) - \log \frac{q_\phi(z|\tau)}{p_\theta(z|s_0)} \right]. \quad (67)$$\nUsing the above equation, we have the following lower bound on our objective function:\n$$\max_\theta\ \mathbb{E}_{\tau \sim \mathcal{D}}[\log p_\theta(\tau|s_0)] \ge \max_{\theta,\phi}\ \mathbb{E}_{\tau \sim \mathcal{D}, z \sim q_\phi(z|\tau)}\left[ \log p_\theta(\tau|z, s_0) - \log \frac{q_\phi(z|\tau)}{p_\theta(z|s_0)} \right] \quad (68)$$\n$$= \max_{\theta,\phi}\ \mathbb{E}_{\tau \sim \mathcal{D}, z \sim q_\phi(z|\tau)}[\log p_\theta(\tau|z, s_0)] - D_{KL}(q_\phi(z|\tau) \| p_\theta(z|s_0)). \quad (69)$$\nWe separate the parameters of the decoder from those of the prior and hence write $p_\theta(z|s_0) = \rho_\omega(z|s_0)$. We can expand $\log p_\theta(\tau|z, s_0) = \sum_{t=1}^{c-1} \log \mathcal{P}(s_t|s_{t-1}, a_{t-1}) + \sum_{t=0}^{c-1} \log \pi_\theta(a_t|s_t, z)$. Since $\mathcal{P}$ is fixed, it can be removed from the objective function. Therefore, we can write the objective function as\n$$\max_{\theta,\phi}\ \mathbb{E}_{\tau \sim \mathcal{D}, z \sim q_\phi(z|\tau)}\left[ \sum_{t=0}^{c-1} \log \pi_\theta(a_t|s_t, z) \right] - \beta\, D_{KL}(q_\phi(z|\tau) \| \rho_\omega(z|s_0)), \quad (70)$$\nwhere $\beta = 1$. This is similar to the autoencoding loss function we described in Section 4." }, { "heading": "E ABLATION STUDIES", "text": "As shown in Table 5, we experimented with different choices of $\dim(\mathcal{Z})$ on antmaze-medium (diverse). Using the hyperparameters from Nachum et al. (2018a), we fixed $c = 10$. While $\dim(\mathcal{Z}) = 8, 16$ gave similar performance, $\dim(\mathcal{Z}) = 4$ performed slightly worse. Therefore, we selected $\dim(\mathcal{Z}) = 8$ for our final model, as it is simpler.\nTemporal abstraction actually helps: To empirically verify that the gain in performance was due to temporal abstraction and not merely a better action space learned through the latent space, we tried $c = 1$ (with $\dim(\mathcal{Z}) = 8$) and found the performance to be similar to that of CQL (i.e., 55.3 ± 3.8), thereby empirically supporting the theoretical benefits of temporal abstraction.\nWe found $\dim(\mathcal{Z}) = 8$ and $c = 10$ to work well in the other environments as well. However, we acknowledge that the performance of CQL+OPAL can be further improved by carefully choosing better hyperparameters for each environment or by using other hyperparameter selection methods for offline RL, which is a subject of future work." }, { "heading": "F ALTERNATIVE METHODS FOR EXTRACTING PRIMITIVES FROM OFFLINE DATA", "text": "We describe alternative methods for extracting a primitive policy from offline data. These methods are offline variants of CARML (Jabri et al., 2019) and DADS (Sharma et al., 2019). We tried these techniques in an early phase of our project and used the environment antmaze-medium (diverse) to evaluate them.\nConsider an offline undirected, unlabelled, and diverse dataset $\mathcal{D} = \{(s_t^i, a_t^i)_{t=0}^{c-1}\}_{i=1}^N$. Let $\tau = (s_t)_{t=0}^{c-1}$ represent the state trajectory. To extract primitives, we first cluster the trajectories by maximizing the mutual information between the state trajectory $\tau$ and a latent variable $z$ (indicating the cluster index) with respect to the parameters of the joint distribution $p_{\phi,\omega}(\tau, z) = p_\omega(z) p_\phi(\tau|z)$. For now, we consider $p_\omega(z) = \mathrm{Cat}(p_1, \ldots, p_k)$ (i.e., discrete latent variables sampled from a categorical distribution) and represent $z$ as a one-hot vector of dimension $k$. The choice of $p_\omega(z)$ is consistent with the choices made in Jabri et al. (2019) and Sharma et al. (2019). Since $z$ is discrete, we can use Bayes' rule to calculate $p_{\phi,\omega}(z|\tau)$ as\n$$p_{\phi,\omega}(z|\tau) = \frac{p_\omega(z)\, p_\phi(\tau|z)}{\sum_{i=1}^k p_\omega(z_i)\, p_\phi(\tau|z_i)}. \quad (71)$$\nOur objective function becomes\n$$\max_{\phi,\omega}\ I(\tau; z) = \max_{\phi,\omega}\ \mathbb{E}_{\tau \sim \mathcal{D}, z \sim p_{\phi,\omega}(z|\tau)}\left[ \log \frac{p_\phi(\tau|z)}{p(\tau)} \right]. \quad (72)$$\nOffline CARML and offline DADS differ only in how they model $p_\phi(\tau|z)$:\n• Offline CARML We model $p_\phi(\tau|z) = \prod_{t=0}^{c-1} p_\phi(s_t|z)$ and hence $\log p_\phi(\tau|z) = \sum_{t=0}^{c-1} \log p_\phi(s_t|z)$.\n• Offline DADS We model $p_\phi(\tau|z) = p(s_0) \prod_{t=1}^{c-1} p_\phi(s_t|s_{t-1}, z)$ and hence $\log p_\phi(\tau|z) = \log p(s_0) + \sum_{t=1}^{c-1} \log p_\phi(s_t|s_{t-1}, z)$. Here, we only model $p_\phi(s_t|s_{t-1}, z)$ and not $p(s_0)$. Since $\log$ is additive in nature, $p(s_0)$ is ignored when calculating the gradient.\nTo optimize equation 72, we use Algorithm 2 from Jabri et al. (2019). Once we have clustered the state trajectories $\tau$ with labels $z$ by maximizing $I(\tau; z)$, we can use behavioral cloning (BC) to learn $\pi_\theta(a|s, z)$. Finally, we use $p_{\phi,\omega}(z|\tau)$ to label the reward-labelled data $\mathcal{D}^r = \{(s_t^i, a_t^i, r_t^i)_{t=0}^{c-1}\}_{i=1}^N$ with latents and transform it into $\mathcal{D}^r_{hi} = \{(s_0^i, z^i, \sum_{t=0}^{c-1} \gamma^t r_t^i, s_c^i)\}_{i=1}^N$. The task policy $\pi_\psi$ is trained on $\mathcal{D}^r_{hi}$ using Conservative Q-Learning (CQL) (Kumar et al., 2020b). Since the primitive policy $\pi_\theta$ is trained after $p_{\phi,\omega}(z|\tau)$ is fully trained, it does not need any additional finetuning." }, { "heading": "F.1 RESULTS", "text": "Using the hyperparameters from Nachum et al. (2018a), we used $c = 10$. We experimented with different values of $k = 5, 10, 20$ and found that $k = 10, 20$ work best (see Table 6 for more details). We went with $k = 10$ as our final model since it is simpler. Offline CARML effectively uses only 6 skills, as the other 4 skills had $p_\omega(z) = 0$. Offline DADS uses all the skills. The results are described in Table 7. In addition to calculating the average success rate, we also calculate the average cumulative dense reward over the entire trajectory and the average cumulative dense reward over the last 5 time steps. Here, the dense reward is the negative $\ell_2$ distance to the goal. The resulting trajectory clusters (using a subset of the dataset) from the discrete skills are also visualized in Figure 4, where different colors represent different clusters.\nSince offline CARML treats the states in the trajectory as conditionally independent of each other given $z$, the clustering mainly focuses on spatial location. Therefore, offline CARML is not able to separate out different control modes starting around the same spatial locations, which explains its poor performance when combined with CQL. As we can see from Figure 5, offline CARML is able to make progress towards the goal, but gets stuck along the way due to poor separation of control modes. On the other hand, offline DADS treats the state transitions in the trajectory as conditionally independent of each other given $z$ and thus clusters trajectories with similar state transitions together. This allows it to more effectively separate out the control modes. Therefore, CQL+offline DADS slightly improves upon CQL but is still limited by the discrete number of skills. Furthermore, increasing the number of skills from 10 to 20 gives similar performance. Moreover, in these methods, it is intractable to use a continuous skill space, since we use Bayes' rule to calculate $p_{\phi,\omega}(z|\tau)$. Therefore, we decided to switch to learning a $\beta$-VAE (Higgins et al., 2016) style generative model with a continuous skill space, i.e., OPAL." }
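The posterior in equation 71 is a standard Bayes-rule computation over the $k$ discrete skills. Below is a minimal, illustration-only NumPy sketch using the offline-CARML factorization with toy Gaussian state models; $k$, $c$, and all parameters are assumptions of the example.

```python
# Illustrative-only sketch of the Bayes-rule posterior (71) for discrete skills,
# using the offline-CARML factorization p_phi(tau|z) = prod_t p_phi(s_t|z) with
# toy 2-D Gaussian state models; k, c, and all parameters are assumed.
import numpy as np

rng = np.random.default_rng(1)
k, c, d = 4, 10, 2                       # skills, sub-trajectory length, x-y pose dim
log_p_z = np.log(np.full(k, 1.0 / k))    # p_omega(z): uniform categorical prior
means = rng.normal(size=(k, d))          # per-skill state model p_phi(s|z) = N(means[z], I)

def log_posterior(tau):
    """log p_{phi,omega}(z | tau) for a state trajectory tau of shape (c, d)."""
    # log p_phi(tau|z) = sum_t log N(s_t; means[z], I), up to a z-independent constant
    log_lik = -0.5 * ((tau[None, :, :] - means[:, None, :]) ** 2).sum(axis=(1, 2))
    log_joint = log_p_z + log_lik                      # numerator of (71), in log space
    return log_joint - np.logaddexp.reduce(log_joint)  # normalize over z (denominator)

tau = means[2] + 0.1 * rng.normal(size=(c, d))         # trajectory generated by skill 2
print(np.exp(log_posterior(tau)).round(3))             # posterior concentrates on z = 2
```

Exactly this normalizing sum over $z$ is what forces the skill space to be discrete: with a continuous $z$ the denominator of (71) becomes an intractable integral, which is the limitation that motivates the switch to OPAL.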
, { "heading": "F.2 TRAINING DETAILS", "text": "For clustering by optimizing equation 72, both offline CARML and offline DADS consider only the global x-y pose of the ant and ignore the other dimensions of the state space. These methods fail to work when considering the full state space.\nOffline CARML $p_\phi(s|z)$ takes in the latent one-hot vector $z$ and passes it through a fully connected network with 2 hidden layers of size $H = 200$ and ReLU activations. It then projects the output of the hidden layers to the mean and log standard deviation of the reduced state $s$ (only the global x-y pose is considered) through linear layers.\nOffline DADS $p_\phi(s_t|s_{t-1}, z)$ has the same architecture as $p_\phi(s|z)$, but also takes in the reduced state from the previous timestep.\nPrimitive Policy The primitive policy $\pi_\theta(a|s, z)$ takes in the current state $s$ and the latent one-hot vector $z$ and passes them through a fully connected network with 2 hidden layers of size $H = 200$ and ReLU activations. It then projects the output of the hidden layers to the mean and log standard deviation of the action through linear layers.\nWe perform the clustering for 25 epochs with a fixed learning rate of 1e-3, the Adam optimizer (Kingma & Ba, 2014), and a batch size of 50, using Algorithm 2 from Jabri et al. (2019).\nTask Policy For the task policy $\pi_\psi(z|s)$, we used a fully connected network with 3 hidden layers of size 256 and ReLU activations. It then projects the output of the hidden layers to the logits (corresponding to the components of the discrete latent space) through linear layers.\nCQL Hyperparameters We used the standard hyperparameters for CQL($\mathcal{H}$) with a discrete action space, as mentioned in Kumar et al. (2020b)." } ]
2021
OPAL: OFFLINE PRIMITIVE DISCOVERY FOR ACCELERATING OFFLINE REINFORCEMENT LEARNING
SP:f2ba6d73cecdf611e6f58c93fb88e9f3dbe3bb24
[ "of the paper: The paper proposes a new formulation for the federated learning problem, in which each agent has its local model, and a penalty term is added to the objective function to control the deviation of these local models from their average. Next, the authors develop a randomized algorithm to tackle this problem and characterize its convergence under several assumptions, such as smoothness and strong convexity. They also discuss variants of their algorithm, which uses variance reduction techniques or considers users' partial participation." ]
We propose a new optimization formulation for training federated learning models. The standard formulation has the form of an empirical risk minimization problem constructed to find a single global model trained from the private data stored across all participating devices. In contrast, our formulation seeks an explicit trade-off between this traditional global model and the local models, which can be learned by each device from its own private data without any communication. Further, we develop several efficient variants of SGD (with and without partial participation and with and without variance reduction) for solving the new formulation and prove communication complexity guarantees. Notably, our methods are similar but not identical to federated averaging / local SGD, thus shedding some light on the essence of the elusive method. In particular, our methods do not perform full averaging steps and instead merely take steps towards averaging. We argue for the benefits of this new paradigm for federated learning.
[]
[ { "authors": [ "Zeyuan Allen-Zhu" ], "title": "Katyusha: The first direct acceleration of stochastic gradient methods", "venue": "In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing,", "year": 2017 }, { "authors": [ "Keith Bonawitz", "Vladimir Ivanov", "Ben Kreuter", "Antonio Marcedone", "H Brendan McMahan", "Sarvar Patel", "Daniel Ramage", "Aaron Segal", "Karn Seth" ], "title": "Practical secure aggregation for privacypreserving machine learning", "venue": "In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security,", "year": 2017 }, { "authors": [ "Chih-Chung Chang", "Chih-Jen Lin" ], "title": "LibSVM: A library for support vector machines", "venue": "ACM Transactions on Intelligent Systems and Technology (TIST),", "year": 2011 }, { "authors": [ "L. Corinzia", "J.M. Buhmann" ], "title": "Variational federated multi-task learning", "venue": "arXiv preprint arXiv:1906.06268,", "year": 2019 }, { "authors": [ "Jeffrey Dean", "Greg Corrado", "Rajat Monga", "Kai Chen", "Matthieu Devin", "Mark Mao", "Andrew Senior", "Paul Tucker", "Ke Yang", "Quoc V Le" ], "title": "Large scale distributed deep networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Aaron Defazio", "Francis Bach", "Simon Lacoste-Julien" ], "title": "SAGA: a fast incremental gradient method with support for non-strongly convex composite objectives", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "H. Eichner", "T. Koren", "H.B. McMahan", "N. Srebro", "K. Talwar" ], "title": "Semi-cyclic stochastic gradient descent", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Nidham Gazagnadou", "Robert M Gower", "Joseph Salmon" ], "title": "Optimal mini-batch and step sizes for SAGA", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Eduard Gorbunov", "Darina Dvinskikh", "Alexander Gasnikov" ], "title": "Optimal decentralized distributed algorithms for stochastic convex optimization", "venue": "arXiv preprint arXiv:1911.07363,", "year": 2019 }, { "authors": [ "Eduard Gorbunov", "Filip Hanzely", "Peter Richtárik" ], "title": "A unified theory of sgd: Variance reduction, sampling, quantization and coordinate descent", "venue": "In The 23rd International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Robert Mansel Gower", "Peter Richtárik", "Francis Bach" ], "title": "Stochastic quasi-gradient methods: variance reduction via Jacobian sketching", "venue": null, "year": 2018 }, { "authors": [ "Robert Mansel Gower", "Nicolas Loizou", "Xun Qian", "Alibek Sailanbayev", "Egor Shulgin", "Peter Richtárik" ], "title": "SGD: General analysis and improved rates", "venue": "In Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Filip Hanzely", "Peter Richtárik" ], "title": "One method to rule them all: Variance reduction for data, parameters and many new methods", "venue": "arXiv preprint arXiv:1905.11266,", "year": 2019 }, { "authors": [ "Thomas Hofmann", "Aurelien Lucchi", "Simon Lacoste-Julien", "Brian McWilliams" ], "title": "Variance reduced stochastic gradient descent with neighbors", "venue": "In 
Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Rie Johnson", "Tong Zhang" ], "title": "Accelerating stochastic gradient descent using predictive variance reduction", "venue": "In Advances in Neural Information Processing Systems", "year": 2013 }, { "authors": [ "Rie Johnson", "Tong Zhang" ], "title": "Accelerating stochastic gradient descent using predictive variance reduction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Peter Kairouz", "H. Brendan McMahan" ], "title": "Advances and open problems in federated learning", "venue": "arXiv preprint arXiv:1912.04977v1,", "year": 2019 }, { "authors": [ "Sai Praneeth Karimireddy", "Satyen Kale", "Mehryar Mohri", "Sashank J Reddi", "Sebastian U Stich", "Ananda Theertha Suresh" ], "title": "SCAFFOLD: stochastic controlled averaging for on-device federated learning", "venue": null, "year": 1910 }, { "authors": [ "Ahmed Khaled", "Konstantin Mishchenko", "Peter Richtárik" ], "title": "First analysis of local GD on heterogeneous data", "venue": "In NeurIPS Workshop on Federated Learning for Data Privacy and Confidentiality,", "year": 2019 }, { "authors": [ "Ahmed Khaled", "Konstantin Mishchenko", "Peter Richtárik" ], "title": "Tighter theory for local SGD on identical and heterogeneous data", "venue": "In The 23rd International Conference on Artificial Intelligence and Statistics (AISTATS", "year": 2020 }, { "authors": [ "M. Khodak", "M.-F. Balcan", "A. Talwalkar" ], "title": "Adaptive gradient-based meta-learning methods", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Jakub Konečný", "H. Brendan McMahan", "Daniel Ramage", "Peter Richtárik" ], "title": "Federated optimization: distributed machine learning for on-device intelligence", "venue": null, "year": 2016 }, { "authors": [ "Jakub Konečný", "H. 
Brendan McMahan", "Felix Yu", "Peter Richtárik", "Ananda Theertha Suresh", "Dave Bacon" ], "title": "Federated learning: strategies for improving communication efficiency", "venue": "In NIPS Private Multi-Party Machine Learning Workshop,", "year": 2016 }, { "authors": [ "Dmitry Kovalev", "Samuel Horváth", "Peter Richtárik" ], "title": "Don’t jump through hoops and remove those loops: SVRG and Katyusha are better without the outer loop", "venue": "In Proceedings of the 31st International Conference on Algorithmic Learning Theory,", "year": 2020 }, { "authors": [ "Guanghui Lan", "Soomin Lee", "Yi Zhou" ], "title": "Communication-efficient algorithms for decentralized and stochastic optimization", "venue": "Mathematical Programming,", "year": 2018 }, { "authors": [ "Tian Li", "Anit Kumar Sahu", "Ameet Talwalkar", "Virginia Smith" ], "title": "Federated learning: challenges, methods, and future directions", "venue": "arXiv preprint arXiv:1908.07873,", "year": 2019 }, { "authors": [ "Xianfeng Liang", "Shuheng Shen", "Jingchang Liu", "Zhen Pan", "Enhong Chen", "Yifei Cheng" ], "title": "Variance reduced local SGD with lower communication complexity", "venue": null, "year": 1912 }, { "authors": [ "Sulin Liu", "Sinno Jialin Pan", "Qirong Ho" ], "title": "Distributed multi-task relationship learning", "venue": "In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2017 }, { "authors": [ "Brendan McMahan", "Eider Moore", "Daniel Ramage", "Blaise Agüera y Arcas" ], "title": "Federated learning of deep networks using model averaging", "venue": "arXiv preprint arXiv:1602.05629,", "year": 2016 }, { "authors": [ "H Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Agüera y Arcas" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS),", "year": 2017 }, { "authors": [ "Konstantin Mishchenko", "Peter Richtárik" ], "title": "A stochastic decoupling method for minimizing the sum of smooth and non-smooth functions", "venue": null, "year": 1905 }, { "authors": [ "Yurii Nesterov" ], "title": "Introductory lectures on convex optimization: a basic course (Applied Optimization)", "venue": "Kluwer Academic Publishers,", "year": 2004 }, { "authors": [ "Daniel Peterson", "Pallika Kanani", "Virendra J Marathe" ], "title": "Private federated learning with domain adaptation", "venue": "arXiv preprint arXiv:1912.06733,", "year": 2019 }, { "authors": [ "Xun Qian", "Zheng Qu", "Peter Richtárik" ], "title": "L-SVRG and L-Katyusha with arbitrary sampling", "venue": "arXiv preprint arXiv:1906.01481,", "year": 2019 }, { "authors": [ "Xun Qian", "Zheng Qu", "Peter Richtárik" ], "title": "SAGA with arbitrary sampling", "venue": "In The 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Zheng Qu", "Peter Richtárik" ], "title": "Coordinate descent with arbitrary sampling II: Expected separable overapproximation", "venue": "Optimization Methods and Software,", "year": 2016 }, { "authors": [ "Sashank J. 
Reddi", "Jakub Konečný", "Peter Richtárik", "Barnabás Póczos", "Alex Smola" ], "title": "AIDE: fast and communication efficient distributed optimization", "venue": null, "year": 2016 }, { "authors": [ "Peter Richtárik", "Martin Takáč" ], "title": "Parallel coordinate descent methods for big data optimization", "venue": "Mathematical Programming,", "year": 2016 }, { "authors": [ "Virginia Smith", "Chao-Kai Chiang", "Maziar Sanjabi", "Ameet S Talwalkar" ], "title": "Federated multitask learning", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Sebastian U. Stich" ], "title": "Local SGD converges fast and communicates little", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jianyu Wang", "Hao Liang", "Gauri Joshi" ], "title": "Overlap local-sgd: An algorithmic approach to hide communication delays in distributed sgd", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Weiran Wang", "Jialei Wang", "Mladen Kolar", "Nathan Srebro" ], "title": "Distributed stochastic multi-task learning with graph regularization", "venue": "arXiv preprint arXiv:1802.03830,", "year": 2018 }, { "authors": [ "Blake Woodworth", "Kumar Kshitij Patel", "Nathan Srebro" ], "title": "Minibatch vs local sgd for heterogeneous distributed learning", "venue": "arXiv preprint arXiv:2006.04735,", "year": 2020 }, { "authors": [ "Blake Woodworth", "Kumar Kshitij Patel", "Sebastian U Stich", "Zhen Dai", "Brian Bullins", "H Brendan McMahan", "Ohad Shamir", "Nathan Srebro" ], "title": "Is local sgd better than minibatch sgd? arXiv preprint arXiv:2002.07839, 2020b", "venue": null, "year": 2020 }, { "authors": [ "Zhaoxian Wu", "Qing Ling", "Tianyi Chen", "Georgios B Giannakis" ], "title": "Federated variancereduced stochastic gradient descent with robustness to byzantine attacks", "venue": null, "year": 1912 }, { "authors": [ "Lin Xiao", "Tong Zhang" ], "title": "A proximal stochastic gradient method with progressive variance reduction", "venue": "SIAM Journal on Optimization,", "year": 2014 }, { "authors": [ "Sixin Zhang", "Anna E Choromanska", "Yann LeCun" ], "title": "Deep learning with elastic averaging sgd", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Yu Zhang", "Dit-Yan Yeung" ], "title": "A convex formulation for learning task relationships in multi-task learning", "venue": "Uncertainty in Artificial Intelligence,", "year": 2010 }, { "authors": [ "Peilin Zhao", "Tong Zhang" ], "title": "Stochastic optimization with importance sampling for regularized loss minimization", "venue": "In Proceedings of the 32nd International Conference on Machine Learning, PMLR,", "year": 2015 }, { "authors": [ "Y. Zhao", "M. Li", "L. Lai", "N. Suda", "D. Civin", "V. Chandra" ], "title": "Federated learning with non-iid data", "venue": "arXiv preprint arXiv:1806.00582,", "year": 2018 }, { "authors": [ "Kovalev" ], "title": "2020) (in such case, the algorithm is a particular case of GJS (Hanzely", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "With the proliferation of mobile phones, wearable devices, tablets, and smart home devices comes an increase in the volume of data captured and stored on them. This data contains a wealth of potentially useful information to the owners of these devices, and more so if appropriate machine learning models could be trained on the heterogeneous data stored across the network of such devices. The traditional approach involves moving the relevant data to a data center where centralized machine learning techniques can be efficiently applied (Dean et al., 2012; Reddi et al., 2016). However, this approach is not without issues. First, many device users are increasingly sensitive to privacy concerns and prefer their data to never leave their devices. Second, moving data from their place of origin to a centralized location is very inefficient in terms of energy and time." }, { "heading": "1.1 FEDERATED LEARNING", "text": "Federated learning (FL) (McMahan et al., 2016; Konečný et al., 2016b;a; McMahan et al., 2017) has emerged as an interdisciplinary field focused on addressing these issues by training machine learning models directly on edge devices. The currently prevalent paradigm (Li et al., 2019; Kairouz et al., 2019) casts supervised FL as an empirical risk minimization problem of the form\nmin x∈Rd\n1 n n∑ i=1 fi(x), (1)\nwhere n is the number of devices participating in training, x ∈ Rd encodes the d parameters of a global model (e.g., weights of a neural network) and\nfi(x) := Eξ∼Di [f(x, ξ)] represents the aggregate loss of model x on the local data represented by distribution Di stored on device i. One of the defining characteristics of FL is that the data distributions Di may possess very different properties across the devices. Hence, any potential FL method is explicitly required to be able to work under the heterogeneous data setting.\nThe most popular method for solving (1) in the context of FL is the FedAvg algorithm (McMahan et al., 2016). In its most simple form, when one does not employ partial participation, model compression, or stochastic approximation, FedAvg reduces to Local Gradient Descent (LGD) (Khaled\net al., 2019; 2020), which is an extension of GD performing more than a single gradient step on each device before aggregation. FedAvg has been shown to work well empirically, particularly for non-convex problems, but comes with poor convergence guarantees compared to the non-local counterparts when data are heterogeneous.\nSome issues with current approaches to FL\nThe first motivation for our research comes from the appreciation that data heterogeneity does not merely present challenges to the design of new provably efficient training methods for solving (1), but also inevitably raises questions about the utility of such a global solution to individual users. Indeed, a global model trained across all the data from all devices might be so removed from the typical data and usage patterns experienced by an individual user as to render it virtually useless. This issue has been observed before, and various approaches have been proposed to address it. For instance, the MOCHA (Smith et al., 2017) framework uses a multi-task learning approach to allow for personalization. Next, (Khodak et al., 2019) propose a generic online algorithm for gradientbased parameter-transfer meta-learning and demonstrate improved practical performance over FedAvg (McMahan et al., 2017). 
Approaches based on variational inference (Corinzia & Buhmann, 2019), cyclic patterns in practical FL data sampling (Eichner et al., 2019), transfer learning (Zhao et al., 2018), and explicit model mixing (Peterson et al., 2019) have also been proposed.\nThe second motivation for our work is the realization that even very simple variants of FedAvg, such as LGD, which should be easier to analyze, fail to provide theoretical improvements in communication complexity over their non-local cousins, in this case GD (Khaled et al., 2019; 2020).[1] This observation is at odds with the practical success of local methods in FL. This leads us to ask the question: if LGD does not theoretically improve upon GD as a solver for the traditional global problem (1), perhaps LGD should not be seen as a method for solving (1) at all. In such a case, what problem does LGD solve? A good answer to this question would shed light on the workings of LGD, and by analogy, on the role local steps play in more elaborate FL methods such as local SGD (Stich, 2020; Khaled et al., 2020) and FedAvg.\n[1] After our paper was completed, a lower bound on the performance of local SGD was presented that is worse than the known minibatch SGD guarantee (Woodworth et al., 2020a), confirming that local methods do not outperform their non-local counterparts in the heterogeneous setup. Similarly, the benefit of local methods in the non-heterogeneous scenario was questioned in (Woodworth et al., 2020b)." }, { "heading": "2 CONTRIBUTIONS", "text": "In our work we argue that the two motivations mentioned in the introduction point in the same direction, i.e., we show that a single solution can be devised addressing both problems at the same time. Our main contributions are:\nNew formulation of FL which seeks an implicit mixture of global and local models. We propose a new optimization formulation of FL. Instead of learning a single global model by solving (1), we propose to learn a mixture of the global model and the purely local models which can be trained by each device $i$ using its data $\mathcal{D}_i$ only. Our formulation (see Sec. 3) lifts the problem from $\mathbb{R}^d$ to $\mathbb{R}^{nd}$, allowing each device $i$ to learn a personalized model $x_i \in \mathbb{R}^d$. These personalized models are encouraged to not depart too much from their mean by the inclusion of a quadratic penalty $\psi$ multiplied by a parameter $\lambda \ge 0$. Admittedly, the idea of softly-enforced similarity of the local models was already introduced in the domain of multi-task relationship learning (Zhang & Yeung, 2010; Liu et al., 2017; Wang et al., 2018) and distributed optimization (Lan et al., 2018; Gorbunov et al., 2019; Zhang et al., 2015). The mixture objective we propose (see (2)) is a special case of their setup, which justifies our approach from the modeling perspective. Note that Zhang et al. (2015); Liu et al. (2017); Wang et al. (2018) already provide efficient algorithms to solve the mixture objective. However, none of the mentioned papers consider the FL application, nor do they shed light on the communication complexity of LGD algorithms, which we do in our work.\nTheoretical properties of the new formulation. We study the properties of the optimal solution of our formulation, thus developing an algorithm-free theory. When the penalty parameter is set to zero, then obviously each device is allowed to train its own model without any dependence on the data stored on other devices. Such purely local models are rarely useful. We prove that the optimal local models converge to the traditional global model characterized by (1) at the rate $O(1/\lambda)$. We also show that the total loss evaluated at the local models is never higher than the total loss evaluated at the global model (see Thm. 3.1). Moreover, we prove an insightful structural result for the optimal local models: the optimal model learned by device $i$ arises by subtracting the gradient of the loss function stored on that device, evaluated at the same point (i.e., at the local model), from the average of the optimal local models (see Thm. 3.2). As a byproduct, this theoretical result sheds new light on the key update step in the model agnostic meta-learning (MAML) method (Finn et al., 2017), which has a similar but subtly different structure.[2] The subtle difference is that the MAML update obtains the local model by subtracting the gradient evaluated at the global model. While MAML was originally proposed as a heuristic, we provide rigorous theoretical guarantees.\nLoopless LGD: non-uniform SGD applied to our formulation. We then propose a randomized gradient-based method, Loopless Local Gradient Descent (L2GD), for solving our new formulation (Algorithm 1). This method is, in fact, a non-standard application of SGD to our problem, and can be seen as an instance of SGD with non-uniform sampling applied to the problem of minimizing the sum of two convex functions (Zhao & Zhang, 2015; Gower et al., 2019): the average loss and the penalty. When the loss function is selected by the randomness in our SGD method, the stochastic gradient step can be interpreted as the execution of a single local GD step on each device. Since we set the probability of the loss being sampled to be high, this step is typically repeated multiple times, resulting in multiple local GD steps. In contrast to standard LGD, the number of local steps is not fixed, but random, and follows a geometric distribution. This mechanism is similar in spirit to how the recently proposed loopless variants of SVRG (Hofmann et al., 2015; Kovalev et al., 2020) work in comparison with the original SVRG (Johnson & Zhang, 2013a; Xiao & Zhang, 2014). Once the penalty is sampled by our method, the resultant SGD step can be interpreted as the execution of an aggregation step. In contrast with standard aggregation, which performs full averaging of the local models, our method merely takes a step towards averaging. However, the step is relatively large.\nConvergence theory. By adapting the general theory from (Gower et al., 2019) to our setting, we obtain theoretical convergence guarantees assuming that each $f_i$ is $L$-smooth and $\mu$-strongly convex (see Thm. 4.2). Interestingly, by optimizing the probability of sampling the penalty (we get $p^* = \frac{\lambda}{\lambda + L}$), which is an indirect way of fixing the expected number of local steps to $1 + \frac{L}{\lambda}$, we prove a $\frac{2\lambda}{\lambda + L} \frac{L}{\mu} \log \frac{1}{\varepsilon}$ bound on the expected number of communication rounds (see Cor. 4.3). We believe that this is remarkable in several ways. By choosing $\lambda$ small, we tilt our goal towards pure local models: the number of communication rounds tends to 0 as $\lambda \to 0$. If $\lambda \to \infty$, the solution our formulation converges to is the optimal global model, and L2GD obtains the communication bound $O\left(\frac{L}{\mu} \log \frac{1}{\varepsilon}\right)$, which matches the efficiency of GD.\nWhat problem do local methods solve?
Noting that L2GD is a (mildly nonstandard) version of LGD,[3] which is the key method most local methods for FL are based on, and noting that, as we show, L2GD solves our new formulation of FL, we offer a new and surprising interpretation of the role of local steps in FL. In particular, the role of local steps in gradient-type methods, such as GD, is not to reduce communication complexity, as is generally believed. Indeed, there is no theoretical result supporting this claim in the key heterogeneous data regime. Instead, their role is to steer the method towards finding a mixture of the traditional global and the purely local models. Given that the stepsize is fixed, the more local steps are taken, the more we bias the method towards the purely local models. Our new optimization formulation of FL formalizes this, as it defines the problem that local methods, in this case L2GD, solve. There is an added benefit here: the more we want our formulation to be biased towards purely local models (i.e., the smaller the penalty parameter $\lambda$ is), the more local steps L2GD takes, and the better the total communication complexity of L2GD becomes. Hence, despite a lot of research on this topic, our paper provides the first proof that a local method (e.g., L2GD) can be better than its non-local counterpart (e.g., GD) in terms of total communication complexity in the heterogeneous data setting. We are able to do this by noting that local methods are better seen as methods for solving the new FL formulation proposed here.\nGeneralizations: partial participation, local SGD and variance reduction. We further generalize and improve our method by allowing for (i) stochastic partial participation of devices in each communication round, (ii) subsampling on each device, which means we can perform local SGD steps instead of local GD steps, and (iii) a total variance reduction mechanism to tackle the variance coming from three sources: locality of the updates induced by non-uniform sampling (already present in L2GD), partial participation, and local subsampling. Due to its level of generality, this method, which we call L2SGD++, is presented in the Appendix only, alongside the associated complexity results. In the main body of this paper, we instead present a simplified version thereof, which we call L2SGD+ (Algorithm 3). The convergence theory for it is presented in Thm. 5.1 and Cor. 5.2.\nHeterogeneous data. All our methods and convergence results allow for fully heterogeneous data and do not depend on any assumptions on data similarity across the devices.\nSuperior empirical performance. We show through ample numerical experiments that our theoretical predictions can be observed in practice.\n[2] The connection between FL and multi-task meta-learning is discussed in (Kairouz et al., 2019), for example.\n[3] To be specific, L2GD is equivalent to Overlap LGD (Wang et al., 2020) with a random local loop size." }, { "heading": "3 NEW FORMULATION OF FL", "text": "We now introduce our new formulation for training supervised FL models:\n$$\min_{x_1, \ldots, x_n \in \mathbb{R}^d} \left\{ F(x) := f(x) + \lambda \psi(x) \right\}, \qquad f(x) := \frac{1}{n} \sum_{i=1}^n f_i(x_i), \qquad \psi(x) := \frac{1}{2n} \sum_{i=1}^n \| x_i - \bar{x} \|^2, \quad (2)$$\nwhere $\lambda \ge 0$ is a penalty parameter, $x_1, \ldots, x_n \in \mathbb{R}^d$ are local models, $x := (x_1, x_2, \ldots, x_n) \in \mathbb{R}^{nd}$, and $\bar{x} := \frac{1}{n} \sum_{i=1}^n x_i$ is the average of the local models.\nDue to the assumptions on $f_i$ we will make in Sec. 3.1, $F$ is strongly convex and hence (2) has a unique solution, which we denote $x(\lambda) := (x_1(\lambda), \ldots, x_n(\lambda)) \in \mathbb{R}^{nd}$. We further let $\bar{x}(\lambda) := \frac{1}{n} \sum_{i=1}^n x_i(\lambda)$.
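To make the lifted objective (2) concrete, here is a minimal, illustration-only NumPy sketch evaluating $f$, $\psi$, $F$ and the penalty gradient $(\nabla\psi(x))_i = \frac{1}{n}(x_i - \bar{x})$ (see Sec. 3.1 below). The quadratic local losses and all dimensions are assumptions of the example.

```python
# Illustrative-only sketch of the mixture objective (2); the quadratic local
# losses f_i and all dimensions are assumptions made for the example.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 5, 3, 0.1
A = rng.normal(size=(n, d, d)); A = A @ A.transpose(0, 2, 1) + np.eye(d)  # f_i Hessians
b = rng.normal(size=(n, d))

def f_i(i, xi):                      # assumed local loss: 0.5 x^T A_i x - b_i^T x
    return 0.5 * xi @ A[i] @ xi - b[i] @ xi

def F(x, lam):                       # x has shape (n, d); returns f, psi, F
    f_val = np.mean([f_i(i, x[i]) for i in range(n)])
    x_bar = x.mean(axis=0)
    psi_val = 0.5 / n * ((x - x_bar) ** 2).sum()
    return f_val, psi_val, f_val + lam * psi_val

def grad_psi(x):                     # (grad psi(x))_i = (x_i - x_bar) / n
    return (x - x.mean(axis=0)) / n

x = rng.normal(size=(n, d))
f_val, psi_val, F_val = F(x, lam)
print(f"f={f_val:.4f}  psi={psi_val:.4f}  F={F_val:.4f}")
# Check the identity psi(x) = (n/2) * ||grad psi(x)||^2 stated in Sec. 3.1:
print(np.isclose(psi_val, 0.5 * n * (grad_psi(x) ** 2).sum()))
```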
We now comment on the rationale behind the new formulation.\nLocal models ($\lambda = 0$). Note that for each $i$, $x_i(0)$ solves the local problem $\min_{x_i \in \mathbb{R}^d} f_i(x_i)$. That is, $x_i(0)$ is the local model based on the data $\mathcal{D}_i$ stored on device $i$ only. This model can be computed by device $i$ without any communication whatsoever. Typically, $\mathcal{D}_i$ is not rich enough for this local model to be useful. In order to learn a better model, one has to take into account the data from other clients as well. This, however, requires communication.\nMixed models ($\lambda \in (0, \infty)$). As $\lambda$ increases, the penalty $\lambda \psi(x)$ has an increasingly more substantial effect, and communication is needed to ensure that the models are not too dissimilar, as otherwise the penalty $\lambda \psi(x)$ would be too large.\nGlobal model ($\lambda = \infty$). Let us now look at the limit case $\lambda \to \infty$. Intuitively, this limit case should force the optimal local models to be mutually identical, while minimizing the loss $f$. In particular, this limit case will solve[4] $\min\{ f(x) : x_1, \ldots, x_n \in \mathbb{R}^d,\ x_1 = x_2 = \cdots = x_n \}$, which is equivalent to the global formulation (1). Because of this, let us define $x_i(\infty)$ for each $i$ to be the optimal global solution of (1), and let $x(\infty) := (x_1(\infty), \ldots, x_n(\infty))$.\n[4] If $\lambda = \infty$ and $x_1 = x_2 = \cdots = x_n$ does not hold, we have $F(x) = \infty$. Therefore, we can restrict ourselves to the set $x_1 = x_2 = \cdots = x_n$ without loss of generality." }, { "heading": "3.1 TECHNICAL PRELIMINARIES", "text": "We make the following assumption on the functions $f_i$:\nAssumption 3.1 For each $i$, the function $f_i : \mathbb{R}^d \to \mathbb{R}$ is $L$-smooth and $\mu$-strongly convex.\nFor $x_i, y_i \in \mathbb{R}^d$, $\langle x_i, y_i \rangle$ denotes the standard inner product and $\|x_i\| := \langle x_i, x_i \rangle^{1/2}$ is the standard Euclidean norm. For vectors $x = (x_1, \ldots, x_n) \in \mathbb{R}^{nd}$ and $y = (y_1, \ldots, y_n) \in \mathbb{R}^{nd}$, we define the standard inner product and norm via $\langle x, y \rangle := \sum_{i=1}^n \langle x_i, y_i \rangle$ and $\|x\|^2 := \sum_{i=1}^n \|x_i\|^2$. Note that the separable structure of $f$ implies that $(\nabla f(x))_i = \frac{1}{n} \nabla f_i(x_i)$, i.e., $\nabla f(x) = \frac{1}{n}(\nabla f_1(x_1), \nabla f_2(x_2), \ldots, \nabla f_n(x_n))$.\nNote that Assumption 3.1 implies that $f$ is $L_f$-smooth with $L_f := \frac{L}{n}$ and $\mu_f$-strongly convex with $\mu_f := \frac{\mu}{n}$. Clearly, $\psi$ is convex by construction. It can be shown that $\psi$ is $L_\psi$-smooth with $L_\psi = \frac{1}{n}$ (see Appendix). We can also easily see that $(\nabla \psi(x))_i = \frac{1}{n}(x_i - \bar{x})$ (see Appendix), which implies\n$$\psi(x) = \frac{n}{2} \sum_{i=1}^n \|(\nabla \psi(x))_i\|^2 = \frac{n}{2} \|\nabla \psi(x)\|^2.$$" }, { "heading": "3.2 CHARACTERIZATION OF OPTIMAL SOLUTIONS", "text": "Our first result describes the behavior of $f(x(\lambda))$ and $\psi(x(\lambda))$ as a function of $\lambda$.\nTheorem 3.1 The function $\lambda \to \psi(x(\lambda))$ is non-increasing, and for all $\lambda > 0$ we have\n$$\psi(x(\lambda)) \le \frac{f(x(\infty)) - f(x(0))}{\lambda}. \quad (3)$$\nMoreover, the function $\lambda \to f(x(\lambda))$ is non-decreasing, and for all $\lambda \ge 0$ we have\n$$f(x(\lambda)) \le f(x(\infty)). \quad (4)$$\nInequality (3) says that the penalty decreases to zero as $\lambda$ grows, and hence the optimal local models $x_i(\lambda)$ become increasingly similar as $\lambda$ grows. The second statement says that the loss $f(x(\lambda))$ increases with $\lambda$, but never exceeds the optimal global loss $f(x(\infty))$ of the standard FL formulation (1). We now characterize the optimal local models, which connects our model to the MAML framework (Finn et al., 2017), as mentioned in the introduction.\nTheorem 3.2 For each $\lambda > 0$ and $1 \le i \le n$, we have\n$$x_i(\lambda) = \bar{x}(\lambda) - \frac{1}{\lambda} \nabla f_i(x_i(\lambda)). \quad (5)$$\nFurther, we have $\sum_{i=1}^n \nabla f_i(x_i(\lambda)) = 0$ and $\psi(x(\lambda)) = \frac{n}{2\lambda^2} \|\nabla f(x(\lambda))\|^2$.\nThe optimal local models (5) are obtained from the average model by subtracting a multiple of the local gradient. Observe that the local gradients always sum up to zero at optimality. This is obviously true for $\lambda = \infty$, but it is a bit less obvious that this holds for any $\lambda > 0$.
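The fixed-point structure (5) is easy to verify numerically. Below is a minimal, illustration-only sketch with assumed quadratic local losses $f_i(x) = \frac{1}{2}\|x - c_i\|^2$ (so that plain gradient descent on $F$ converges to $x(\lambda)$ to high accuracy); it is not the paper's code.

```python
# Illustrative-only check of Theorem 3.2 on assumed quadratic local losses
# f_i(x) = 0.5||x - c_i||^2 (so grad f_i(x) = x - c_i).
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 4, 3, 2.0
centers = rng.normal(size=(n, d))            # minimizers of the local losses

def grad_F(x):                               # gradient of F(x) = f(x) + lam*psi(x)
    x_bar = x.mean(axis=0)
    return (x - centers) / n + lam * (x - x_bar) / n

x = np.zeros((n, d))
for _ in range(20000):                       # plain gradient descent to high accuracy
    x -= 0.5 * grad_F(x)

g = x - centers                              # grad f_i at the optimal local models
x_bar = x.mean(axis=0)
print("max error in (5), x_i = x_bar - grad f_i / lam:",
      np.abs(x - (x_bar - g / lam)).max())   # ~0
print("sum of local gradients:", np.abs(g.sum(axis=0)).max())  # ~0
```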
Next, we argue that the optimal local models converge to the traditional FL solution at the rate $O(1/\lambda)$.\nTheorem 3.3 Let $P(z) := \frac{1}{n} \sum_{i=1}^n f_i(z)$. Then $x(\infty)$ is the unique minimizer of $P$, and we have\n$$\|\nabla P(\bar{x}(\lambda))\|^2 \le \frac{2L^2 \left( f(x(\infty)) - f(x(0)) \right)}{\lambda}. \quad (6)$$" }, { "heading": "4 L2GD: LOOPLESS LOCAL GD", "text": "In this section we describe a new randomized method for solving the formulation (2). Our method is a non-uniform SGD for (2) seen as a 2-sum problem, sampling either $\nabla f$ or $\nabla \psi$ to estimate $\nabla F$. Letting $0 < p < 1$, we define a stochastic gradient of $F$ at $x \in \mathbb{R}^{nd}$ as follows:\n$$G(x) := \begin{cases} \frac{\nabla f(x)}{1-p} & \text{with probability } 1-p \\ \frac{\lambda \nabla \psi(x)}{p} & \text{with probability } p \end{cases}. \quad (7)$$\nClearly, $G(x)$ is an unbiased estimator of $\nabla F(x)$; indeed, $\mathbb{E}[G(x)] = (1-p)\frac{\nabla f(x)}{1-p} + p\frac{\lambda \nabla \psi(x)}{p} = \nabla F(x)$. This leads to the following method for minimizing $F$, which we call L2GD: $x^{k+1} = x^k - \alpha G(x^k)$. Plugging the formulas for $\nabla f(x)$ and $\nabla \psi(x)$ into (7), and writing the resulting method in a distributed manner, we arrive at Algorithm 1. In each iteration, a coin $\xi$ is tossed; it lands 1 with probability $p$ and 0 with probability $1-p$. If $\xi = 0$, all devices perform one local GD step (8), and if $\xi = 1$, the master shifts each local model towards the average via (9). As we shall see in Sec. 4.2, our theory limits the value of the stepsize $\alpha$, which has the effect that the ratio $\frac{\alpha\lambda}{np}$ cannot exceed $\frac{1}{2}$. Hence, (9) is a convex combination of $x_i^k$ and $\bar{x}^k$.\nNote that Algorithm 1 is only required to communicate when two consecutive coin tosses land on different values (see the detailed explanation in Sec. C.1 of the appendix). Consequently, the expected number of communication rounds in $k$ iterations of L2GD is $p(1-p)k$.\nRemark 4.1 Our algorithm statements do not take data privacy into consideration. While privacy is a very important aspect of FL, in this paper we tackle different FL challenges and thus ignore privacy issues. However, the proposed algorithms can be implemented in a private fashion as well, using the tricks that are used in the classical FL scenario (Bonawitz et al., 2017).\nAlgorithm 1 L2GD: Loopless Local Gradient Descent\nInput: $x_1^0 = \cdots = x_n^0 \in \mathbb{R}^d$, stepsize $\alpha$, probability $p$\nfor $k = 0, 1, \ldots$ do\n  $\xi = 1$ with probability $p$ and $0$ with probability $1-p$\n  if $\xi = 0$ then\n    All devices $i = 1, \ldots, n$ perform a local GD step: $x_i^{k+1} = x_i^k - \frac{\alpha}{n(1-p)} \nabla f_i(x_i^k)$ \quad (8)\n  else\n    Master computes the average $\bar{x}^k = \frac{1}{n} \sum_{i=1}^n x_i^k$\n    Master, for each $i$, computes a step towards aggregation: $x_i^{k+1} = \left(1 - \frac{\alpha\lambda}{np}\right) x_i^k + \frac{\alpha\lambda}{np} \bar{x}^k$ \quad (9)\n  end if\nend for" }, { "heading": "4.1 THE DYNAMICS OF LOCAL GD AND AVERAGING STEPS", "text": "Notice that the average of the local models does not change during an aggregation step. Indeed, $\bar{x}^{k+1}$ is equal to\n$$\frac{1}{n} \sum_{i=1}^n x_i^{k+1} \overset{(9)}{=} \frac{1}{n} \sum_{i=1}^n \left[ \left(1 - \frac{\alpha\lambda}{np}\right) x_i^k + \frac{\alpha\lambda}{np} \bar{x}^k \right] = \bar{x}^k.$$\nIf several averaging steps take place in a sequence, the point $a = \bar{x}^k$ in (9) remains unchanged, and each local model $x_i^k$ merely moves along the line joining the initial value of the local model at the start of the sequence and $a$, with each step pushing $x_i^k$ closer to the average $a$.\nIn summary, the more local GD steps are taken, the closer the local models get to the pure local models; and the more averaging steps are taken, the closer the local models get to their average value. The relative number of local GD vs. averaging steps is controlled by the parameter $p$: the expected number of consecutive local GD steps is $\frac{1}{p}$, and the expected number of consecutive aggregation steps is $\frac{1}{1-p}$.
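Algorithm 1 is compact enough to state in code. Below is a minimal, illustration-only single-process NumPy simulation of L2GD (the distributed communication pattern is abstracted away); the quadratic local losses and all constants are assumptions of the example.

```python
# Illustrative-only single-process simulation of Algorithm 1 (L2GD); quadratic
# local losses f_i(x) = 0.5||x - c_i||^2 and all constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 10, 5
centers = rng.normal(size=(n, d))            # grad f_i(x) = x - c_i, so L = mu = 1
L, lam = 1.0, 0.5
p = lam / (L + lam)                          # optimal probability p* from Cor. 4.3
alpha = n / (2 * max(L / (1 - p), lam / p))  # alpha <= 1/(2*Lcal), cf. Thm. 4.2

x = np.zeros((n, d))                         # x_1^0 = ... = x_n^0
for k in range(5000):
    if rng.random() < p:                     # xi = 1: step towards averaging, eq. (9)
        x_bar = x.mean(axis=0)
        step = alpha * lam / (n * p)         # equals exactly 1/2 when p = p*
        x = (1 - step) * x + step * x_bar
    else:                                    # xi = 0: local GD step on every device, eq. (8)
        x -= alpha / (n * (1 - p)) * (x - centers)

# At the solution of (2), x_i(lam) = x_bar(lam) - grad f_i(x_i(lam))/lam (Thm. 3.2).
# The residual is small but nonzero: plain L2GD reaches only a neighborhood (Thm. 4.2).
print("residual of (5):", np.abs(x - (x.mean(0) - (x - centers) / lam)).max())
```

Note that with $p = p^*$ the averaging coefficient $\frac{\alpha\lambda}{np}$ evaluates to exactly $\frac{1}{2}$, matching the half-averaging rule derived in Sec. 4.2.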
}, { "heading": "4.2 CONVERGENCE THEORY", "text": "We now present our convergence result for L2GD.\nTheorem 4.2 Let Assumption 3.1 hold. If α ≤ 12L , then E [∥∥xk − x(λ)∥∥2] ≤ (1− αµn )k ∥∥x0 − x(λ)∥∥2 + 2nασ2µ ,\nwhere L := 1n max { L 1−p , λ p } and σ2 := 1n2 ∑n i=1 ( 1 1−p‖∇fi(xi(λ))‖ 2 + λ 2 p ‖xi(λ)− x(λ)‖ 2 ) .\nLet us find the parameters p, α which lead to the fastest rate, to push the error within( O(ε) + 2nασ 2\nµ\n) -neighborhood of the optimum5, i.e., to achieve\nE [∥∥xk − x(λ)∥∥2] ≤ ε∥∥x0 − x(λ)∥∥2 + 2nασ2µ . (10)\nCorollary 4.3 The value p? = λL+λ minimizes both the number of iterations and the expected number of communications for achieving (10). In particular, the optimal number of iterations is 2L+λµ log 1 ε , and the optimal expected number of communications is 2λ λ+L L µ log 1 ε .\nIf we choose p = p?, then αλnp = 1 2 , and the aggregation rule (9) in Algorithm 1 becomes\nxk+1i = 1 2 ( xki + x̄ k )\n(11)\n5In Sec. 5 we propose a variance reduced algorithm which removes the ( 2nασ 2\nµ )-neighborhood from Thm. 4.2. In that setting, our goal will be to achieve E [∥∥xk − x(λ)∥∥2] ≤ ε∥∥x0 − x(λ)∥∥2.\nF (x0)−F (x∗) ≤ 10\n−5 as a function of p with\np∗ ≈ 0.09 (for L2SGD+). Logistic regression on a1a dataset with λ = 0.1.\nwhile the local GD step (8) becomes xk+1i = x k i − 12L∇fi(x k i ). Notice that while our method does not support full averaging as that is too unstable, (11) suggests that one should take a large step towards averaging. As λ get smaller, the solution to the optimization problem (2) will increasingly favour pure local models, i.e., xi(λ) → xi(0) := arg min fi for all i as λ → 0. Pure local models can be computed without any communication whatsoever and Cor. 4.3 confirms this intuition: the optimal number of communication round decreases to zero as λ → 0. On the other hand, as λ → ∞, the optimal number of communication rounds converges to 2Lµ log 1 ε , which recovers the performance of GD for finding the globally optimal model (see Fig. 1).\nIn summary, we recover the communication efficiency of GD for finding the globally optimal model as λ → ∞ (ignoring the ( 2nασ 2\nµ )-neighborhood). However, for other values of λ, the communication complexity of L2GD is better and decreases to 0 as λ → 0. Hence, our communication complexity result interpolates between the communication complexity of GD for finding the global model and the zero communication complexity for finding the pure local models." }, { "heading": "5 LOOPLESS LOCAL SGD WITH VARIANCE REDUCTION", "text": "As we have seen in Sec. 4.2, L2GD is a specific instance of SGD, thus only converges linearly to a neighborhood of the optimum. In this section, we resolve the mentioned issue by incorporating control variates to the stochastic gradient (Johnson & Zhang, 2013b; Defazio et al., 2014). We go further: we assume that each local objective has a finite-sum structure and propose an algorithm, L2SGD+, which takes local stochastic gradient steps, while maintaining (global) linear convergence rate. As a consequence, L2SGD+ is the first local SGD with linear convergence.6 For convenience, we present variance reduced local GD (i.e., no local subsampling) in the Appendix. Assumption 5.1 Assume that fi has a finite-sum structure: fi(xi) = 1m ∑m j=1 f ′ i,j(xi). Let f ′ i,j be convex, L′-smooth while fi is µ-strongly convex (for each 1 ≤ j ≤ m, 1 ≤ i ≤ n)." 
}, { "heading": "5.1 CONVERGENCE THEORY", "text": "We are now ready to present a convergence rate of L2SGD+ (the algorithm, along with the efficient implementation is presented in Appendix C.4). Theorem 5.1 Let Assumption 5.1 hold and chooseα = nmin {\n(1−p) 4L′+µm , p 4λ+µ\n} . Then the iteration\ncomplexity of Algorithm 3 is max {\n4L′+µm (1−p)µ , 4λ+µ pµ } log 1ε .\nNext, we find the value of p that yields both the best iteration and communication complexity.\n6We are aware that a linearly converging local SGD (with λ =∞) can be obtained as a particular instance of the decoupling method (Mishchenko & Richtárik, 2019). Other variance reduced local SGD algorithms (Liang et al., 2019; Karimireddy et al., 2019; Wu et al., 2019) do not achieve linear convergence.\nCorollary 5.2 Both communication and iteration complexity of L2SGD+ are minimized for p = 4λ+µ 4λ+4L′+(m+1)µ . The resulting iteration complexity is ( 4λµ + 4 L′ µ +m+ 1 ) log 1ε , while the com-\nmunication complexity is 4λ+µ4L′+4λ+(m+1)µ ( 4L ′ µ +m ) log 1ε . Note that with λ → ∞, the communication complexity of L2SGD+ tends to ( 4L ′ µ +m )\nlog 1ε , which is communication complexity of minibatch SAGA to find the globally optimal model (Hanzely & Richtárik, 2019). On the other hand, in the pure local setting (λ = 0), the communication complexity becomes log 1 – this is because the Lyapunov function involves a term that measures the distance of local models, which requires communication to be estimated.\nRemark 5.3 L2SGD+ is the simplest local SGD method with variance reduction. In the Appendix, we present L2SGD++ which allows for 1) an arbitrary number of data points per client and arbitrary local subsampling, 2) partial participation of clients, and 3) local SVRG-like updates of control variates (thus better memory). Lastly, L2SGD++ exploits the complex smoothness structure of the local objectives, resulting in tighter rates." }, { "heading": "6 EXPERIMENTS", "text": "In this section, we numerically verify the theoretical claims from this paper. We only present a single experiment here, all remaining ones along with the missing details about the setup are in the Appendix. In particular, the Appendix includes two more experiments. The first one studies how p (communication) influences the convergence of L2SGD+. The second experiment aims to examine the effect of parameter λ on the convergence rate of L2SGD+.\nWe consider logistic regression problem with LibSVM data (Chang & Lin, 2011). The data were normalized so that f ′i,j is 1-smooth for each j, while the local objectives are 10\n−4-strongly convex. In order to cover a range of possible scenarios, we have chosen a different number of clients for each dataset (see the Appendix). Lastly, the stepsize was always chosen according to Thm. 5.1.\nWe compare three different methods: L2SGD+, L2GD with local subsampling (L2SGD in the Appendix), and L2GD with local subsampling and control variates constructed for ψ only (L2SGD2 in the Appendix; similar to (Liang et al., 2019)). We expect L2SGD+ to converge to the global optimum linearly, while both L2SGD and L2SGD2 to converge to certain neighborhood. Each method is applied to two objectives constructed by a different split of the data among the devices. For the homogeneous split, we randomly reshuffle the data. For heterogeneous split, we first sort the data based on the labels and then construct the local objectives according to the current order.\nFig. 
Fig. 3 demonstrates the importance of variance reduction – it ensures fast global convergence of L2SGD+, while the neighborhood is slightly smaller for L2SGD2 compared to L2SGD. As predicted, data heterogeneity does not affect the convergence speed of the proposed methods." }, { "heading": "Appendix: Federated Learning of a Mixture of Global and Local Models", "text": "" }, { "heading": "CONTENTS", "text": "1 Introduction (1.1 Federated learning)\n2 Contributions\n3 New Formulation of FL (3.1 Technical preliminaries; 3.2 Characterization of optimal solutions)\n4 L2GD: Loopless Local GD (4.1 The dynamics of local GD and averaging steps; 4.2 Convergence theory)\n5 Loopless Local SGD with Variance Reduction (5.1 Convergence theory)\n6 Experiments\nA Possible Extensions\nB Experimental Setup and Further Experiments (B.1 Comparison of the methods; B.2 Effect of p; B.3 Effect of λ)\nC Remaining Algorithms (C.1 Understanding communication of L2GD; C.2 L2GD and full averaging; C.3 Local GD with variance reduction; C.4 L2SGD+: Algorithm and the efficient implementation; C.5 Local SGD with variance reduction – general method; C.6 Local stochastic algorithms)\nD Missing Lemmas and Proofs (D.1 Gradient and Hessian of ψ; D.2 Proof of Theorem 3.1; D.3 Proof of Theorem 3.2; D.4 Proof of Theorem 3.3; D.5 Proof of Theorem 4.2; D.6 Proof of Lemma D.2; D.7 Proof of Corollary 4.3; D.8 Proof of Corollary 5.2; D.9 Proof of Theorems 5.1, C.6, and C.7: D.9.1 GJS; D.9.2 Variance-reduced local SGD as a special case of GJS; D.9.3 Proof of Theorems C.6 and C.7; D.9.4 Proof of Theorem 5.1)" }, { "heading": "A POSSIBLE EXTENSIONS", "text": "Our analysis of L2GD can be extended to cover smooth convex and non-convex loss functions $f_i$ (we do not explore these directions). Further, our methods can be extended to a decentralized regime in which the devices correspond to the nodes of a connected network and communication is allowed along the edges of the graph only. This can be achieved by introducing an additional randomization over the penalty $\psi$. Further, our approach can be accelerated in the sense of Nesterov (Nesterov, 2004) by adapting a variant of Katyusha (Allen-Zhu, 2017; Qian et al., 2019a) to our setting, thus further reducing the number of communication rounds." }, { "heading": "B EXPERIMENTAL SETUP AND FURTHER EXPERIMENTS", "text": "In all experiments in this paper, we consider a simple binary classification model – logistic regression. In particular, suppose that device $i$ owns a data matrix $A_i \in \mathbb{R}^{m \times d}$ along with the corresponding labels $b_i \in \{-1, 1\}^m$. The local objective for client $i$ is then given by\n$$f_i(x) := \frac{1}{m} \sum_{j=1}^m f'_{i,j}(x) + \frac{\mu}{2}\|x\|^2, \quad \text{where} \quad f'_{i,j}(x) = \log\left(1 + \exp\left((A_i)_{j,:}\, x \cdot (b_i)_j\right)\right).$$\nThe rows of the data matrices were normalized to have length 4, so that each $f'_{i,j}$ is 1-smooth. At the same time, the local objective on each device is $10^{-4}$-strongly convex. The datasets are from LibSVM (Chang & Lin, 2011); a minimal code sketch of this local objective is given below, after Sec. B.1.\nIn each case, we consider the simplest locally stochastic algorithm: each dataset is evenly split among the clients, while the local stochastic method samples a single data point in each iteration.\nWe have chosen a different number of clients for each dataset, so as to cover different possible scenarios. See Table 1 for details (it also includes the sizes of the datasets). Lastly, the stepsize was always chosen according to Thm. 5.1." }, { "heading": "B.1 COMPARISON OF THE METHODS", "text": "In our first experiment, we verify two phenomena:\n• The effect of variance reduction on the convergence speed of local methods. We compare three different methods: local SGD with full variance reduction (Algorithm 3), shifted local SGD (Algorithm 7), and local SGD (Algorithm 6). Our theory predicts that the fully variance-reduced algorithm converges to the global optimum linearly, while both shifted local SGD and local SGD converge only to a neighborhood of the optimum. At the same time, the neighborhood should be smaller for shifted local SGD.\n• The claim that heterogeneity of the data does not influence the convergence rate. We consider two splits of the data: homogeneous and heterogeneous. For the homogeneous split, we first randomly reshuffle the data and then construct the local objectives according to the current order (i.e., the first client owns the first $m$ indices, etc.). For the heterogeneous split, we first sort the data based on the labels and then construct the local objectives accordingly (thus achieving worst-case heterogeneity). Note that the overall objective is different in the homogeneous and heterogeneous cases – we thus plot the relative suboptimality of the objective (i.e., $\frac{F(x^k) - F(x^*)}{F(x^0) - F(x^*)}$) to directly compare the convergence speeds.\nIn each experiment, we choose $p = 0.1$ and $\lambda = \frac{1}{9}$ – this choice means that $p$ is very close to optimal. The other parameters (i.e., the number of clients) are provided in Table 1.
4 presents the result.\nAs expected, Figure 4 clearly demonstrates the following:\n• Full variance reduction always converges to the global optima, methods with partial variance reduction only converge to a neighborhood of the optimum.\n• Partial variance reduction (i.e., shifting the local SGD) is better than not using control variates at all. Although the improvement in the performance is rather negligible.\n• Data heterogeneity does not affect the convergence speed of the proposed methods. Therefore, unlike standard local SGD, mixing the local and global models does not suffer the problems with heterogeneity.\nB.2 EFFECT OF p\nIn the second experiment, we study the effect of p on the convergence rate of variance reduced local SGD. Note that p immediately influences the number of communication rounds – on average, the clients take (p−1 − 1) local steps in between two consecutive rounds of communication (aggregation).\nIn Section 5, we argue that, it is optimal (in terms of the convergence rate) to choose p of order p? := λL′+λ . Figure 5 compares p = p\n? against other values of p and confirms its optimality (in terms of optimizing the convergence rate).\nWhile the slower convergence of Algorithm 3 with p < p? is expected (i.e., communicating more frequently yields a faster convergence), slower convergence for p > p? is rather surprising; in fact, it means that communicating less frequently yields faster convergence. This effect takes place due to the specific structure of problem (12); it would be lost when enforcing x1 = · · · = xn (corresponding to λ =∞).\nB.3 EFFECT OF λ\nIn this experiment we study how different values of λ influence the convergence rate of Algorithm 3, given that everything else (i.e., p) is fixed. Note that for each value of λwe get a different instance of problem (12); thus the optimal solution is different as well. Therefore, in order to make a fair comparison between convergence speeds, we plot the relative suboptimality (i.e., F (x\nk)−F (x?) F (x0)−F (x?) ) against\nthe data passes. Figure 6 presents the results. The complexity of Algorithm 3 is7 O ( L′\n(1−p)µ ) log 1ε as soon as λ < λ ? := Lp(1−p) ; otherwise the\ncomplexity isO ( λ pµ ) log 1ε . This perfectly consistent with what Figure 6 shows – the choice λ < λ ? resulted in comparable convergence speed than λ = λ?; while the choice λ > λ? yields noticeably worse rate than λ = λ?.\n7Given that µ is small." }, { "heading": "C REMAINING ALGORITHMS", "text": "" }, { "heading": "C.1 UNDERSTANDING COMMUNICATION OF L2GD", "text": "Example C.1 In order to better understand when communication takes place in Algorithm 1, consider the following possible sequence of coin tosses: 0, 0, 1, 0, 1, 1, 1, 0. The first two coin tosses lead to two local GD steps (8) on all devices. The third coin toss lands 1, at which point all local models xki are communicated to the master, averaged to form x̄\nk, and the step (9) towards averaging is taken. The fourth coin toss is 0, and at this point, the master communicates the updated local models back to the devices, which subsequently perform a single local GD step (8). Then come three consecutive coin tosses landing 1, which means that the local models are again communicated to the master, which performs three averaging steps (9). 
Finally, the eighth coin toss lands 0, which makes the master send the updated local models back to the devices, which subsequently perform a single local GD step.\nThis example illustrates that communication needs to take place whenever two consecutive coin tosses land a different value. If 0 is followed by a 1, all devices communicate to the master, and if 1 is followed by a 0, the master communicates back to the devices. It is standard to count each pair of communications, Device→Master and the subsequent Master→Device, as a single communication round.\nLemma C.2 The expected number of communication rounds in k iterations of L2GD is p(1− p)k." }, { "heading": "C.2 L2GD AND FULL AVERAGING", "text": "Is a setup such that conditions of Thm. 4.2 are satisfied and the aggregation update (9) is identical to full averaging? This is equivalent requiring 0 < p < 1 such that αλ = np. However, we have αλ ≤ λ2L ≤ np, which means that full averaging is not supported by our theory." }, { "heading": "C.3 LOCAL GD WITH VARIANCE REDUCTION", "text": "In this section, we present variance reduced local gradient descent with partial aggregation. In particular, the proposed algorithm (Algorithm 2) incorporates control variates to Algorithm 1. Therefore, the proposed method can be seen as a special case of Algorithm 3 withm = 1. We thus only present it for pedagogical purposes, as it might shed additional insights into our approach.\nIn particular, the update rule of proposed method will be xk+1 = xk − αgk where\ngk = { p−1(λ∇ψ(xk)− n−1Ψk) + n−1Jk + n−1Ψk with probability p (1− p)−1(∇f(xk)− n−1Jk) + n−1Jk + n−1Ψk with probability 1− p .\nfor some control variates vectors Jk,Ψk ∈ Rnd. A quick check gives E [ gk |xk ] = ∇f(xk) + λ∇ψ(xk) = ∇F (xk),\nthus the direction we are taking is unbiased regardless of the value of control variates Jk,Ψk. The goal is to make control variates Jk,Ψk correlated8 with n∇f(xk) and nλ∇ψ(xk). One possible solution to the problem is for Jk,Ψk to track most recently observed values of n∇f(·) and nλ∇ψ(·), which corresponds to the following update rule(\nΨk+1,Jk+1 ) =\n{( nλ∇ψ(xk),Jk ) with probability p(\nΨk, n∇f(xk) ) with probability 1− p .\nA specific, distributed implementation of the described method is presented as Algorithm 2. The only communication between the devices takes place when the average model x̄k is being computed (with probability p), which is analogous to standard local SGD. Therefore we aim to set p rather small.\nNote that Algorithm 2 is a particular special case of SAGA with importance sampling (Qian et al., 2019b); thus, we obtain convergence rate of the method for free. We state it as Thm. C.3.\n8Specifically we aim to have Corr [ Jk, n∇f(xk) ] → 1 and Corr [ n−1Ψk, λ∇ψ(xk) ] → 1 as xk → x?.\nAlgorithm 2 Variance reduced local gradient descent\nInput: x01 = · · · = x0n ∈ Rd, stepsize α, probability p J01 = · · · = J0n = Ψ01 = · · · = Ψ0n = 0 ∈ Rd for k = 0, 1, . . . do ξ = 1 with probability p and 0 with probability 1− p if ξ then\nAll Devices i = 1, . . . , n: Compute∇fi(xki ) xk+1i = x k i − α ( n−1(1− p)−1∇fi(xki )− n−1 p 1−pJ k i + n −1Ψki ) Set Jk+1i = ∇fi(xki ), Ψ k+1 i = Ψ k i\nelse Master computes the average x̄k = 1n ∑n i=1 x k i\nMaster does for all i = 1, . . . , n: Set xk+1i = x k i − α ( λ np (x k i − x̄k)− (p−1 − 1)n−1Ψki + n−1Jki ) Set Ψk+1i = λ(x k i − x̄k), J k+1 i = J k i\nend if end for\nTheorem C.3 Let Assumption 3.1 hold. Set α = nmin (\n(1−p) 4L+µ , p 4λ+µ\n) . 
Then, iteration complexity\nof Algorithm 2 is max ( 4L+µ µ(1−p) , 4λ+µ µp ) log 1ε .\nProof: Clearly,\nF (x) = f(x) + λψ(x) = 12 2f(x)︸ ︷︷ ︸ :=f(x) + 2λψ(x)︸ ︷︷ ︸ :=ψ(x) . Note that ψ is 2λn smooth and f is 2L n smooth. At the same time, F is µ n strongly convex. Using convergence theorem of SAGA with importance sampling from (Qian et al., 2019b; Gazagnadou et al., 2019), we get\nE [ F (xk) + α2 Υ(J k,Ψk) ] ≤ ( 1− αµn )k ( F (x0) + α2 Υ(J 0,Ψ0) ) ,\nwhere\nΥ(Jk,Ψk) := 4n2 n∑ i=1 (∥∥Ψki − λ(xi(λ)− x̄(λ))∥∥2 + ‖Jki −∇fi(xi(λ))‖2) and α = nmin ( (1−p) 4L+µ , p 4λ+µ ) , as desired.\nCorollary C.4 Iteration complexity of Algorithm 2 is minimized for p = 4λ+µ4λ+4L+2µ , which yields complexity 4 ( λ µ + L µ + 1 2 ) log 1ε . The communication complexity is minimized for any p ≤ 4λ+µ4λ+4L+2µ , in which case the total number of communication rounds to reach ε-solution is( 4λ µ + 1 ) log 1ε .\nAs a direct consequence of Corollary C.4 we see that the optimal choice of p that minimizes both communication and number of iterations to reach ε solution of problem (17) is p = 4λ+µ4λ+4L+2µ .\nRemark C.5 While both Algorithm 2 and Algorithm 3 are a special case of SAGA, the practical version of variance reduced local SGD (presented in Section C.5) is not. In particular, we wish to run the SVRG-like method locally in order to avoid storing the full gradient table.9 Therefore,\n9SAGA does not require storing a full gradient table for problems with linear models by memorizing the residuals. However, in full generality, SVRG-like methods are preferable.\nvariance reduced local SGD that will be proposed in Section C.5 is neither a special case of SAGA nor a special case of SVRG (or a variant of SVRG). However, it is still a special case of a more general algorithm from (Hanzely & Richtárik, 2019).\nAs mentioned, Algorithm 3 is a generalization of Algorithm 2 when the local subproblem is a finite sum. Note that Algorithm 2 constructs a control variates for both local subproblem and aggregation function ψ and constructs corresponding unbiased gradient estimator. In contrast, Algorithm 3 constructs extra control variates within the local subproblem in order to reduce the variance of gradient estimator coming from the local subsampling." }, { "heading": "C.4 L2SGD+: ALGORITHM AND THE EFFICIENT IMPLEMENTATION", "text": "Denote 1 ∈ Rm to be vector of ones. We are now ready to state L2SGD+ as Algorithm 3.\nAlgorithm 3 L2SGD+: Loopless Local SGD with Variance Reduction\nInput: x01 = · · · = x0n ∈ Rd, stepsize α, probability p J0i = 0 ∈ Rd×m,Ψ0i = 0 ∈ Rd (for i = 1, . . . , n) for k = 0, 1, . . . do ξ = 1 with probability p and 0 with probability 1− p if ξ = 0 then\nAll Devices i = 1, . . . , n: Sample j ∈ {1, . . . ,m} (uniformly at random) gki = 1 n(1−p) ( ∇f ′i,j(xki )− ( Jki ) :,j ) + Jki 1 nm + Ψki n\nxk+1i = x k i − αgki Set (Jk+1i ):,j = ∇f ′i,j(xki ), Ψ k+1 i = Ψ k i ,\n(Jk+1i ):,l = (J k+1 i ):,l for all l 6= jelse Master computes the average x̄k = 1n ∑n i=1 x k i\nMaster does for all i = 1, . . . , n: gki = λ np (x k i − x̄k)− p−1−1 n Ψ k i + 1 nmJ k i 1\nSet xk+1i = x k i − αgki Set Ψk+1i = λ(x k i − x̄k), J k+1 i = J k i\nend if end for\nL2SGD+ only communicates when a two consecutive coin tosses land a different value, thus, on average p(1 − p)k times per k iterations. However, L2SGD+ requires communication of control variates Ji1,Ψi as well – each communication round is thus three times more expensive. 
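To make the update rule of Algorithm 3 concrete, the following is a minimal NumPy sketch of a single L2SGD+ iteration. The gradient oracle grad_fij(i, j, x), the argument names, and the array layout are assumptions of this sketch, not part of the original method description.

```python
import numpy as np

def l2sgd_plus_step(x, J, Psi, grad_fij, alpha, lam, p, rng):
    """One iteration of L2SGD+ (Algorithm 3), sketched for n devices.

    x       : (n, d) array of local models x_i
    J       : (n, d, m) gradient tables J_i (one column per local data point)
    Psi     : (n, d) aggregation control variates Psi_i
    grad_fij: oracle returning grad f'_{i,j}(x) -- an assumed interface
    """
    n, d = x.shape
    m = J.shape[2]
    if rng.random() >= p:                      # local step, probability 1 - p
        for i in range(n):
            j = rng.integers(m)                # sample one local data point
            g_ij = grad_fij(i, j, x[i])
            g = (g_ij - J[i, :, j]) / (n * (1.0 - p)) \
                + J[i].sum(axis=1) / (n * m) + Psi[i] / n
            x[i] = x[i] - alpha * g
            J[i, :, j] = g_ij                  # replace one column of the gradient table
    else:                                      # aggregation step, probability p
        xbar = x.mean(axis=0)
        for i in range(n):
            g = lam / (n * p) * (x[i] - xbar) \
                - (1.0 / p - 1.0) / n * Psi[i] + J[i].sum(axis=1) / (n * m)
            Psi[i] = lam * (x[i] - xbar)       # control variate from the pre-update iterate
            x[i] = x[i] - alpha * g
    return x, J, Psi
```

Note that only the aggregation branch requires communication; the local branch touches purely device-local state, which is exactly the observation the communication-efficient implementation discussed next builds on.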
In the Appendix, we provide an implementation of L2SGD+ that does not require the communication of Ji1,Ψi.\nHere we present an efficient implementation of L2SGD+ as Algorithm 4 so that we do not have to communicate control variates. As a consequence, Algorithm 4 needs to communicate on average p(1 − p)k times per k iterations, while each communication consists of sending only local models to the master and back." }, { "heading": "C.5 LOCAL SGD WITH VARIANCE REDUCTION – GENERAL METHOD", "text": "In this section, we present a fully general variance reduced local SGD. We consider a more general instance of (2) where each local objective includes a possibly nonsmooth regularizer, which admits a cheap evaluation of proximal operator. In particular, the objective becomes\nAlgorithm 4 L2SGD+: Loopless Local SGD with Variance Reduction (communication-efficient implementation)\nInput: x01 = · · · = x0n = x̃ ∈ Rd, stepsize α, probability p Initialize control variates J0i = 0 ∈ Rd×m,Ψ0i = 0 ∈ Rd (for i = 1, . . . , n), initial coin toss ξ−1 = 0 for k = 0, 1, . . . do ξk = 1 with probability p and 0 with probability 1− p if ξk = 0 then All Devices i = 1, . . . , n:\nif ξk−1 = 1 then Receive xki , c from Master Reconstruct x̄k = x̄k−c using xki , x k−c i , c\nSet xki = x k i − cα 1nmJ k i 1, J k i = J k−c i , Ψ k i = λ(x k−c i − x̄k),\nend if Sample j ∈ {1, . . . ,m} (uniformly at random) gki = 1 n(1−p) ( ∇f ′i,j(xki )− ( Jki ) :,j ) + Jki 1 nm + Ψki n xk+1i = x k i − αgki Set (Jk+1i ):,j = ∇f ′i,j(xki ), Ψ k+1 i = Ψ k i ,\n(Jk+1i ):,l = (J k+1 i ):,l for all l 6= j\nelse Master does for all i = 1, . . . , n:\nif ξk−1 = 0 then Set c = 0 Receive xki from Device and set x̄ = 1 n ∑n i=1 x k i , x k i = x k i end if Set xk+1i = x k i − α ( λ np (x k i − x̄)− p−1−1 n λ(x̃− x̄)\n) Set x̃ = xki Set c = c+ 1\nend if end for\nmin x∈Rdn\n1 N n∑ i=1 mi∑ j=1 f ′i,j(xi) ︸ ︷︷ ︸ = N n fi(x)︸ ︷︷ ︸\n=f(x)\n+λ 12n n∑ i=1\n‖xi − x̄‖2︸ ︷︷ ︸ =ψ(x)\n︸ ︷︷ ︸ =F (x)\n+ n∑ i=1\nRi(xi)︸ ︷︷ ︸ :=R(x) , (12)\nwhere mi is the number of data points owned by client i and N = ∑n i=1mi.\nIn order to squeeze a faster convergence rate from minibatch samplings, we will assume that f ′i,j is smooth with respect to a matrix Mi,j (instead of scalar L′i,j = λmaxMi,j).\nAssumption C.1 Suppose that f ′i,j is Mi,j smooth (Mi,j ∈ Rd×d,Mi,j 0) and µ strongly convex for 1 ≤ j ≤ mi, 1 ≤ i ≤ n, i.e.\nf ′i,j(y)+ 〈 ∇f ′i,j(y), x− y 〉 ≤ f ′i,j(x) ≤ f ′i,j(y)+ 〈 ∇f ′i,j(y), x− y 〉 + 12 ‖y − x‖ 2 Mi,j\n, ∀x, y ∈ Rd. (13)\nFurthermore, assume that Ri is convex for 1 ≤ i ≤ n.\nOur method (Algotihm 5) allows for arbitrary aggregation probability (same as Algorithms 2, 3), arbitrary sampling of clients (to model the inactive clients) and arbitrary structure/sampling of the local objectives (i.e., arbitrary size of local datasets, arbitrary smoothness structure of each local\nobjective and arbitrary subsampling strategy of each client). Moreover, it allows for the SVRG-like update rule of local control variates Jk, which requires less storage given an efficient implementation.\nTo be specific, each device owns a distribution Di over subsets of mi. When the aggregation is not performed (with probability 1 − p), a subset of active devices S is selected (S follows arbitrary fixed distribution D). Each of the active clients (i ∈ S) samples a subset of local indices Si ∼ Di and observe the corresponding part of local Jacobian Gi(xk)(:,Si) (where Gi(x k) := [∇f ′i,1(xk),∇f ′i,2(xk), . . .∇f ′i,mi(x k)). 
When the aggregation is performed (with probability p) we evaluate x̄k and distribute it to each device; using which each device computes a corresponding component of λ∇ψ(xk). Those are the key components in constructing the unbiased gradient estimator (without control variates).\nIt remains to construct control variates and unbiased gradient estimator. If the aggregation is done, we just simply replace the last column of the gradient table. If the aggregation is not done, we have two options – either keep replacing the columns of the Jacobian table (in such case, we obtain a particular case of SAGA (Defazio et al., 2014)) or do LSVRG-like replacement (Hofmann et al., 2015; Kovalev et al., 2020) (in such case, the algorithm is a particular case of GJS (Hanzely & Richtárik, 2019), but is not a special case of neither SAGA nor LSVRG. Note that LSVRG-like replacement is preferrable in practice due to a better memory efficiency (one does not need to store the whole gradient table) for the models other than linear.\nIn order to keep the gradient estimate unbiased, it will be convenient to define vector pi ∈ Rmi such that for each j ∈ {1, . . . ,mi} we have P (j ∈ Si) = pi,j . Next, to give a tight rate for any given pair of smoothness structure and sampling strategy, we use a standard tool first proposed for the analysis of randomized coordinate descent methods (Richtárik & Takáč, 2016; Qu & Richtárik, 2016) called Expected Separable Overapproximation (ESO) assumption. ESO provides us with smoothness parameters of the objective which “account” for the given sampling strategy.\nAssumption C.2 Suppose that there is vi ∈ Rmi such for each client we have:\nE ∥∥∥∥∥∥ ∑ j∈Si M 1 2 i,jhi,j ∥∥∥∥∥∥ 2 ≤ mi∑ j=1 pi,jvi,j ‖hi,j‖2 , ∀ 1 ≤ i ≤ n, ∀hi,j ∈ Rmi , j ∈ {1, . . . ,mi}.\n(14)\nLastly, denote pi to be the probability that worker i is active and 1(mi) ∈ Rmi to be the vector of ones. The resulting algorithm is stated as Algorithm 5.\nNext, Theorems C.6 and C.7 present convergence rate of Algorithm 5 (SAGA and SVRG variant, respectively).\nTheorem C.6 Suppose that Assumptions C.1 and C.2 hold. Let\nα = min { min\nj∈{1,...,mi},1≤i≤n\nN(1−p)pi,jpi 4vj+N\nµ n\n, np4λ+µ\n} .\nThen the iteration complexity of Algorithm 5 (SAGA option) is\nmax { max\nj∈{1,...,mi},1≤i≤n\n( 4vj\nn N +µ\nµ(1−p)pi,jpi ) , 4λ+µpµ } log 1ε .\nTheorem C.7 Suppose that Assumptions C.1 and C.2 hold. Let\nα = min { min\nj∈{1,...,mi},1≤i≤n N(1−p)pi\n4 vj pi,j +N µ np −1 i\n, pn4λ+µ\n} .\nThen the iteration complexity of Algorithm 5 (LSVRG option) is\nmax { max\nj∈{1,...,mi},1≤i≤n\n( 4vj\nn Npi,j +µp−1i piµ(1−p) ) , 4λ+µpµ } log 1ε .\nAlgorithm 5 L2SGD++: Loopless Local SGD with Variance Reduction and Partial Participation\nInput: x01, . . . x0n ∈ Rd, # parallel units n, each of them owns mi data points (for 1 ≤ i ≤ n), distributions Dt over subsets of {1, . . . ,mi}, distribution D over subsets of {1, 2, . . . n}, aggregation probability p, stepsize α J0i = 0 ∈ Rd×mi ,Ψ0i = 0 ∈ Rd (for i = 1, . . . , n) for k = 0, 1, . . . do ξ = 1 with probability p and 0 with probability 1− p if ξ = 0 then\nSample S ∼ D All Devices i ∈ S:\nSample Si ∼ Di; Si ⊆ {1, . . . ,mi} (independently on each machine) Observe ∇f ′i,j(xki ) for all j ∈ Si gki = 1 N(1−p)pi (∑ j∈Si p −1 i,j ( ∇f ′i,j(xki )− ( Jki ) :,j )) + 1N J k i 1 (mi) + n−1Ψki xk+1i = proxαRi(x k i − αgki )\nFor all j ∈ {1, . . . ,mi} set Jk+1:,j = { ∇f ′i,j(xki ) if j ∈ Si Jk:,j otherwise\nif SAGA{ ∇f ′i,j(xki ); w. p. 
pi Jk:,j otherwise if L− SVRG\nSet Ψk+1i = Ψ k i All Devices i 6∈ S: gki = 1 N J k i 1\n(mi) + n−1Ψki xk+1i = proxαRi(x k i − αgki ) Set Jk+1i = J k i ,Ψ k+1 i = Ψ k i\nelse Master computes the average x̄k = 1n ∑n i=1 x k i\nMaster does for all i = 1, . . . , n: gki = p −1λ(xki − x̄k)− (p−1 − 1)n−1Ψki + 1N J k i 1 (mi) Set xk+1i = proxαRi ( xki − αgki ) Set Ψk+1i = λ(x k i − x̄k), J k+1 i = J k i\nend if end for\nRemark C.8 Algotihm 2 is a special case of Algorithm 3 which is in turn special case of Algorithm 5. Similarly, Theorem 2 is a special case of Theorem 5.1 which is again special case of Theorem C.6." }, { "heading": "C.6 LOCAL STOCHASTIC ALGORITHMS", "text": "In this section, we present two more algorithms – Local SGD with partial variance reduction (Algorithm 7) and Local SGD without variance reduction (Algorithm 6). While Algorithm 6 uses no control variates at all (thus is essentially Algorithm 1 where local gradient descent steps are replaced with local SGD steps), Algorithm 7 constructs control variates for ψ only, resulting in locally drifted SGD algorithm (with the constant drift between each consecutive rounds of communication). While we do not present the convergence rates of the methods here, we shall notice they can be easily obtained using the framework from (Gorbunov et al., 2020).\nAlgorithm 6 Loopless Local SGD (L2SGD)\nInput: x01 = · · · = x0n ∈ Rd, stepsize α, probability p for k = 0, 1, . . . do ξ = 1 with probability p and 0 with probability 1− p if ξ = 0 then\nAll Devices i = 1, . . . , n: Sample j ∈ {1, . . . ,m} (uniformly at random) gki = 1 n(1−p) ( ∇f ′i,j(xki ) ) xk+1i = x k i − αgki\nelse Master computes the average x̄k = 1n ∑n i=1 x k i\nMaster does for all i = 1, . . . , n: gki = λ np (x k i − x̄k)\nSet xk+1i = x k i − αgki\nend if end for\nAlgorithm 7 Loopless Local SGD with partial variance reduction (L2SGD2)\nInput: x01 = · · · = x0n ∈ Rd, stepsize α, probability p Ψ0i = 0 ∈ Rd (for i = 1, . . . , n) for k = 0, 1, . . . do ξ = 1 with probability p and 0 with probability 1− p if ξ = 0 then\nAll Devices i = 1, . . . , n: Sample j ∈ {1, . . . ,m} (uniformly at random) gki = 1 n(1−p) ( ∇f ′i,j(xki ) ) + 1nΨ k i\nxk+1i = x k i − αgki Set Ψk+1i = Ψ k i\nelse Master computes the average x̄k = 1n ∑n i=1 x k i\nMaster does for all i = 1, . . . , n: gki = λ np (x k i − x̄k)− p−1−1 n Ψ k i\nSet xk+1i = x k i − αgki Set Ψk+1i = λ(x k i − x̄k)\nend if end for" }, { "heading": "D MISSING LEMMAS AND PROOFS", "text": "D.1 GRADIENT AND HESSIAN OF ψ\nLemma D.1 Let I be the d× d identity matrix and In be n× n identity matrix. Then, we have\n∇2ψ(x) = 1n ( In − 1nee >)⊗ I and ∇ψ(x) = 1n x− x̄ ... x̄ x̄ x̄ ... x̄ .\nFurthermore, Lψ = 1n ." }, { "heading": "Proof:", "text": "Let O the d× d zero matrix and let Qi := [O, . . . ,O︸ ︷︷ ︸\ni−1 , I,O, . . . ,O︸ ︷︷ ︸ n−i ] ∈ Rd×dn\nand Q := [I, . . . , I] ∈ Rd×dn. Note that xi = Qix, and x̄ = 1nQx. So,\nψ(x) = 12n n∑ i=1 ∥∥Qix− 1nQx∥∥2 = 12n n∑ i=1 ∥∥(Qi − 1nQ)x∥∥2 . The Hessian of ψ is\n∇2ψ(x) = 1n n∑ i=1 ( Qi − 1nQ )> ( Qi − 1nQ ) = 1n\nn∑ i=1 ( Q>i Qi − 1nQ > i Q− 1nQ >Qi + 1 n2 Q >Q )\n= 1n n∑ i=1 Q>i Qi − 1n n∑ i=1 1 nQ > i Q− 1n n∑ i=1 1 nQ >Qi + 1 n n∑ i=1 1 n2 Q >Q\n= 1n n∑ i=1 Q>i Qi − 1n2 Q >Q\nand by plugging in for Q and Qi, we get\n∇2ψ(x) = 1n ( 1− 1n ) I − 1nI − 1 nI · · · − 1 nI − 1nI ( 1− 1n ) I − 1nI · · · − 1 nI − 1nI − 1 nI ( 1− 1n ) I · · · − 1nI ... ... ... ...\n− 1nI − 1 nI − 1 nI · · · ( 1− 1n ) I\n\n= 1n ( 1− 1n ) − 1n − 1 n · · · − 1 n − 1n ( 1− 1n ) − 1n · · · − 1 n − 1n − 1 n ( 1− 1n ) · · · − 1n ... 
... ... ...\n− 1n − 1 n − 1 n · · · ( 1− 1n ) ⊗ I = 1n ( In − 1nee\n>)⊗ I. Notice that In − 1nee\n> is a circulant matrix, with eigenvalues 1 (multiplicity n − 1) and 0 (multiplicity 1). Since the eigenvalues of a Kronecker product of two matrices are the products of pairs of eigenvalues of the these matrices, we have\nλmax(∇2ψ(x)) = λmax ( 1 n ( In − 1nee >)⊗ I) = 1nλmax (In − 1nee>) = 1n .\nSo, Lψ = 1n .\nThe gradient of ψ is given by\n∇ψ(x) = 1n n∑ i=1 ( Qi − 1nQ )> ( Qi − 1nQ ) x\n= 1n n∑ i=1 ( Q>i Qi − 1nQ > i Q− 1nQ >Qi + 1 n2 Q >Q ) x\n= 1n n∑ i=1 0 ... 0 xi 0 ... 0 − 0 ... 0 x̄ 0 ... 0 − xi/n ... xi/n xi/n xi/n ... xi/n + x̄/n ... x̄/n x̄/n x̄/n ... x̄/n \n= 1n n∑ i=1 0 ... 0 xi 0 ... 0 − n∑ i=1 0 ... 0 x̄ 0 ... 0 − n∑ i=1 xi/n ... xi/n xi/n xi/n ... xi/n + n∑ i=1 x̄/n ... x̄/n x̄/n x̄/n ... x̄/n \n= 1n x− x̄ ... x̄ x̄ x̄ ... x̄ − x̄ ... x̄ x̄ x̄ ... x̄ + x̄ ... x̄ x̄ x̄ ... x̄ \n= 1n x− x̄ ... x̄ x̄ x̄ ... x̄ ." }, { "heading": "D.2 PROOF OF THEOREM 3.1", "text": "For any λ, θ ≥ 0 we have f(x(λ)) + λψ(x(λ)) ≤ f(x(θ)) + λψ(x(θ)) (15) f(x(θ)) + θψ(x(θ)) ≤ f(x(λ)) + θψ(x(λ)). (16)\nBy adding inequalities (15) and (16), we get\n(θ − λ)(ψ(x(λ))− ψ(x(θ))) ≥ 0, which means that ψ(x(λ)) is decreasing in λ. Assume λ ≥ θ. From the (16) we get\nf(x(λ)) ≥ f(x(θ)) + θ(ψ(x(θ))− ψ(x(λ))) ≥ f(x(θ)), where the last inequality follows since θ ≥ 0 and since ψ(x(θ)) ≥ ψ(x(λ)). So, f(x(λ)) is increasing.\nNotice that since ψ is a non-negative function and since x(λ) minimizes F and ψ(x(∞)) = 0, we have\nf(x(0)) ≤ f(x(λ)) ≤ f(x(λ)) + λψ(x(λ)) ≤ f(x(∞)), which implies (3) and (4)." }, { "heading": "D.3 PROOF OF THEOREM 3.2", "text": "The equation∇F (x(λ)) = 0 can be equivalently written as\n∇fi(xi(λ)) + λ(xi(λ)− x(λ)) = 0, i = 1, 2, . . . , n,\nwhich is identical to (5). Averaging these identities over i, we get\nx(λ) = x(λ)− 1λ 1 n n∑ i=1 ∇fi(xi(λ)),\nwhich implies n∑ i=1 ∇fi(xi(λ)) = 0.\nFurther, we have\nψ(x(λ)) = 12n n∑ i=1 ‖xi(λ)− x(λ)‖2 = 12nλ2 n∑ i=1 ‖∇fi(xi(λ))‖2 = 12λ2 ‖∇f(x(λ))‖ 2 ,\nas desired." }, { "heading": "D.4 PROOF OF THEOREM 3.3", "text": "First, observe that\n||∇P (x̄(λ))||2 = ∥∥∥∥∥ 1n∑ i ∇fi(x̄(λ)) ∥∥∥∥∥ 2 = ∥∥∥∥∥ 1n∑ i ∇fi(x̄(λ))− 1 n ∑ i ∇fi(xi(λ)) ∥∥∥∥∥ 2 ,\nwhere the second identity is due to Theorem 3.2 which says that 1n ∑ i∇fi(xi(λ)) = 0. By applying Jensen’s inequality and Lipschitz continuity of functions fi, we get\n||∇P (x̄(λ))||2 ≤ 1 n ∑ i ||∇fi(x̄(λ))−∇fi(xi(λ))||2 ≤ L2 n ∑ i ||x̄(λ)− xi(λ)||2 = 2L2ψ(x(λ)).\nIt remains to apply (3) and notice that P is strongly convex and thus x(∞) is indeed the unique minimizer." }, { "heading": "D.5 PROOF OF THEOREM 4.2", "text": "We first show that our gradient estimator G(x) satisfies the expected smoothness property (Gower et al., 2018; 2019). Lemma D.2 LetL := 1n max { L 1−p , λ p } and σ2 := 1n2 ∑n i=1 ( 1 1−p‖∇fi(xi(λ))‖ 2 + λ 2 p ‖xi(λ)− x(λ)‖ 2 ) .\nThen for all x ∈ Rd we have the inequalities E [ ‖G(x)−G(x(λ))‖2 ] ≤ 2L (F (x)− F (x(λ))) and E [ ‖G(x)‖2 ] ≤ 4L(F (x)− F (x(λ))) + 2σ2.\nNext, Theorem 4.2 from Lemma D.2 by applying Theorem 3.1 from (Gower et al., 2019)." 
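As a sanity check of the estimator properties used above, the following sketch numerically verifies the unbiasedness of the two-branch estimator G(x) from Section 4 and the expected smoothness bound of Lemma D.2 on a toy quadratic instance. The closed form x_i(λ) = (c_i + λ c̄)/(1 + λ) is derived from the optimality conditions (5) for the specific choice f_i(x_i) = ½‖x_i − c_i‖², and all names here are choices of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam, p = 5, 3, 2.0, 0.4
C = rng.normal(size=(n, d))                    # f_i(x_i) = 0.5*||x_i - c_i||^2, so L = mu = 1

grad_f   = lambda X: (X - C) / n               # gradient of f(x) = (1/n) sum_i f_i(x_i)
grad_psi = lambda X: (X - X.mean(axis=0)) / n  # gradient of psi, see Lemma D.1

def F(X):
    return ((X - C)**2).sum() / (2*n) + lam * ((X - X.mean(axis=0))**2).sum() / (2*n)

# optimality conditions (5) give x_i(lam) = (c_i + lam*cbar) / (1 + lam) for this f_i
X_star = (C + lam * C.mean(axis=0)) / (1.0 + lam)
X = rng.normal(size=(n, d))                    # arbitrary query point

def G(X):                                      # two-branch estimator G(x) from Section 4
    if rng.random() < p:
        return lam * grad_psi(X) / p
    return grad_f(X) / (1.0 - p)

# unbiasedness: Monte Carlo mean of G(x) approaches grad F(x)
mc = sum(G(X) for _ in range(100000)) / 100000
print(np.abs(mc - (grad_f(X) + lam * grad_psi(X))).max())   # small; vanishes with more samples

# expected smoothness bound of Lemma D.2 (here L = 1)
L_exp = max(1.0 / (1.0 - p), lam / p) / n
second_moment = ((1 - p) * ((grad_f(X) - grad_f(X_star)) / (1 - p))**2
                 + p * (lam * (grad_psi(X) - grad_psi(X_star)) / p)**2).sum()
print(second_moment <= 2 * L_exp * (F(X) - F(X_star)))      # True: the bound holds
```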
}, { "heading": "D.6 PROOF OF LEMMA D.2", "text": "We first have E [ ‖G(x)−G(x(λ))‖2 ] = (1− p) ∥∥∥∇f(x)1−p − ∇f(x(λ))1−p ∥∥∥2 + p ∥∥∥λ∇ψ(x)p − λ∇ψ(x(λ))p ∥∥∥2 = 11−p ‖∇f(x)−∇f(x(λ))‖ 2 + λ 2 p ‖∇ψ(x)−∇ψ(x(λ))‖ 2\n≤ 2Lf1−pDf (x, x(λ)) + 2λ2Lψ p Dψ(x, x(λ))\n= 2Ln(1−p)Df (x, x(λ)) + 2λ2 np Dψ(x, x(λ)).\nSince Df + λDψ = DF and ∇F (x(λ)) = 0, we can continue:\nE [ ‖G(x)−G(x(λ))‖2 ] ≤ 2n max { L 1−p , λ p } DF (x, x(λ))\n= 2n max { L 1−p , λ p } (F (x)− F (x(λ))) .\nNext, note that\nσ2 = 1n2 n∑ i=1 ( 1 1−p‖∇fi(xi(λ))‖ 2 + λ 2 p ‖xi(λ)− x(λ)‖ 2 )\n= 11−p ‖∇f(x(λ))‖ 2 + λ 2 p ‖∇ψ(x(λ))‖ 2 = (1− p) ∥∥∥∇f(x(λ))1−p ∥∥∥2 + p ∥∥∥λ∇ψ(x(λ))p ∥∥∥2\n= E [ ‖G(x(λ))‖2 ] .\nTherefore, we have E [ ‖G(x)‖2 ] ≤ E [ ‖G(x)−G(x(λ))‖2 ] + 2E [ ‖G(x(λ))‖2 ] Lemma D.2+(17)\n≤ 4L(F (x)− F (x(λ))) + 2σ2,\nas desired." }, { "heading": "D.7 PROOF OF COROLLARY 4.3", "text": "Firstly, to minimize the total number of iterations, it suffices to minimize L which is achieved with p? = λL+λ . Let us look at the communication. Fix ε > 0, choose α = 1 2L and let k = 2nL µ log 1 ε , so that ( 1− µ2nL\n)k ≤ ε. The expected number of communications to achieve this goal is equal to\nCommp := p(1− p)k\n= p(1− p) 2 max\n{ L\n1−p , λ p } µ log 1 ε\n= 2 max{pL,(1−p)λ}µ log 1 ε .\nThe quantity Commp is minimized by choosing any p such that pL = (1−p)λ, i.e., for p = λλ+L = p?, as desired. The optimal expected number of communications is therefore equal to\nCommp? = 2λ λ+L L µ log 1 ε ." }, { "heading": "D.8 PROOF OF COROLLARY 5.2", "text": "Firstly, to minimize the total number of iterations, it suffices to solve min max {\n4L′+µm (1−p)µ , 4λ+µ pµ\n} ,\nwhich is achieved with p = p? = 4λ+µ4L′+4λ+(m+1)µ .\nThe expected number of communications to reach ε-solution is Commp = p(1− p) max { 4L′+µm (1−p)µ , 4λ+µ pµ } log 1ε\n= max{p(4L′+µm),(1−p)(4λ+µ)}\nµ log 1 ε .\nMinimizing the above in p yield p = p? = 4λ+µ4L′+4λ+(m+1)µ , as desired. The optimal expected number of communications is therefore equal to\nCommp? = 4λ+µ\n4L′+4λ+(m+1)µ\n( 4L ′ µ +m ) log 1ε ." }, { "heading": "D.9 PROOF OF THEOREMS 5.1, C.6, AND C.7", "text": "Note first that Algorithm 3 is a special case of Algorithm 5, and Theorem 5.1 immediately follows from Theorem C.6. Therefore it suffices to show Theorems C.6, and C.7. In order to do so, we will cast Algorithm 5 as a special case of GJS from (Hanzely & Richtárik, 2019). As a consequence, Theorem C.6 will be a special cases of Theorem 5.2 from (Hanzely & Richtárik, 2019).\nD.9.1 GJS\nIn this section, we quickly summarize results from (Hanzely & Richtárik, 2019), which we cast to sho convergence rate of Algorithm 3. GJS (Hanzely & Richtárik, 2019) is a method to solve regularized empirical risk minimization objective, i.e.,\nmin x∈Rd\n1 n n∑ j=1 fj(x) +R(x). (17)\nDefining G(x) := [∇f1(x), . . . ,∇fn(x)], we observe SG(x),UG(x) every iteration where S is random linear projection operator and U is random linear operator which is identity on expectation. Based on this random gradient information, GJS (Algorithm 8) constructs variance reduced gradient estimator g and takes a proximal step in that direction.\nAlgorithm 8 Generalized JacSketch (GJS) (Hanzely & Richtárik, 2019) 1: Parameters: Stepsize α > 0, random projector S and unbiased sketch U 2: Initialization: Choose solution estimate x0 ∈ Rd and Jacobian estimate J0 ∈ Rd×n 3: for k = 0, 1, . . . 
do 4: Sample realizations of S and U , and perform sketches SG(xk) and UG(xk) 5: Jk+1 = Jk − S(Jk −G(xk)) update the Jacobian estimate 6: gk = 1nJ ke+ 1nU ( G(xk)− Jk ) e construct the gradient estimator\n7: xk+1 = proxαR(x k − αgk) perform the proximal SGD step 8: end for\nNext we quickly summarize theory of GJS.\nAssumption D.1 Problem (17) has a unique minimizer x?, and f is µ-quasi strongly convex, i.e.,\nf(x?) ≥ f(y) + 〈∇f(y), x? − y〉+ µ2 ‖y − x ?‖2 , ∀y ∈ Rd, (18)\nFunctions fj are convex and Mj-smooth for some Mj 0, i.e.,\nfj(y)+〈∇fj(y), x− y〉 ≤ fj(x) ≤ fj(y)+〈∇fj(y), x− y〉+ 12 ‖y − x‖ 2 Mj , ∀x, y ∈ Rd. (19)\nTheorem D.3 (Slight simplification of Theorem 5.2 from (Hanzely & Richtárik, 2019)) Let Assumption D.1 hold. DefineM(X) := [M1X:,1, . . . ,MnX:,n] Let B be any linear operator commuting with S, and assumeM† 1 2 commutes with S. Define the Lyapunov function\nΨk := ∥∥xk − x?∥∥2 + α ∥∥∥∥BM† 12 (Jk −G(x?))∥∥∥∥2 , (20)\nwhere {xk} and {Jk} are the random iterates produced by Algorithm 8 with stepsizeα > 0. Suppose that α and B are chosen so that\n2α n2 E\n[ ‖UXe‖2 ] + ∥∥∥∥(I − E [S]) 12 BM† 12 X∥∥∥∥2 ≤ (1− αµ)∥∥∥∥BM† 12 X∥∥∥∥2 (21) and\n2α n2 E\n[ ‖UXe‖2 ] + ∥∥∥∥(E [S]) 12 BM† 12 X∥∥∥∥2 ≤ 1n ∥∥∥∥M† 12 X∥∥∥∥2 . (22) for all X ∈ Rd×n. Then for all k ≥ 0, we have E [ Ψk ] ≤ (1− αµ)k Ψ0.\nD.9.2 VARIANCE REDUCED LOCAL SGD AS SPECIAL CASE OF GJS Let Ω(i, j) := j+ ∑i−1 l=1 mi In order to case problem (12) as a special case of 17, denote n := N+1, fΩ(i,j)(x) := N+1 N f ′ i,j(xi) and fn := (N + 1)ψ. Therefore the objective (12) becomes\nmin x∈RNd\nΥ(x) := 1n n∑ j=1 f j(x) +R(x). (23)\nLet v ∈ Rn−1 be such that vΩ(i,j) = N+1N vi,j and as a consequence of (14) we have\nE ∥∥∥∥∥∥ ∑ j∈Si M 1 2 i,jhi,j ∥∥∥∥∥∥ 2 ≤ mi∑ j=1 pi,jvΩ(i,j) ‖hi,j‖ 2 , ∀ 1 ≤ i ≤ n, ∀hi,j ∈ Rd, j ∈ {1, . . . ,mi}.\n(24) At the same time, Υ is µ := µn strongly convex." }, { "heading": "D.9.3 PROOF OF THEOREM C.6 AND THEOREM C.7", "text": "Let e ∈ Rd be a vector of ones and pi ∈ RN is such that pij = pi,j if j ∈ {1, . . . ,mi}, otherwise pij = 0. Given the notation, random operator U is chosen as\nUX = (1− p)−1 ∑n i=1 ( p−1i e (( pi )−1)>) ◦ (X:mi (∑j∈Si ejej>)) w.p. (1− p)\np−1X:,n w.p. p\nWe next give two options on how to update Jacobian – first one is SAGA-like, second one is SVRG like.\nSAGA-like: (SX):,mi =\n{ X:,Si = X:mi (∑ j∈Si ejej > ) , w.p. (1− p)pi,\n0 w.p. (1− p)(1− pi) + p (SX):,n = {\nX:,n w.p. p 0 w.p. 1− p\nSVRG-like: (SX):,mi = X:mibi; bi = { 1 w.p. pi 0 w.p. 1− pi w.p. (1− p)pi\n0 w.p. (1− p)(1− pi) + p (SX):,n = {\nX:,n w.p. p 0 w.p. 1− p .\nWe can now proceed with the proof of Theorem C.6 and Theorem C.7. As ∇fi(x) − ∇fi(y) ∈ Range (Mi), we must have\nG(xk)−G(x?) =M†M ( G(xk)−G(x?) ) (25)\nand Jk −G(x?) =M†M ( Jk −G(x?) ) . (26)\nDue to (26), (25), inequalities (21) and (22) with choice Y =M† 1 2 X become respectively:\n2α n2 p −1‖M 1 2 nY:,n‖2 + 2α 2 n2 (1− p) −1 n∑ i=1 E ∥∥∥∥∥∥p−1i ∑ j∈Si p−1i,j M 1 2 i,jY:j ∥∥∥∥∥∥ 2 + ∥∥∥∥(I − E [S]) 12 B(Y)∥∥∥∥2\n≤ (1− αµ)‖B(Y)‖2 (27)\n2α n2 p −1‖M 1 2 nY:,n‖2+ 2α 2 n2 (1−p) −1 n∑ i=1 E ∥∥∥∥∥∥p−1i ∑ j∈Si p−1i,j M 1 2 i,jY:j ∥∥∥∥∥∥ 2 +∥∥∥∥(E [S]) 12 B(Y)∥∥∥∥2 ≤ 1n‖Y‖2\n(28)\nAbove, we have used\nE‖UXe‖2 = E [ ‖UM 1 2 Ye‖2 ] = p−1‖M 1 2 nY:,n‖2+(1−p)−1 n∑ i=1 E ∥∥∥∥∥∥p−1i ∑ j∈Si p−1i,j M 1 2 i,jY:j ∥∥∥∥∥∥ 2 .\nNote that E [S(X)] = X · Diag ((1− p)(p ◦ p), p) where p ∈ Rn−1 such that pΩ(i,j) = pi,j . 
Using (24), setting B to be right multiplication with Diag(b) and noticing that λmaxMn = nλ it suffices to have\n2α n p −1λ+ (1− p)b2n ≤ (1− αµ)b2n\n2α n2 (1− p) −1p−1i,j p −1 i vΩ(i,j) + (1− (1− p)pi,jpi)b 2 j ≤ (1− αµ)b2j ∀j ∈ {1, . . . ,mi}, i ≤ n\n2α n p −1λ+ pb2n ≤ 1n\n2α n2 (1− p) −1p−1i,j p −1 i vΩ(i,j) + (1− p)pi,jpib 2 j ≤ 1n ∀j ∈ {1, . . . ,mi}, i ≤ n\nfor SAGA case and 2α n p −1λ+ (1− p)b2n ≤ (1− αµ)b2n\n2α n2 (1− p) −1p−1i,j p −1 i vΩ(i,j) + (1− (1− p)pipi)b 2 j ≤ (1− αµ)b2j ∀j ∈ {1, . . . ,mi}, i ≤ n\n2α n p −1λ+ pb2n ≤ 1n\n2α n2 (1− p) −1p−1i,j p −1 i vΩ(i,j) + (1− p)pipib 2 j ≤ 1n ∀j ∈ {1, . . . ,mi}, i ≤ n\nfor LSVRG case.\nIt remains to notice that to satisfy the SAGA case, it suffices to set b2n = 1 2np , b 2 Ω(i,j) = 1 2n(1−p)pi,jpi (for j ∈ {1, . . . ,mi}, i ≤ n) and α = min {\nminj∈{1,...,mi},1≤i≤n n(1−p)pi,jpi 4vΩ(i,j)+nµ , p4λ+µ\n} .\nTo satisfy LSVRG case, it remains to set b2n = 1 2np , b 2 Ω(i,j) = 1 2n(1−p)pipi (for j ∈ {1, . . . ,mi}, i ≤\nn) and α = min { minj∈{1,...,mi},1≤i≤n\nn(1−p)pi 4 vΩ(i,j) pi,j +nµp−1i , p4λ+µ\n} .\nThe last step to establish is to recall that n = N + 1, vΩ(i,j) = N+1N vi,j and µ = µ n and note that the iteration complexity is 1αµ log 1 ε = n αµ log 1 ε ." }, { "heading": "D.9.4 PROOF OF THEOREM 5.1", "text": "To obtain convergence rate of Theorem 5.1, it remains to use Theorem C.6 with pi = 1,mi = m (∀i ≤ n), where each machine samples (when the aggregation is not performed) individual data points with probability 1m and thus pj = 1 m (for all j ≤ N ). The last remaining thing is to realize that vj = L′ for all j ≤ N ." } ]
2020
null
SP:df2c981f4cfc3734e8f42c4e84368e90d931529b
[ "The paper presents a technique to compare networks trained to solve similar tasks trained in different context. The considered task is reaching with a robotic planar arm; the considered context is varied varying the robot degrees of freedom. The goal of the paper is to find correlations across neural activity patterns across networks trained to solve the same task in different contexts." ]
Recent experiments indicate that pre-training of end-to-end Reinforcement Learning neural networks on general tasks can speed up the training process for specific robotic applications. However, it remains open if these networks form general feature extractors and a hierarchical organization that are reused as apparent e.g. in Convolutional Neural Networks. In this paper we analyze the intrinsic neuron activation in networks trained for target reaching of robot manipulators with increasing joint number in a vertical plane. We analyze the individual neuron activity distribution in the network, introduce a pruning algorithm to reduce network size keeping the performance, and with these dense network representations we spot correlations of neuron activity patterns among networks trained for robot manipulators with different joint number. We show that the input and output network layers have more distinct neuron activation in contrast to inner layers. Our pruning algorithm reduces the network size significantly, increases the distance of neuron activation while keeping a high performance in training and evaluation. Our results demonstrate that neuron activity can be mapped among networks trained for robots with different complexity. Hereby, robots with small joint difference show higher layer-wise projection accuracy whereas more different robots mostly show projections to the first layer.
[]
[ { "authors": [ "Jacob Andreas", "Marcus Rohrbach", "Trevor Darrell", "Dan Klein" ], "title": "Learning to compose neural networks for question answering. 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2016", "venue": "Proceedings of the Conference,", "year": 2016 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Michael Eickenberg", "Alexandre Gramfort", "Gaël Varoquaux", "Bertrand Thirion" ], "title": "Seeing it all: Convolutional network layers map the function of the human visual", "venue": "system. NeuroImage,", "year": 2016 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Trevor Hastie", "Robert Tibshirani", "Jerome Friedman" ], "title": "The elements of statistical learning: data mining, inference, and prediction", "venue": "Springer Science & Business Media,", "year": 2009 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Łukasz Kidziński", "Sharada Prasanna Mohanty", "Carmichael F Ong", "Zhewei Huang", "Shuchang Zhou", "Anton Pechenko", "Adam Stelmaszczyk", "Piotr Jarosik", "Mikhail Pavlov", "Sergey Kolesnikov" ], "title": "Learning to run challenge solutions: Adapting reinforcement learning methods for neuromusculoskeletal environments", "venue": "In The NIPS’17 Competition: Building Intelligent Systems,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Yann LeCun", "John S Denker", "Sara A. Solla" ], "title": "Optimal Brain Damage (Pruning)", "venue": "Advances in neural information processing systems,", "year": 1990 }, { "authors": [ "Chunyuan Li", "Heerad Farkhoor", "Rosanne Liu", "Jason Yosinski" ], "title": "Measuring the intrinsic dimension of objective landscapes, 2018", "venue": null, "year": 2018 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Dor Livne", "Kobi Cohen" ], "title": "PoPS: Policy Pruning and Shrinking for Deep Reinforcement Learning", "venue": "pp. 
1–14,", "year": 2020 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Emilio Parisotto", "Jimmy Ba", "Ruslan Salakhutdinov" ], "title": "Actor-mimic deep multitask and transfer reinforcement learning", "venue": "4th International Conference on Learning Representations, ICLR 2016 Conference Track Proceedings,", "year": 2016 }, { "authors": [ "Ramprasaath R Selvaraju", "Michael Cogswell", "Abhishek Das", "Ramakrishna Vedantam", "Devi Parikh", "Dhruv Batra" ], "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. nature,", "year": 2016 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel" ], "title": "Mastering chess and shogi by self-play with a general reinforcement learning algorithm", "venue": "arXiv preprint arXiv:1712.01815,", "year": 2017 }, { "authors": [ "Jost Tobias Springenberg", "Alexey Dosovitskiy", "Thomas Brox", "Martin Riedmiller" ], "title": "Striving for simplicity: The all convolutional net", "venue": "arXiv preprint arXiv:1412.6806,", "year": 2014 }, { "authors": [ "Tung Long Vuong", "Do Van Nguyen", "Tai Long Nguyen", "Cong Minh Bui", "Hai Dang Kieu", "Viet Cuong Ta", "Quoc Long Tran", "Thanh Ha Le" ], "title": "Sharing experience in multitask reinforcement learning", "venue": "IJCAI International Joint Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Matthew D Zeiler", "Rob Fergus" ], "title": "Visualizing and understanding convolutional networks", "venue": "In European conference on computer vision,", "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Convolutional Neural Networks (CNN) are well known to demonstrate a strong general feature extraction capability in lower network layers. In these networks, feature kernels can not only be visualized; pre-trained general feature extractors can also be reused for efficient network learning. Recent examples propose efficient reusability experimentally for Reinforcement Learning neural networks as well: networks are pre-trained on similar tasks and then continue learning on the goal application. Reusing (sub)networks that can be re-assembled for an application never seen before can reduce network training time drastically. A better understanding of uniform or inhomogeneous network structures also improves the evaluation of network performance and unveils opportunities for the interpretability of networks, which is crucial for the application of machine learning algorithms, e.g. in industrial scenarios. Finally, methodologies and metrics estimating intrinsic and inter-network correlations in artificial neural networks may also enhance the understanding of biological learning. Eickenberg et al. (2017) could recently demonstrate that layers serving as feature extractors in CNNs can actually be found in the human visual cortex by correlating artificial networks to biological recordings. Successful experiments to re-use end-to-end learned networks for similar tasks leave open whether such networks also self-organize feature extractors or, in a dynamical domain, motion primitives. Here, we analyze neuron activation in networks in order to investigate the activation distribution and the mapping between different networks trained on similar robot reaching tasks. In this paper we consider a standard vertical space robot manipulator with a variable number of revolute joints as the test setup for target-reaching end-to-end Reinforcement Learning (RL) experiments. We introduce metrics to evaluate individual neuron activation over time and compare activity within individual networks all-to-all (every neuron is correlated to all other neurons in the network) and layer-wise (only correlations between neurons of the same layer are inspected). These metrics are utilized to set up a pruning procedure that maximizes the information density in learned neural networks and reduces redundancy as well as unused network nodes. Exploiting this optimization procedure, we learn various neural networks with variable dimensions on robot manipulators with two to four joints, representing two to four Degrees of Freedom (DOF), in order to analyze similarities between network activation patterns. As a result, we demonstrate experimentally that the introduced pruning process reduces the network size efficiently while keeping performance loss within bounds, and hereby builds a valid basis for network analysis. We show that the networks trained and iteratively pruned on the robot manipulators form distinct neuron activation patterns. Analyzing neuron activation correlations between different networks of various sizes, mappings between neurons trained on different manipulators can be found. A layer-wise interpretation reveals that networks trained for the same task build similar structures, but we can also discover partially similar structures between networks trained on 3 and 4 joint manipulators."
}, { "heading": "2 RELATED WORK", "text": "The capability of feature extraction in CNNs, alongside a variety of analysis and visualization tools, serves as a motivation for this work on training, analysis and pruning of networks trained with RL. Analysis methods for CNNs range from region-based methods, e.g. image occlusion Zeiler & Fergus (2014), that aim to expose the region of an image most relevant for classification, to feature-based methods, e.g. deconvolution Zeiler & Fergus (2014) or guided backpropagation Springenberg et al. (2014). Methods combining the described techniques are for example introduced as Grad-CAM in Selvaraju et al. (2017). These networks demonstrate class discrimination for features of deeper network layers (Zeiler & Fergus (2014)) as a basis to apply such general feature extractors to different applications after pre-training. Pre-trained networks such as ResNet He et al. (2016), which has been trained on the ImageNet data set, speed up training drastically by initializing CNNs applied to similar tasks. Köpüklü et al. (2019) demonstrated that even reusing individual layers in the same network can lead to a performance increase. Recent advances pushed RL agents to reach superhuman performance in playing Atari video games Bellemare et al. (2013); Mnih et al. (2015), Chess Silver et al. (2017) and Go Silver et al. (2016). These results were extended to cope with continuous action spaces in e.g. Lillicrap et al. (2015) and demonstrated great performance on highly dynamic, multi-actuated locomotion learning tasks, as demonstrated in the NIPS 2017 Learning to Run challenge Kidziński et al. (2018). Vuong et al. (2019) and Eramo et al. (2020) demonstrate experimentally that knowledge learned by a neural network can be reused for other tasks in order to speed up training and hereby translate modularity concepts from CNNs to RL frameworks. Hierarchical Reinforcement Learning incorporates these ideas, integrating the concept of subtask solving into neural networks, e.g. in Andreas et al. (2016) for question answering. A successful example of transfer learning to build up a general knowledge base could be demonstrated with RL in Atari games in Parisotto et al. (2016). Gaier & Ha (2019) emphasize the importance of neural architectures that can perform well even without weight learning. With the main motivation to improve learning efficiency and reduce computational requirements, network pruning has been introduced for various network architectures. Early work in LeCun et al. (1990) utilizes second derivative information as a heuristic to decrease network size; recent work in Livne & Cohen (2020) introduces network pruning for Deep Reinforcement Learning based on redundancy detection in an iterative process. Li et al. (2018) introduce a measure for the intrinsic dimension of objective landscapes." }, { "heading": "3 EXPERIMENTAL SETUP", "text": "In this paper we focus on a robot manipulator with operation limited to a vertical plane. A neural network is trained with end-to-end Reinforcement Learning in order to reach predefined locations in 2D space without prior knowledge of either the robot dynamics or the environment. Hereby, end-to-end refers to a mapping from sensory feedback, in terms of actual joint positions in cartesian space, and the desired goal location to output actions as joint position commands. We apply Deep q-learning, as proposed in Mnih et al.
(2015), to predict q-values; an action is selected by means of a softmax exploration policy, and gradient descent on the network weights is handled by the Adam solver Kingma & Ba (2014).

For performance reasons our experiments are executed within a simplified simulation environment, as shown conceptually in Figure 1 (right), but exemplary behaviors have been successfully transferred to a realistic robotic simulation (Figure 1, left). We simulate robots with 2 to 4 DOF that are implemented as revolute joints restricted to vertical space motions and actuated with PID position controllers. For all experiments, the neural networks originally consist of 6 fully connected hidden layers with ReLU activation functions, but may be reduced in the pruning process we introduce. The network input vector x encodes the actual robot joint angles θ̂_i as their sine and cosine contributions for every control step t (control cycle time of 50 ms), as well as the desired goal position in cartesian coordinates [x^*, y^*], as

$$x^{(t)} = \left[\sin\big(\hat\theta_1^{(t)}\big)\;\; \cos\big(\hat\theta_1^{(t)}\big)\;\; \dots\;\; \sin\big(\hat\theta_n^{(t)}\big)\;\; \cos\big(\hat\theta_n^{(t)}\big)\;\; x^*\;\; y^*\right]^T. \quad (1)$$

The output layer contains 3n neurons, as the action of every individual joint i is quantized into the three change-of-motion states {1, −1, 0}, i.e., forward, backward and no motion for each joint, with joint angle changes of ±0.05 rad. The goal state of an agent is a randomly instantiated 2D location to be reached with the robot finger tip in max. 60 control steps, each representing 50 ms. The distance between the goal position p^* and the tip p̂ is mapped into [0, 1] and squared to serve as the reward function

$$r(t_i) := \left(\frac{1}{|\hat{p}(t_i) - p^*|_{L2} + 1}\right)^2.$$

All network results that are presented passed a validation test consisting of 300 test episodes. This test also serves as the pruning baseline: the probability of a type two error for reaching the final reward threshold r̄ = 0.9 with an accuracy ρ̄ = 0.9 lies below significance α = 0.05 on the test data." }, { "heading": "4 NEURON ACTIVATION ANALYSIS", "text": "We first analyze individual neuron activation inside multiple neural networks trained on the introduced target reaching robotic manipulator. This initial analysis serves as a baseline for pruning and projection evaluation; therefore, we study only 3 joint robotic manipulators in depth before we investigate a comparison of different kinematic structures. We define a distance metric between neurons that is based on the neuron activation history within the scope of every episode, in order to account for the dynamics in motion trajectory learning. All neuron activation values over the course of an episode are collected in a vector z^{(E)}_{n_i} for every neuron n_i of the network in episode E. Utilizing the linearity of the applied ReLU activation functions, we normalize this activation to the range [0, 1] with reference to the maximum value attained. For a set of sample episodes $\mathcal{E}$, representing a set of potential robot actions, we define the distance of neurons n_i and n_j as

$$d(n_i, n_j) := \frac{1}{|\mathcal{E}|} \sum_{E \in \mathcal{E}} \left| \frac{z^{(E)}_{n_i}}{Z_{n_i}} - \frac{z^{(E)}_{n_j}}{Z_{n_j}} \right|_{L2}, \quad (2)$$

with z^{(E)}_{n_i} ∈ R^T_{≥0} denoting the vector containing the activation series of neuron n_i in episode E and Z_{n_i} ∈ R_{>0} the maximum activation of n_i over all episodes $\mathcal{E}$. For a layer-wise analysis, Equation 2 is adapted accordingly, only considering distances between neurons that belong to the same layer. The upper triangular matrix of a distance matrix D holds all values d(n_i, n_j) with i < j.
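A direct NumPy sketch of the distance computation of Equation 2 may look as follows; the (episodes × time steps × neurons) array layout is an assumption of the sketch, as is the unit scaling used to guard dead neurons.

```python
import numpy as np

def neuron_distances(Z):
    """Pairwise distance matrix D with entries d(n_i, n_j) of Eq. (2).

    Z: (|E|, T, N) activations of N neurons over T control steps per episode.
    """
    num_episodes, T, N = Z.shape
    Z_max = Z.reshape(-1, N).max(axis=0)           # per-neuron maximum activation Z_n
    Z_max = np.where(Z_max > 0, Z_max, 1.0)        # dead neurons: avoid division by zero
    Zn = Z / Z_max                                 # normalize each neuron into [0, 1]
    D = np.zeros((N, N))
    for E in range(num_episodes):
        diff = Zn[E][:, :, None] - Zn[E][:, None, :]   # (T, N, N) pairwise differences
        D += np.sqrt((diff ** 2).sum(axis=0))          # L2 norm over the time series
    return D / num_episodes
```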
The density distribution of neuron distances can be approximated by collecting all values in the upper triangular matrices of D. Additionally, hierarchical clustering as described in Hastie et al. (2009) is applied to the individual network layers in order to reveal neuron groups that show similar activation behavior. We form groups that minimize the mean cluster distance D(C_l) of the contained neurons,

$$D(C_l) := \frac{1}{|C_l|(|C_l| - 1)} \sum_{n_{il} \in C_l} \sum_{n_{jl} \in C_l \setminus \{n_{il}\}} d(n_{il}, n_{jl}), \quad (3)$$

for a neuron cluster C_l of layer l. We conduct an experiment with a set of M = 20 networks (48 neurons per hidden layer) for the three joint manipulation task. A reference set of untrained networks with identical structure is initialized by Xavier initialization Glorot & Bengio (2010). Neuron distances are averaged from a set of m = 500 sample episodes. The distance distribution in randomized networks forms a bell shape globally as well as layer-wise (Figure 2, top). However, the all-to-all distribution of trained networks primarily indicates a lower standard deviation and mean compared to random networks, with a slight distortion at high distances. Layer-wise analysis reveals that these higher distance scores occur increasingly on network layers closer to the output, in particular in the second half of layers. In contrast, lower layers demonstrate close-to-normal distributions. Clustering reveals a variety of distances for all layers in untrained, randomly initialized networks (Figure 2, bottom), which is kept on the first layer only in trained networks. In particular, on the middle layers, clusters with low distances emerge during training. The intrinsic network analysis depicts successful training that visibly changes the neuron activation characteristics, which highly depend on the location inside the network." }, { "heading": "5 HEURISTIC NETWORK PRUNING", "text": "Non-uniform density distributions and low cluster differences in the inspected neuron activation indicate potential for network pruning. A dense information representation is a requirement for the comparison of different networks. For this purpose we propose a pruning procedure that iteratively unifies neurons with similar activation, identified by small cluster distances, and retrains the network. Hereby, a trade-off between reduced network size and maintaining high-performance learning is sought. We apply Breadth First Search on the resulting cluster tree of every network layer. The first encountered clusters with distance (3) below a threshold d̄_τ, which is defined as a fraction τ of the maximum cluster distance, are selected to form the layer segmentation C. Based on this neuron segmentation C^{(l)} of layer l, a reduced network is constructed that represents every cluster as a single neuron. The original network weights are reused for the initialization of the reduced network. We exploit the linearity of ReLU activation functions and assume identical neuron behavior, only altered by linear scaling, inside every cluster. W.l.o.g., the cluster activation ζ_C is defined such that the scaling factors γ_n > 0 of the contained neurons sum to one and ∀n ∈ C : ζ_C = z_n/γ_n holds, with z_n denoting the activation of neuron n. For a cluster C ∈ C^{(l)} and an arbitrary neuron n ∈ C, the forward propagation of z_n can be rearranged to form the forward propagation of the cluster activation as

$$\zeta_C = \mathrm{ReLU}\left(\sum_{D \in \mathcal{C}^{(l-1)}} \zeta_D \frac{1}{\gamma_n} \sum_{m \in D} w_{nm}\gamma_m\right), \quad (4)$$

with w_{nm} denoting the weight from neuron m to n. (4) acts as an approximation that in practice is only achieved by clusters of dead neurons that are not activated at all. Therefore, in order to improve stability, all neurons of a cluster contribute to the reduced network weights ω as $\omega_{CD} = \frac{1}{\gamma_n}\sum_{m \in D} w_{nm}\gamma_m$. Scaling factors γ_n are generated from the maximum activation Z_n (2) of the respective neuron n.
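The weight construction can be sketched as follows in NumPy. Averaging the contributions of all neurons n ∈ C is one concrete way to let every cluster member contribute, as described above; this aggregation choice and the unit scaling factor for dead neurons are assumptions of the sketch.

```python
import numpy as np

def reduce_layer(W, gamma_in, gamma_out, clusters_in, clusters_out):
    """Reduced weights omega_CD initialized from the original weights w_nm.

    W           : (N_out, N_in) weight matrix from layer l-1 to layer l
    gamma_in/out: per-neuron scaling factors (e.g. the maximum activations Z_n)
    clusters_*  : lists of neuron-index lists, one entry per cluster
    """
    gamma_out = np.where(gamma_out > 0, gamma_out, 1.0)   # dead neurons: avoid division by zero
    omega = np.zeros((len(clusters_out), len(clusters_in)))
    for ci, C in enumerate(clusters_out):
        for di, D in enumerate(clusters_in):
            # each neuron n in C contributes its rescaled incoming weight mass from D
            contrib = [(W[n, D] * gamma_in[D]).sum() / gamma_out[n] for n in C]
            omega[ci, di] = np.mean(contrib)              # all neurons of C contribute
    return omega
```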
In order to evaluate the introduced pruning procedure, we conduct experiments with a set of M = 20 neural networks (6 hidden layers, 48 neurons each) trained for the 3 joint manipulation task. Network reduction is applied with a set of m = 300 sample episodes; the presented results are averaged over the set of networks which reached sufficient performance. The results presented in Figure 3 (left) show a nearly linear correlation between the cluster threshold and the resulting pruned network size for networks with an identical initial layer size of 48 neurons. In the case of τ = 0, only dead neurons are removed, which does not affect the performance of the network, though it reduces the network size significantly (initial size of all networks: 323 neurons). For values of τ ∈ (0, 0.1] the network is reduced, but no strong effect is apparent on the initial accuracy [%] and the training duration (number of episodes executed until the validation set is passed). We observe interesting behavior in the range τ ∈ (0.1, 0.22], as the initial accuracy decreases significantly whereas the duration for retraining the networks barely increases. This indicates that the main processes in the network remain intact after reduction, whereas for τ > 0.2 a strong increase in training duration indicates a loss of relevant information. As a trade-off between minimal network size and efficient training, τ = 0.2 was selected as the optimal cluster threshold and applied in all further experiments. As the pruning process highly depends on the initial network size, we analyze networks with initial hidden layer sizes of 32, 48, 128, 256 and 512 within the same test setup. The results shown in Figure 3 (right) emphasize the first reduction step as the most dominant. Noticeably, large networks with an initial layer neuron count of 128, 256 and 512 reach a similar pruned network size already in the first iteration step. For subsequent reduction steps the network size plateaus. Inspection of the neuron-per-layer counts reveals that small initial networks (32, 48) taper with depth, compared to bigger initial networks that form an hourglass shape. The average network shape of the 256 and 512 neuron networks after three reduction steps turns out as s̄ = [51.6, 21.0, 15.9, 12.6, 10.2, 16.7]. Network-intrinsic neuron distance densities of pruned networks (Figure 2) indicate a more homogeneous information representation compared to networks trained straight away. The bell-shaped distribution with higher mean shows lower variance, and outliers with high distance scores are reduced. While clusters remain rather similar on the first and last layer, in particular the cluster distances on the middle layers are drastically increased, along with the reduced cluster number. Overall, we find that our pruning process reduces the network size efficiently and hereby shows a visible effect on neuron activation towards a rather uniform distribution and a distinct cluster structure."
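Putting the pieces together, the overall prune-and-retrain protocol of this section can be summarized in a short high-level sketch. The three callables are placeholders standing in for the clustering of Section 4, the weight merge of Equation (4), and the 300-episode validation test of Section 3; their names and interfaces are assumptions of this sketch.

```python
def iterative_pruning(net, cluster_layers, reduce_network, retrain, tau=0.2, steps=3):
    """Iterative prune-and-retrain loop (high-level sketch).

    cluster_layers(net, tau) : BFS over each layer's cluster tree, cutting at
                               d_tau = tau * (maximum cluster distance)
    reduce_network(net, seg) : one neuron per cluster, weights omega_CD as above
    retrain(net)             : continue training until the validation test passes
    """
    for _ in range(steps):
        segmentation = cluster_layers(net, tau)   # layer segmentation C
        net = reduce_network(net, segmentation)   # collapse clusters to single neurons
        net = retrain(net)                        # recover performance after reduction
    return net
```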
}, { "heading": "6 CORRELATIONS IN NETWORKS TRAINED FOR MULTI-JOINT ROBOTS", "text": "Based on the both the individual neuron activation analysis and heuristic network pruning, we now investigate mappings of neuron activation between different networks learned on robot manipulators with 2 to 4 joints. Here, the goal is to estimate whether activation patterns are similar in networks trained for the different robot kinematics. For this purpose we construct an unidirectional linear projection between source and target network and analyze its accuracy and structure. Based on the source network neuron activation b ∈ RK≥0, resulting from input x, a prediction â = bTP of the target activation a ∈ RM≥0 for the same input x is given by projection matrix P ∈ R K×M ≥0 (Figure 4). The projection is constructed based on a set of N training inputs X that yield activation matrices\nA ∈ RN×M≥0 and B ∈ R N×K ≥0 of the target and source network, respectively. In order to obtain a procedure invariant to neuron scaling, individual columns ofA andB, are normalized to the interval [0, 1] dividing by the maximal values contained. The resulting projection P̄ can be adjusted to fit the original training data by Pkm = αmβk P̄km. Two approaches for projection construction are considered. Greedy mapping predicts each target neuron from the source neuron with minimal distance (2), every entry of the greedy projection matrix P̄ gkm is 1 if k = arg mini∈[K]{d(m, i)} and 0 otherwise. Linear mapping incorporates all source neurons into the prediction of a target neuron by linear combination. Projection vectors pm, predicting the behavior of neuron m, are given by the solution of quadratic optimization with linear boundary constraints for each target neuron individually. Hereby, the mean squared error plus lasso regularization, to enforce sparsity of solution vectors, is minimized finding the best projection p, i.e.\nminimize 1\n2 ∣∣∣∣B̄p− a↓mαm ∣∣∣∣2 L2 + λ|p|L1 subject to p ≥ 0. (5)\nB̄ denotes the matrix of source activations scaled by βk, am the target activations and λ ∈ R≥0 the regularization strength. As mapping of two networks should be invariant to neuron scaling, all individual neuron activations are projected into the interval [0, 1] with neuron specific scaling factors βk and αm for the source and target network neurons, respectively. 30 The solution vectors p̄∗m are stacked to form the linear projection matrix P̄ l := [p̄∗1 ... p̄ ∗ M ] . Input samples X are deduced from a set of sample episodes of the target network without duplicates. In put vectors of robot manipulators with different joint count are transformed by either duplicating best aligning joints or unspecified joints being set to zero, for a more or less complex source network, respectively (Figure 5 middle right)." }, { "heading": "6.1 EVALUATION METRICS", "text": "Projections are evaluated with regard to their goodness to fit a set of validation samples XV and according to heuristic metrics that directly analyze a projection structure. The mean absolute prediction error is normalized by the prediction error of the zero projection P0 ∈ {0}K×M to construct the normalized error E(P,X) that is invariant to weight scaling and adding dead neurons:\nĒ(P,X) := E(P,X)\nE(P0, X) =\n1\n|A|1 M∑ m=1 ∣∣a↓m −Bpm∣∣1 (6) The entropy of a target neuron’s projection pm is referred to as the saturation of neuronm, projection P is the mean of all neuron saturations. A low saturation implies that few neurons suffice to describe the behavior of m. 
We calculate the overall projection saturation $S(P)$ according to Equation 7:\n$$S(P) := -\frac{1}{M} \sum_{m=1}^{M} \sum_{k=1}^{K} P_{km} \log_K(P_{km}) \in [0, 1]. \qquad (7)$$\nThe utilization of the source network neurons to describe the target network is indicated by the coverage $C$. It is defined as the entropy of the stochastic process that picks a target neuron $m$ uniformly at random and passes it on to the source network according to the distribution $\frac{p_m}{\|p_m\|_{L_1}}$. A low coverage value implies low utilization of the source network.\n$$C(P) := -\frac{1}{K} \sum_{k=1}^{K} \kappa_k \log_K(\kappa_k), \quad \text{with } \kappa_k = \frac{1}{M} \sum_{m=1}^{M} \frac{P_{km}}{\|p_m\|_{L_1}}. \qquad (8)$$\nThe same statistical process is applied to construct a layer-wise projection $P_{ij}$. It describes the probability of reaching the $i$th layer $L^{(K)}_i$ of the source network when starting in some uniformly random neuron of the $j$th layer $L^{(M)}_j$ of the target network:\n$$P_{ij} := \frac{1}{|L^{(M)}_j|} \sum_{k \in L^{(K)}_i} \sum_{m \in L^{(M)}_j} \frac{P_{km}}{\|p_m\|_{L_1}}. \qquad (9)$$" }, { "heading": "6.2 RESULTS", "text": "For each robot manipulator with 2, 3 and 4 joints, M = 5 networks are trained and pruned in three steps, and we analyze all possible mappings “a-b” between the respective sets. A set of validation inputs $X_V$ is generated from m = 300 sample episodes of the target network, and the metrics are evaluated. As a baseline, we map all three-joint manipulator agent networks, with an initial neuron count of 256 for each of the 6 hidden fully connected layers, among each other. As expected, baseline mappings of networks to themselves (referred to as reflexive mappings) show zero error, and saturation and coverage of 1 (Figure 5, top left). However, greedy mapping shows a high normalized error and low coverage when compared to linear mapping and is thus considered the inferior approach. In this baseline we identify linear mapping with a regularization strength of $\lambda = 50$ as the best setting, as its coverage and normalized error discriminate most significantly between trained and random networks. The layer-wise linear projection ($\lambda = 50$) is not optimal, but we observe the best mapping to the respective layers, shown on the diagonal axis in the table of Figure 5. Hereby, layers one and six demonstrate the strongest correlation, potentially due to increasingly specialized neurons at the input and output of the network.\nLinear mapping ($\lambda = 50$) has been applied between the sets of 2-, 3- and 4-joint robot manipulators (Figure 5, middle left). Random networks are initialized with the average network size of the respective joint count as evaluated during pruning. Scenarios 3-4 and 4-3 show similar prediction errors but indicate a higher mean error compared to 4-2 mappings. The latter mapping performs similarly to the baseline, which might be induced by the fact that we transform inputs in a balanced way so that the 4-joint arm can act like a 2-joint arm (in the figure on the right, we choose transformation 4b). It shows lower coverage of the source network, which is partially related to the fixed input channels for the source networks after input transformation. The worst performance according to the prediction error is shown by scenario 2-4, as the two-joint manipulator networks are barely able to replicate the behavior of the four-joint networks. Generally, the more distinct the robots, the worse the mapping, unless the input transformation is implemented in a meaningful way. More complex networks map slightly better into less complex ones than the other way around. A deeper insight into the source network utilization is drawn from the mean layer-wise projections (Figure 5, bottom).
The baseline scenario 3-3 shows a more significant correlation to its respective layer the closer it is to the input or output. The first layers of 3-4 and 4-3 mappings seem to follow the behavior of the baseline, whereas the deeper layers show no significant correlation. Contrary to the performance of the overall metrics, scenario 4-2 shows no strong layer-wise correlation, which is even worse in the inverted 2-4 mapping. If layers do not map well, all target layers tend to map to the lower layers, especially the first layer, of the source network (most prominent in 2-4 mappings); only a small tendency of the output layer mapping to other output layers is visible. We hypothesize that this phenomenon is due to the first layers having the highest neuron count and activation variance. Overall, we do find a good mapping correlation when the source network is able to imitate the behavior of the target network; a suitable input transformation turned out to be crucial here. 4-2 mappings showed the lowest error, but networks trained on three- and four-joint robots map better into their respective layers." }, { "heading": "7 CONCLUSION", "text": "In this paper we analyzed individual neuron activation and correlations between neural networks trained for goal reaching with vertical-space robot manipulators with two, three and four joints. We analyzed and classified the activation in order to implement a pruning algorithm that removes redundant neurons and increases the information density in the network. Finally, we analyzed correlations between the overall and layer-wise neuron activation of networks trained on robots with different joint numbers by projection mapping. Our results demonstrate that networks develop distinct activation patterns on individual neuron layers, with a bell-shaped distribution of activation densities. This distribution is compressed by our pruning algorithm, which merges similar neuron activation classes, mostly on the inner network layers. Networks trained for robots with only a small difference in joint number show a good correlation of neuron activation; for small differences this correlation can even be found layer-wise. The more distinct the robot kinematics are in terms of joint number, the more important is a proper input transformation that fits the different network input layers. All experiments are benchmarked by comparison against untrained networks and against self-correlations of multiple networks trained for the same task. Our results help to improve the explainability of reinforcement learning in neural networks for robot motion learning and highlight network structures that can be reused on similar tasks after pre-training. The experiments conducted are limited to robot manipulators with 2 to 4 joints acting in vertical space; however, the underlying methodologies could be transferred to other reinforcement learning tasks as well. Analysis of neuron activation has been introduced in other contexts; here we utilize it for the specific use case of vertical-space robot manipulation. In future work, our pruning algorithm can be extended to also reduce the overall number of layers and to analyze additional network parameters, and we will experimentally examine reusing network structures with good correlation." } ]
2020
null
SP:bdb67d79c8c71ff8761649620a110bd8ff2353fe
[ "This paper studies the personalization aspect of the federated learning problem. The authors propose a new framework in which they replace the common global model in the original federated learning formulation with a convex combination of the global model and a local model. They later introduce an adaptive optimization algorithm for their formulation and provide generalization bounds as well as convergence guarantees for both strongly-convex and nonconvex settings. Finally, numerical experiments and comparison with other methods in the literature are provided to support the theoretical results." ]
Investigation of the degree of personalization in federated learning algorithms has shown that maximizing only the performance of the global model confines the capacity of the local models to personalize. In this paper, we advocate an adaptive personalized federated learning (APFL) algorithm, where each client trains its local model while contributing to the global model. We derive the generalization bound for the mixture of local and global models, and find the optimal mixing parameter. We also propose a communication-efficient optimization method to collaboratively learn the personalized models and analyze its convergence in both smooth strongly convex and nonconvex settings. Extensive experiments demonstrate the effectiveness of our personalization schema, as well as the correctness of the established generalization theory.
[]
[ { "authors": [ "Inês Almeida", "Joao Xavier" ], "title": "Djam: distributed jacobi asynchronous method for learning personal models", "venue": "IEEE Signal Processing Letters,", "year": 2018 }, { "authors": [ "Manoj Ghuhan Arivazhagan", "Vinay Aggarwal", "Aaditya Kumar Singh", "Sunav Choudhary" ], "title": "Federated learning with personalization layers", "venue": "arXiv preprint arXiv:1912.00818,", "year": 1912 }, { "authors": [ "Aurélien Bellet", "Rachid Guerraoui", "Mahsa Taziki", "Marc Tommasi" ], "title": "Personalized and private peer-to-peer machine learning", "venue": "arXiv preprint arXiv:1705.08435,", "year": 2017 }, { "authors": [ "Shai Ben-David", "John Blitzer", "Koby Crammer", "Alex Kulesza", "Fernando Pereira", "Jennifer Wortman Vaughan" ], "title": "A theory of learning from different domains", "venue": "Machine learning,", "year": 2010 }, { "authors": [ "Léon Bottou" ], "title": "Stochastic gradient descent tricks", "venue": "In Neural networks: Tricks of the trade,", "year": 2012 }, { "authors": [ "Sebastian Caldas", "Peter Wu", "Tian Li", "Jakub Konečnỳ", "H Brendan McMahan", "Virginia Smith", "Ameet Talwalkar" ], "title": "Leaf: A benchmark for federated settings", "venue": "arXiv preprint arXiv:1812.01097,", "year": 2018 }, { "authors": [ "Gregory Cohen", "Saeed Afshar", "Jonathan Tapson", "Andre Van Schaik" ], "title": "Emnist: Extending mnist to handwritten letters", "venue": "In 2017 International Joint Conference on Neural Networks (IJCNN),", "year": 2017 }, { "authors": [ "Canh T Dinh", "Nguyen H Tran", "Tuan Dung Nguyen" ], "title": "Personalized federated learning with moreau envelopes", "venue": "arXiv preprint arXiv:2006.08848,", "year": 2020 }, { "authors": [ "Hubert Eichner", "Tomer Koren", "Brendan Mcmahan", "Nathan Srebro", "Kunal Talwar" ], "title": "Semi-cyclic stochastic gradient descent", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Alireza Fallah", "Aryan Mokhtari", "Asuman Ozdaglar" ], "title": "Personalized federated learning: A metalearning approach", "venue": "arXiv preprint arXiv:2002.07948,", "year": 2020 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In ICML, pp. 1126–1135. JMLR. 
org,", "year": 2017 }, { "authors": [ "Saeed Ghadimi", "Guanghui Lan", "Hongchao Zhang" ], "title": "Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization", "venue": "Mathematical Programming,", "year": 2016 }, { "authors": [ "Farzin Haddadpour", "Mehrdad Mahdavi" ], "title": "On the convergence of local descent methods in federated learning", "venue": "arXiv preprint arXiv:1910.14425,", "year": 1910 }, { "authors": [ "Farzin Haddadpour", "Mohammad Mahdi Kamani", "Mehrdad Mahdavi", "Viveck Cadambe" ], "title": "Local sgd with periodic averaging: Tighter analysis and adaptive synchronization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Farzin Haddadpour", "Mohammad Mahdi Kamani", "Mehrdad Mahdavi", "Viveck Cadambe" ], "title": "Trading redundancy for communication: Speeding up distributed sgd for non-convex optimization", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Filip Hanzely", "Peter Richtárik" ], "title": "Federated learning of a mixture of global and local models", "venue": "arXiv preprint arXiv:2002.05516,", "year": 2020 }, { "authors": [ "Andrew Hard", "Kanishka Rao", "Rajiv Mathews", "Swaroop Ramaswamy", "Françoise Beaufays", "Sean Augenstein", "Hubert Eichner", "Chloé Kiddon", "Daniel Ramage" ], "title": "Federated learning for mobile keyboard prediction", "venue": "arXiv preprint arXiv:1811.03604,", "year": 2018 }, { "authors": [ "Tzu-Ming Harry Hsu", "Hang Qi", "Matthew Brown" ], "title": "Measuring the effects of non-identical data distribution for federated visual classification", "venue": "arXiv preprint arXiv:1909.06335,", "year": 1909 }, { "authors": [ "Yutao Huang", "Lingyang Chu", "Zirui Zhou", "Lanjun Wang", "Jiangchuan Liu", "Jian Pei", "Yong Zhang" ], "title": "Personalized federated learning: An attentive collaboration approach", "venue": "arXiv preprint arXiv:2007.03797,", "year": 2020 }, { "authors": [ "Yihan Jiang", "Jakub Konečnỳ", "Keith Rush", "Sreeram Kannan" ], "title": "Improving federated learning personalization via model agnostic meta learning", "venue": "arXiv preprint arXiv:1909.12488,", "year": 1909 }, { "authors": [ "Peter Kairouz", "H Brendan McMahan", "Brendan Avent", "Aurélien Bellet", "Mehdi Bennis", "Arjun Nitin Bhagoji", "Keith Bonawitz", "Zachary Charles", "Graham Cormode", "Rachel Cummings" ], "title": "Advances and open problems in federated learning", "venue": "arXiv preprint arXiv:1912.04977,", "year": 1912 }, { "authors": [ "Sai Praneeth Karimireddy", "Satyen Kale", "Mehryar Mohri", "Sashank J Reddi", "Sebastian U Stich", "Ananda Theertha Suresh" ], "title": "Scaffold: Stochastic controlled averaging for on-device federated learning", "venue": "arXiv preprint arXiv:1910.06378,", "year": 1910 }, { "authors": [ "Mikhail Khodak", "Maria-Florina F Balcan", "Ameet S Talwalkar" ], "title": "Adaptive gradient-based metalearning methods", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Daniel Kifer", "Shai Ben-David", "Johannes Gehrke" ], "title": "Detecting change in data streams", "venue": null, "year": 2004 }, { "authors": [ "Nikola Konstantinov", "Elias Frantar", "Dan Alistarh", "Christoph H Lampert" ], "title": "On the sample complexity of adversarial multi-source pac learning", "venue": "arXiv preprint arXiv:2002.10384,", "year": 2020 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny 
images", "venue": null, "year": 2009 }, { "authors": [ "Ang Li", "Jingwei Sun", "Binghui Wang", "Lin Duan", "Sicheng Li", "Yiran Chen", "Hai Li" ], "title": "Lotteryfl: Personalized and communication-efficient federated learning with lottery ticket hypothesis on non-iid datasets", "venue": "arXiv preprint arXiv:2008.03371,", "year": 2020 }, { "authors": [ "Daliang Li", "Junpu Wang" ], "title": "Fedmd: Heterogenous federated learning via model distillation", "venue": "arXiv preprint arXiv:1910.03581,", "year": 1910 }, { "authors": [ "Tian Li", "Anit Kumar Sahu", "Manzil Zaheer", "Maziar Sanjabi", "Ameet Talwalkar", "Virginia Smith" ], "title": "Federated optimization in heterogeneous networks", "venue": "arXiv preprint arXiv:1812.06127,", "year": 2018 }, { "authors": [ "Tian Li", "Anit Kumar Sahu", "Ameet Talwalkar", "Virginia Smith" ], "title": "Federated learning: Challenges, methods, and future directions", "venue": "IEEE Signal Processing Magazine,", "year": 2020 }, { "authors": [ "Tian Li", "Anit Kumar Sahu", "Manzil Zaheer", "Maziar Sanjabi", "Ameet Talwalkar", "Virginia Smith" ], "title": "Feddane: A federated newton-type method", "venue": "arXiv preprint arXiv:2001.01920,", "year": 2020 }, { "authors": [ "Paul Pu Liang", "Terrance Liu", "Liu Ziyin", "Ruslan Salakhutdinov", "Louis-Philippe Morency" ], "title": "Think locally, act globally: Federated learning with local and global representations", "venue": "arXiv preprint arXiv:2001.01523,", "year": 2020 }, { "authors": [ "Yishay Mansour", "Mehryar Mohri", "Afshin Rostamizadeh" ], "title": "Domain adaptation: Learning bounds and algorithms", "venue": "arXiv preprint arXiv:0902.3430,", "year": 2009 }, { "authors": [ "Yishay Mansour", "Mehryar Mohri", "Jae Ro", "Ananda Theertha Suresh" ], "title": "Three approaches for personalization with applications to federated learning", "venue": "arXiv preprint arXiv:2002.10619,", "year": 2020 }, { "authors": [ "Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson", "Blaise Aguera y Arcas" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "In AISTAT,", "year": 2017 }, { "authors": [ "Mehryar Mohri", "Afshin Rostamizadeh", "Ameet Talwalkar" ], "title": "Foundations of machine learning", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Mehryar Mohri", "Gary Sivek", "Ananda Theertha Suresh" ], "title": "Agnostic federated learning", "venue": "arXiv preprint arXiv:1902.00146,", "year": 2019 }, { "authors": [ "Alex Nichol", "Joshua Achiam", "John Schulman" ], "title": "On first-order meta-learning algorithms", "venue": "arXiv preprint arXiv:1803.02999,", "year": 2018 }, { "authors": [ "Sinno Jialin Pan", "Qiang Yang" ], "title": "A survey on transfer learning", "venue": "IEEE Transactions on knowledge and data engineering,", "year": 2009 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, highperformance deep learning library", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Krishna Pillutla", "Sham M Kakade", "Zaid Harchaoui" ], "title": "Robust aggregation for federated learning", "venue": "arXiv preprint arXiv:1912.13445,", "year": 1912 }, { "authors": [ "Shai Shalev-Shwartz", "Shai Ben-David" ], "title": "Understanding machine learning: From theory to algorithms", "venue": null, "year": 2014 }, { "authors": [ "Tao Shen", "Jie Zhang", "Xinkang Jia", 
"Fengda Zhang", "Gang Huang", "Pan Zhou", "Fei Wu", "Chao Wu" ], "title": "Federated mutual learning", "venue": "arXiv preprint arXiv:2006.16765,", "year": 2020 }, { "authors": [ "Virginia Smith", "Chao-Kai Chiang", "Maziar Sanjabi", "Ameet S Talwalkar" ], "title": "Federated multi-task learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Sebastian U Stich" ], "title": "Local sgd converges fast and communicates little", "venue": "arXiv preprint arXiv:1805.09767,", "year": 2018 }, { "authors": [ "Paul Vanhaesebrouck", "Aurélien Bellet", "Marc Tommasi" ], "title": "Decentralized collaborative learning of personalized models over networks. 2017", "venue": null, "year": 2017 }, { "authors": [ "Kangkang Wang", "Rajiv Mathews", "Chloé Kiddon", "Hubert Eichner", "Françoise Beaufays", "Daniel Ramage" ], "title": "Federated evaluation of on-device personalization", "venue": "arXiv preprint arXiv:1910.10252,", "year": 1910 }, { "authors": [ "Blake Woodworth", "Kumar Kshitij Patel", "Nathan Srebro" ], "title": "Minibatch vs local sgd for heterogeneous distributed learning", "venue": "arXiv preprint arXiv:2006.04735,", "year": 2006 }, { "authors": [ "Blake Woodworth", "Kumar Kshitij Patel", "Sebastian U Stich", "Zhen Dai", "Brian Bullins", "H Brendan McMahan", "Ohad Shamir", "Nathan Srebro" ], "title": "Is local sgd better than minibatch sgd", "venue": "arXiv preprint arXiv:2002.07839,", "year": 2020 }, { "authors": [ "Tao Yu", "Eugene Bagdasaryan", "Vitaly Shmatikov" ], "title": "Salvaging federated learning by local adaptation", "venue": "arXiv preprint arXiv:2002.04758,", "year": 2020 }, { "authors": [ "Mikhail Yurochkin", "Mayank Agarwal", "Soumya Ghosh", "Kristjan Greenewald", "Trong Nghia Hoang", "Yasaman Khazaeni" ], "title": "Bayesian nonparametric federated learning of neural networks", "venue": "arXiv preprint arXiv:1905.12022,", "year": 2022 }, { "authors": [ "Valentina Zantedeschi", "Aurélien Bellet", "Marc Tommasi" ], "title": "Fully decentralized joint learning of personalized models and collaboration", "venue": "graphs. 2019", "year": 2019 }, { "authors": [ "Yuchen Zhang", "Mingsheng Long", "Jianmin Wang", "Michael I Jordan" ], "title": "On localized discrepancy for domain adaptation", "venue": "arXiv preprint arXiv:2008.06242,", "year": 2008 }, { "authors": [ "Kairouz" ], "title": "2019), there are three significant categories of personalization methods in federated learning, namely, local fine-tuning, multi-task learning, and contextualization", "venue": "Zantedeschi et al.,", "year": 2019 }, { "authors": [ "Pan", "Yang", "2009). Jiang" ], "title": "2019) discuss the similarity between federated learning and meta-learning approaches, notably the Reptile algorithm by Nichol et al. (2018) and FedAvg, and combine them to personalize local models. They observed that federated learning with a single objective of performance of the global model could limit the capacity of the learned model for personalization", "venue": "In Khodak et al", "year": 2019 }, { "authors": [ "personalization. Fallah" ], "title": "meta-learning approach that can be used in federated learning", "venue": null, "year": 2019 }, { "authors": [ "Smith" ], "title": "setting, optimization on each client can be considered as a new task; hence, the approaches of multi-task learning can be applied. One other approach, discussed as an open problem in Kairouz et al. 
(2019), is to cluster groups of clients based on some features", "venue": null, "year": 2020 }, { "authors": [ "Wang" ], "title": "2019), which is in line with our approach in experimental results in Section 5. Liang et al. (2020) propose to directly learn the feature representation locally, and train the discriminator globally, which reduces the effect of data heterogeneity and ensures the fair learning. Personalization via model regularization: Another significant trial for personalization is model", "venue": null, "year": 2020 }, { "authors": [ "Shen" ], "title": "dAvg (McMahan et al., 2017) can be considered a special case of this approach. They show that the learned model is in the convex haul of both local and global models, and at each iteration, depend on the local models’ optimization parameters, the global model is getting closer to the global model learned by FedAvg", "venue": "Dinh et al", "year": 2020 }, { "authors": [ "Mansour" ], "title": "propose a knowledge distillation way to achieve personalization, where they apply the regularization on the predictions between local model and global model. Personalization via model interpolation: Parallel to our work, there are other studies to introduce different personalization approaches for federated learning by mixing the global and local models. The closest approach for personalization", "venue": null, "year": 2020 }, { "authors": [ "Li" ], "title": "on a real-world heterogeneous dataset, which is an extension to MNIST dataset. The EMNIST dataset includes images of characters divided by authors, where each author has a different style, make their distributions different Caldas et al. (2018). We use only digit characters and 1000 authors’ data to train our models on", "venue": null, "year": 2018 }, { "authors": [ "Hsu" ], "title": "alized model of APFL and the localized model of the FedAvg. APFL with adaptive α can reach to the same training loss of the local FedAvg, while greatly outperforms the local FedAvg model in generalization on local validation data. Data distribution using Dirichlet distribution Another approach to distribute data in a non-IID way is to use the Dirichlet distribution", "venue": "Yurochkin et al. (2019)", "year": 2019 }, { "authors": [ "LDi(ĥi" ], "title": "The fundamental theorem of statistical learning", "venue": null, "year": 2014 }, { "authors": [ "Ben-David" ], "title": "Di after incorporating a model learned on a different domain (i.e., global distribution), one might argue that generalization techniques established in multi-domain learning theory (Ben-David", "venue": null, "year": 2020 }, { "authors": [ "As Mansour" ], "title": "bound based on the divergence of source and target domains but measured in absolute distance", "venue": null, "year": 2009 } ]
[ { "heading": "1 INTRODUCTION", "text": "With the massive amount of data generated by the proliferation of mobile devices and the internet of things (IoT), coupled with concerns over sharing private information, collaborative machine learning and the use of federated optimization (FO) is often crucial for the deployment of large-scale machine learning (McMahan et al., 2017; Kairouz et al., 2019; Li et al., 2020b). In FO, the ultimate goal is to learn a global model that achieves uniformly good performance over almost all participating clients without sharing raw data. To achieve this goal, most of the existing methods pursue the following procedure to learn a global model: (i) a subset of clients participating in the training is chosen at each round and receive the current copy of the global model; (ii) each chosen client updates the local version of the global model using its own local data, (iii) the server aggregates over the obtained local models to update the global model, and this process continues until convergence (McMahan et al., 2017; Mohri et al., 2019; Karimireddy et al., 2019; Pillutla et al., 2019). Most notably, FedAvg by McMahan et al. (2017) uses averaging as its aggregation method over local models.\nDue to inherent diversity among local data shards and highly non-IID distribution of the data across clients, FedAvg is hugely sensitive to its hyperparameters, and as a result, does not benefit from a favorable convergence guarantee (Li et al., 2020c). In Karimireddy et al. (2019), authors argue that if these hyperparameters are not carefully tuned, it will result in the divergence of FedAvg, as local models may drift significantly from each other. Therefore, in the presence of statistical data heterogeneity, the global model might not generalize well on the local data of each client individually (Jiang et al., 2019). This is even more crucial in fairness-critical systems such as medical diagnosis (Li & Wang, 2019), where poor performance on local clients could result in damaging consequences. This problem is exacerbated even further as the diversity among local data of different clients is growing. To better illustrate this fact, we ran a simple experiment on MNIST dataset where each client’s local training data is sampled from a subset of classes to simulate heterogeneity. Obviously, when each client has samples from less number of classes\nof training data, the heterogeneity among them will be high and if each of them has samples from all\nclasses, the distribution of their local training data becomes almost identical, and thus heterogeneity will be low. The results of this experiment are depicted in Figure 1, where the generalization and training losses of the global models of the FedAvg (McMahan et al., 2017) and SCAFFOLD (Karimireddy et al., 2019) on local data diverge when the diversity among different clients’ data increases. This observation illustrates that solely optimizing for the global model’s accuracy leads to a poor generalization of local clients. To embrace statistical heterogeneity and mitigate the effect of negative transfer, it is necessary to integrate the personalization into learning instead of finding a single consensus predictor. 
This pluralistic solution for FO has recently resulted in significant research on personalized learning schemes (Eichner et al., 2019; Smith et al., 2017; Dinh et al., 2020; Mansour et al., 2020; Fallah et al., 2020; Li et al., 2020a).\nTo balance the trade-off between the benefit from collaboration with other users and the disadvantage from the statistical heterogeneity among different users' domains, in this paper we propose an adaptive personalized federated learning (APFL) algorithm which aims to learn a personalized model for each device that is a mixture of the optimal local and global models. We theoretically analyze the generalization ability of the personalized model on local distributions, with dependency on the mixing parameter, the divergence between local and global distributions, as well as the number of local and global training data. To learn the personalized model, we propose a communication-efficient optimization algorithm that adaptively learns the model by leveraging the relatedness between local and global models as learning proceeds. As shown in Figure 1, by progressively increasing the diversity, the personalized model found by the proposed algorithm demonstrates better generalization compared to the global models learned by FedAvg and SCAFFOLD. We supplement our theoretical findings with extensive corroborating experimental results that demonstrate the superiority of the proposed personalization schema over the global and localized models of commonly used federated learning algorithms." }, { "heading": "2 PERSONALIZED FEDERATED LEARNING", "text": "In this section, we propose a personalization approach for federated learning and analyze its statistical properties. Following statistical learning theory, in a federated learning setting each client has access to its own data distribution $D_i$ on domain $\Xi := \mathcal{X} \times \mathcal{Y}$, where $\mathcal{X} \subseteq \mathbb{R}^d$ is the input domain and $\mathcal{Y}$ is the label domain. For any hypothesis $h \in \mathcal{H}$ the loss function is defined as $\ell : \mathcal{H} \times \Xi \rightarrow \mathbb{R}_+$. The true risk on the local distribution is denoted by $L_{D_i}(h) = \mathbb{E}_{(x,y)\sim D_i}[\ell(h(x), y)]$. We use $\hat{L}_{D_i}(h)$ to denote the empirical risk of $h$ on distribution $D_i$. We use $\bar{D} = \frac{1}{n}\sum_{i=1}^{n} D_i$ to denote the average distribution over all clients." }, { "heading": "2.1 PERSONALIZED MODEL", "text": "In a standard federated learning scenario, where the goal is to learn a global model for all devices cooperatively, the global model is obtained by minimizing the empirical risk on the joint distribution $\bar{D}$, i.e., $\min_{h\in\mathcal{H}} \hat{L}_{\bar{D}}(h)$, with proper weighting. However, as alluded to before, a single consensus predictor may not generalize well on local distributions when the heterogeneity among local data shards is high (i.e., the global and local optimal models drift significantly). Meanwhile, from the local user perspective, the key incentive to participate in “federated” learning is the desire to seek a reduction in the local generalization error with the help of other users' data. In this case, the ideal situation would be that the user can utilize the information from the global model to compensate for the small number of local training samples while minimizing the negative transfer induced by heterogeneity among distributions.
This motivates us to mix the global model and the local model with a controllable weight as a joint prediction model, namely, the personalized model.\nHere we formally introduce our proposed adaptive personalized learning schema, where the goal is to find the optimal combination of the global and the local models, in order to achieve a better client-specific model. In this setting, the global server still tries to train the global model by minimizing the empirical risk on the aggregated domain $\bar{D}$, i.e., $\bar{h}^* = \arg\min_{h\in\mathcal{H}} \hat{L}_{\bar{D}}(h)$, while each user trains a local model while partially incorporating the global model, with some mixing weight $\alpha_i$, i.e., $\hat{h}^*_{loc,i} = \arg\min_{h\in\mathcal{H}} \hat{L}_{D_i}(\alpha_i h + (1 - \alpha_i)\bar{h}^*)$. Finally, the personalized model for the $i$th client is a convex combination of $\bar{h}^*$ and $\hat{h}^*_{loc,i}$:\n$$h_{\alpha_i} = \alpha_i \hat{h}^*_{loc,i} + (1 - \alpha_i)\bar{h}^*. \qquad (1)$$\nIt is worth mentioning that $h_{\alpha_i}$ is not necessarily the minimizer of the empirical risk $\hat{L}_{D_i}(\cdot)$, because we optimize $\hat{h}^*_{loc,i}$ while partially incorporating the global model.\nExample 1. Let us illustrate a simple situation where the mixed model does not necessarily coincide with the local ERM model. To this end, consider a setting where the hypothesis class $\mathcal{H}$ is the set of all vectors in $\mathbb{R}^2$ lying in the $\ell_2$ unit ball: $\mathcal{H} = \{h \in \mathbb{R}^2 : \|h\|_2 \leq 1\}$. Assume the local empirical minimizer is known to be $[1, 0]^{\top}$, and $\bar{h}^* = [-1, 0]^{\top}$, and $\alpha$ is set to be 0.5. Now, if we wish to find an $\hat{h}^*_{loc,i}$ such that $h_{\alpha_i} = \alpha \hat{h}^*_{loc,i} + (1 - \alpha)\bar{h}^*$ coincides with the local empirical minimizer, we have to solve $0.5 h + 0.5 [-1, 0]^{\top} = [1, 0]^{\top}$, subject to $\|h\|_2 \leq 1$. This equation has no feasible solution (it would require $h = [3, 0]^{\top}$, whose norm is $3 > 1$), implying that it is not necessarily true that $h_{\alpha_i}$ coincides with the local empirical minimizer.\nIn fact, in most cases, as we will show in the convergence analysis of the proposed algorithm, $h_{\alpha_i}$ will incur a residual risk if evaluated on the training set drawn from $D_i$."
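A two-line numerical check of Example 1 (a throwaway snippet of ours; the unit-ball constraint is checked by hand at the end):

```python
import numpy as np

h_loc = np.array([1.0, 0.0])   # local empirical minimizer
h_bar = np.array([-1.0, 0.0])  # global model h_bar*
alpha = 0.5

# solving alpha*h + (1 - alpha)*h_bar = h_loc for h gives the required h:
h = (h_loc - (1 - alpha) * h_bar) / alpha
print(h, np.linalg.norm(h))    # [3. 0.] 3.0 > 1, so h is infeasible in H
```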
}, { "heading": "2.2 GENERALIZATION GUARANTEES", "text": "We now characterize the generalization of the mixed model. We present the learning bounds for classification and regression tasks. For classification, we consider a binary classification task with the squared hinge loss $\ell(h(x), y) = (\max\{0, 1 - yh(x)\})^2$. For the regression task, we consider the MSE loss $\ell(h(x), y) = (h(x) - y)^2$. Even though we present learning bounds under these two loss functions, our analysis can be generalized to any convex smooth loss. Before formally presenting the generalization bound, we introduce the following quantity to measure the empirical complexity of a hypothesis class $\mathcal{H}$ over a training set $S$. Definition 1. Let $S$ be a fixed set of samples and consider a hypothesis class $\mathcal{H}$. The worst-case disagreement between a pair of models, measured by absolute loss, is quantified by $\lambda_{\mathcal{H}}(S) = \sup_{h,h'\in\mathcal{H}} \frac{1}{|S|}\sum_{(x,y)\in S} |h(x) - h'(x)|$.\nThe empirical discrepancy characterizes the complexity of the hypothesis class over some finite set. Similar concepts are also employed in the related multiple-source PAC learning or domain adaptation literature (Kifer et al., 2004; Mansour et al., 2009; Ben-David et al., 2010; Konstantinov et al., 2020; Zhang et al., 2020).\nWe now state the main result on the generalization of the proposed personalization schema. The proof of the theorem is provided in Appendix D. Theorem 1. Let the hypothesis class $\mathcal{H}$ be a compact closed set with finite VC dimension $d$. Assume the loss function $\ell$ is Lipschitz continuous with constant $G$ and bounded in $[0, B]$. Then with probability at least $1-\delta$, there exists a constant $C$ such that the risk of the mixed model $h_{\alpha_i} = \alpha_i\hat{h}^*_{loc,i} + (1-\alpha_i)\bar{h}^*$ on the $i$th local distribution $D_i$ is bounded by:\n$$L_{D_i}(h_{\alpha_i}) \leq 2\alpha_i^2\Big(L_{D_i}(h_i^*) + 2C\sqrt{\tfrac{d+\log(1/\delta)}{m_i}} + G\,\lambda_{\mathcal{H}}(S_i)\Big) + 2(1-\alpha_i)^2\Big(\hat{L}_{\bar{D}}(\bar{h}^*) + B\|\bar{D} - D_i\|_1 + C\sqrt{\tfrac{d+\log(1/\delta)}{m}}\Big), \qquad (2)$$\nwhere $m_i$, $i = 1, 2, \ldots, n$, is the number of training data at the $i$th user, $m = m_1 + \ldots + m_n$ is the total number of data, $S_i$ is the local training set drawn from $D_i$, $\|\bar{D} - D_i\|_1 = \int_{\Xi} |\mathbb{P}_{(x,y)\sim\bar{D}} - \mathbb{P}_{(x,y)\sim D_i}|\,dx\,dy$ is the difference between the distributions $\bar{D}$ and $D_i$, and $h_i^* = \arg\min_{h\in\mathcal{H}} L_{D_i}(h)$. Remark 1. We note that a very analogous work to ours is Mansour et al. (2020), where a generalization bound is provided for mixing global and local models. However, their bound does not depend on $\alpha_i$, and hence we cannot see how it impacts the generalization ability. In Theorem 1, by omitting constant terms, we observe that the generalization risk of $h_{\alpha_i}$ on $D_i$ mainly depends on three key quantities: i) $m$: the number of global data drawn from $\bar{D}$; ii) the divergence between the distributions $\bar{D}$ and $D_i$; and iii) $m_i$: the amount of local data drawn from $D_i$. Usually, the first quantity $m$, the amount of global data, is fairly large compared to that of individual users, so the global model usually generalizes better. The second quantity characterizes the data heterogeneity between the average distribution and the $i$th local distribution. If this divergence is too high, the global model may hurt the local generalization. For the third quantity, as the amount of local data $m_i$ is often small, the generalization performance of the purely local model can be poor.\nOptimal mixing parameter. We can also find the optimal mixing parameter $\alpha_i^*$ that minimizes the generalization bound in Theorem 1. Notice that the RHS of (2) is quadratic in $\alpha_i$, so it admits a minimum value at\n$$\alpha_i^* = \frac{\hat{L}_{\bar{D}}(\bar{h}^*) + B\|\bar{D} - D_i\|_1 + C\sqrt{\frac{d+\log(1/\delta)}{m}}}{\Big(\hat{L}_{\bar{D}}(\bar{h}^*) + B\|\bar{D} - D_i\|_1 + C\sqrt{\frac{d+\log(1/\delta)}{m}}\Big) + \Big(L_{D_i}(h_i^*) + 2C\sqrt{\frac{d+\log(1/\delta)}{m_i}} + G\,\lambda_{\mathcal{H}}(S_i)\Big)}.$$\nThe optimal mixture parameter lies strictly inside $(0, 1)$, which matches our intuition. If the divergence term is large, then $\alpha_i^*$ becomes close to 1, which implies that if the local distribution drifts too much from the average distribution, it is preferable to weight the local model more. If $m_i$ is small, $\alpha_i^*$ will be small, indicating that we need to mix more of the global model into the personalized model. Conversely, if $m_i$ is large, then $\alpha_i^*$ will again be close to 1, which means taking the majority of the local model gives the desired generalization performance."
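To make the optimal mixing parameter concrete, here is a small helper of ours that evaluates $\alpha_i^*$ from plug-in estimates of the quantities in the bound (all arguments are estimates the practitioner must supply; in practice they are unknown, which is exactly what motivates the adaptive scheme of Section 3):

```python
import math

def optimal_alpha(global_risk, div_l1, local_risk, disc, m, m_i,
                  d, delta, B=1.0, G=1.0, C=1.0):
    """Plug-in estimate of the bound-minimizing mixture weight alpha_i*.

    global_risk: empirical risk of the global model, hat{L}_{bar D}(bar h*)
    div_l1     : estimated L1 divergence ||bar D - D_i||_1
    local_risk : optimal local risk L_{D_i}(h_i*)
    disc       : empirical discrepancy lambda_H(S_i)
    m, m_i     : global and local sample sizes; d: VC dimension
    """
    conf = lambda n: C * math.sqrt((d + math.log(1.0 / delta)) / n)
    global_term = global_risk + B * div_l1 + conf(m)
    local_term = local_risk + 2 * conf(m_i) + G * disc
    return global_term / (global_term + local_term)
```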
Motivated by the trade-off between the global model and local model generalization errors in Theorem 1, we need to learn a personalized model as in (1) to optimize the local empirical risk. To this end, each client needs to solve this optimization over its local data:\nmin v∈W\nfi (αiv + (1− αi)w∗) , (4)\nwhere w∗ = arg minw F (w) is the optimal global model. The balance between these two models is governed by a parameter αi, which is associated with the diversity of the local model and the global model. We first state the algorithm for a pre-defined proper αi, and then propose an adaptive schema to learn this parameter as learning proceeds. Remark 2. As mentioned in Section 2.1, when the hypothesis class is bounded, the mixed model will not coincide with local ERM model. However, if the class is unbounded, the mixed model will eventually converge to local ERM model, which means the personalization fails. Hence, to make sure the correctness of our algorithm, we need to require the parameter comes from some bounded domainW\nLocal Descent APFL. To efficiently optimize the problem we cast in (3) and (4), in this subsection we propose our bilevel optimization algorithm, Local Descent APFL. At each communication round, server uniformly random selects K clients as a set Ut. Each selected client will maintain three models at iteration t: local version of the global modelw(t)i , its own local model v (t) i , and the mixed personalized model v̄(t)i = αiv (t) i + (1−αi)w (t) i . Then, selected clients will perform the following updates locally on their own data for τ iterations:\nw (t) i = ∏ W ( w (t−1) i − ηt∇fi ( w (t−1) i ; ξ t i )) ,v (t) i = ∏ W ( v (t−1) i − ηt∇vfi ( v̄ (t−1) i ; ξ t i )) , (5)\nAlgorithm 1: Local Descent APFL input: Mixture weights α1, · · · , αn, Synchronization gap τ . for t = 0, · · · , T do\nparallel for i ∈ Ut do if t not divides τ then\nw (t) i = ∏ W ( w (t−1) i − ηt∇fi ( w (t−1) i ; ξ t i )) ,\nv (t) i = ∏ W ( v (t−1) i − ηt∇vfi ( v̄ (t−1) i ; ξ t i )) v̄\n(t) i = αiv (t) i + (1− αi)w (t) i , Ut ←− Ut−1\nelse each selected client sends w(t)i to the server w(t) = 1|Ut| ∑ j∈Ut w (t) j\nserver uniformly samples a subset Ut of K clients. server broadcast w(t) to all chosen clients\nend end\nend\nwhere ∇fi (.; ξ) denotes the stochastic gradient of f(.) evaluated at mini-batch ξ. Then, using the updated version of the global model and the local model, we update the personalized model v̄(t)i as well. The clients that are not selected in this round will keep their previous step local modelv(t)i = v\n(t−1) i . After these τ local updates, selected clients will send their local version of the global model w\n(t) i to the server for aggregation by averaging: w (t) = 1|Ut| ∑ j∈Ut w (t) j . Then the server will choose another set of K clients for the next round of training and broadcast this new model to them.\nAdaptively updating α. Even though in Section 2.2, we give the information theoretically optimal mixing parameter, in practice we usually do not know the distance between user’s distribution and the average distribution. Thus, finding the optimal α is infeasible. However, we can infer it empirically during optimization. 
Adaptively updating α. Even though Section 2.2 gives the information-theoretically optimal mixing parameter, in practice we usually do not know the distance between a user's distribution and the average distribution. Thus, finding the optimal $\alpha$ is infeasible. However, we can infer it empirically during optimization. Based on the local objective defined in (4), the empirical optimum value of $\alpha$ for each client can be found by solving $\alpha_i^* = \arg\min_{\alpha_i\in[0,1]} f_i(\alpha_i v + (1-\alpha_i)w)$, where we can use gradient descent to optimize it at every communication round, using the following step:\n$$\alpha_i^{(t)} = \alpha_i^{(t-1)} - \eta_t\nabla_\alpha f_i\big(\bar{v}_i^{(t-1)}; \xi_i^t\big) = \alpha_i^{(t-1)} - \eta_t\Big\langle v_i^{(t-1)} - w_i^{(t-1)},\, \nabla f_i\big(\bar{v}_i^{(t-1)}; \xi_i^t\big)\Big\rangle, \qquad (6)$$\nwhich shows that the mixing coefficient $\alpha$ is updated based on the correlation between the difference of the personalized model and the local version of the global model, and the gradient at the in-device personalized model. That is, when the global model drifts from the personalized model, the value of $\alpha$ changes to adjust the balance between the local data and the shared knowledge among all devices captured by the global model." }, { "heading": "4 CONVERGENCE ANALYSIS", "text": "In this section we provide the convergence analysis of Local Descent APFL with fixed $\alpha_i$ for strongly convex and nonconvex functions. To obtain a tight analysis, as well as to put the optimization results in the context of the generalization bounds discussed above, we define the following parameterization-invariant quantities that only depend on the distributions of local data across clients and the geometry of the loss functions.\nDefinition 2. We define the following quantity to measure the diversity among local gradients with respect to the gradient of the $i$th client: $\zeta_i = \sup_{w\in\mathbb{R}^d}\|\nabla F(w) - \nabla f_i(w)\|_2^2$ (Woodworth et al., 2020a). We also define the sum of the gradient diversities of the $n$ clients as $\zeta = \sum_{i=1}^{n}\zeta_i$. Definition 3. We define $\Delta_i = \|v_i^* - w^*\|_2^2$, where $v_i^* = \arg\min_v f_i(v)$ and $w^* = \arg\min_w F(w)$, to measure the gap between the optimal local model and the optimal global model.\nWe also need the following standard assumption on the stochastic gradients of the local objectives.\nAssumption 1 (Bounded Variance). The variance of the stochastic gradients computed at each local data shard is bounded, i.e., $\forall i \in [n]$: $\mathbb{E}[\|\nabla f_i(x; \xi) - \nabla f_i(x)\|^2] \leq \sigma^2$.\nStrongly Convex Loss. We now turn to establishing the convergence of Local Descent APFL on smooth strongly convex functions. Specifically, the following theorem characterizes the convergence of the personalized local model to the optimal local model. The proof is provided in Appendix E.2.3. Theorem 2. Assume each client's objective function is $\mu$-strongly convex and $L$-smooth, and satisfies Assumption 1. Also let $\kappa = L/\mu$ and $b = \min\{\frac{K}{n}, \frac{1}{2}\}$. Using Algorithm 1, by choosing the mixing weight $\alpha_i \geq \max\{1-\frac{1}{4\sqrt{6}\kappa}, 1-\frac{1}{4\sqrt{6}\kappa\sqrt{\mu}}\}$, learning rate $\eta_t = \frac{16}{\mu(t+a)}$ where $a = \max\{128\kappa, \tau\}$, and using the averaging scheme $\hat{v}_i = \frac{1}{S_T}\sum_{t=1}^{T} p_t\big(\alpha_i v_i^{(t)} + (1-\alpha_i)\frac{1}{K}\sum_{j\in U_t} w_j^{(t)}\big)$, where $p_t = (t+a)^2$ and $S_T = \sum_{t=1}^{T} p_t$, and letting $f_i^*$ denote the local minimum of the $i$th client, the following convergence rate holds for all clients $i \in [n]$:\n$$\mathbb{E}[f_i(\hat{v}_i)] - f_i^* \leq \alpha_i^2 O\Big(\frac{\sigma^2}{\mu b T}\Big) + (1-\alpha_i)^2 O\Bigg(\frac{\kappa^2\sigma^2}{\mu b K T} + \frac{\kappa^2\tau\big(\tau\zeta_i + \kappa^2\tau\frac{\zeta}{K}\big)}{\mu b T^2} + \frac{\zeta_i + \frac{\zeta}{K}}{\mu b} + \frac{\kappa L \Delta_i}{b}\Bigg).$$\nIf we choose $\tau = \sqrt{T/K}$, then:\n$$\mathbb{E}[f_i(\hat{v}_i)] - f_i^* \leq \alpha_i^2 O\Big(\frac{\sigma^2}{\mu T}\Big) + (1-\alpha_i)^2 O\Big(\frac{\kappa^2\sigma^2 + \kappa^2\zeta_i + \kappa^4\frac{\zeta}{K}}{\mu K T}\Big) + (1-\alpha_i)^2 O\Big(\frac{\zeta_i + \frac{\zeta}{K}}{\mu} + \kappa L \Delta_i\Big).$$\nA few remarks about the convergence of the personalized local model are in order: (1) If we set $\alpha_i = 1$, we recover the $O(\frac{1}{T})$ convergence rate of single-machine SGD.
If we only focus on the terms with the factor $(1-\alpha_i)^2$, which are contributed by the global model's convergence, and omit the residual error, we achieve the convergence rate of $O(1/KT)$ using only $\sqrt{KT}$ communication rounds, which matches the convergence rate of vanilla local SGD (Stich, 2018; Woodworth et al., 2020a). (2) The residual error is related to the gradient diversity $\zeta_i$ and the local-global optimality gap $\Delta_i$. It shows that taking any proportion of the global model will result in a sub-optimal ERM model. As we discussed in Section 2.1, $h_{\alpha_i}$ will not be the empirical risk minimizer in most cases. Also, we assume that $\alpha_i$ needs to be larger than some value in order to get a tight rate. This condition can be alleviated, but the residual error will be looser. The analysis of this relaxation is presented in Appendix F.\nNonconvex Loss. The following theorem establishes the convergence rate of the personalized model learned by APFL for nonconvex smooth loss functions. The proof is provided in Appendix G.3.\nTheorem 3. Let $\hat{v}_i^{(t)} = \alpha_i v_i^{(t)} + (1-\alpha_i)\frac{1}{K}\sum_{j\in U_t} w_j^{(t)}$. Assume each client's objective function is $L$-smooth and the domain $\mathcal{W}$ is bounded by $D_{\mathcal{W}}$, that is, $\forall w, w' \in \mathcal{W}$, $\|w - w'\|^2 \leq D_{\mathcal{W}}$. Using Algorithm 1 with full gradients, by choosing $K = n$ and learning rate $\eta = \frac{1}{2\sqrt{5}L\sqrt{T}}$, we have\n$$\frac{1}{T}\sum_{t=1}^{T}\big\|\nabla f_i(\hat{v}_i^{(t)})\big\|^2 \leq O\Big(\frac{L}{\sqrt{T}}\Big) + (1-\alpha_i)^2\Big(\frac{L}{\sqrt{T}}\Big) + (1-\alpha_i^2)^2\big(\zeta_i + L^2 D_{\mathcal{W}}\big) + \alpha_i^4(1-\alpha_i)^2 O\Big(\frac{\tau^4\zeta}{nT^2} + \frac{\tau^2\zeta_i}{T}\Big) + (1-\alpha_i)^2 O\Big(\frac{\tau^2\zeta}{nT}\Big).$$\nBy choosing $\tau = n^{-1/4}T^{1/4}$, it holds that:\n$$\frac{1}{T}\sum_{t=1}^{T}\big\|\nabla f_i(\hat{v}_i^{(t)})\big\|^2 \leq O\Big(\frac{1}{\sqrt{T}}\Big) + (1-\alpha_i)^2 O\Big(\frac{1}{\sqrt{T}} + \frac{1}{\sqrt{nT}}\Big) + (1-\alpha_i^2)^2\big(\zeta_i + L^2 D_{\mathcal{W}}\big).$$\nHere we show that APFL converges to a stationary point on nonconvex functions at a sublinear rate, plus some residual error, with $n^{3/4}T^{3/4}$ communication rounds. The rate with factor $(1-\alpha_i)^2$ is contributed by the global model's convergence, and here we have some additive residual error reflected by $\zeta_i$ and $D_{\mathcal{W}}$. Compared to the most related work by Haddadpour & Mahdavi (2019) regarding the convergence of local SGD on nonconvex functions, they obtain $O(1/\sqrt{nT})$, while we only have a speedup in $n$ on some of the terms. This could be resolved by using different learning rates for the local and global updates. Additionally, we assume $K = n$ to derive the convergence in the nonconvex setting, and leave the analysis of partial participation as future work." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we empirically show the effectiveness of the proposed algorithm in personalized federated learning. Due to lack of space, some experimental results are deferred to Appendix B.\nExperimental setup. We run our experiments on Microsoft Azure systems, using the Azure ML API. The code is developed in PyTorch (Paszke et al., 2019) using its “distributed” API with MPI. We deploy this code on the Standard F64s family of VMs in Azure. We use four datasets for our experiments: MNIST, CIFAR10 (Krizhevsky et al., 2009), EMNIST (Cohen et al., 2017), and a synthetic dataset. For more information on the datasets used in the following experiments, refer to Appendix B.1. For all the experiments, we have 100 users (except for the EMNIST dataset), each of which has access to its own data only. Each local dataset is randomly divided into 80% for training and 20% for validation, which is the standard way to examine local models for personalized use cases. For the learning rate, we use the linear decay structure with respect to local steps suggested by Bottou (2012). At each iteration the learning rate is decreased by 1%, unless otherwise stated.
We report the performance on training data for the optimization error and on local validation data (from the same distribution as the training data of each client) for the generalization accuracy. Throughout these experiments we report the results for the following three models:\n• Global Model: Referring to the global model of FedAvg or SCAFFOLD. • Localized Global Model: Referring to the fine-tuned version of the global model at each round of communication after τ steps of local SGD. Here, we have either the localized FedAvg or the localized SCAFFOLD. The reported results are the average performance over all the local models on each online client. In all the experiments τ = 10, unless otherwise stated.\n• Personalized Model: This model is the personalized model produced by our proposed algorithm APFL. The reported results are the average of the respective performances of the personalized models over all online clients at each round of communication.\nStrongly convex loss. First, we run a set of experiments on the MNIST dataset, with different levels of non-IIDness obtained by assigning a certain number of classes to each client. We use logistic regression with parameter regularization as our strongly convex loss function. In this part, all clients are online at each round; the results when client sampling is involved are discussed in Appendix B.2. We compare the personalized model of APFL, with different rates of personalization α, against the localized models of FedAvg and SCAFFOLD, as well as their global models. The initial learning rate is set to 0.1 and decays as mentioned before. The results of running this experiment on 100 clients and after 100 rounds of communication are depicted in Figure 2, where we move from a highly non-IID data distribution (left) to an IID data distribution (right). As can be seen, the global models learned by FedAvg and SCAFFOLD have high local training losses. On the other hand, taking a larger proportion of the local model in the personalized model (namely, increasing α) results in lower training losses. For generalization ability, the best performance is given by the personalized model with α = 0.25 in both cases (a) and (b), which outperforms the global models (FedAvg and SCAFFOLD) and their localized versions. However, as we move toward the IID distribution, the advantage of personalization vanishes, as expected. Hence, in line with the theoretical findings, we benefit from personalization the most when there is statistical heterogeneity between the data of different clients. When the data are distributed IID, the local models of FedAvg or SCAFFOLD are preferable.\nAn interesting observation from the results in Figure 2, which is in line with our theoretical findings, is the relationship of α with both the optimization and generalization losses. As can be seen from the first row, α has a linear relationship with the optimization loss; that is, with smaller α, the training loss gets closer to that of the global model of FedAvg, which matches our convergence theory. However, from the second row, it can be inferred that there is no linear relationship between α and generalization. In fact, according to (2), we know that the generalization bound is quadratic in α, and hence the generalization performance does not simply increase or decrease monotonically with α.\nAdaptive α update.
In this part, we want to show how adaptively learning the value of α across different clients, based on (6), affects the training and generalization performance of APFL's personalized models. We use the three synthetic datasets described in Appendix B.1, with logistic regression as the loss function. We set the initial value of $\alpha_i^{(0)} = 0.01$ for every $i \in [n]$. The results of this training are depicted in Figure 3, where both the optimization and the generalization of the learned models are compared. As can be inferred, in training, APFL outperforms FedAvg on the same datasets. More interestingly, in the generalization of the learned APFL personalized models, all datasets achieve almost the same performance as a result of adaptively updating the α values, while the FedAvg algorithm has a huge gap with them. This shows that, when we do not know the degree of diversity among the data of different clients, we should adaptively update the α values to guarantee the best generalization performance. We also have results on the EMNIST dataset with adaptive tuning of α in Appendix B.2, with a 2-layer MLP.\nNonconvex loss. To showcase the results for a nonconvex loss, we use the CIFAR10 dataset distributed in a non-IID way with 2 classes per client. We apply it to a CNN model with 2 convolution layers followed by 2 fully connected layers, using cross entropy as the loss function. The initial learning rates of the APFL and FedAvg algorithms are set to 0.1 with the mentioned decay structure, while for SCAFFOLD this value is 0.05 with 5% decay per iteration to avoid divergence. As can be inferred from the results in Table 1, the personalized model learned by APFL outperforms the localized models of FedAvg and SCAFFOLD, as well as their global models, in both optimization and generalization. In this case, adaptively tuning α achieves the best training loss, while the α = 0.25 case reaches the best generalization performance.\nComparison with other personalization methods. We now compare our proposed APFL with two recent approaches for personalization in federated learning. In addition to FedAvg, we compare with perFedAvg, introduced in Fallah et al. (2020) using a meta-learning approach, and pFedMe, introduced in Dinh et al. (2020) using a regularization with the Moreau envelope function. We run these algorithms to train an MLP with 2 hidden layers, each with 200 neurons, on a non-IID MNIST dataset with 2 classes per client. For perFedAvg, similar to their setting, we use learning rates of α = 0.01 (different from the α in our APFL) and β = 0.001. To have a fair comparison, we use the same validation for perFedAvg, and we use 10% of the training data as the test dataset that updates the meta-model. For pFedMe, following their setting, we use λ = 15 and η = 0.01. We use τ = 20, set the total number of communication rounds to 100, and use a batch size of 20. The results of these experiments are presented in Table 2, where APFL clearly outperforms all other models in both training and generalization. The APFL model with α = 0.75 has the lowest training loss, and the one with adaptive α has the best validation accuracy. perFedAvg is slightly better than the localized FedAvg; however, it is worse than the APFL models. pFedMe performs better than the global model of FedAvg, but it can surpass neither the localized model of FedAvg nor the APFL models." }, { "heading": "6 CONCLUSIONS", "text": "In this paper, we proposed an adaptive federated learning algorithm that learns a mixture of local and global models as the personalized model.
Nonconvex loss. To showcase the results for a nonconvex loss, we use the CIFAR10 dataset, distributed in a non-IID way with 2 classes per client. We train a CNN model with 2 convolutional layers followed by 2 fully connected layers, using cross entropy as the loss function. The initial learning rates of APFL and FedAvg are set to 0.1 with the aforementioned decay structure, while for SCAFFOLD this value is 0.05 with 5% decay per iteration to avoid divergence. As can be inferred from the results in Table 1, the personalized model learned by APFL outperforms the localized models of FedAvg and SCAFFOLD, as well as their global models, in both optimization and generalization. In this case, adaptively tuning α achieves the best training loss, while the α = 0.25 case reaches the best generalization performance.

Comparison with other personalization methods. We now compare our proposed APFL with two recent approaches for personalization in federated learning. In addition to FedAvg, we compare with perFedAvg, introduced in Fallah et al. (2020) using a meta-learning approach, and pFedMe, introduced in Dinh et al. (2020) using a regularization with the Moreau envelope function. We run these algorithms to train an MLP with 2 hidden layers, each with 200 neurons, on a non-IID MNIST dataset with 2 classes per client. For perFedAvg, similar to their setting, we use learning rates of α = 0.01 (different from the α in our APFL) and β = 0.001. To have a fair comparison, we use the same validation set for perFedAvg, and we use 10% of the training data as the test dataset that updates the meta-model. For pFedMe, following their setting, we use λ = 15 and η = 0.01. We use τ = 20, set the total number of communication rounds to 100, and use a batch size of 20. The results of these experiments are presented in Table 2, where APFL clearly outperforms all other models in both training and generalization. The APFL model with α = 0.75 has the lowest training loss, and the one with adaptive α has the best validation accuracy. perFedAvg is slightly better than localized FedAvg, but it is worse than the APFL models. pFedMe performs better than the global model of FedAvg, but it surpasses neither the localized model of FedAvg nor the APFL models.

6 CONCLUSIONS

In this paper, we proposed an adaptive federated learning algorithm that learns a mixture of local and global models as the personalized model. Motivated by learning theory in domain adaptation, we provided generalization guarantees for our algorithm that demonstrate the dependence on the diversity between each client's data distribution and the representative sample of the overall data distribution, as well as the number of per-device samples, as key factors in personalization. Moreover, we proposed a communication-reduced optimization algorithm to learn the personalized models and analyzed its convergence rate for both smooth strongly convex and nonconvex functions. Finally, we empirically backed up our theoretical results by conducting experiments in a federated setting.

SUPPLEMENTARY MATERIAL: ADAPTIVE PERSONALIZED FEDERATED LEARNING

TABLE OF CONTENTS

A Additional Related Work
B Additional Experimental Results
  B.1 Datasets
  B.2 Additional Results
C Discussions and Extensions
D Proof of Generalization Bound
E Proof of Convergence Rate in Convex Setting
  E.1 Proof without Sampling
    E.1.1 Proof of Useful Lemmas
    E.1.2 Proof of Theorem 4
    E.1.3 Proof of Theorem 5
  E.2 Proof of Convergence of APFL with Sampling
    E.2.1 Proof of Useful Lemmas
    E.2.2 Proof of Theorem 6
    E.2.3 Proof of Theorem 2
F Convergence Rate without Assumption on αi
G Proof of Convergence Rate in Nonconvex Setting
  G.1 Proof of Technical Lemmas
  G.2 Proof of Theorem 8
  G.3 Proof of Theorem 3

A ADDITIONAL RELATED WORK

Research in federated learning has proliferated over the past few years. In federated learning, the main objective is to learn a global model that is good enough for yet-to-be-seen data and has fast convergence to a local optimum. This indicates several uncanny resemblances between federated learning and meta-learning approaches (Finn et al., 2017; Nichol et al., 2018). However, despite this similarity, meta-learning approaches mainly try to learn multiple models, personalized for each new task, whereas most federated learning approaches focus on a single global model. As discussed by Kairouz et al. (2019), the gap between the performance of global and personalized models shows the crucial importance of personalization in federated learning. Several different approaches try to personalize the global model, primarily focusing on the optimization error, while the main challenge with personalization arises at inference time.
Some of the works on the personalization of models in a decentralized setting can be found in Vanhaesebrouck et al. (2017); Almeida & Xavier (2018), where, in addition to the optimization error, they handle network constraints or peer-to-peer communication limitations (Bellet et al., 2017; Zantedeschi et al., 2019). In general, as discussed by Kairouz et al. (2019), there are three significant categories of personalization methods in federated learning, namely, local fine-tuning, multi-task learning, and contextualization. Yu et al. (2020) argue that the global model learned by federated learning, especially with differential privacy and robust learning objectives, can hurt the performance of many clients. They indicate that those clients can obtain a better model by using only their own data, and they empirically show that using these three approaches can boost the performance of those clients. In addition to these three, there is another category that fits our proposed approach most closely, which is mixing the global and local models.

Local fine-tuning: The dominant approach for personalization is local fine-tuning, where each client receives a global model and tunes it using its own local data and several gradient descent steps. This approach is predominantly used in meta-learning methods such as MAML by Finn et al. (2017), and in domain adaptation and transfer learning (Ben-David et al., 2010; Mansour et al., 2009; Pan & Yang, 2009). Jiang et al. (2019) discuss the similarity between federated learning and meta-learning approaches, notably the Reptile algorithm by Nichol et al. (2018) and FedAvg, and combine them to personalize local models. They observed that federated learning with the single objective of global-model performance can limit the capacity of the learned model for personalization. In Khodak et al. (2019), the authors use online convex optimization to introduce a meta-learning approach that can be used in federated learning for better personalization. Fallah et al. (2020) borrow ideas from MAML to learn personalized models for each client with convergence guarantees. Similar to fine-tuning, they update the local models with several gradient steps, but they use second-order information to update the global model, like MAML. Another approach, adopted for deep neural networks by Arivazhagan et al. (2019), freezes the base layers and only changes the last "personalized" layer for each client locally. The main drawback of local fine-tuning is that it minimizes the optimization error, whereas the more important quantity is the generalization performance of the personalized model. In this setting, the personalized model is prone to overfit.

Multi-task learning: Another view of the personalization problem is to see it as a multi-task learning problem, similar to Smith et al. (2017). In this setting, optimization on each client can be considered a new task, and hence approaches from multi-task learning can be applied. Another approach, discussed as an open problem in Kairouz et al. (2019), is to cluster groups of clients as similar tasks based on features such as region, akin to one of the approaches proposed by Mansour et al. (2020).

Contextualization: An important application of personalization in federated learning is using the model under different contexts. For instance, in the next-character recognition task in Hard et al. (2018), the results should differ based on the context of the use case.
Hence, we need a personalized model on one client under different contexts, which requires access to more features about the context during training. Evaluation of the personalized model in such a setting has been investigated by Wang et al. (2019), which is in line with our approach in the experimental results of Section 5. Liang et al. (2020) propose to directly learn the feature representation locally and train the discriminator globally, which reduces the effect of data heterogeneity and ensures fair learning.

Personalization via model regularization: Another significant line of work on personalization relies on model regularization. Several studies introduce personalization approaches for federated learning by regularizing the difference between the global and local models. Hanzely & Richtárik (2020) introduce a new formulation for federated learning where they add a regularization term on the distance between the local and global models. In their approach, they use a mixing parameter that controls the degree of optimization for both the local models and the global model. FedAvg (McMahan et al., 2017) can be considered a special case of this approach. They show that the learned model lies in the convex hull of both local and global models, and that at each iteration, depending on the local models' optimization parameters, the global model moves closer to the global model learned by FedAvg. Similarly, Huang et al. (2020) and Dinh et al. (2020) also propose to use regularization between the local and global models to realize personalized learning. Shen et al. (2020) propose a knowledge-distillation approach to achieve personalization, where they regularize the discrepancy between the predictions of the local and global models.

Personalization via model interpolation: Parallel to our work, other studies introduce personalization approaches for federated learning by mixing the global and local models. The closest approach to our proposal is introduced by Mansour et al. (2020). They propose three different approaches for personalization with generalization guarantees, namely, client clustering, data interpolation, and model interpolation. Of these three, the first two approaches need some meta-features from all clients, which makes them infeasible for federated learning due to privacy concerns. The third scheme, which is also the most promising one in practice, has a formulation close to ours in its interpolation of the local and global models. However, their generalization bound does not demonstrate the advantage of mixing models, whereas our analysis shows how model mixing impacts the generalization bound, by presenting its dependency on the mixing parameter, the data diversity, and the optimal models on the local and global distributions.

Beyond different techniques for personalization in federated learning, Kairouz et al. (2019) ask the essential question of "when is a global FL-trained model better?", or, as we can ask, when is personalization better? The answer to these questions mostly depends on the distribution of data across clients. As we theoretically prove and empirically verify in this paper, when the data is distributed IID, we cannot benefit from personalization, and the setting is similar to the local SGD scenario (Stich, 2018; Haddadpour et al., 2019a;b; Woodworth et al., 2020b).
However, when the data is non-IID across clients, which is mostly the case in federated learning, personalization can help to balance shared and local knowledge. The question then becomes: what degree of personalization is best for each client? While how to appropriately mix the global and local models was an open problem in Mohri et al. (2019), we answer this question by adaptively tuning the degree of personalization for each client, as discussed in Section 3, so that it becomes agnostic to the local data distributions.

B ADDITIONAL EXPERIMENTAL RESULTS

In this section, we present additional experimental results to demonstrate the efficacy of the proposed APFL algorithm. First, we describe the different datasets we have used in this paper, and then present additional results.

B.1 DATASETS

For the experiments, we use 4 different data sources, as follows:

MNIST and CIFAR10. For the MNIST and CIFAR10 datasets to resemble the federated learning setting, we need to manually distribute them in a non-IID way, so the data distribution is pathologically heterogeneous. To this end, we follow the steps used by McMahan et al. (2017), where the dataset is partitioned based on labels and each client draws samples from a limited number of classes. In this way, we create 3 datasets from MNIST: MNIST non-IID with 2 classes per client, MNIST non-IID with 4 classes per client, and MNIST IID, where the data is distributed uniformly at random across different clients. Also, we create a non-IID CIFAR10 dataset, where each client has access to only 2 classes of data.

EMNIST. In addition to pathologically heterogeneous data distributions, we applied our algorithm to a real-world heterogeneous dataset, which is an extension of MNIST. The EMNIST dataset includes images of characters divided by authors, where each author has a different writing style, making their distributions different (Caldas et al., 2018). We use only the digit characters and the data of 1000 authors to train our models.

Synthetic. For generating the synthetic dataset, we follow the procedure used by Li et al. (2018), which uses two parameters, say synthetic(γ, β), that control how much the local model and the local dataset of each client differ from those of other clients, respectively. Using these parameters, we can control the diversity between the data and models of different clients. For each client $i$, we generate a weight matrix $W_i \in \mathbb{R}^{m \times c}$ and a bias $b_i \in \mathbb{R}^{c}$, and the output for the $i$th client is $y_i = \arg\max\left(\sigma\left(W_i^{\top} x_i + b_i\right)\right)$, where $\sigma(\cdot)$ is the softmax. In this setting, the input data $x_i \in \mathbb{R}^{m}$ has $m$ features, and the output $y$ can take $c$ different values, indicating the number of classes. The model is generated from Gaussian distributions, $W_i \sim \mathcal{N}(\mu_i, 1)$ and $b_i \sim \mathcal{N}(\mu_i, 1)$, where $\mu_i \sim \mathcal{N}(0, \gamma)$. The input is drawn from a Gaussian distribution $x_i \sim \mathcal{N}(\nu_i, \Sigma)$, where $\nu_i \sim \mathcal{N}(V_i, 1)$ and $V_i \sim \mathcal{N}(0, \beta)$. The covariance $\Sigma$ is a diagonal matrix with $\Sigma_{k,k} = k^{-1.2}$. Using this procedure, we generate three different datasets, namely synthetic(0.0, 0.0), synthetic(0.5, 0.5), and synthetic(1.0, 1.0), where we move from an IID dataset to highly non-IID data. A sketch of this generator is given below.
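For concreteness, the following NumPy sketch reproduces the synthetic(γ, β) generator described above. The client count, feature dimension, and per-client sample size are illustrative defaults, and we treat γ and β as variances of the corresponding Gaussians; these are our assumptions, not details fixed by the paper.

```python
import numpy as np

def synthetic(gamma, beta, n_clients=100, m=60, c=10, n_samples=200, seed=0):
    """Generate synthetic(gamma, beta) federated data as described above."""
    rng = np.random.default_rng(seed)
    cov = np.diag(np.arange(1, m + 1) ** -1.2)            # Sigma_{k,k} = k^{-1.2}
    clients = []
    for _ in range(n_clients):
        mu = rng.normal(0.0, np.sqrt(gamma))              # mu_i ~ N(0, gamma)
        W = rng.normal(mu, 1.0, size=(m, c))              # W_i ~ N(mu_i, 1)
        b = rng.normal(mu, 1.0, size=c)                   # b_i ~ N(mu_i, 1)
        V = rng.normal(0.0, np.sqrt(beta))                # V_i ~ N(0, beta)
        nu = rng.normal(V, 1.0, size=m)                   # nu_i ~ N(V_i, 1)
        X = rng.multivariate_normal(nu, cov, size=n_samples)  # x_i ~ N(nu_i, Sigma)
        y = np.argmax(X @ W + b, axis=1)  # argmax of softmax equals argmax of logits
        clients.append((X, y))
    return clients
```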
B.2 ADDITIONAL RESULTS

In this part, we present more experimental results that further illustrate the effectiveness of APFL on other datasets and models.

Effect of sampling. To understand how the sampling of different clients affects the performance of the APFL algorithm, we run the same experiment with different sampling rates on the MNIST dataset. The results of this experiment are depicted in Figure 4, where we run the experiment for different sampling rates K ∈ {0.3, 0.5, 0.7} and different values of α ∈ {0.25, 0.5, 0.75}. The results are reported for the personalized model of APFL and localized FedAvg. As can be inferred, decreasing the sampling ratio has a negative impact on both the training and generalization performance of FedAvg. However, regardless of the sampling ratio, APFL outperforms the local model of FedAvg in both training and generalization. Also, from the results of Figure 2, we know that for this highly non-IID dataset, larger α values are preferred; increasing α can diminish the negative impact of sampling on personalized models, both in training and in generalization.

Natural heterogeneous data. In addition to the CIFAR10 and MNIST datasets with pathologically heterogeneous data distributions, we apply our algorithm to a naturally heterogeneous dataset, EMNIST (Caldas et al., 2018). We use the data of 1000 clients, and in each round of communication we randomly select 10% of the clients to participate in training. We use an MLP model with 2 hidden layers, each with 200 neurons and ReLU as the activation function, using cross entropy as the loss function. For APFL, we use the adaptive α scheme with an initial value of 0.5 for each client. We run both algorithms for 250 rounds of communication; in each round, each online client performs local updates for 1 epoch on its data. Figure 5 shows the results of this experiment for the personalized model of APFL and the localized model of FedAvg. APFL with adaptive α reaches the same training loss as localized FedAvg, while greatly outperforming it in generalization on local validation data.

Data distribution using the Dirichlet distribution. Another approach to distributing data in a non-IID way is to use the Dirichlet distribution, as discussed in Hsu et al. (2019); Yurochkin et al. (2019). We use this approach, set the parameter of the Dirichlet distribution to 1.0, and repeat the experiments for the MNIST dataset with the MLP model from the main body. Here, again, we have 100 clients and run the experiments for 100 rounds of communication, each with 1 epoch of training. The results are summarized in Table 3; again, it can be inferred that APFL generalizes well on the local test datasets of different clients. A sketch of the Dirichlet-based split is given below.

(Table 3 caption: results with the Dirichlet distribution for splitting data across clients; the parameter of the Dirichlet distribution is set to 1.)
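The following NumPy sketch shows one common way to implement the Dirichlet-based split used above: for each class, client proportions are drawn from Dir(α·1) and that class's samples are partitioned accordingly. The exact recipe of Hsu et al. (2019) may differ in details; this is an illustrative version.

```python
import numpy as np

def dirichlet_split(labels, n_clients=100, alpha=1.0, seed=0):
    """Partition sample indices across clients using per-class Dir(alpha) proportions."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(n_clients)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        proportions = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, shard in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(shard.tolist())
    return client_indices
```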
C DISCUSSIONS AND EXTENSIONS

Connection between learning guarantee and convergence. As Theorem 1 suggests, the generalization bound depends on the divergence between the local and global distributions. In the language of optimization, the counterpart of distribution divergence is gradient diversity; hence, gradient diversity appears in our empirical loss convergence rate (Theorem 2). Another interesting observation is that the generalization bound contains the terms $\lambda_{\mathcal{H}}$ and $\mathcal{L}_{D_i}(h_i^*)$, which are intrinsic to the distributions and the hypothesis class. Meanwhile, the convergence result contains the term $\|v_i^* - w^*\|^2$, which likewise depends only on the data distribution and the hypothesis class we choose. In addition, $\|v_i^* - w^*\|^2$ also reveals the divergence between the local and global optimal solutions.

Why APFL is "adaptive". Both information-theoretically (Theorem 1) and computationally (Theorem 2), we prove that when the local distribution drifts far away from the average distribution, the global model does not contribute much to improving local generalization, and we have to tune the mixing parameter α to a larger value. It is thus necessary to update α adaptively during empirical risk minimization. In Section 3, (6) shows that the update of α depends on the correlation between the local gradient and the deviation between the local and global models. Experimental results show that our method can adaptively tune α and can outperform the training scheme that uses a fixed α.

Comparison with the local ERM model. A crucial question about personalization is: when is it preferable to employ a mixed model, and how bad can a local ERM model be? In the following corollary, we answer this by showing that the risk of the local ERM model can be strictly worse than that of our personalized model.

Corollary 1. Continuing with Theorem 1, there exist a distribution $D_i$ and constants $C_1$ and $C_2$ such that, with probability at least $1-\delta$, the following upper bound on the difference between the risks of the personalized model $h_{\alpha_i}$ and the local ERM model $\hat{h}_i^*$ on $D_i$ holds:
$$\mathcal{L}_{D_i}(h_{\alpha_i}) - \mathcal{L}_{D_i}(\hat{h}_i^*) \le (2\alpha_i^2 - 1)\,\mathcal{L}_{D_i}(h_i^*) + (2\alpha_i^2 C_1 - C_2)\sqrt{\frac{d + \log(1/\delta)}{m_i}} + 2\alpha_i^2 G \lambda_{\mathcal{H}}(S_i) + 2(1-\alpha_i)^2\left(\hat{\mathcal{L}}_{\bar{D}}(\bar{h}^*) + B\|\bar{D} - D_i\|_1 + C_1\sqrt{\frac{d + \log(1/\delta)}{m}}\right).$$

Examining the above bound, the personalized model is preferable to the local model if this value is less than 0. For that, we require $(2\alpha_i^2 - 1)$ and $(2\alpha_i^2 C_1 - C_2)$ to be negative, which is satisfied by choosing $\alpha_i \le \min\{\frac{\sqrt{2}}{2}, \sqrt{\frac{C_2}{2C_1}}\}$. Then, the term $\sqrt{\frac{d+\log(1/\delta)}{m_i}}$ should be sufficiently large, and the divergence term, as well as the global-model generalization error, has to be small. In this case, from the local-model perspective, the client can benefit from incorporating some of the global model. Using a similar technique, we can prove the supremacy of the mixed model over the global model as well.

Proof of Corollary 1. Since in Theorem 1 we already obtained the following upper bound for $\mathcal{L}_{D_i}(h_{\alpha_i})$,
$$\mathcal{L}_{D_i}(h_{\alpha_i}) \le 2\alpha_i^2\left(\mathcal{L}_{D_i}(h_i^*) + 2C_1\sqrt{\frac{d+\log(1/\delta)}{m_i}} + G\lambda_{\mathcal{H}}(S_i)\right) + 2(1-\alpha_i)^2\left(\hat{\mathcal{L}}_{\bar{D}}(\bar{h}^*) + B\|\bar{D} - D_i\|_1 + C_1\sqrt{\frac{d+\log(1/\delta)}{m}}\right),$$
to find the upper bound on $\mathcal{L}_{D_i}(h_{\alpha_i}) - \mathcal{L}_{D_i}(\hat{h}_i^*)$, we only need a lower bound on $\mathcal{L}_{D_i}(\hat{h}_i^*)$. The fundamental theorem of statistical learning (Shalev-Shwartz & Ben-David, 2014; Mohri et al., 2018) states a lower risk bound for agnostic PAC learning: for a hypothesis class with finite VC dimension $d$, there exists a distribution $D$ such that, for any learning algorithm that learns a hypothesis $h \in \mathcal{H}$ from $m$ i.i.d. samples of $D$, there exists a constant $C$ such that, with probability at least $1-\delta$,
$$\mathcal{L}_{D}(h) - \min_{h' \in \mathcal{H}} \mathcal{L}_{D}(h') \ge C\sqrt{\frac{d+\log(1/\delta)}{m}}.$$
Since $\hat{h}_i^*$ is learned by the ERM algorithm, this agnostic PAC lower bound also applies to it. Hence, in the worst case, under the distribution $D_i$, if $\hat{h}_i^*$ is learned by ERM from $m_i$ samples, then there is a constant $C_2$ such that, with probability at least $1-\delta$,
$$\mathcal{L}_{D_i}(\hat{h}_i^*) \ge \mathcal{L}_{D_i}(h_i^*) + C_2\sqrt{\frac{d+\log(1/\delta)}{m_i}}.$$
Thus we can bound $\mathcal{L}_{D_i}(h_{\alpha_i}) - \mathcal{L}_{D_i}(\hat{h}_i^*)$ as Corollary 1 claims.

Personalization for new participant nodes. Suppose we already have a trained global model $\hat{w}$, and now a new device $k$ joins the network and wishes to personalize the global model to adapt to its own domain. This can be done by performing a few local stochastic gradient descent updates starting from the given global model as the initial local model:
$$v_k^{(t+1)} = v_k^{(t)} - \eta_t \nabla_v f_k\left(\alpha_k v_k^{(t)} + (1-\alpha_k)\hat{w};\, \xi_k^{(t)}\right) \qquad (7)$$
to quickly learn a personalized model for the newly joined device. A sketch of this procedure is given below.
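A minimal PyTorch-style sketch of the update in (7) follows; the `mix_models` helper from the earlier sketch is reused, the data loader, loss function, and hyperparameters are illustrative assumptions, and the α chain-rule factor reflects our reading of $\nabla_v$ at the mixed point.

```python
import copy
import torch

def personalize_new_client(global_model, loader, loss_fn, alpha=0.75, steps=5, lr=0.01):
    """Sketch of (7): fine-tune a fresh local model v_k with SGD on gradients
    taken at the mixed point alpha * v_k + (1 - alpha) * w_hat (w_hat fixed)."""
    v = copy.deepcopy(global_model)  # initialize the local model from w_hat
    for step, (x, y) in enumerate(loader):
        if step == steps:
            break
        mixed = mix_models(v, global_model, alpha)  # alpha*v + (1-alpha)*w_hat
        loss_fn(mixed(x), y).backward()
        with torch.no_grad():
            for p_v, p_mix in zip(v.parameters(), mixed.parameters()):
                # Chain rule: the gradient wrt v is alpha times the gradient
                # of the loss at the mixed point.
                p_v -= lr * alpha * p_mix.grad
    return v
```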
One point worth investigating is the difference between APFL and meta-learning approaches such as model-agnostic meta-learning (Finn et al., 2017). Our goal is to share knowledge among different users in order to reduce the generalization error, while meta-learning cares more about building a meta-learner that helps train models faster and with fewer samples. In this scenario, similar to FedAvg, when a new node joins the network, it gets the global model and takes a few stochastic steps based on its own data to update it. In Figure 6, we show the results of applying FedAvg and APFL to synthetic data with two different rates of diversity, synthetic(0.0, 0.0) and synthetic(0.5, 0.5). In this experiment, we keep 3 nodes, along with their data, out of the entire training of 100 rounds of communication among the other 97 nodes. In each round, each client updates its local and personalized models for one epoch. After the training is done, those 3 clients join the network, get the latest global model, and start training local and personalized models of their own. Figure 6 shows the training loss and validation accuracy of these 3 nodes during their 5 epochs of updates. The local model represents the model that would be trained in FedAvg, while the personalized model is the one resulting from APFL. Although the goal of APFL is to adaptively learn the personalized model during training, it can be inferred that APFL can learn a better personalized model in a meta-learning scenario as well.

Agnostic global model. As pointed out by Mohri et al. (2019), the global model can be made distributionally robust by optimizing the agnostic loss:
$$\min_{w \in \mathbb{R}^d} \max_{q \in \Delta_n} F(w) := \sum_{i=1}^{n} q_i f_i(w), \qquad (8)$$
where $\Delta_n = \{q \in \mathbb{R}_+^n \mid \sum_{i=1}^{n} q_i = 1\}$ is the $n$-dimensional simplex. We call this scenario "Adaptive Personalized Agnostic Federated Learning". In this case, the analysis is more challenging, since the global empirical risk minimization is performed on a totally different domain, so the risk upper bound we derived for $h_{\alpha_i}$ no longer holds. Also, from a computational standpoint, since the resulting problem is a minimax optimization problem, the convergence analysis of agnostic APFL will be more involved; we leave this as interesting future work.

D PROOF OF GENERALIZATION BOUND

In this section, we present the proof of the generalization bound for the APFL algorithm. Recall that we define the following hypotheses on the $i$th local true and empirical distributions:
$$\hat{h}_i^* = \arg\min_{h \in \mathcal{H}} \hat{\mathcal{L}}_{D_i}(h) \quad \text{(local empirical risk minimizer)}$$
$$h_i^* = \arg\min_{h \in \mathcal{H}} \mathcal{L}_{D_i}(h) \quad \text{(local true risk minimizer)}$$
$$\bar{h}^* = \arg\min_{h \in \mathcal{H}} \mathcal{L}_{\bar{D}}(h) \quad \text{(global empirical risk minimizer)}$$
$$\hat{h}_{loc,i}^* = \arg\min_{h \in \mathcal{H}} \hat{\mathcal{L}}_{D_i}\left(\alpha_i h + (1-\alpha_i)\bar{h}^*\right) \quad \text{(mixed empirical risk minimizer)}$$
$$h_{loc,i}^* = \arg\min_{h \in \mathcal{H}} \mathcal{L}_{D_i}\left(\alpha_i h + (1-\alpha_i)\bar{h}^*\right) \quad \text{(mixed true risk minimizer)}$$
where $\hat{\mathcal{L}}_{D_i}(h)$ and $\mathcal{L}_{D_i}(h)$ denote the empirical and true risks on $D_i$, respectively. From a high-level technical view, since we wish to bound the risk of the mixed model on the local distribution $D_i$, we first need to utilize the convexity of the risk function and decompose it into two parts: $\mathcal{L}_{D_i}(\hat{h}_{loc,i}^*)$ and $\mathcal{L}_{D_i}(\bar{h}^*)$.
To bound LDi ( ĥ∗loc,i ) , a natural idea is to characterize\nit by the risk of optimal model LDi (h∗i ), plus some excess risk. However, due to fact that ĥ∗loc,i is not the sole local empirical risk minimizer, rather it partially incorporates the global model, we need to characterize to what extent it drifts from the local empirical risk minimizer ĥ∗i . This drift can be depicted by the hypothesis capacity, so that is our motivation to define λH(S) to quantify the empirical loss discrepancy over S among pair of hypotheses in H. We have to admit that there should be a tighter theory to bound this drift, depending how global model is incorporated, which we leave it as a future work.\nThe following simple result will be useful in the proof of generalization.\nLemma 1. Let H be a hypothesis class and D and D′ denote two probability measures over space Ξ. Let LD(h) = E(x,y)∼D [` (h(x), y)] denote the risk of h over D . If the loss function `(·) is bounded by B, then for every h ∈ H:\nLD(h) ≤ LD′(h) +B‖D − D′‖1, (9)\nwhere ‖D − D′‖1 = ∫ Ξ |P(x,y)∼D − P(x,y)∼D′ |dxdy." }, { "heading": "Proof.", "text": "LD(h) ≤ LD′(h) + |LD(h)− LD′(h)| ≤ LD(h) + ∫\nΞ\n|`(y, h(x))||P(x,y)∼D − P(x,y)∼D′ |dxdy\n= LD(h) +B‖D − D′‖1.\nProof of Theorem 1 We now turn to proving the generalization bound for the proposed APFL algorithm. Recall that for the classification task we consider squared hinge loss, and for the regression case we consider MSE loss. We will first prove that in both cases we can decompose the risk as follows:\nLDi(h∗αi) ≤ 2α 2 iLDi ( ĥ∗loc,i ) + 2(1− αi)2LDi ( h̄∗(x) ) . (10)\nWe start with the classification case first. Note that, hinge loss: max{0, 1 − z} is convex in z, so max{0, 1− y(αih+ (1−αi)h′)} ≤ αi max{0, 1− yh}+ (1−αi) max{0, 1− yh′}, according to\nJensen’s inequality. Hence, we have:\nLDi(h∗αi) = LDi(αiĥ ∗ loc,i + (1− αi)h̄∗) = E(x,y)∼Di ( max{0, 1− y(αiĥ∗loc,i(x) + (1− αi)h̄∗(x))} )2\n= E(x,y)∼Di ( αi max{0, 1− yĥ∗loc,i(x)}+ (1− αi) max{0, 1− yh̄∗(x)} )2 ≤ 2α2iE(x,y)∼Di ( max{0, 1− yĥ∗loc,i(x)}\n)2 + 2(1− αi)2E(x,y)∼Di ( max{0, 1− yh̄∗(x)}\n)2 ≤ 2α2iLDi ( ĥ∗loc,i ) + 2(1− αi)2LDi ( h̄∗ ) .\nFor regression case:\nLDi(h∗αi) = LDi(αiĥ ∗ loc,i + (1− αi)h̄∗) = E(x,y)∼Di ∥∥∥y − (αiĥ∗loc,i(x) + (1− αi)h̄∗(x))∥∥∥2\n= E(x,y)∼Di ∥∥∥αiy − αiĥ∗loc,i(x) + (1− αi)y − (1− αi)h̄∗(x)∥∥∥2\n≤ 2α2iE(x,y)∼Di ∥∥∥y − ĥ∗loc,i(x)∥∥∥2 + 2(1− αi)2E(x,y)∼Di ∥∥y − h̄∗(x)∥∥2\n≤ 2α2iLDi ( ĥ∗loc,i ) + 2(1− αi)2LDi ( h̄∗ )\nThus we can conclude:\nLDi(h∗αi) ≤ 2α 2 i LDi ( ĥ∗loc,i ) ︸ ︷︷ ︸\nT1\n+2(1− αi)2 LDi ( h̄∗ )︸ ︷︷ ︸\nT2\n. (11)\nWe proceed to bound the terms T1 and T2 in RHS of above inequality. We first bound T1 as follows. The first step is to utilize uniform VC dimension error bound over H Mohri et al. (2018); ShalevShwartz & Ben-David (2014):\n∀h ∈ H, |LDi(h)− L̂Di(h)| ≤ C\n√ d+ log(1/δ)\nmi ,\nwhere C is constant factor. So we can bound T1 as:\nT1 = LDi(ĥ∗loc,i) = LDi(h∗i ) + LDi(ĥ∗loc,i)− LDi(h∗i ) = LDi(h∗i )\n+ LDi(ĥ∗loc,i)− L̂Di(ĥ∗loc,i)︸ ︷︷ ︸ ≤C √ d+log(1/δ)\nmi\n+L̂Di(ĥ∗loc,i)− L̂Di(h∗i ) + L̂Di(h∗i )− LDi(h∗i )︸ ︷︷ ︸ ≤C √ d+log(1/δ)\nmi\n≤ LDi(h∗i ) + 2C\n√ d+ log(1/δ)\nmi + L̂Di(ĥ∗loc,i)− L̂Di(ĥ∗i ).\nNote that\nL̂Di(ĥ∗loc,i)− L̂Di(ĥ∗i ) ≤ G 1 |Si| ∑\n(x,y)∈Si\n|ĥ∗loc,i(x)− ĥ∗i (x)| ≤ GλH(Si),\nAs a result we can bound T1 by:\nT1 ≤ LDi(h∗i ) + 2C\n√ d+ log(1/δ)\nmi +GλH(Si).\nWe now turn to bounding T2. Plugging Lemma 1 in (11) and using uniform generalization risk bound will immediately give:\nT2 ≤ L̂D̄(h̄∗) +B2‖D − D̄‖1 + C √ d+ log(1/δ)\nm .\nPlugging T1 and T2 back into (11) concludes the proof. 
Remark 3. One thing worth mentioning is that we make the customary assumption that the loss functions are bounded. This is satisfied whenever the data and the parameters of the hypothesis are bounded. For example, if we learn a linear model $w$ under the constraint $\|w\| \le 1$, and the data tuples $(x, y)$ are drawn from a bounded domain, then the loss is obviously bounded by some finite real value.

Remark 4. Since $\mathcal{L}_{D_i}(\hat{h}_{loc,i}^*)$ is the risk of the empirical risk minimizer on $D_i$ after incorporating a model learned on a different domain (i.e., the global distribution), one might argue that generalization techniques established in multi-domain learning theory (Ben-David et al., 2010; Mansour et al., 2009; Zhang et al., 2020) can be utilized to serve our purpose. However, we note that the techniques developed in Ben-David et al. (2010); Mansour et al. (2009); Zhang et al. (2020) are only applicable to settings where the aim is to directly learn a model on some combination of the source and target domains, while in our setting we partially incorporate the model learned from the source domain and then perform ERM on the joint model over the target domain. Moreover, their results only apply to very simple loss functions, e.g., absolute loss or MSE loss, while we consider the squared hinge loss in the classification case. Analogously to multi-domain theory, we derive a multi-domain learning bound based on the divergence between the source and target domains, measured in the absolute distance $\|\cdot\|_1$. As Mansour et al. (2009) point out, divergence measured by absolute loss can be large; we therefore leave the development of a more general multi-domain learning theory, which can handle the most popular loss functions (such as hinge loss, cross entropy loss, and optimal transport) with a tighter divergence measure on distributions, as an open question.

E PROOF OF CONVERGENCE RATE IN CONVEX SETTING

In this section, we present the proof of the convergence rates. For ease of mathematical derivation, we first consider the case without sampling clients at each communication step, and then generalize the proof to the setting where K devices are sampled uniformly at random by the server, as employed in the proposed algorithm.

Technical challenges. The analysis of convergence rates in our setting is more involved than the analysis of local SGD with periodic averaging by Stich (2018); Woodworth et al. (2020a). The key difficulty arises from the fact that, unlike local SGD, where local solutions evolve by mini-batch SGD alone, in our setting we also partially incorporate the global model when computing stochastic gradients over local data. In addition, our goal is to establish the convergence rate of the mixed model, rather than merely the local or global model. To better illustrate this, let us first clarify the notation for the models used in the analysis. Consider for now the simple case where we set $K = n$ (all devices participate in averaging). We define three virtual sequences $\{w^{(t)}\}_{t=1}^{T}$, $\{\bar{v}_i^{(t)}\}_{t=1}^{T}$, and $\{\hat{v}_i^{(t)}\}_{t=1}^{T}$, where $w^{(t)} = \frac{1}{n}\sum_{j=1}^{n} w_j^{(t)}$, $\bar{v}_i^{(t)} = \alpha_i v_i^{(t)} + (1-\alpha_i) w_i^{(t)}$, and $\hat{v}_i^{(t)} = \alpha_i v_i^{(t)} + (1-\alpha_i) w^{(t)}$. Since the personalized model incorporates a $1-\alpha_i$ fraction of the global model, the key challenge in the convergence analysis is to determine how much the global model benefits or hurts the local convergence. To this end, we analyze how much the dynamics of the personalized model $\hat{v}_i^{(t)}$ and the global model $w^{(t)}$ differ from each other at each iteration.
To be more specific, we study the distance between the gradients, $\|\nabla f_i(\hat{v}_i^{(t)}) - \nabla F(w^{(t)})\|^2$. Perhaps surprisingly, we can relate this distance to the gradient diversity, the personalized model convergence, the global model convergence, and the local-global optimality gap:
$$\mathbb{E}\left[\left\|\nabla f_i(\hat{v}_i^{(t)}) - \nabla F(w^{(t)})\right\|^2\right] \le 6\zeta_i + 2L^2\,\mathbb{E}\left[\|\hat{v}_i^{(t)} - v^*\|^2\right] + 6L^2\,\mathbb{E}\left[\|w^{(t)} - w^*\|^2\right] + 6L^2\Delta_i.$$
Here $\mathbb{E}[\|\hat{v}_i^{(t)} - v^*\|^2]$ and $\mathbb{E}[\|w^{(t)} - w^*\|^2]$ converge quickly for smooth strongly convex objectives, while $\zeta_i$ and $\Delta_i$ serve as residual errors that indicate the heterogeneity among the local functions.

Algorithm 2: Local Descent APFL (without sampling)
  input: mixture weights $\alpha_1, \dots, \alpha_n$, synchronization gap $\tau$, local models $v_i^{(0)}$ for $i \in [n]$, and local versions of the global model $w_i^{(0)}$ for $i \in [n]$
  for $t = 0, \dots, T$ do
    if $\tau$ does not divide $t$ then
      $w_i^{(t)} = \prod_{\mathcal{W}}\left(w_i^{(t-1)} - \eta_t \nabla f_i(w_i^{(t-1)}; \xi_i^t)\right)$
      $v_i^{(t)} = \prod_{\mathcal{W}}\left(v_i^{(t-1)} - \eta_t \nabla_v f_i(\bar{v}_i^{(t-1)}; \xi_i^t)\right)$
      $\bar{v}_i^{(t)} = \alpha_i v_i^{(t)} + (1-\alpha_i) w_i^{(t)}$
    else
      each client sends $w_j^{(t)}$ to the server
      $w^{(t)} = \frac{1}{n}\sum_{j=1}^{n} w_j^{(t)}$
      the server broadcasts $w^{(t)}$ to all clients
    end
  end
  for $i = 1, \dots, n$ do
    output: personalized model $\hat{v}_i = \frac{1}{S_T}\sum_{t=1}^{T} p_t\left(\alpha_i v_i^{(t)} + (1-\alpha_i)\frac{1}{n}\sum_{j=1}^{n} w_j^{(t)}\right)$; global model $\hat{w} = \frac{1}{n S_T}\sum_{t=1}^{T} p_t \sum_{j=1}^{n} w_j^{(t)}$
  end

E.1 PROOF WITHOUT SAMPLING

Before giving the proof of the convergence analysis of Algorithm 1 in the main paper, we first discuss a warm-up case: local descent APFL without client sampling. As Algorithm 2 shows, all clients participate in the averaging stage every τ iterations. The convergence of the global and personalized models of Algorithm 2 is given in the following theorems. We start by stating the convergence of the global model.

Theorem 4 (Global model convergence of Local Descent APFL without sampling). Assume each client's objective function is $\mu$-strongly convex and $L$-smooth and satisfies Assumption 1. Using Algorithm 2, choosing the mixing weight $\alpha_i \ge \max\{1 - \frac{1}{4\sqrt{6}\kappa}, 1 - \frac{1}{4\sqrt{6}\kappa\sqrt{\mu}}\}$, the learning rate $\eta_t = \frac{16}{\mu(t+a)}$ with $a = \max\{128\kappa, \tau\}$, and the averaging scheme $\hat{w} = \frac{1}{n S_T}\sum_{t=1}^{T} p_t \sum_{j=1}^{n} w_j^{(t)}$, where $p_t = (t+a)^2$ and $S_T = \sum_{t=1}^{T} p_t$, the following convergence holds:
$$\mathbb{E}[F(\hat{w})] - F(w^*) \le O\left(\frac{\mu}{T^3}\right) + O\left(\frac{\kappa^2\tau\left(\sigma^2 + \tau\frac{\zeta}{n}\right)}{\mu T^2}\right) + O\left(\frac{\kappa^2\tau\left(\sigma^2 + \tau\frac{\zeta}{n}\right)\ln T}{\mu T^3}\right) + O\left(\frac{\sigma^2}{nT}\right),$$
where $w^* = \arg\min_{w} F(w)$ is the optimal global solution.

Proof. The proof is deferred to Appendix E.1.2.

The following theorem establishes the convergence of the personalized model of Algorithm 2.

Theorem 5 (Personalized model convergence of Local Descent APFL without sampling). Assume each client's objective function is $\mu$-strongly convex and $L$-smooth and satisfies Assumption 1. Using Algorithm 2, choosing the mixing weight $\alpha_i \ge \max\{1 - \frac{1}{4\sqrt{6}\kappa}, 1 - \frac{1}{4\sqrt{6}\kappa\sqrt{\mu}}\}$, the learning rate $\eta_t = \frac{16}{\mu(t+a)}$ with $a = \max\{128\kappa, \tau\}$, and the averaging scheme $\hat{v}_i = \frac{1}{S_T}\sum_{t=1}^{T} p_t\left(\alpha_i v_i^{(t)} + (1-\alpha_i)\frac{1}{n}\sum_{j=1}^{n} w_j^{(t)}\right)$, where $p_t = (t+a)^2$ and $S_T = \sum_{t=1}^{T} p_t$, and letting $f_i^*$ denote the local minimum of the $i$th client, the following convergence holds for all $i \in [n]$:
$$\mathbb{E}[f_i(\hat{v}_i)] - f_i^* \le O\left(\frac{\mu}{T^3}\right) + \alpha_i^2 O\left(\frac{\sigma^2}{\mu T}\right) + (1-\alpha_i)^2 O\left(\frac{\zeta_i}{\mu} + \kappa L \Delta_i\right) + (1-\alpha_i)^2\left(O\left(\frac{\kappa L \ln T}{T^3}\right) + O\left(\frac{\kappa^2\sigma^2}{\mu n T}\right) + O\left(\frac{\kappa^2\tau\left(\sigma^2 + \tau(\zeta_i + \frac{\zeta}{n})\right)}{\mu T^2}\right) + O\left(\frac{\kappa^4\tau\left(\sigma^2 + 2\tau\frac{\zeta}{n}\right)}{\mu T^2}\right)\right).$$

Proof. The proof is deferred to Appendix E.1.3.

E.1.1 PROOF OF USEFUL LEMMAS

Before giving the proofs of Theorems 4 and 5, we first prove a few useful lemmas.
Recall that we define virtual sequences {w(t)}Tt=1,{v̄ (t) i }Tt=1,{v̂ (t) i }Tt=1 wherew(t) = 1n ∑n i=1w (t) i ,v̄ (t) i = αiv (t) i + (1− αi)w (t) i ,v̂ (t) i = αiv (t) i + (1− αi)w(t).\nWe start with the following lemma that bounds the difference between the gradients of local objective and global objective at local and global models. Lemma 2. For Algorithm 2, at each iteration, the gap between local gradient and global gradient is bounded by\nE [ ‖∇fi(v̂(t)i )−∇F (w (t))‖2 ] ≤ 2L2E [ ‖v̂(t)i − v ∗‖2 ] + 6ζi + 6L 2E [ ‖w(t) −w∗‖2 ] + 6L2∆i.\nProof. From the smoothness assumption and by applying the Jensen’s inequality we have: E [ ‖∇fi(v̂(t)i )−∇F (w (t))‖2 ]\n≤ 2E [ ‖∇fi(v̂(t)i )−∇fi(v ∗ i )‖2 ] + 2E [ ‖∇fi(v∗i )−∇F (w(t))‖2 ] ≤ 2L2E [ ‖v̂(t)i − v ∗‖2 ] + 6E [ ‖∇fi(v∗i )−∇fi(w∗)‖2\n] + 6E [ ‖∇fi(w∗)−∇F (w∗)‖2 ] + 6E [ ‖∇F (w∗)−∇F (w(t))‖2\n] ≤ 2L2E [ ‖v̂(t)i − v ∗‖2 ] + 6L2E [ ‖v∗i −w∗‖2 ] + 6ζi + 6L 2E [ ‖w(t) −w∗‖2\n] ≤ 2L2E [ ‖v̂(t)i − v ∗‖2 ] + 6L2∆i + 6ζi + 6L 2E [ ‖w(t) −w∗‖2 ] .\nLemma 3 (Local model deviation without sampling). For Algorithm 2, at each iteration, the deviation between each local version of the global model w(t)i and the global model w\n(t) is bounded by:\nE [ ‖w(t) −w(t)i ‖ 2 ] ≤ 3τσ2η2t−1 + 3(ζi + ζ\nn )τ2η2t−1,\n1\nn n∑ i=1 E [ ‖w(t) −w(t)i ‖ 2 ] ≤ 3τσ2η2t−1 + 6τ2 ζ n η2t−1,\nwhere ζn = 1 n ∑n i=1 ζi.\nProof. According to Lemma 8 in Woodworth et al. (2020a):\nE [ ‖w(t) −w(t)i ‖ 2 ] ≤ 1 n n∑ j=1 E [ ‖w(t)j −w (t) i ‖ 2 ]\n≤ 3 ( σ2 + ζiτ + ζ\nn τ ) t−1∑ p=tc η2p t−1∏ q=p+1 (1− µηq)\n1\nn n∑ i=1 E [ ‖w(t) −w(t)i ‖ 2 ] ≤ 1 n2 n∑ i=1 n∑ j=1 E [ ‖w(t)j −w (t) i ‖ 2 ]\n≤ 3 ( σ2 + 2τ ζ\nn ) t−1∑ p=tc η2p t−1∏ q=p+1 (1− µηq) .\nPlugging in ηq = 16µ(a+q) yields:\nE [ ‖w(t) −w(t)i ‖ 2 ] ≤ 3 ( σ2 + ζiτ + ζ\nn τ ) t−1∑ p=tc η2p t−1∏ q=p+1 a+ q − 16 a+ q\n≤ 3 ( σ2 + ζiτ + ζ\nn τ ) t−1∑ p=tc η2p t−1∏ q=p+1 a+ q − 16 a+ q\n≤ 3 ( σ2 + ζiτ + ζ\nn τ ) t−1∑ p=tc η2p t−1∏ q=p+1 a+ q − 2 a+ q\n≤ 3 ( σ2 + ζiτ + ζ\nn τ ) t−1∑ p=tc η2p (a+ p− 1)(a+ p) (a+ t− 2)(a+ t− 1)\n≤ 3 ( σ2 + ζiτ + ζ\nn τ ) t−1∑ p=tc η2p η2t−1 η2p\n≤ 3τ ( σ2 + ζiτ + ζ\nn τ\n) η2t−1.\nSimilarly,\n1\nn n∑ i=1 E [ ‖w(t) −w(t)i ‖ 2 ] ≤ 3τσ2η2t−1 + 6τ2 ζ n η2t−1.\nLemma 4. (Convergence of global model) Let w(t) = 1n ∑n i=1w (t) i . Under the setting of Theorem 5, we have:\nE [ ‖w(T+1) −w∗‖2 ] ≤ a 3 (T + a)3 E [ ‖w(1) −w∗‖2 ] + ( T + 16 ( 1\na+ 1 + ln(T + a) )) 1536a2τ (σ2 + 2τ ζn)L2 (a− 1)2µ4(T + a)3 + 128σ2T (T + 2a) nµ2(T + a)3 .\nProof. Using the updating rule and non-expensive property of projection, as well as applying strong convexity and smoothness assumptions yields:\nE [ ‖w(t+1) −w∗‖2 ] ≤ E\n[∥∥∥∥∥w(t) −w(t) − ηt 1n n∑ j=1 ∇fj(w(t)j ; ξ t j)−w∗ ∥∥∥∥∥ 2]\n≤ E [ ‖w(t) −w∗‖2 ] − 2ηtE\n[〈 1\nn n∑ j=1 ∇fj(w(t)j ),w (t) −w∗\n〉]\n+ η2t σ2\nn + η2tE [∥∥∥∥∥ 1n n∑ j=1 ∇fj(w(t)j ) ∥∥∥∥∥ 2]\n≤ E [ ‖w(t) −w∗‖2 ] − 2ηtE [〈 ∇F (w(t)),w(t) −w∗ 〉] + η2t σ2\nn + η2t E [∥∥∥∥∥ 1n n∑ j=1 ∇fj(w(t)j ) ∥∥∥∥∥ 2]\n︸ ︷︷ ︸ T1\n−2ηtE\n[〈 1\nn n∑ j=1 ∇fj(w(t)j )−∇F (w (t)),w(t) −w∗ 〉] ︸ ︷︷ ︸\nT2 ≤ (1− µηt)E [ ‖w(t) −w∗‖2 ] − 2ηt(E[F (w(t))]− F (w∗)) + η2t σ2\nn + T1 + T2, (12)\nwhere at the last step we used the strongly convex property.\nNow we are going to bound T1. 
By the Jensen’s inequality and smoothness, we have:\nT1 ≤ 2η2tE ∥∥∥∥∥∥ 1n n∑ j=1 ∇fj(w(t)j )−∇F (w (t)) ∥∥∥∥∥∥ 2 + 2η2tE [∥∥∥∇F (w(t))∥∥∥2]\n≤ 2η2tL2 1\nn n∑ j=1 E [ ‖w(t)j −w (t)‖2 ] + 4η2tL ( E [ F (w(t)) ] − F (w∗) ) (13)\nThen, we bound T2 as:\nT2 ≤ ηt 2 µ E ∥∥∥∥∥∥ 1n n∑ j=1 ∇fj(w(t)j )−∇F (w (t)) ∥∥∥∥∥∥ 2 + µ 2 E [ ‖w(t) −w∗‖2 ] ≤ 2ηtL 2\nµ\n1\nn n∑ j=1 E [∥∥∥w(t)j −w(t)∥∥∥2]+ µηt2 E [‖w(t) −w∗‖2] . (14)\nNow, by plugging back T1 and T2 from (13) and (14) in (12), we have: E [ ‖w(t+1) −w∗‖2 ] ≤ (\n1− µηt 2\n) E [ ‖w(t) −w∗‖2 ] −(2ηt − 4η2tL)︸ ︷︷ ︸\n≤−ηt\n( E [ F (w(t)) ] − F (w∗) ) + η2t σ2\nn\n+\n( 2ηtL 2\nµ + 2η2tL 2\n) 1\nn n∑ j=1 E [∥∥∥w(t)j −w(t)∥∥∥2] (15)\n≤ (\n1− µηt 2\n) E [ ‖w(t) −w∗‖2 ] + η2t σ2\nn +\n( 2ηtL 2\nµ + 2η2tL 2\n) 1\nn n∑ j=1 E [∥∥∥w(t)j −w(t)∥∥∥2] .\nNow, by using Lemma 3 we have: E [ ‖w(t+1) −w∗‖2 ] ≤ (\n1− µηt 2\n) E [ ‖w(t) −w∗‖2 ] + ( 2ηtL 2\nµ + 2η2tL 2\n) 3τ ( σ2 + 2τ ζ\nn\n) η2t−1 + η 2 t σ2\nn .\nNote that (1− µηt2 ) pt ηt = µ(t+a) 2(t−8+a) 16 ≤ µ(t−1+a)3 16 = pt−1 ηt−1 , so we multiply ptηt on both sides and do the telescoping sum: pT ηT E [ ‖w(T+1) −w∗‖2\n] ≤ p0 η0 E [ ‖w(1) −w∗‖2 ] + T∑ t=1 ( 2L2 µ + 2ηtL 2 ) 3τ ( σ2 + 2τ ζ n ) ptη 2 t−1 + T∑ t=1 ptηt σ2 n\n≤ p0 η0\nE [ ‖w(1) −w∗‖2 ] + T∑ t=1 ( 2L2 µ + 2ηtL 2 ) 3τ ( σ2 + 2τ ζ n ) 256a2 µ2(a− 1)2 + T∑ t=1 ptηt σ2 n .\n(16) Then, by re-arranging the terms will conclude the proof:\nE [ ‖w(T+1) −w∗‖2 ] ≤ a 3 (T + a)3 E [ ‖w(1) −w∗‖2\n] + ( T + 16 ( 1\na+ 1 + ln(T + a) )) 1536a2τ (σ2 + 2τ ζn)L2 (a− 1)2µ4(T + a)3 + 128σ2T (T + 2a) nµ2(T + a)3 ,\nwhere we use the inequality ∑T t=1 1 t+a ≤ 1 a+1 + ∫ T 1 1 t+a < 1 a+1 + ln(T + a)." }, { "heading": "E.1.2 PROOF OF THEOREM 4", "text": "Proof. According to (15) and (16) in the proof of Lemma 4 we have:\npT ηT\nE [ ‖w(T+1) −w∗‖2 ] ≤ p0 η0 E [ ‖w(1) −w∗‖2 ] − T∑ t=1 pt ( E [ F (w(t)) ] − F (w∗) ) +\nT∑ t=1 ( 2L2 µ + 2ηtL 2 ) 3τ ( σ2 + 2τ ζ n ) 256a2 µ2(a− 1)2 + T∑ t=1 ptηt σ2 n ,\nre-arranging term and dividing both sides by ST = ∑T t=1 pt > T 3 yields:\n1\nST T∑ t=1 pt ( E [ F (w(t)) ] − F (w∗) ) ≤ p0 ST η0 E [ ‖w(1) −w∗‖2 ] + 1\nST T∑ t=1 ( 2L2 µ + 2ηtL 2 ) 3τ ( σ2 + 2τ ζ n ) 256a2 µ2(a− 1)2 + 1 ST T∑ t=1 ptηt σ2 n\n≤ O ( µ T 3 ) +O\n( κ2τ ( σ2 + 2τ ζ\nn ) µT 2 ) +O ( κ2τ ( σ2 + 2τ ζ n ) lnT µT 3 ) +O ( σ2 nT ) .\nRecall that ŵ = 1nST ∑T t=1 ∑n j=1w (t) j and convexity of F , we can conclude that:\nE [F (ŵ)]− F (w∗) ≤ O ( µ T 3 ) +O\n( κ2τ ( σ2 + 2τ ζ\nn ) µT 2 ) +O ( κ2τ ( σ2 + 2τ ζ n ) lnT µT 3 ) +O ( σ2 nT ) ." }, { "heading": "E.1.3 PROOF OF THEOREM 5", "text": "Proof. Recall that we defined virtual sequences {w(t)}Tt=1 where w(t) = 1n ∑n i=1w (t) i and v̂ (t) i = αiv (t) i + (1− αi)w(t), then by the updating rule and non-expensiveness of projection we have:\nE [ ‖v̂(t+1)i − v ∗ i ‖2 ]\n≤ E [∥∥∥∥∥v̂(t)i − α2i ηt∇fi(v̄(t)i )− (1− αi)ηt 1n n∑ j=1 ∇fj(w(t)j )− v ∗ i ∥∥∥∥∥ 2]\n+ E [∥∥∥∥∥α2i ηt(∇fi(v̄(t)i )−∇fi(v̄(t)i ; ξti)) + (1− αi)ηt 1n ∑ j∈Ut ( ∇fj(w(t)j )−∇fj(w (t) j ; ξ t j) )∥∥∥∥∥ 2]\n≤ E [ ‖v̂(t)i − v ∗ i ‖2 ] − 2E [〈 α2i ηt∇fi(v̄ (t) i ) + (1− αi)ηt 1\nn n∑ j=1 ∇fj(w(t)j ), v̂ (t) i − v ∗ i\n〉]\n+ η2tE [∥∥∥∥∥α2i∇fi(v̄(t)i ) + (1− αi) 1n n∑ j=1 ∇fj(w(t)j ) ∥∥∥∥∥ 2] + α2i η 2 t σ 2 + (1− αi)2η2t σ2 n\n= E [ ‖v̂(t)i − v ∗ i ‖2 ] −2(α2i + 1− αi)ηtE [〈 ∇fi(v̄(t)i ), v̂ (t) i − v ∗ i 〉] ︸ ︷︷ ︸\nT1\n−2ηt(1− αi)E\n[〈 1\nn n∑ j=1 ∇fj(w(t)j )−∇fi(v̄ (t) i ), v̂ (t) i − v ∗ i 〉] ︸ ︷︷ ︸\nT2\n+ η2t E [ ‖α2i∇fi(v̄ (t) i ) + (1− αi) 1\nn n∑ j=1 ∇fj(w(t)j )‖ 2 ] ︸ ︷︷ ︸\nT3\n+α2i η 2 t σ 2 + (1− αi)2η2t σ2\nn . 
(17)\nNow, we bound the term T1 as follows: T1 = −2ηt(α2i + 1− αi)E [〈 ∇fi(v̂(t)i ), v̂ (t) i − v ∗ i 〉] − 2ηt(α2i + 1− αi)E [〈 ∇fi(v̄(t)i )−∇fi(v̂ (t) i ), v̂ (t) i − v ∗ i\n〉] ≤ −2ηt(α2i + 1− αi) ( E [ fi(v̂ (t) i ) ] − fi(v∗i ) + µ 2 E [ ‖v̂(t)i − v ∗ i ‖2 ])\n+ (α2i + 1− αi)ηt (\n8L2 µ(1− 8(αi − α2i )) E [ ‖v̂(t)i − v̄ (t) i ‖ 2 ] + µ(1− 8(αi − α2i )) 8 E [ ‖v̂(t)i − v ∗ i ‖2 ])\n≤ −2ηt(α2i + 1− αi) ( E [ fi(v̂ (t) i ) ] − fi(v∗i ) + µ 2 E [ ‖v̂(t)i − v ∗ i ‖2 ])\n+ ηt\n( 8L2(1− αi)2 µ(1− 8(αi − α2i )) E [ ‖w(t) −w(t)i ‖ 2 ] + µ(1− 8(αi − α2i )) 8 E [ ‖v̂(t)i − v ∗ i ‖2 ])\n≤ −2ηt(α2i + 1− αi) ( E [ fi(v̂ (t) i ) ] − fi(v∗i ) ) − 7µηt 8 E [ ‖v̂(t)i − v ∗ i ‖2 ]\n+ 8ηtL\n2(1− αi)2 µ(1− 8(αi − α2i )) E [ ‖w(t) −w(t)i ‖ 2 ] , (18)\nwhere we use the fact (α2i + 1 − αi) ≤ 1. Note that, because we set αi ≥ max{1 − 14√6κ , 1 − 1 4 √ 6κ √ µ }, and hence 1 − 8(αi − α2i ) ≥ 0, so in the second inequality we can use the arithmeticgeometry inequality.\nNext, we turn to bounding the term T2 in (17):\nT2 = −2ηt(1− αi)E\n[〈 1\nn n∑ j=1 ∇fj(w(t)j )−∇fi(v̄ (t) i ), v̂ (t) i − v ∗ i\n〉]\n≤ ηt(1− αi)\n( 2(1− αi)\nµ E [∥∥∥∥∥∇fi(v̄(t)i )− 1n n∑ j=1 ∇fj(w(t)j ) ∥∥∥∥∥ 2] + µ 2(1− αi) E [ ‖v̂(t)i − v ∗ i ‖2 ])\n≤ 6(1− αi) 2ηt\nµ\n( E [∥∥∥∇fi(v̄(t)i )−∇fi(v̂(t)i )∥∥∥2]+ E [∥∥∥∇fi(v̂(t)i )−∇F (w(t))∥∥∥2]\n+E [∥∥∥∥∥∇F (w(t))− 1n n∑ j=1 ∇fj(w(t)j ) ∥∥∥∥∥ 2]) + ηtµ 2 E [ ‖v̂(t)i − v ∗ i ‖2 ]\n≤ 6(1− αi) 2ηt\nµ\n( L2E [∥∥∥w(t) −w(t)i ∥∥∥2]+ E [∥∥∥∇fi(v̂(t)i )−∇F (w(t))∥∥∥2]\n+E [∥∥∥∥∥∇F (w(t))− 1n n∑ j=1 ∇fj(w(t)j ) ∥∥∥∥∥ 2]) + ηtµ 2 E [ ‖v̂(t)i − v ∗ i ‖2 ] . (19)\nAnd finally, we bound the term T3 in (17) as follows:\nT3 = E [∥∥∥∥∥α2i∇fi(v̄(t)i ) + (1− αi) 1n n∑ j=1 ∇fj(w(t)j ) ∥∥∥∥∥ 2]\n≤ 2(α2i + 1− αi)2E [ ‖∇fi(v̄(t)i )‖ 2 ] + 2E [∥∥∥∥∥(1− αi) ( 1 n n∑ j=1 ∇fj(w(t)j )−∇fi(v̄ (t) i ) )∥∥∥∥∥ 2]\n≤ 2 ( 2(α2i + 1− αi)2E [ ‖∇fi(v̂(t)i )−∇f ∗ i ‖2 ] + 2(α2i + 1− αi)2E [ ‖∇fi(v̄(t)i )−∇fi(v̂ (t) i )‖ 2 ])\n+ 2(1− αi)2E [∥∥∥∥∥ 1n n∑ j=1 ∇fj(w(t)j )−∇fi(v̄ (t) i ) ∥∥∥∥∥ 2]\n≤ 8L(α2i + 1− αi) ( E [ fi(v̂ (t) i ) ] − f∗i ) + 4(1− αi)2L2E [ ‖w(t) −w(t)i ‖ 2 ]\n+ 6(1− αi)2 ( L2E [∥∥∥w(t) −w(t)i ∥∥∥2]+ E [∥∥∥∇fi(v̂(t)i )−∇F (w(t))∥∥∥2] + 1\nn n∑ j=1 L2E [∥∥∥w(t) −w(t)j ∥∥∥2] ) . (20)\nNow, using Lemma 3, (1 − αi)2 ≤ 1 and plugging back T1, T2, and T3 from (18), (19), and (20) into (17), yields:\nE [ ‖v̂(t+1)i − v ∗ i ‖2 ]\n≤ (\n1− 3µηt 8\n) E [ ‖v̂(t)i − v ∗ i ‖2 ] − 2(ηt − 4η2tL)(α2i + 1− αi) ( E [ fi(v̂ (t) i ) ] − fi(v∗i ) ) + ( 8ηtL 2(1− αi)2\nµ(1− 8(αi − α2i )) +\n6(1− αi)2ηtL2\nµ + 10(1− αi)2η2tL2\n) E [∥∥∥w(t) −w(t)i ∥∥∥2]\n+\n( 6(1− αi)2ηtL2\nµ + 6(1− αi)2η2tL2\n) 1\nn n∑ j=1 E [∥∥∥w(t) −w(t)j ∥∥∥2]\n+ ( 6ηt µ + 6η2t ) (1− αi)2E [∥∥∥∇F (w(t))−∇fi(v̂(t)i )∥∥∥2]+ α2i η2t σ2 + (1− αi)2η2t σ2n , ≤ (\n1− 3µηt 8\n) E [ ‖v̂(t)i − v ∗ i ‖2 ] − 2(ηt − 4η2tL)(α2i + 1− αi) ( E [ fi(v̂ (t) i ) ] − fi(v∗i ) ) + ( 8ηtL 2(1− αi)2\nµ(1− 8(αi − α2i )) +\n6(1− αi)2ηtL2\nµ + 10(1− αi)2η2tL2\n) 3τ ( σ2 + (ζi + ζ\nn )τ\n) η2t−1\n+\n( 6(1− αi)2ηtL2\nµ + 6(1− αi)2η2tL2\n) 3τ ( σ2 + 2 ζ\nn τ\n) η2t−1\n+ ( 6ηt µ + 6η2t ) (1− αi)2E [∥∥∥∇F (w(t))−∇fi(v̂(t)i )∥∥∥2]︸ ︷︷ ︸ T4 +α2i η 2 t σ 2 + (1− αi)2η2t σ2 n , (21)\nwhere using Lemma 2 we can bound T4 as:\nT4 ≤ 6ηt µ\n(1− αi)2 ( 2L2E [ ‖v̂(t)i − v ∗‖2 ] + 6ζi + 6L 2E [ ‖w(t) −w∗‖2 ] + 6L2∆i ) + 6η2t (1− αi)2 ( 2L2E [ ‖v̂(t)i − v ∗‖2 ] + 6ζi + 6L 2E [ ‖w(t) −w∗‖2 ] + 6L2∆i ) . 
(22)\nNote that we choose αi ≥ max{1 − 14√6κ , 1 − 1 4 √ 6κ √ µ }, hence 12L 2(1−αi)2 µ ≤ µ 8 and 12L\n2(1 − αi) 2 ≤ µ8 , thereby we have:\nT4 ≤ µηt 4 ‖v̂(t)i − v\n∗‖2 + 36ηt ( 1\nµ + ηt\n) (1− αi)2 ( ζi + L 2E [ ‖w(t) −w∗‖2 ] + L2∆i ) .\nNow, using Lemma 4 we have:\nT4 ≤ µηt 4\nE [ ‖v̂(t)i − v ∗‖2 ] + 36ηt ( 1\nµ + ηt ) (1− αi)2(\nζi + L 2\n( a3 (t+ a− 1)3E [ ‖w(1) −w∗‖2 ] + ( t+ 16 ( 1\na+ 1 + ln(t+ a)\n)) 1536τ ( σ2 + 2τ ζ\nn\n) L2\nµ4(t+ a− 1)3 + 128σ2t(t+ 2a) nµ2(t+ a− 1)3\n) + L2∆i ) . (23)\nBy plugging back T4 from (23) in (21) and using the fact−(ηt−4η2tL) ≤ − 12ηt, and (α 2 i+1−αi) ≥ 3 4 , we have:\nE [ ‖v̂(t+1)i − v ∗ i ‖2 ]\n≤ (1− µηt 8\n)E [ ‖v̂(t)i − v ∗ i ‖2 ] − 3ηt\n4\n( E [ fi(v̂ (t) i ) ] − fi(v∗i ) ) + α2i η 2 t σ 2 + (1− αi)2η2t σ2\nn\n+\n( 8ηtL 2(1− αi)2\nµ(1− 8(αi − α2i )) +\n6(1− αi)2ηtL2\nµ + 10(1− αi)2η2tL2\n) 3τ ( σ2 + (ζi + ζ\nn )τ\n) η2t−1\n+\n( 6(1− αi)2ηtL2\nµ + 6(1− αi)2η2tL2\n) 3τ ( σ2 + 2 ζ\nn τ\n) η2t−1\n+ 36ηt\n( 1\nµ + ηt\n) (1− αi)2 ζi + L2 a3E [ ‖w(1) −w∗‖2 ] (t− 1 + a)3\n+ ( t+ 16 ( 1\na+ 1 + ln(t+ a)\n)) 1536τ ( σ2 + 2τ ζ\nn\n) L2\nµ4(t+ a− 1)3 + 128σ2t(t+ 2a) nµ2(t− 1 + a)3 + ∆i\n)) .\nNote that (1− µηt8 ) pt ηt ≤ pt−1ηt−1 where pt = (t+a) 2, so, we multiply ptηt on both sides, and re-arrange the terms:\n3pt 4\n( E [ fi(v̂ (t) i ) ] − fi(v∗i ) ) ≤ pt−1 ηt−1 E [ ‖v̂(t)i − v ∗ i ‖2 ] − pt ηt E [ ‖v̂(t+1)i − v ∗ i ‖2 ] + ptηt ( α2iσ 2 + (1− αi)2 σ2 n\n) + ( 8L2(1− αi)2\nµ(1− 8(αi − α2i )) +\n6(1− αi)2L2\nµ + 10(1− αi)2ηtL2\n) 3τ ( σ2 + (ζi + ζ\nn )τ\n) ptη 2 t−1\n+\n( 6(1− αi)2L2\nµ + 6(1− αi)2ηtL2\n) 3τ ( σ2 + 2 ζ\nn τ\n) ptη 2 t−1 + 36pt ( 1\nµ + ηt\n) (1− αi)2 ( ζi + L 2∆i )\n+ 36pt\n( 1\nµ + ηt\n) (1− αi)2\nL2 (\na3\n(t− 1 + a)3 + (t+ 16Θ(ln t)) 1536τ\n( σ2 + 2τ ζ\nn\n) L2\nµ4(t+ a− 1)3 + 128σ2t(t+ 2a) nµ2(t− 1 + a)3\n) .\nBy applying the telescoping sum and dividing both sides by ST = ∑T t=1 pt ≥ T 3 we have:\nfi(v̂i)− fi(v∗i )\n≤ 1 ST T∑ t=1 pt(fi(v̂ (t) i )− fi(v ∗ i ))\n≤ 4p0E\n[ ‖v̂(1)i − v ∗ i ‖2 ]\n3η0ST +\n1\nST\n4\n3 T∑ t=1 ptηt ( α2iσ 2 + (1− αi)2 σ2 n )\n+ 1\nST\n4\n3 T∑ t=1 ( 8L2(1− αi)2 µ(1− 8(αi − α2i )) + 6(1− αi)2L2 µ + 10(1− αi)2ηtL2 ) 3τ ( σ2 + (ζi + ζ n )τ ) ptη 2 t−1\n+ 1\nST\n4\n3 T∑ t=1 ( 6(1− αi)2L2 µ + 6(1− αi)2ηtL2 ) 3τ ( σ2 + 2 ζ n τ ) ptη 2 t−1\n+ 48(1− αi)2\nL2\nST T∑ t=1 pt ( 1 µ + ηt )( a3 (t− 1 + a)3 + (t+ 16Θ(ln t)) 1536τ ( σ2 + 2τ ζ n ) L2 µ4(t+ a− 1)3 + 128σ2t(t+ 2a) nµ2(t− 1 + a)3 ) + 48(1− αi)2 ( ζi + L 2∆i ) 1 ST T∑ t=1 pt ( 1 µ + ηt )\n≤ 4p0E\n[ ‖v̂(1)i − v ∗ i ‖2 ]\n3η0ST +\n32T (T + a)\n3µST\n( α2iσ 2 + (1− αi)2 σ2\nn ) + 4(1− αi)2\n3\n( 8L2T\nµ(1− 8(αi − α2i ))ST +\n6L2T\nµST +\n10L2Θ(lnT )\nµST\n) 3τ ( σ2 + (ζi + ζ\nn )τ\n) 256a2\nµ2(a− 1)2\n+ 4\n3\n( 6(1− αi)2L2T\nµST + 6(1− αi)2L2Θ(lnT ) µST\n) 3τ ( σ2 + 2 ζ\nn τ\n) 256a2\nµ2(a− 1)2\n+ 48(1− αi)2L2 a2\n(a− 1)2ST( a3Θ(lnT )\nµ +\n( T\na + Θ(lnT )\n) 1536L2τ ( σ2 + 2τ ζ\nn ) µ5 + 64(2a+ 1)σ2T (T + a) naµ3 )\n+ 48(1− αi)2L2 a2\n(a− 1)2ST\n( 16a3π2\n6µ + (Θ(lnT ))\n1536L2τ ( σ2 + 2τ ζ\nn ) µ5 + 2048(2a+ 1)σ2 naµ3 T )\n+ 48(1− αi)2 ( ζi + L 2∆i ) 1 ST ( ST µ + 8T (T + 2a) µ ) = O ( µ T 3 ) + α2iO ( σ2 µT ) + (1− αi)2O ( ζi µ + κL∆i\n) + (1− αi)2 ( O ( κL lnT\nT 3\n) +O ( κ2σ2\nµnT\n) +O ( κ2τ2(ζi + ζ n ) + κ2τσ2\nµT 2\n) +O ( κ4τ ( σ2 + 2τ ζ n ) µT 2 )) .\nwhere we use the convergence of ∑∞ t=1 ln t t2 −→ O(1), and ∑∞ t=1 1 t2 −→ π2 6 ." }, { "heading": "E.2 PROOF OF CONVERGENCE OF APFL WITH SAMPLING", "text": "In this section we will provide the formal proof of the Theorem 2. 
Before proceed to the proof, we would like to give the convergence of global model here first. The following theorem establishes the convergence of global model in APFL. Theorem 6 (Global model convergence of Local Descent APFL). If each client’s objective function is µ-strongly convex and L-smooth, and satisfies Assumption 1, using Algorithm 1, by choosing the learning rate ηt = 16µ(t+a) , where a = max{128κ, τ}, κ = L µ , and using average scheme\nŵ = 1KST ∑T t=1 pt ∑ j∈Ut w (t) j , where pt = (t+ a) 2, ST = ∑T t=1 pt, and letting F ∗ to denote the\nminimum of the F , then the following convergence holds:\nE [F (ŵ)]− F ∗ ≤ O ( µ T 3 ) +O\n( κ2τ ( σ2 + 2τ ζ\nK ) µT 2 ) +O ( κ2τ ( σ2 + 2τ ζ K ) lnT µT 3 ) +O ( σ2 KT ) ,\n(24)\nwhere τ is the number of local updates (i.e., synchronization gap) .\nProof. The proof is provided in Appendix E.2.2.\nRemark 5. It is noticeable that the obtained rate matches the convergence rate of the FedAvg, and if we choose τ = √ T/K, we recover the rate O( √ 1/KT ), which is the convergence rate of well-known local SGD with periodic averaging (Woodworth et al., 2020a).\nNow we switch to the proof of the Theorem 2. The proof pipeline is similar to what we did in Appendix E.1.3, non-sampling setting. The only difference is that we use sampling method here, hence, we will introduce the variance depending on sampling size K. Now we first begin with the proof of some technique lemmas." }, { "heading": "E.2.1 PROOF OF USEFUL LEMMAS", "text": "Lemma 5. For Algorithm 1, at each iteration, the gap between local gradient and global gradient is bounded by\nE [∥∥∥∥∥∇fi(v̂(t)i )− 1K ∑ j∈Ut ∇fj(w(t)) ∥∥∥∥∥ 2]\n≤ 2L2E [ ‖v̂(t)i − v ∗‖2 ] + 6 ( 2ζi + 2 ζ\nK\n) + 6L2E [ ‖w(t) −w∗‖2 ] + 6L2∆i.\nProof. From the smoothness assumption and by applying the Jensen’s inequality we have:\nE [ ‖∇fi(v̂(t)i )− 1\nK ∑ j∈Ut\n∇fj(w(t))‖2 ]\n≤ 2E [ ‖∇fi(v̂(t)i )−∇fi(v ∗ i )‖2 ] + 2E [ ‖∇fi(v∗i )− 1\nK ∑ j∈Ut\n∇fj(w(t))‖2 ]\n≤ 2L2E [ ‖v̂(t)i − v ∗‖2 ] + 6E [ ‖∇fi(v∗i )−∇fi(w∗)‖2 ] + 6E [ ‖∇fi(w∗)− 1\nK ∑ j∈Ut\n∇fj(w∗)‖2 ] + 6E [ ‖∇ 1\nK ∑ j∈Ut ∇fj(w∗)− 1 K ∑ j∈Ut\n∇fj(w(t))‖2 ]\n≤ 2L2E [ ‖v̂(t)i − v ∗‖2 ] + 6L2E [ ‖v∗i −w∗‖2 ] + 6 ( 2ζi + 2 1\nK ∑ j∈Ut ζj\n) + 6L2E [ ‖w(t) −w∗‖2 ] ≤ 2L2E [ ‖v̂(t)i − v ∗‖2 ] + 6L2∆i + 6 ( 2ζi + 2 ζ\nK\n) + 6L2E [ ‖w(t) −w∗‖2 ] .\nLemma 6 (Local model deviation with sampling). For Algorithm 1, at each iteration, the deviation between each local version of the global model w(t)i and the global model w (t) is bounded by:\nE [ ‖w(t) −w(t)i ‖ 2 ] ≤ 3τσ2η2t−1 + 3(ζi + ζ\nK )τ2η2t−1,\n1\nK ∑ i∈Ut E [ ‖w(t) −w(t)i ‖ 2 ] ≤ 3τσ2η2t−1 + 6τ2 ζ K η2t−1.\nwhere ζK = 1 K ∑n i=1 ζi.\nProof. According to Lemma 8 in Woodworth et al. (2020a):\nE [ ‖w(t) −w(t)i ‖ 2 ] ≤ 1 K ∑ j∈Ut E [ ‖w(t)j −w (t) i ‖ 2 ]\n≤ 3 ( σ2 + ζiτ + ζ\nK τ ) t−1∑ p=tc η2p t−1∏ q=p+1 (1− µηq)\n1\nn n∑ i=1 E [ ‖w(t) −w(t)i ‖ 2 ] ≤ 1 n2 n∑ i=1 n∑ j=1 E [ ‖w(t)j −w (t) i ‖ 2 ]\n≤ 3 ( σ2 + 2τ ζ\nK ) t−1∑ p=tc η2p t−1∏ q=p+1 (1− µηq) .\nThen the rest of the proof follows Lemma 3.\nLemma 7. (Convergence of Global Model) Let w(t) = 1K ∑ j∈Ut w (t) j . In Theorem 2’s setting, using Algorithm 1 by choosing learning rate as ηt = 16µ(t+a) , we have:\nE [ ‖w(T+1) −w∗‖2 ] ≤ a 3 (T + a)3 E [ ‖w(1) −w∗‖2 ] + ( T + 16 ( 1\na+ 1 + ln(T + a)\n)) 1536a2τ ( σ2 + 2τ ζ\nK\n) L2\n(a− 1)2µ4(T + a)3 + 128σ2T (T + 2a) Kµ2(T + a)3 .\nProof. 
According to the updating rule and non-expensiveness of projection, and the strong convexity we have:\nE [ ‖w(t+1) −w∗‖2 ] ≤ E ∥∥∥∥∥∥w(t) − ηt 1K ∑ j∈Ut ∇fj(w(t)j ; ξ t j)−w∗ ∥∥∥∥∥∥ 2 \n≤ E [ ‖w(t) −w∗‖2 ] − 2ηtE 〈 1 K ∑ j∈Ut ∇fj(w(t)j ),w (t) −w∗ 〉 + η2tE ∥∥∥∥∥∥ 1K ∑ j∈Ut ∇fj(w(t)j ) ∥∥∥∥∥∥ 2 + η2t σ2K\n≤ (1− µηt)E [ ‖w(t) −w∗‖2 ] − (2ηt − 2Lη2t )E [ F (w(t))− F (w∗) ] + η2t σ2\nK\n+ η2t 1\nK ∑ j∈Ut L2E [∥∥∥w(t)j −w(t)∥∥∥2]− 2ηtE 〈 1 K ∑ j∈Ut ∇fj(w(t)j )−∇fj(w (t)),w(t) −w∗ 〉 ≤ (1− µηt)E [ ‖w(t) −w∗‖2 ] −(2ηt − 4Lη2t )︸ ︷︷ ︸\n≤−ηt\nE [ F (w(t))− F (w∗) ] + η2t σ2\nK\n+ 2η2tL 2 1\nK ∑ j∈Ut E [∥∥∥w(t)j −w(t)∥∥∥2]+ 2ηtL2µ 1K ∑ j∈Ut E [∥∥∥w(t)j −w(t)∥∥∥2]+ µηt2 E [‖w(t) −w∗‖2] .\n(25)\nThen, merging the term, multiplying both sides with ptηt , and do the telescoping sum yields:\npT ηT\nE [ ‖w(T+1) −w∗‖2 ] ≤ p0 η0 E [ ‖w(1) −w∗‖2 ] − E[F (w(t))− F (w∗)]\n+ T∑ t=1 ( 2L2 µ + 2ηtL 2 ) pt 1 K ∑ j∈Ut E [∥∥∥w(t)j −w(t)∥∥∥2]+ T∑ t=1 ptηt σ2 K .\n(26)\nPlugging Lemma 6 into (26) yields: pT ηT E [ ‖w(T+1) −w∗‖2 ] ≤ p0 η0 E [ ‖w(1) −w∗‖2 ] − E[F (w(t))− F (w∗)]\n+ T∑ t=1 ( 2L2 µ + 2ηtL 2 ) 3ptη 2 t−1τ ( σ2 + 2τ ζ K ) + T∑ t=1 ptηt σ2 K .\n(27)\nThen, by re-arranging the terms will conclude the proof as\nE [ ‖w(T+1) −w∗‖2 ] ≤ a 3 (T + a)3 E [ ‖w(1) −w∗‖2 ] + ( T + 16 ( 1\na+ 1 + ln(T + a) )) 1536a2L2τ (σ2 + 2τ ζK) (a− 1)2µ4(T + a)3\n+ 128σ2T (T + 2a)\nKµ2(T + a)3 ." }, { "heading": "E.2.2 PROOF OF THEOREM 6", "text": "Proof. According to (28) we have: pT ηT E [ ‖w(T+1) −w∗‖2 ] ≤ p0 η0 E [ ‖w(1) −w∗‖2 ] − E[F (w(t))− F (w∗)]\n+ T∑ t=1 ( 2L2 µ + 2ηtL 2 ) 3ptη 2 t−1τ ( σ2 + 2τ ζ K ) + T∑ t=1 ptηt σ2 K .\n(28) By re-arranging the terms and dividing both sides by ST = ∑T t=1 pt > T 3 yields:\n1\nST T∑ t=1 pt ( E [ F (w(t)) ] − F (w∗) ) ≤ p0 ST η0 E [ ‖w(1) −w∗‖2 ] + 1 ST T∑ t=1 ( 2L2 µ + 2ηtL 2 ) 3ptη 2 t−1τ ( σ2 + 2τ ζ K ) + 1 ST T∑ t=1 ptηt σ2 K\n≤ O\nµE [ ‖w(1) −w∗‖2 ] T 3 +O(κ2τ (σ2 + 2τ ζK ) µT 2 ) +O ( κ2τ ( σ2 + 2τ ζ K ) lnT µT 3 ) +O ( σ2 KT ) .\nRecalling that ŵ = 1nST ∑T t=1 pt ∑n j=1w (t) j , from the convexity of F (·), we can conclude that E [F (ŵ)]− F (w∗) ≤ O ( µ T 3 ) +O ( κ2τ ( σ2 + 2τ ζ K ) µT 2 ) +O ( κ2τ ( σ2 + 2τ ζ K ) lnT µT 3 ) +O ( σ2 KT ) ." }, { "heading": "E.2.3 PROOF OF THEOREM 2", "text": "Now we provide the formal proof of Theorem 2. The main difference from without-sampling setting is that only a subset of local models get updated each period due to partial participation of devices, i.e., K out of all n devices that are sampled uniformly at random. To generalize the proof, we will use an indicator function to model this stochastic update, and show that while the stochastic gradient is unbiased, the variance is changed.\nProof. Recall that we defined virtual sequences of {w(t)}Tt=1 where w(t) = 1K ∑ j∈Ut w (t) i and v̂ (t) i = αiv (t) i + (1− αi)w(t). We also define an indicator variable to denote whether ith client was selected at iteration t:\nIti = { 1 if i ∈ Ut 0 else\nobviously, E [Iti] = Kn . 
Then, according to updating rule and non-expensiveness of projection we have:\nE [ ‖v̂(t+1)i − v ∗ i ‖2 ]\n≤ E [∥∥∥∥∥v̂(t)i − α2i Itiηt∇fi(v̄(t)i )− (1− αi)ηt 1K ∑ j∈Ut ∇fj(w(t)j )− v ∗ i ∥∥∥∥∥ 2]\n+ E [∥∥∥∥∥α2i Itiηt (∇fi(v̄(t)i )−∇fi(v̄(t)i ; ξti))+ (1− αi)ηt ( 1 K ∑ j∈Ut ∇fj(w(t)j )− 1 K ∑ j∈Ut ∇fj(w(t)j ; ξ t) )∥∥∥∥∥ 2]\n= E [ ‖v̂(t)i − v ∗ i ‖2 ] − 2\n〈 K\nn α2i ηt∇fi(v̄ (t) i ) + (1− αi)ηt\n1\nK ∑ j∈Ut ∇fj(w(t)j ), v̂ (t) i − v ∗ i\n〉\n+ η2tE [∥∥∥∥∥α2i Iti∇fi(v̄(t)i ) + (1− αi) 1K ∑ j∈Ut ∇fj(w(t)j ) ∥∥∥∥∥ 2] + α2i η 2 t 2K2σ2 n2 + (1− αi)2η2t 2σ2 K .\n= E [ ‖v̂(t)i − v ∗ i ‖2 ] −2ηt 〈( K\nn α2i + 1− αi\n) ∇fi(v̄(t)i ), v̂ (t) i − v ∗ i 〉 ︸ ︷︷ ︸\nT1\n−2ηt(1− αi)E\n[〈 1\nK ∑ j∈Ut ∇fj(w(t)j )−∇fi(v̄ (t) i ), v̂ (t) i − v ∗ i 〉] ︸ ︷︷ ︸\nT2\n+ η2t E [∥∥∥∥∥α2i Iti∇fi(v̄(t)i ) + (1− αi) 1K ∑ j∈Ut ∇fj(w(t)j ) ∥∥∥∥∥ 2]\n︸ ︷︷ ︸ T3\n+α2i η 2 t\n2K2σ2\nn2 + (1− αi)2η2t\n2σ2\nK .\nNow we switch to bound T1:\nT1 = −2ηt( K\nn α2i + 1− αi)E\n[〈 ∇fi(v̂(t)i ), v̂ (t) i − v ∗ i 〉] − 2ηt( K\nn α2i + 1− αi)E\n[〈 ∇fi(v̄(t)i )−∇fi(v̂ (t) i ), v̂ (t) i − v ∗ i 〉] ≤ −2ηt( K\nn α2i + 1− αi)\n( E [ fi(v̂ (t) i ) ] − fi(v∗i ) + µ 2 E [ ‖v̂(t)i − v ∗ i ‖2 ])\n+ ( K\nn α2i + 1− αi)ηt\n( 8L2\nµ(1− 8(αi − α2i Kn )) E [ ‖v̂(t)i − v̄ (t) i ‖ 2 ]\n+ µ(1− 8(αi − α2i Kn )) 8 E [ ‖v̂(t)i − v ∗ i ‖2 ])\n≤ −2ηt( K\nn α2i + 1− αi)\n( E [ fi(v̂ (t) i ) ] − fi(v∗i ) + µ 2 E [ ‖v̂(t)i − v ∗ i ‖2 ])\n+ ηt\n( 8L2(1− αi)2\nµ(1− 8(αi − Kn α 2 i ))\nE [ ‖w(t) −w(t)i ‖ 2 ] + µ(1− 8(αi − Kn α 2 i )) 8 E [ ‖v̂(t)i − v ∗ i ‖2 ])\n≤ −2ηt( K\nn α2i + 1− αi)\n( E [ fi(v̂ (t) i ) ] − fi(v∗i ) ) − 7µηt 8 E [ ‖v̂(t)i − v ∗ i ‖2 ]\n+ 8ηtL\n2(1− αi)2 µ(1− 8(αi − α2i )) E [ ‖w(t) −w(t)i ‖ 2 ] , (29)\nFor T2, we use the same approach as we did in (19); To deal with T3, we also employ the similar technique in (20):\nT3 = E ∥∥∥∥∥∥α2i Iti∇fi(v̄(t)i ) + (1− αi) 1K ∑ j∈Ut ∇fj(w(t)j ) ∥∥∥∥∥∥ 2 \n≤ 2(K n α2i + 1− αi)2E\n[ ‖∇fi(v̄(t)i )‖ 2 ] + 2E ∥∥∥∥∥∥(1− αi) 1 K ∑ j∈Ut ∇fj(w(t)j )−∇fi(v̄ (t) i ) ∥∥∥∥∥∥ 2 \n≤ 2 ( 2( K\nn α2i + 1− αi)2E\n[ ‖∇fi(v̂(t)i )−∇f ∗ i ‖2 ]\n+2( K\nn α2i + 1− αi)2E\n[ ‖∇fi(v̄(t)i )−∇fi(v̂ (t) i )‖ 2 ])\n+ 2(1− αi)2E ∥∥∥∥∥∥ 1K ∑ j∈Ut ∇fj(w(t)j )−∇fi(v̄ (t) i ) ∥∥∥∥∥∥ 2 \n≤ 8L(K n α2i + 1− αi)\n( E [ fi(v̂ (t) i ) ] − f∗i ) + 4(1− αi)2L2E [ ‖w(t) −w(t)i ‖ 2 ]\n+ 6(1− αi)2 ( L2E [∥∥∥w(t) −w(t)i ∥∥∥2]+ E [∥∥∥∇fi(v̂(t)i )−∇F (w(t))∥∥∥2]\n+ 1\nK ∑ j∈Ut L2E [∥∥∥w(t) −w(t)j ∥∥∥2] . (30)\nThen plugging T1, T2, T3 back, we obtain the similar formulation as the without sampling case in (17). Thus:\nE [ ‖v̂(t+1)i − v ∗ i ‖2 ]\n≤ (\n1− 3µηt 8\n) E [ ‖v̂(t)i − v ∗ i ‖2 ] − 2(ηt − 4η2tL) ( α2i K\nn + 1− αi\n)( E [ fi(v̂ (t) i ) ] − fi(v∗i ) ) + α2i η 2 t 2Kσ2\nn + (1− αi)2η2t\n2σ2\nK\n+\n( 8ηtL 2(1− αi)2\nµ(1− 8(αi − α2i Kn )) +\n6(1− αi)2ηtL2\nµ + 10(1− αi)2η2tL2\n) E [∥∥∥w(t) −w(t)i ∥∥∥2]\n+\n( 6(1− αi)2ηtL2\nµ + 6(1− αi)2η2tL2\n) 1\nK ∑ j∈Ut E [∥∥∥w(t) −w(t)j ∥∥∥2]\n+ ( 6ηt µ + 6η2t ) (1− αi)2E [∥∥∥∥∥ 1K ∑ j∈Ut ∇fj(w(t))−∇fi(v̂(t)i ) ∥∥∥∥∥ 2] . (31)\nwe then examine the lower bound of α2i K n +1−αi. 
Notice that $\alpha_i^2\frac{K}{n}+1-\alpha_i=\frac{K}{n}\Big(\big(\alpha_i-\frac{n}{2K}\big)^2+\frac{n}{K}-\frac{n^2}{4K^2}\Big)$.

Case 1: $\frac{n}{2K}\ge 1$. The lower bound over $\alpha_i\in[0,1]$ is attained at $\alpha_i=1$: $\alpha_i^2\frac{K}{n}+1-\alpha_i\ge\frac{K}{n}$.

Case 2: $\frac{n}{2K}<1$. The lower bound is attained at $\alpha_i=\frac{n}{2K}$: $\alpha_i^2\frac{K}{n}+1-\alpha_i\ge 1-\frac{n}{4K}>\frac{1}{2}$.

So $\alpha_i^2\frac{K}{n}+1-\alpha_i\ge b:=\min\big\{\frac{K}{n},\frac{1}{2}\big\}$ always holds.

Now we plug this and Lemma 6 back into (31):

$$\begin{aligned}
\mathbb{E}\left[\|\hat{v}_i^{(t+1)}-v_i^*\|^2\right]
&\le \Big(1-\frac{3\mu\eta_t}{8}\Big)\mathbb{E}\big[\|\hat{v}_i^{(t)}-v_i^*\|^2\big]-b\eta_t\Big(\mathbb{E}\big[f_i(\hat{v}_i^{(t)})\big]-f_i(v_i^*)\Big)+\alpha_i^2\eta_t^2\frac{2K\sigma^2}{n}+(1-\alpha_i)^2\eta_t^2\frac{2\sigma^2}{K}\\
&\quad+\Bigg(\frac{8\eta_tL^2(1-\alpha_i)^2}{\mu\big(1-8(\alpha_i-\alpha_i^2\frac{K}{n})\big)}+\frac{6(1-\alpha_i)^2\eta_tL^2}{\mu}+10(1-\alpha_i)^2\eta_t^2L^2\Bigg)3\tau\eta_{t-1}^2\Big(\sigma^2+\big(\zeta_i+\tfrac{\zeta}{K}\big)\tau\Big)\\
&\quad+\Bigg(\frac{6(1-\alpha_i)^2\eta_tL^2}{\mu}+6(1-\alpha_i)^2\eta_t^2L^2\Bigg)3\tau\eta_{t-1}^2\Big(\sigma^2+2\tfrac{\zeta}{K}\tau\Big)\\
&\quad+\Big(\frac{6\eta_t}{\mu}+6\eta_t^2\Big)(1-\alpha_i)^2\,\mathbb{E}\Bigg[\bigg\|\frac{1}{K}\sum_{j\in U_t}\nabla f_j(w^{(t)})-\nabla f_i(\hat{v}_i^{(t)})\bigg\|^2\Bigg].
\end{aligned}\tag{32}$$

Plugging Lemma 5 into the last term of (32), i.e.,

$$\mathbb{E}\Bigg[\bigg\|\frac{1}{K}\sum_{j\in U_t}\nabla f_j(w^{(t)})-\nabla f_i(\hat{v}_i^{(t)})\bigg\|^2\Bigg]\le 2L^2\,\mathbb{E}\big[\|\hat{v}_i^{(t)}-v^*\|^2\big]+6\Big(2\zeta_i+2\tfrac{\zeta}{K}\Big)+6L^2\,\mathbb{E}\big[\|w^{(t)}-w^*\|^2\big]+6L^2\Delta_i,$$

and then following the same procedure as in Appendix E.1.3, together with Lemma 7, we can conclude that:

$$\begin{aligned}
f_i(\hat{v}_i)-f_i(v_i^*)
&\le \frac{1}{S_T}\sum_{t=1}^{T}p_t\Big(f_i(\hat{v}_i^{(t)})-f_i(v_i^*)\Big)\\
&\le \frac{p_0\,\mathbb{E}\big[\|\hat{v}_i^{(1)}-v_i^*\|^2\big]}{b\eta_0S_T}+\frac{1}{bS_T}\sum_{t=1}^{T}p_t\eta_t\Big(\alpha_i^2\frac{2K\sigma^2}{n}+(1-\alpha_i)^2\frac{2\sigma^2}{K}\Big)\\
&\quad+\frac{1}{bS_T}\sum_{t=1}^{T}(1-\alpha_i)^2L^2\Bigg(\frac{8}{\mu\big(1-8(\alpha_i-\alpha_i^2\frac{K}{n})\big)}+\frac{6}{\mu}+10\eta_t\Bigg)3\tau p_t\eta_{t-1}^2\Big(\sigma^2+\big(\zeta_i+\tfrac{\zeta}{K}\big)\tau\Big)\\
&\quad+\frac{1}{bS_T}\sum_{t=1}^{T}(1-\alpha_i)^2L^2\Big(\frac{6}{\mu}+10\eta_t\Big)3\tau p_t\eta_{t-1}^2\Big(\sigma^2+2\tfrac{\zeta}{K}\tau\Big)\\
&\quad+36(1-\alpha_i)^2\frac{L^2}{bS_T}\sum_{t=1}^{T}p_t\Big(\frac{1}{\mu}+\eta_t\Big)\Bigg(\frac{a^3}{(t-1+a)^3}\mathbb{E}\big[\|w^{(1)}-w^*\|^2\big]\\
&\qquad+\Big(t+16\Big(\frac{1}{a+1}+\ln(t+a)\Big)\Big)\frac{1536\,a^2\tau\big(\sigma^2+2\tau\frac{\zeta}{K}\big)L^2}{(a-1)^2\mu^4(t-1+a)^3}+\frac{128\,\sigma^2\,t(t+2a)}{K\mu^2(t-1+a)^3}\Bigg)\\
&\quad+36(1-\alpha_i)^2\Big(2\zeta_i+2\tfrac{\zeta}{K}+L^2\Delta_i\Big)\frac{1}{bS_T}\sum_{t=1}^{T}p_t\Big(\frac{1}{\mu}+\eta_t\Big)\\
&= O\Big(\frac{\mu}{bT^3}\Big)+\alpha_i^2O\Big(\frac{\sigma^2}{\mu bT}\Big)+(1-\alpha_i)^2O\Big(\frac{2\zeta_i+2\frac{\zeta}{K}}{\mu b}+\frac{\kappa L\Delta_i}{b}\Big)\\
&\quad+(1-\alpha_i)^2\Bigg(O\Big(\frac{\kappa L\ln T}{bT^3}\Big)+O\Big(\frac{\kappa^2\sigma^2}{\mu bKT}\Big)+O\Big(\frac{\kappa^2\tau^2(\zeta_i+\frac{\zeta}{K})+\kappa^2\tau\sigma^2}{\mu bT^2}\Big)+O\Big(\frac{\kappa^4\tau\big(\sigma^2+2\tau\frac{\zeta}{K}\big)}{\mu bT^2}\Big)\Bigg).
\end{aligned}$$

F CONVERGENCE RATE WITHOUT ASSUMPTION ON $\alpha_i$

In this section, we provide the convergence results of Algorithm 1 without any assumption on $\alpha_i$. The following theorem establishes the convergence rate:

Theorem 7 (Personalized model convergence of Local Descent APFL without assumption on $\alpha_i$). If each client's objective function is $\mu$-strongly convex and $L$-smooth, and its gradient is bounded by $G$, then using Algorithm 1 with learning rate $\eta_t=\frac{8}{\mu(t+a)}$, where $a=\max\{64\kappa,\tau\}$, and the averaging scheme $\hat{v}_i=\frac{1}{S_T}\sum_{t=1}^{T}p_t\big(\alpha_iv_i^{(t)}+(1-\alpha_i)\frac{1}{K}\sum_{j\in U_t}w_j^{(t)}\big)$, where $p_t=(t+a)^2$, $S_T=\sum_{t=1}^{T}p_t$, and $f_i^*$ is the local minimum of the $i$th client, the following convergence holds for all $i\in[n]$:

$$\begin{aligned}
\mathbb{E}[f_i(\hat{v}_i)]-f_i^*&\le O\Big(\frac{\mu}{bT^3}\Big)+\alpha_i^2O\Big(\frac{\sigma^2}{\mu bT}\Big)+(1-\alpha_i)^2O\Big(\frac{G^2}{\mu b}\Big)\\
&\quad+(1-\alpha_i)^2\Bigg(O\Big(\frac{\kappa L\ln T}{bT^3}\Big)+O\Big(\frac{\kappa^2\sigma^2}{\mu bKT}\Big)+O\Big(\frac{\kappa^2\tau^2(\zeta_i+\frac{\zeta}{K})+\kappa^2\tau\sigma^2}{\mu bT^2}\Big)+O\Big(\frac{\kappa^4\tau\big(\sigma^2+2\tau\frac{\zeta}{K}\big)}{\mu bT^2}\Big)\Bigg),
\end{aligned}\tag{33}$$

where $b=\min\{\frac{K}{n},\frac{1}{2}\}$.

Remark 6. Here we remove the assumption $\alpha_i\ge\max\big\{1-\frac{1}{4\sqrt{6}\kappa},\,1-\frac{1}{4\sqrt{6}\kappa\sqrt{\mu}}\big\}$. The key difference is that we can now only bound the residual error in terms of $G$, instead of the more refined quantities $\zeta_i$ and $\Delta_i$. When the diversity among data shards is small, the $\zeta_i$ and $\Delta_i$ terms become small, which leads to a tighter convergence rate. Also notice that, to satisfy the bounded-gradient assumption, we require the parameters to come from a bounded domain $W$; thus we perform a projection at each parameter update, which is inexpensive.
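As an aside, the lower bound $b=\min\{\frac{K}{n},\frac{1}{2}\}$ used in both Theorem 2 and Theorem 7 is easy to verify numerically by brute force over $\alpha_i\in[0,1]$ (an illustration added here; the $(n,K)$ pairs are arbitrary):

```python
import numpy as np

alpha = np.linspace(0.0, 1.0, 100_001)
for n, K in [(100, 10), (100, 60), (8, 8), (50, 1)]:   # assumed example sizes
    phi = alpha**2 * K / n + 1.0 - alpha               # the quantity bounded in the proof
    b = min(K / n, 0.5)
    # min over alpha in [0, 1] never drops below b
    print(n, K, phi.min() >= b - 1e-12, round(phi.min(), 4), b)
```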
Proof. According to (32), and directly bounding its last term by $\mathbb{E}\big[\|\frac{1}{K}\sum_{j\in U_t}\nabla f_j(w^{(t)})-\nabla f_i(\hat{v}_i^{(t)})\|^2\big]\le 2G^2$, we have:

$$\begin{aligned}
\mathbb{E}\left[\|\hat{v}_i^{(t+1)}-v_i^*\|^2\right]
&\le \Big(1-\frac{\mu\eta_t}{4}\Big)\mathbb{E}\big[\|\hat{v}_i^{(t)}-v_i^*\|^2\big]-b\eta_t\Big(\mathbb{E}\big[f_i(\hat{v}_i^{(t)})\big]-f_i(v_i^*)\Big)+\alpha_i^2\eta_t^2\frac{2K\sigma^2}{n}+(1-\alpha_i)^2\eta_t^2\frac{2\sigma^2}{K}\\
&\quad+\Bigg(\frac{8\eta_tL^2(1-\alpha_i)^2}{\mu\big(1-8(\alpha_i-\alpha_i^2\frac{K}{n})\big)}+\frac{6(1-\alpha_i)^2\eta_tL^2}{\mu}+10(1-\alpha_i)^2\eta_t^2L^2\Bigg)3\tau\eta_{t-1}^2\Big(\sigma^2+\big(\zeta_i+\tfrac{\zeta}{K}\big)\tau\Big)\\
&\quad+\Bigg(\frac{6(1-\alpha_i)^2\eta_tL^2}{\mu}+6(1-\alpha_i)^2\eta_t^2L^2\Bigg)3\tau\eta_{t-1}^2\Big(\sigma^2+2\tfrac{\zeta}{K}\tau\Big)+\Big(\frac{12\eta_t}{\mu}+12\eta_t^2\Big)(1-\alpha_i)^2G^2.
\end{aligned}$$

Then, following the same procedure as in Appendix E.1.3, we can conclude that:

$$\begin{aligned}
f_i(\hat{v}_i)-f_i(v_i^*)
&\le \frac{1}{S_T}\sum_{t=1}^{T}p_t\Big(f_i(\hat{v}_i^{(t)})-f_i(v_i^*)\Big)\\
&\le \frac{p_0\,\mathbb{E}\big[\|\hat{v}_i^{(1)}-v_i^*\|^2\big]}{b\eta_0S_T}+\frac{1}{bS_T}\sum_{t=1}^{T}p_t\eta_t\Big(\alpha_i^2\frac{2K\sigma^2}{n}+(1-\alpha_i)^2\frac{2\sigma^2}{K}\Big)\\
&\quad+\frac{1}{bS_T}\sum_{t=1}^{T}(1-\alpha_i)^2L^2\Bigg(\frac{8}{\mu\big(1-8(\alpha_i-\alpha_i^2\frac{K}{n})\big)}+\frac{6}{\mu}+10\eta_t\Bigg)3\tau p_t\eta_{t-1}^2\Big(\sigma^2+\big(\zeta_i+\tfrac{\zeta}{K}\big)\tau\Big)\\
&\quad+\frac{1}{bS_T}\sum_{t=1}^{T}(1-\alpha_i)^2L^2\Big(\frac{6}{\mu}+10\eta_t\Big)3\tau p_t\eta_{t-1}^2\Big(\sigma^2+2\tfrac{\zeta}{K}\tau\Big)+12(1-\alpha_i)^2G^2\frac{1}{bS_T}\sum_{t=1}^{T}p_t\Big(\frac{1}{\mu}+\eta_t\Big)\\
&= O\Big(\frac{\mu}{bT^3}\Big)+\alpha_i^2O\Big(\frac{\sigma^2}{\mu bT}\Big)+(1-\alpha_i)^2O\Big(\frac{G^2}{\mu b}\Big)\\
&\quad+(1-\alpha_i)^2\Bigg(O\Big(\frac{\kappa L\ln T}{bT^3}\Big)+O\Big(\frac{\kappa^2\sigma^2}{\mu bKT}\Big)+O\Big(\frac{\kappa^2\tau^2(\zeta_i+\frac{\zeta}{K})+\kappa^2\tau\sigma^2}{\mu bT^2}\Big)+O\Big(\frac{\kappa^4\tau\big(\sigma^2+2\tau\frac{\zeta}{K}\big)}{\mu bT^2}\Big)\Bigg).
\end{aligned}$$
" }, { "heading": "G PROOF OF CONVERGENCE RATE IN NONCONVEX SETTING", "text": "In this section we provide the proof of the convergence results on nonconvex functions. We first present the convergence rate of the global model of APFL on nonconvex functions:

Theorem 8 (Global model convergence of Local Descent APFL). If each client's objective function is $L$-smooth, then using Algorithm 1 with full gradients, choosing $K=n$ and learning rate $\eta=\frac{1}{2\sqrt{5}L\sqrt{T}}$, the following convergence holds:

$$\frac{1}{T}\sum_{t=1}^{T}\big\|\nabla F(w^{(t)})\big\|^2\le O\Big(\frac{L}{\sqrt{T}}\Big)+O\Big(\frac{\tau^2\zeta}{nT}\Big).$$

Proof. The proof is provided in Appendix G.2.

As usual, let us introduce several useful lemmas before the formal proofs of Theorems 3 and 8.
" }, { "heading": "G.1 PROOF OF TECHNICAL LEMMAS", "text": "Lemma 8. Under the assumptions of Theorem 3, the following statement holds true:

$$\begin{aligned}
f_i(\hat{v}_i^{(t+1)})&\le f_i(\hat{v}_i^{(t)})-\frac{1}{8}\eta\,\|\nabla f_i(\hat{v}_i)\|^2
+\frac{3}{2}(1-\alpha_i)^2\eta\,\frac{1}{n}\sum_{j=1}^{n}\big\|w^{(t)}-w_j^{(t)}\big\|^2
+3\alpha_i^4(1-\alpha_i)^2\eta L^2\big\|w^{(t)}-w_i^{(t)}\big\|^2\\
&\quad+6\eta(1-\alpha_i^2)^2\zeta_i+12\eta(1-\alpha_i^2)^2L^2D_W^2+12\eta(\alpha_i-\alpha_i^2)^2\big\|\nabla F(w^{(t)})\big\|^2.
\end{aligned}$$

Proof. Define the following quantities:

$$g^{(t)}=\alpha_i^2\nabla f_i(\bar{v}_i^{(t)})+(1-\alpha_i)\frac{1}{n}\sum_{j=1}^{n}\nabla f_j(w_j^{(t)}),\qquad
P_W(\hat{v}_i^{(t)},g^{(t)},\eta)=\frac{1}{\eta}\Big[\hat{v}_i^{(t)}-\prod\nolimits_W\big(\hat{v}_i^{(t)}-\eta\,g^{(t)}\big)\Big].$$

According to Lemma 1 of Ghadimi et al. (2016), for all $w\in W\subset\mathbb{R}^d$, $g\in\mathbb{R}^d$ and $\eta>0$, we have:

$$\langle g,\,P_W(w,g,\eta)\rangle\ge\|P_W(w,g,\eta)\|^2.$$
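As a side note, this projected-gradient-mapping inequality is easy to check numerically, e.g., for a box constraint $W=[-1,1]^d$, where the projection is a coordinate-wise clip (an illustration added here, not part of the original proof):

```python
import numpy as np

rng = np.random.default_rng(1)

def proj(w):                      # projection onto the box W = [-1, 1]^d
    return np.clip(w, -1.0, 1.0)

def P(w, g, eta):                 # gradient mapping P_W(w, g, eta)
    return (w - proj(w - eta * g)) / eta

ok = True
for _ in range(10_000):
    d = int(rng.integers(1, 6))
    w = proj(rng.normal(size=d))  # w must lie in W
    g = rng.normal(size=d)
    eta = rng.uniform(1e-3, 10.0)
    p = P(w, g, eta)
    ok &= np.dot(g, p) >= np.dot(p, p) - 1e-9
print("inequality <g, P> >= ||P||^2 held in all trials:", ok)
```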
According to the updating rule and the smoothness of $f_i$, we have:

$$\begin{aligned}
f_i(\hat{v}_i^{(t+1)})&\le f_i(\hat{v}_i^{(t)})+\big\langle\nabla f_i(\hat{v}_i^{(t)}),\,\hat{v}_i^{(t+1)}-\hat{v}_i^{(t)}\big\rangle+\frac{L}{2}\big\|\hat{v}_i^{(t+1)}-\hat{v}_i^{(t)}\big\|^2\\
&\le f_i(\hat{v}_i^{(t)})-\eta\big\langle\nabla f_i(\hat{v}_i^{(t)}),\,P_W(\hat{v}_i^{(t)},g^{(t)},\eta)\big\rangle+\frac{\eta^2L}{2}\big\|P_W(\hat{v}_i^{(t)},g^{(t)},\eta)\big\|^2\\
&\le f_i(\hat{v}_i^{(t)})-\eta\big\langle g^{(t)},\,P_W(\hat{v}_i^{(t)},g^{(t)},\eta)\big\rangle-\eta\big\langle\nabla f_i(\hat{v}_i^{(t)})-g^{(t)},\,P_W(\hat{v}_i^{(t)},g^{(t)},\eta)\big\rangle+\frac{\eta^2L}{2}\big\|P_W(\hat{v}_i^{(t)},g^{(t)},\eta)\big\|^2.
\end{aligned}\tag{34}$$

Using the identity $\big\langle g^{(t)},P_W(\hat{v}_i^{(t)},g^{(t)},\eta)\big\rangle\ge\|g^{(t)}\|^2$ and the Cauchy-Schwarz inequality

$$\big\langle\nabla f_i(\hat{v}_i^{(t)})-g^{(t)},\,P_W(\hat{v}_i^{(t)},g^{(t)},\eta)\big\rangle\le\frac{1}{2}\|\nabla f_i(\hat{v}_i^{(t)})-g^{(t)}\|^2+\frac{1}{2}\|P_W(\hat{v}_i^{(t)},g^{(t)},\eta)\|^2,$$

we have:

$$\begin{aligned}
f_i(\hat{v}_i^{(t+1)})&\le f_i(\hat{v}_i^{(t)})-\eta\|g^{(t)}\|^2+\frac{\eta}{2}\|P_W(\hat{v}_i^{(t)},g^{(t)},\eta)\|^2+\frac{\eta^2L}{2}\|P_W(\hat{v}_i^{(t)},g^{(t)},\eta)\|^2\\
&\quad+\frac{\eta}{2}\bigg\|\nabla f_i(\hat{v}_i^{(t)})-\alpha_i^2\nabla f_i(\bar{v}_i^{(t)})-(1-\alpha_i)\frac{1}{n}\sum_{j=1}^{n}\nabla f_j(w_j^{(t)})\bigg\|^2\\
&\le f_i(\hat{v}_i^{(t)})\underbrace{-\Big(\frac{\eta}{2}-\frac{\eta^2L}{2}\Big)}_{\le\,-\frac{\eta}{4}}\|g^{(t)}\|^2
+\frac{\eta}{2}\bigg\|\nabla f_i(\hat{v}_i^{(t)})-\alpha_i^2\nabla f_i(\bar{v}_i^{(t)})-(1-\alpha_i)\frac{1}{n}\sum_{j=1}^{n}\nabla f_j(w_j^{(t)})\bigg\|^2\\
&\le f_i(\hat{v}_i^{(t)})-\frac{\eta}{4}\|g^{(t)}\|^2+(1-\alpha_i)^2\eta\bigg\|\nabla F(w^{(t)})-\frac{1}{n}\sum_{j=1}^{n}\nabla f_j(w_j^{(t)})\bigg\|^2\\
&\quad+\eta\,\Big\|\alpha_i^2\big(\nabla f_i(\hat{v}_i^{(t)})-\nabla f_i(\bar{v}_i^{(t)})\big)-(1-\alpha_i)\nabla F(w^{(t)})+(1-\alpha_i^2)\nabla f_i(\hat{v}_i^{(t)})\Big\|^2\\
&\le f_i(\hat{v}_i^{(t)})-\frac{\eta}{4}\|g^{(t)}\|^2+(1-\alpha_i)^2\eta\,\frac{1}{n}\sum_{j=1}^{n}\big\|w^{(t)}-w_j^{(t)}\big\|^2+2\eta\,\Big\|\alpha_i^2\big(\nabla f_i(\hat{v}_i^{(t)})-\nabla f_i(\bar{v}_i^{(t)})\big)\Big\|^2\\
&\quad+2\eta\,\Big\|(1-\alpha_i^2)\nabla f_i(\hat{v}_i^{(t)})-(1-\alpha_i^2)\nabla F(\hat{v}_i^{(t)})+(1-\alpha_i^2)\nabla F(\hat{v}_i^{(t)})-(1-\alpha_i)\nabla F(w^{(t)})\Big\|^2\\
&\le f_i(\hat{v}_i^{(t)})-\frac{\eta}{4}\|g^{(t)}\|^2+(1-\alpha_i)^2\eta\,\frac{1}{n}\sum_{j=1}^{n}\big\|w^{(t)}-w_j^{(t)}\big\|^2+2\alpha_i^4\eta L^2\big\|\hat{v}_i^{(t)}-\bar{v}_i^{(t)}\big\|^2\\
&\quad+4\eta\,\Big\|(1-\alpha_i^2)\nabla f_i(\hat{v}_i^{(t)})-(1-\alpha_i^2)\nabla F(\hat{v}_i^{(t)})\Big\|^2+4\eta\,\Big\|(1-\alpha_i^2)\nabla F(\hat{v}_i^{(t)})-(1-\alpha_i)\nabla F(w^{(t)})\Big\|^2\\
&\le f_i(\hat{v}_i^{(t)})-\frac{\eta}{4}\|g^{(t)}\|^2+(1-\alpha_i)^2\eta\,\frac{1}{n}\sum_{j=1}^{n}\big\|w^{(t)}-w_j^{(t)}\big\|^2+2\alpha_i^4(1-\alpha_i)^2\eta L^2\big\|w^{(t)}-w_i^{(t)}\big\|^2\\
&\quad+4\eta(1-\alpha_i^2)^2\zeta_i+8\eta\,\Big\|(1-\alpha_i^2)\nabla F(\hat{v}_i^{(t)})-(1-\alpha_i^2)\nabla F(w^{(t)})\Big\|^2+8\eta\,\Big\|(\alpha_i-\alpha_i^2)\nabla F(w^{(t)})\Big\|^2\\
&\le f_i(\hat{v}_i^{(t)})-\frac{\eta}{4}\|g^{(t)}\|^2+(1-\alpha_i)^2\eta\,\frac{1}{n}\sum_{j=1}^{n}\big\|w^{(t)}-w_j^{(t)}\big\|^2+2\alpha_i^4(1-\alpha_i)^2\eta L^2\big\|w^{(t)}-w_i^{(t)}\big\|^2\\
&\quad+4\eta(1-\alpha_i^2)^2\zeta_i+8\eta(1-\alpha_i^2)^2L^2D_W^2+8\eta(\alpha_i-\alpha_i^2)^2\big\|\nabla F(w^{(t)})\big\|^2.
\end{aligned}$$

Using the inequality $\|\nabla f_i(\hat{v}_i)\|^2\le 2\|\nabla f_i(\hat{v}_i)-g^{(t)}\|^2+2\|g^{(t)}\|^2$ to replace $\|g^{(t)}\|^2$, we can conclude the proof:

$$\begin{aligned}
f_i(\hat{v}_i^{(t+1)})&\le f_i(\hat{v}_i^{(t)})-\frac{\eta}{4}\Big[\frac{1}{2}\|\nabla f_i(\hat{v}_i)\|^2-\|\nabla f_i(\hat{v}_i)-g^{(t)}\|^2\Big]+(1-\alpha_i)^2\eta\,\frac{1}{n}\sum_{j=1}^{n}\big\|w^{(t)}-w_j^{(t)}\big\|^2\\
&\quad+2\alpha_i^4(1-\alpha_i)^2\eta L^2\big\|w^{(t)}-w_i^{(t)}\big\|^2+4\eta(1-\alpha_i^2)^2\zeta_i+8\eta(1-\alpha_i^2)^2L^2D_W^2+8\eta(\alpha_i-\alpha_i^2)^2\big\|\nabla F(w^{(t)})\big\|^2\\
&\le f_i(\hat{v}_i^{(t)})-\frac{\eta}{8}\|\nabla f_i(\hat{v}_i)\|^2+\frac{\eta}{4}\bigg\|\nabla f_i(\hat{v}_i)-\alpha_i^2\nabla f_i(\bar{v}_i^{(t)})-(1-\alpha_i)\frac{1}{n}\sum_{j=1}^{n}\nabla f_j(w_j^{(t)})\bigg\|^2\\
&\quad+(1-\alpha_i)^2\eta\,\frac{1}{n}\sum_{j=1}^{n}\big\|w^{(t)}-w_j^{(t)}\big\|^2+2\alpha_i^4(1-\alpha_i)^2\eta L^2\big\|w^{(t)}-w_i^{(t)}\big\|^2\\
&\quad+4\eta(1-\alpha_i^2)^2\zeta_i+8\eta(1-\alpha_i^2)^2L^2D_W^2+8\eta(\alpha_i-\alpha_i^2)^2\big\|\nabla F(w^{(t)})\big\|^2\\
&\le f_i(\hat{v}_i^{(t)})-\frac{\eta}{8}\|\nabla f_i(\hat{v}_i)\|^2+\frac{3}{2}(1-\alpha_i)^2\eta\,\frac{1}{n}\sum_{j=1}^{n}\big\|w^{(t)}-w_j^{(t)}\big\|^2+3\alpha_i^4(1-\alpha_i)^2\eta L^2\big\|w^{(t)}-w_i^{(t)}\big\|^2\\
&\quad+6\eta(1-\alpha_i^2)^2\zeta_i+12\eta(1-\alpha_i^2)^2L^2D_W^2+12\eta(\alpha_i-\alpha_i^2)^2\big\|\nabla F(w^{(t)})\big\|^2.
\end{aligned}$$

Lemma 9. Under the assumptions of Theorem 3, the following statement holds true:

$$\frac{1}{T}\sum_{t=1}^{T}\big\|\nabla F(w^{(t)})\big\|^2\le\frac{8}{\eta T}F(w^{(1)})+6L^2\frac{1}{T}\sum_{t=1}^{T}\frac{1}{n}\sum_{j=1}^{n}\big\|w_j^{(t)}-w^{(t)}\big\|^2.$$

Proof.
Define the following quantities:

$$\bar{g}^{(t)}=\frac{1}{n}\sum_{j=1}^{n}\nabla f_j(w_j^{(t)}),\qquad
R^{(t)}=P_W(w^{(t)},\bar{g}^{(t)},\eta)=\frac{1}{\eta}\Big[w^{(t)}-\prod\nolimits_W\Big(w^{(t)}-\eta\,\frac{1}{n}\sum_{j=1}^{n}\nabla f_j(w_j^{(t)})\Big)\Big].$$

According to the updating rule and the smoothness of $F$, we have:

$$\begin{aligned}
F(w^{(t+1)})&\le F(w^{(t)})+\big\langle\nabla F(w^{(t)}),\,w^{(t+1)}-w^{(t)}\big\rangle+\frac{L}{2}\big\|w^{(t+1)}-w^{(t)}\big\|^2\\
&\le F(w^{(t)})-\eta\big\langle\nabla F(w^{(t)}),\,P_W(w^{(t)},\bar{g}^{(t)},\eta)\big\rangle+\frac{\eta^2L}{2}\big\|P_W(w^{(t)},\bar{g}^{(t)},\eta)\big\|^2\\
&\le F(w^{(t)})-\eta\big\langle\bar{g}^{(t)},\,P_W(w^{(t)},\bar{g}^{(t)},\eta)\big\rangle-\eta\big\langle\nabla F(w^{(t)})-\bar{g}^{(t)},\,P_W(w^{(t)},\bar{g}^{(t)},\eta)\big\rangle+\frac{\eta^2L}{2}\big\|P_W(w^{(t)},\bar{g}^{(t)},\eta)\big\|^2.
\end{aligned}\tag{35}$$

Using the identity $\big\langle\bar{g}^{(t)},P_W(w^{(t)},\bar{g}^{(t)},\eta)\big\rangle\ge\|\bar{g}^{(t)}\|^2$ and the Cauchy-Schwarz inequality $\big\langle\nabla F(w^{(t)})-\bar{g}^{(t)},P_W(w^{(t)},\bar{g}^{(t)},\eta)\big\rangle\le\frac{1}{2}\|\nabla F(w^{(t)})-\bar{g}^{(t)}\|^2+\frac{1}{2}\|P_W(w^{(t)},\bar{g}^{(t)},\eta)\|^2$, we have:

$$\begin{aligned}
F(w^{(t+1)})&\le F(w^{(t)})-\eta\|\bar{g}^{(t)}\|^2+\Big(\frac{\eta}{2}+\frac{\eta^2L}{2}\Big)\|P_W(w^{(t)},\bar{g}^{(t)},\eta)\|^2+\frac{\eta}{2}\|\nabla F(w^{(t)})-\bar{g}^{(t)}\|^2\\
&\le F(w^{(t)})-\eta\|\bar{g}^{(t)}\|^2+\Big(\frac{\eta}{2}+\frac{\eta^2L}{2}\Big)\|\bar{g}^{(t)}\|^2+\frac{\eta L^2}{2}\frac{1}{n}\sum_{j=1}^{n}\big\|w_j^{(t)}-w^{(t)}\big\|^2\\
&\le F(w^{(t)})-\frac{\eta}{4}\|\bar{g}^{(t)}\|^2+\frac{\eta L^2}{2}\frac{1}{n}\sum_{j=1}^{n}\big\|w_j^{(t)}-w^{(t)}\big\|^2.
\end{aligned}$$

Using the inequality $\|\nabla F(w^{(t)})\|^2\le 2\|\nabla F(w^{(t)})-\bar{g}^{(t)}\|^2+2\|\bar{g}^{(t)}\|^2$ to replace $\|\bar{g}^{(t)}\|^2$, we obtain:

$$\begin{aligned}
F(w^{(t+1)})&\le F(w^{(t)})-\frac{\eta}{4}\Big(\frac{1}{2}\|\nabla F(w^{(t)})\|^2-\|\nabla F(w^{(t)})-\bar{g}^{(t)}\|^2\Big)+\frac{\eta L^2}{2}\frac{1}{n}\sum_{j=1}^{n}\big\|w_j^{(t)}-w^{(t)}\big\|^2\\
&\le F(w^{(t)})-\frac{\eta}{8}\|\nabla F(w^{(t)})\|^2+\frac{3\eta L^2}{4}\frac{1}{n}\sum_{j=1}^{n}\big\|w_j^{(t)}-w^{(t)}\big\|^2.
\end{aligned}$$

Re-arranging the terms and telescoping from $t=1$ to $T$:

$$\frac{1}{T}\sum_{t=1}^{T}\big\|\nabla F(w^{(t)})\big\|^2\le\frac{8}{\eta T}F(w^{(1)})+6L^2\frac{1}{T}\sum_{t=1}^{T}\frac{1}{n}\sum_{j=1}^{n}\big\|w_j^{(t)}-w^{(t)}\big\|^2.$$

Lemma 10. Under the assumptions of Theorem 3, the following statements hold true:

$$\frac{1}{T}\sum_{t=1}^{T}\frac{1}{n}\sum_{i=1}^{n}\big\|w^{(t)}-w_i^{(t)}\big\|^2\le 10\tau^2\eta^2\frac{\zeta}{n},\qquad
\frac{1}{T}\sum_{t=1}^{T}\big\|w^{(t)}-w_i^{(t)}\big\|^2\le 200L^2\tau^4\eta^4\frac{\zeta}{n}+20\tau^2\eta^2\zeta_i.$$

Proof. For the first statement, define $\gamma_t=\frac{1}{n}\sum_{i=1}^{n}\big\|w^{(t)}-w_i^{(t)}\big\|^2$, and let $t_c$ be the latest synchronization step. Then we have:

$$\begin{aligned}
\gamma_t&=\frac{1}{n}\sum_{i=1}^{n}\Bigg\|\Big(w^{t_c}-\sum_{j=t_c}^{t}\frac{\eta}{n}\sum_{k=1}^{n}\nabla f_k(w_k^{(j)})\Big)-\Big(w^{t_c}-\sum_{j=t_c}^{t}\eta\nabla f_i(w_i^{(j)})\Big)\Bigg\|^2\\
&\le\tau\sum_{j=t_c}^{t}\frac{\eta^2}{n}\sum_{i=1}^{n}\Bigg\|\frac{1}{n}\sum_{k=1}^{n}\nabla f_k(w_k^{(j)})-\nabla f_i(w_i^{(j)})\Bigg\|^2\\
&=\tau\sum_{j=t_c}^{t}\frac{\eta^2}{n}\sum_{i=1}^{n}\Bigg\|\frac{1}{n}\sum_{k=1}^{n}\Big(\nabla f_k(w_k^{(j)})-\nabla f_k(w^{(j)})+\nabla f_k(w^{(j)})-\nabla f_i(w^{(j)})\Big)+\nabla f_i(w^{(j)})-\nabla f_i(w_i^{(j)})\Bigg\|^2\\
&\le\tau\sum_{j=t_c}^{t_c+\tau}5\eta^2\Big(2L^2\gamma_j+\frac{\zeta}{n}\Big).
\end{aligned}$$

Summing over $t$ from $t_c$ to $t_c+\tau$ yields:

$$\sum_{t=t_c}^{t_c+\tau}\gamma_t\le\sum_{t=t_c}^{t_c+\tau}\sum_{j=t_c}^{t_c+\tau}5\tau\eta^2\Big(2L^2\gamma_j+\frac{\zeta}{n}\Big)\le 10L^2\tau^2\eta^2\sum_{j=t_c}^{t_c+\tau}\gamma_j+5\tau^3\eta^2\frac{\zeta}{n}.$$

Since $\eta\le\frac{1}{2\sqrt{5}\tau L}$, we have $10L^2\tau^2\eta^2\le\frac{1}{2}$; hence, re-arranging the terms gives:

$$\sum_{t=t_c}^{t_c+\tau}\gamma_t\le 10\tau^3\eta^2\frac{\zeta}{n}.$$

Summing over all synchronization stages $t_c$ and dividing both sides by $T$ concludes the proof of the first statement:

$$\frac{1}{T}\sum_{t=1}^{T}\gamma_t\le 10\tau^2\eta^2\frac{\zeta}{n}.\tag{36}$$

To prove the second statement, let $\delta_t^i=\big\|w^{(t)}-w_i^{(t)}\big\|^2$. Notice that:

$$\begin{aligned}
\delta_t^i&=\Bigg\|\Big(w^{t_c}-\sum_{j=t_c}^{t}\frac{\eta}{n}\sum_{k=1}^{n}\nabla f_k(w_k^{(j)})\Big)-\Big(w^{t_c}-\sum_{j=t_c}^{t}\eta\nabla f_i(w_i^{(j)})\Big)\Bigg\|^2\\
&\le\tau\sum_{j=t_c}^{t}\eta^2\Bigg\|\frac{1}{n}\sum_{k=1}^{n}\nabla f_k(w_k^{(j)})-\nabla f_i(w_i^{(j)})\Bigg\|^2\\
&=\tau\sum_{j=t_c}^{t}\eta^2\Bigg\|\frac{1}{n}\sum_{k=1}^{n}\Big(\nabla f_k(w_k^{(j)})-\nabla f_k(w^{(j)})+\nabla f_k(w^{(j)})-\nabla f_i(w^{(j)})\Big)+\nabla f_i(w^{(j)})-\nabla f_i(w_i^{(j)})\Bigg\|^2\\
&\le\tau\sum_{j=t_c}^{t_c+\tau}5\eta^2\Big(L^2\gamma_j+L^2\delta_j^i+\zeta_i\Big).
\end{aligned}$$

Summing over $t$ from $t_c$ to $t_c+\tau$ yields:

$$\sum_{t=t_c}^{t_c+\tau}\delta_t^i\le\sum_{t=t_c}^{t_c+\tau}\sum_{j=t_c}^{t_c+\tau}5\tau\eta^2\Big(L^2\gamma_j+L^2\delta_j^i+\zeta_i\Big)\le 5L^2\tau^2\eta^2\sum_{j=t_c}^{t_c+\tau}\gamma_j+5L^2\tau^2\eta^2\sum_{j=t_c}^{t_c+\tau}\delta_j^i+5\tau^3\eta^2\zeta_i.$$

Since $\eta\le\frac{1}{2\sqrt{5}\tau L}$, we have $5L^2\tau^2\eta^2\le\frac{1}{4}$; hence, re-arranging the terms gives:

$$\sum_{t=t_c}^{t_c+\tau}\delta_t^i\le 20L^2\tau^2\eta^2\sum_{j=t_c}^{t_c+\tau}\gamma_j+20\tau^3\eta^2\zeta_i.$$

Summing over all synchronization stages $t_c$ and dividing both sides by $T$ yields:

$$\frac{1}{T}\sum_{t=1}^{T}\delta_t^i\le 20L^2\tau^2\eta^2\frac{1}{T}\sum_{t=1}^{T}\gamma_t+20\tau^2\eta^2\zeta_i.$$

Using (36) to bound $\frac{1}{T}\sum_{t=1}^{T}\gamma_t$, we conclude the proof of the second statement:

$$\frac{1}{T}\sum_{t=1}^{T}\delta_t^i\le 200L^2\tau^4\eta^4\frac{\zeta}{n}+20\tau^2\eta^2\zeta_i.$$
" }, { "heading": "G.2 PROOF OF THEOREM 8", "text": "Proof.
According to Lemma 9,

$$\frac{1}{T}\sum_{t=1}^{T}\big\|\nabla F(w^{(t)})\big\|^2\le\frac{8}{\eta T}F(w^{(1)})+6L^2\frac{1}{T}\sum_{t=1}^{T}\frac{1}{n}\sum_{j=1}^{n}\big\|w_j^{(t)}-w^{(t)}\big\|^2.$$

Then plugging in Lemma 10 concludes the proof.
" }, { "heading": "G.3 PROOF OF THEOREM 3", "text": "Proof. According to Lemma 8, re-arranging the terms, summing from $t=1$ to $T$, and dividing both sides by $T$ yields:

$$\begin{aligned}
\frac{1}{T}\sum_{t=1}^{T}\big\|\nabla f_i(\hat{v}_i^{(t)})\big\|^2&\le\frac{8f_i(\hat{v}_i^{(1)})}{\eta T}
+24\alpha_i^4(1-\alpha_i)^2L^2\frac{1}{T}\sum_{t=1}^{T}\big\|w_i^{(t)}-w^{(t)}\big\|^2
+12(1-\alpha_i)^2L^2\frac{1}{n}\sum_{j=1}^{n}\frac{1}{T}\sum_{t=1}^{T}\big\|w_j^{(t)}-w^{(t)}\big\|^2\\
&\quad+128(1-\alpha_i)^2\frac{1}{T}\sum_{t=1}^{T}\big\|\nabla F(w^{(t)})\big\|^2+48(1-\alpha_i^2)^2\zeta_i+128(1-\alpha_i^2)^2L^2D_W^2.
\end{aligned}$$

Then, plugging in Lemmas 9 and 10:

$$\begin{aligned}
\frac{1}{T}\sum_{t=1}^{T}\big\|\nabla f_i(\hat{v}_i^{(t)})\big\|^2&\le\frac{8f_i(\hat{v}_i^{(1)})}{\eta T}+48(1-\alpha_i^2)^2\zeta_i+128(1-\alpha_i^2)^2L^2D_W^2\\
&\quad+24\alpha_i^4(1-\alpha_i)^2L^2\frac{1}{T}\sum_{t=1}^{T}\big\|w_i^{(t)}-w^{(t)}\big\|^2+12(1-\alpha_i)^2L^2\frac{1}{n}\sum_{j=1}^{n}\frac{1}{T}\sum_{t=1}^{T}\big\|w_j^{(t)}-w^{(t)}\big\|^2\\
&\quad+128(1-\alpha_i)^2\Bigg(\frac{8}{\eta T}F(w^{(1)})+6L^2\frac{1}{T}\sum_{t=1}^{T}\frac{1}{n}\sum_{j=1}^{n}\big\|w_j^{(t)}-w^{(t)}\big\|^2\Bigg)\\
&\le\frac{8f_i(\hat{v}_i^{(1)})}{\eta T}+48(1-\alpha_i^2)^2\zeta_i+128(1-\alpha_i^2)^2L^2D_W^2+24\alpha_i^4(1-\alpha_i)^2L^2\Big[200L^2\tau^4\eta^4\frac{\zeta}{n}+20\tau^2\eta^2\zeta_i\Big]\\
&\quad+7800\,\tau^2\eta^2(1-\alpha_i)^2L^2\frac{\zeta}{n}+\frac{1024(1-\alpha_i)^2}{\eta T}F(w^{(1)}).
\end{aligned}$$

Plugging in $\eta=\frac{1}{2\sqrt{5}\sqrt{T}L}$ concludes the proof:

$$\begin{aligned}
\frac{1}{T}\sum_{t=1}^{T}\big\|\nabla f_i(\hat{v}_i^{(t)})\big\|^2&\le O\Big(\frac{L}{\sqrt{T}}\Big)+(1-\alpha_i)^2O\Big(\frac{L}{\sqrt{T}}\Big)+(1-\alpha_i^2)^2O\big(\zeta_i+L^2D_W^2\big)\\
&\quad+\alpha_i^4(1-\alpha_i)^2O\Big(\frac{\tau^4\zeta}{nT^2}+\frac{\tau^2\zeta_i}{T}\Big)+(1-\alpha_i)^2O\Big(\frac{\tau^2\zeta}{nT}\Big).
\end{aligned}$$
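To make the algorithm analyzed above concrete, the following is a minimal, self-contained simulation sketch of Local Descent APFL (our own illustrative simplification, not the authors' reference implementation: the adaptive update of the mixture weights $\alpha_i$ is omitted, only the sampled clients take steps, the objectives are made-up quadratics, and all constants are placeholders). Each client keeps a local copy $w_i$ of the global model and a personalized model $v_i$; the personalized gradient is taken at the mixture $\bar{v}_i=\alpha_iv_i+(1-\alpha_i)w_i$, and the server averages the sampled $w_j$ every $\tau$ steps:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, d, tau, T, eta = 10, 4, 5, 5, 200, 0.05
alpha = np.full(n, 0.5)                              # per-client mixture weights (fixed here)
A = [np.eye(d) * (1 + 0.1 * i) for i in range(n)]    # toy strongly convex f_i(x) = 0.5 x'A_i x - b_i'x
b = [rng.normal(size=d) for _ in range(n)]

def grad(i, x):                                      # gradient of the toy quadratic f_i
    return A[i] @ x - b[i]

w = np.zeros(d)                                      # server/global model
w_loc = [w.copy() for _ in range(n)]                 # local copies w_i
v = [np.zeros(d) for _ in range(n)]                  # personalized models v_i

for t in range(1, T + 1):
    U = rng.choice(n, size=K, replace=False)         # sampled clients this step
    for i in U:
        w_loc[i] -= eta * grad(i, w_loc[i])                     # local step on the global copy
        v_bar = alpha[i] * v[i] + (1 - alpha[i]) * w_loc[i]     # the mixture used in the proofs
        v[i] -= eta * alpha[i] * grad(i, v_bar)                 # chain rule gives the alpha factor
    if t % tau == 0:                                 # synchronization every tau steps
        w = np.mean([w_loc[j] for j in U], axis=0)
        w_loc = [w.copy() for _ in range(n)]

print("global model:", np.round(w[:3], 3), "personalized model of client 0:", np.round(v[0][:3], 3))
```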
" } ]
2020
null
SP:8c67767eb8c691157f0807c4c8393749b78bdf47
[ "The paper focus on the segmentation of images in which the correct topology of the segmentation an important role plays. The key idea of the paper is the use of Discrete Morse Theory to identify those areas of the segmentation/likelihood map that are important for ensuring the correct topology of the segmentation. This is achieved by modulating the normal binary cross entropy loss function so that areas of Morse structures are focused on during the learning. " ]
In the segmentation of fine-scale structures from natural and biomedical images, per-pixel accuracy is not the only metric of concern. Topological correctness, such as vessel connectivity and membrane closure, is crucial for downstream analysis tasks. In this paper, we propose a new approach to train deep image segmentation networks for better topological accuracy. In particular, leveraging the power of discrete Morse theory (DMT), we identify global structures, including 1D skeletons and 2D patches, which are important for topological accuracy. Trained with a novel loss based on these global structures, the network performance is significantly improved especially near topologically challenging locations (such as weak spots of connections and membranes). On diverse datasets, our method achieves superior performance on both the DICE score and topological metrics.
[ { "affiliations": [], "name": "Xiaoling Hu" }, { "affiliations": [], "name": "Yusu Wang" }, { "affiliations": [], "name": "Chao Chen" } ]
[ { "authors": [ "Henry Adams", "Tegan Emerson", "Michael Kirby", "Rachel Neville", "Chris Peterson", "Patrick Shipman", "Sofya Chepushtanova", "Eric Hanson", "Francis Motta", "Lori Ziegelmeier" ], "title": "Persistence images: A stable vector representation of persistent homology", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Bjoern Andres", "Jörg H Kappes", "Thorsten Beier", "Ullrich Köthe", "Fred A Hamprecht" ], "title": "Probabilistic image segmentation with closedness constraints", "venue": "In 2011 International Conference on Computer Vision,", "year": 2011 }, { "authors": [ "I Arganda-Carreras", "HS Seung", "A Vishwanathan", "D Berger" ], "title": "3d segmentation of neurites in em images challenge-isbi", "venue": null, "year": 2013 }, { "authors": [ "Ignacio Arganda-Carreras", "Srinivas C Turaga", "Daniel R Berger", "Dan Cireşan", "Alessandro Giusti", "Luca M Gambardella", "Jürgen Schmidhuber", "Dmitry Laptev", "Sarvesh Dwivedi", "Joachim M Buhmann" ], "title": "Crowdsourcing the creation of image segmentation algorithms for connectomics", "venue": "Frontiers in neuroanatomy,", "year": 2015 }, { "authors": [ "Vijay Badrinarayanan", "Alex Kendall", "Roberto Cipolla" ], "title": "Segnet: A deep convolutional encoderdecoder architecture for image segmentation", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "Samik Banerjee", "Lucas Magee", "Dingkang Wang", "Xu Li", "Bing-Xing Huo", "Jaikishan Jayakumar", "Katherine Matho", "Meng-Kuan Lin", "Keerthi Ram", "Mohanasankar Sivaprakasam" ], "title": "Semantic segmentation of microscopic neuroanatomical data by combining topological priors with encoder– decoder deep networks", "venue": "Nature Machine Intelligence,", "year": 2020 }, { "authors": [ "U. Bauer", "C. Lange", "M. Wardetzky" ], "title": "Optimal topological simplification of discrete functions on surfaces", "venue": "Discr. Comput. 
Geom.,", "year": 2012 }, { "authors": [ "Mathieu Carriere", "Marco Cuturi", "Steve Oudot" ], "title": "Sliced wasserstein kernel for persistence diagrams", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Mathieu Carrière", "Frédéric Chazal", "Yuichi Ike", "Théo Lacombe", "Martin Royer", "Yuhei Umeda" ], "title": "A general neural network architecture for persistence diagrams and graph classification", "venue": null, "year": 1904 }, { "authors": [ "Chao Chen", "Daniel Freedman", "Christoph H Lampert" ], "title": "Enforcing topological constraints in random field image segmentation", "venue": null, "year": 2011 }, { "authors": [ "Chao Chen", "Xiuyan Ni", "Qinxun Bai", "Yusu Wang" ], "title": "A topological regularizer for classifiers via persistent homology", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Iasonas Kokkinos", "Kevin Murphy", "Alan L Yuille" ], "title": "Semantic image segmentation with deep convolutional nets and fully connected crfs", "venue": "arXiv preprint arXiv:1412.7062,", "year": 2014 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Florian Schroff", "Hartwig Adam" ], "title": "Rethinking atrous convolution for semantic image segmentation", "venue": "arXiv preprint arXiv:1706.05587,", "year": 2017 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Iasonas Kokkinos", "Kevin Murphy", "Alan L Yuille" ], "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Özgün Çiçek", "Ahmed Abdulkadir", "Soeren S Lienkamp", "Thomas Brox", "Olaf Ronneberger" ], "title": "3d u-net: learning dense volumetric segmentation from sparse annotation", "venue": "In International conference on medical image computing and computer-assisted intervention,", "year": 2016 }, { "authors": [ "James R. Clough", "Ilkay Oksuz", "Nicholas Byrne", "Veronika A. Zimmer", "Julia A. Schnabel", "Andrew P. King" ], "title": "A topological loss function for deep-learning based image segmentation using persistent homology, 2019", "venue": null, "year": 1910 }, { "authors": [ "O. Delgado-Friedrichs", "V. Robins", "A. Sheppard" ], "title": "Skeletonization and partitioning of digital images using discrete morse theory", "venue": "IEEE Trans. Pattern Anal. Machine Intelligence,", "year": 2015 }, { "authors": [ "T. Dey", "J. Wang", "Y. Wang" ], "title": "Road network reconstruction from satellite images with machine learning supported by topological methods", "venue": "In Proc. 27th ACM SIGSPATIAL Intl. Conf. Adv. Geographic Information Systems (GIS),", "year": 2019 }, { "authors": [ "Tamal K. Dey", "Jiayuan Wang", "Yusu Wang" ], "title": "Graph reconstruction by discrete morse theory", "venue": "In Proc. 34th Intl. Sympos. Comput. Geom. 
(SoCG),", "year": 2018 }, { "authors": [ "Henghui Ding", "Xudong Jiang", "Ai Qun Liu", "Nadia Magnenat Thalmann", "Gang Wang" ], "title": "Boundaryaware feature propagation for scene segmentation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Herbert Edelsbrunner", "John Harer" ], "title": "Computational topology: an introduction", "venue": "American Mathematical Soc.,", "year": 2010 }, { "authors": [ "Herbert Edelsbrunner", "David Letscher", "Afra Zomorodian" ], "title": "Topological persistence and simplification", "venue": "In Proceedings 41st Annual Symposium on Foundations of Computer Science,", "year": 2000 }, { "authors": [ "Rolando Estrada", "Carlo Tomasi", "Scott C Schmidler", "Sina Farsiu" ], "title": "Tree topology estimation", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2014 }, { "authors": [ "Ahmed Fakhry", "Hanchuan Peng", "Shuiwang Ji" ], "title": "Deep models for brain em image segmentation: novel insights and improved", "venue": "performance. Bioinformatics,", "year": 2016 }, { "authors": [ "R. Forman" ], "title": "Morse theory for cell complexes", "venue": "Advances in mathematics,", "year": 1998 }, { "authors": [ "Robin Forman" ], "title": "A user’s guide to discrete morse theory", "venue": "Sém. Lothar. Combin,", "year": 2002 }, { "authors": [ "Jan Funke", "Fabian David Tschopp", "William Grisaitis", "Arlo Sheridan", "Chandan Singh", "Stephan Saalfeld", "Srinivas C Turaga" ], "title": "A deep structured learning approach towards automating connectome reconstruction from 3d electron micrographs", "venue": "arXiv preprint arXiv:1709.02974,", "year": 2017 }, { "authors": [ "Mingchen Gao", "Chao Chen", "Shaoting Zhang", "Zhen Qian", "Dimitris Metaxas", "Leon Axel" ], "title": "Segmenting the papillary muscles and the trabeculae from high resolution cardiac ct through restoration of topological handles", "venue": "In International Conference on Information Processing in Medical Imaging,", "year": 2013 }, { "authors": [ "Xiao Han", "Chenyang Xu", "Jerry L. 
Prince" ], "title": "A topology preserving level set method for geometric deformable models", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2003 }, { "authors": [ "Moritz Hardt", "Ben Recht", "Yoram Singer" ], "title": "Train faster, generalize better: Stability of stochastic gradient descent", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Kaiming He", "Georgia Gkioxari", "Piotr Dollár", "Ross Girshick" ], "title": "Mask r-cnn", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Christoph Hofer", "Roland Kwitt", "Marc Niethammer", "Andreas Uhl" ], "title": "Deep learning with topological signatures", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Christoph Hofer", "Roland Kwitt", "Mandar Dixit", "Marc Niethammer" ], "title": "Connectivity-optimized representation learning via persistent homology", "venue": "arXiv preprint arXiv:1906.09003,", "year": 2019 }, { "authors": [ "Xiaoling Hu", "Fuxin Li", "Dimitris Samaras", "Chao Chen" ], "title": "Topology-preserving deep image segmentation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Michał Januszewski", "Jörgen Kornfeld", "Peter H Li", "Art Pope", "Tim Blakely", "Larry Lindsey", "Jeremy Maitin-Shepard", "Mike Tyka", "Winfried Denk", "Viren Jain" ], "title": "High-precision automated reconstruction of neurons with flood-filling networks", "venue": "Nature methods,", "year": 2018 }, { "authors": [ "Davood Karimi", "Septimiu E Salcudean" ], "title": "Reducing the hausdorff distance in medical image segmentation with convolutional neural networks", "venue": null, "year": 1904 }, { "authors": [ "Hoel Kervadec", "Jihene Bouchtiba", "Christian Desrosiers", "Eric Granger", "Jose Dolz", "Ismail Ben Ayed" ], "title": "Boundary loss for highly unbalanced segmentation", "venue": "In International Conference on Medical Imaging with Deep Learning,", "year": 2019 }, { "authors": [ "Genki Kusano", "Yasuaki Hiraoka", "Kenji Fukumizu" ], "title": "Persistence weighted gaussian kernel for topological data analysis", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Carole Le Guyader", "Luminita A Vese" ], "title": "Self-repelling snakes for topology-preserving segmentation models", "venue": "IEEE Transactions on Image Processing,", "year": 2008 }, { "authors": [ "Jonathan Long", "Evan Shelhamer", "Trevor Darrell" ], "title": "Fully convolutional networks for semantic segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Volodymyr Mnih" ], "title": "Machine learning for aerial image labeling", "venue": "University of Toronto (Canada),", "year": 2013 }, { "authors": [ "Agata Mosinska", "Pablo Marquez-Neila", "Mateusz Koziński", "Pascal Fua" ], "title": "Beyond the pixel-wise loss for topology-aware delineation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Xiuyan Ni", "Novi Quadrianto", "Yusu Wang", "Chao Chen" ], "title": "Composing tree graphical models with persistent homology features for clustering mixed-type data", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Sebastian Nowozin", "Christoph H Lampert" ], "title": "Global 
connectivity potentials for random field models", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Martin Ralf Oswald", "Jan Stühmer", "Daniel Cremers" ], "title": "Generalized connectivity constraints for spatio-temporal 3d reconstruction", "venue": "In European Conference on Computer Vision,", "year": 2014 }, { "authors": [ "Adrien Poulenard", "Primoz Skraba", "Maks Ovsjanikov" ], "title": "Topological function optimization for continuous shape matching", "venue": "In Computer Graphics Forum,", "year": 2018 }, { "authors": [ "Jan Reininghaus", "Stefan Huber", "Ulrich Bauer", "Roland Kwitt" ], "title": "A stable multi-scale kernel for topological machine learning", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "V. Robins", "P.J. Wood", "A.P. Sheppard" ], "title": "Theory and algorithms for constructing discrete morse complexes from grayscale digital images", "venue": "IEEE Trans. Pattern Anal. Machine Intelligence,", "year": 2011 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-net: Convolutional networks for biomedical image segmentation", "venue": "In International Conference on Medical image computing and computerassisted intervention,", "year": 2015 }, { "authors": [ "Florent Ségonne" ], "title": "Active contours under topology control—genus preserving level", "venue": "sets. International Journal of Computer Vision,", "year": 2008 }, { "authors": [ "L Soler", "A Hostettler", "V Agnus", "A Charnoz", "J Fasquel", "J Moreau", "A Osswald", "M Bouhadjar", "J Marescaux" ], "title": "3d image reconstruction for comparison of algorithm database: a patient-specific anatomical and medical image database", "venue": null, "year": 2010 }, { "authors": [ "T. Sousbie" ], "title": "The persistent cosmic web and its filamentary structure - I", "venue": "Theory and implementation", "year": 2011 }, { "authors": [ "Joes Staal", "Michael D Abràmoff", "Meindert Niemeijer", "Max A Viergever", "Bram Van Ginneken" ], "title": "Ridge-based vessel segmentation in color images of the retina", "venue": "IEEE transactions on medical imaging,", "year": 2004 }, { "authors": [ "Jan Stuhmer", "Peter Schroder", "Daniel Cremers" ], "title": "Tree shape priors with connectivity constraints using convex relaxation on general graphs", "venue": "In Proceedings of the IEEE International Conference on Computer Vision, pp", "year": 2013 }, { "authors": [ "Ganesh Sundaramoorthi", "Anthony Yezzi" ], "title": "Global regularizing flows with topology preservation for active contours and polygons", "venue": "IEEE Transactions on Image Processing,", "year": 2007 }, { "authors": [ "Srinivas C Turaga", "Kevin L Briggman", "Moritz Helmstaedter", "Winfried Denk", "H Sebastian Seung" ], "title": "Maximin affinity learning of image segmentation", "venue": "arXiv preprint arXiv:0911.5372,", "year": 2009 }, { "authors": [ "Mustafa Gokhan Uzunbas", "Chao Chen", "Dimitris Metaxas" ], "title": "An efficient conditional random field approach for automatic and interactive neuron segmentation", "venue": "Medical image analysis,", "year": 2016 }, { "authors": [ "Sara Vicente", "Vladimir Kolmogorov", "Carsten Rother" ], "title": "Graph cut based image segmentation with connectivity priors", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2008 }, { "authors": [ "S. Wang", "Y. Wang", "Y. 
Li" ], "title": "Efficient map reconstruction and augmentation via topological methods", "venue": "In Proc. 23rd ACM SIGSPATIAL,", "year": 2015 }, { "authors": [ "Pengxiang Wu", "Chao Chen", "Yusu Wang", "Shaoting Zhang", "Changhe Yuan", "Zhen Qian", "Dimitris Metaxas", "Leon Axel" ], "title": "Optimal topological cycles and their application in cardiac trabeculae restoration", "venue": "In International Conference on Information Processing in Medical Imaging,", "year": 2017 }, { "authors": [ "Ze Ye", "Cong Chen", "Changhe Yuan", "Chao Chen" ], "title": "Diverse multiple prediction on neuron image reconstruction", "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention,", "year": 2019 }, { "authors": [ "Tao Zeng", "Bian Wu", "Shuiwang Ji" ], "title": "Deepem3d: approaching human-level performance on 3d anisotropic em image", "venue": "segmentation. Bioinformatics,", "year": 2017 }, { "authors": [ "Yun Zeng", "Dimitris Samaras", "Wei Chen", "Qunsheng Peng" ], "title": "Topology cuts: A novel min-cut/maxflow algorithm for topology preserving segmentation in n–d images", "venue": "Computer vision and image understanding,", "year": 2008 }, { "authors": [ "Afra Zomorodian", "Gunnar Carlsson" ], "title": "Computing persistent homology", "venue": "Discrete & Computational Geometry,", "year": 2005 }, { "authors": [ "Qin Zou", "Yu Cao", "Qingquan Li", "Qingzhou Mao", "Song Wang" ], "title": "Cracktree: Automatic crack detection from pavement images", "venue": "Pattern Recognition Letters,", "year": 2012 }, { "authors": [ "R K" ], "title": "Following the approach developed in (Wang et al., 2015; Dey et al., 2018), we initialize a trivial discrete gradient vector field where all cells are initially critical. Let > 0 be a threshold for simplification. We then perform persistence algorithm (Edelsbrunner et al., 2000) induced by the super-level set filtration of ρ and pair up all cells", "venue": null, "year": 2000 } ]
[ { "heading": "1 INTRODUCTION", "text": "Segmenting objects while preserving their global structure is a challenging yet important problem. Various methods have been proposed to encourage neural networks to preserve fine details of objects (Long et al., 2015; He et al., 2017; Chen et al., 2014; 2018; 2017). Despite their high per-pixel accuracy, most of them are still prone to structural errors, such as missing small object instances, breaking thin connections, and leaving holes in membranes. These structural errors can significantly damage downstream analysis. For example, in the segmentation of biomedical structures such as membranes and vessels, small pixel errors at a junction will induce significant structure error, leading to catastrophic functional mistakes. See Fig. 1 for an illustration.\nTopology is a very global characterization that needs a lot of observations to learn. Any training set is insufficient in teaching the network to correctly reason about topology, especially near challenging spots, e.g., blurred membrane locations or weak vessel connections. A neural network tends to learn from the clean-cut cases and converge quickly. Meanwhile, topologically-challenging locations remain mis-classified, causing structural/topological errors. We note that this issue cannot be alleviated even with more annotated (yet still unbalanced) images.\nWe propose a novel approach that identifies critical topological structures during training and teaches a neural network to learn from these structures. Our method can produce segmentations with correct topology, i.e., having the same Betti number (i.e., number of connected components and handles/tunnels) as the ground truth. Underlying our method is the classic Morse theory (Milnor, 1963), which captures singularities of the gradient vector field of the likelihood function. Intuitively speaking, we treat the likelihood as a terrain function and Morse theory helps us capture terrain structures such as ridges and valleys. See Fig. 1 for an illustration. These structures, composed of 1D and 2D manifold pieces, reveal the topological information captured by the (potentially noisy) likelihood function.\nWe consider these Morse structures as topologically critical; they encompass all potential skeletons of the object. We propose a new loss that identifies these structures and enforce higher penalty along them. This way, we effectively address the sampling bias issue and ensure that the networks predict correctly near these topologically difficult locations. Since the Morse structures are identified based\n∗Email: Xiaoling Hu (xiaolhu@cs.stonybrook.edu).\non the (potentially noisy) likelihood function, they can be both false negatives (a structure can be a true structure but was missed in the segmentation) and false positives (a hallucination of the model and should be removed). Our loss ensures that both kinds of structural mistakes are corrected.\nSeveral technical challenges need to be addressed. First, classical Morse theory was defined for smooth functions on continuous domains. Computing the Morse structures can be expensive and numerically unstable. Furthermore, the entire set of Morse structures may include an excessive amount of structures, a large portion of which can be noisy, irrelevant ones. To address these challenges, we use the discrete version of Morse theory by Forman (1998; 2002). For efficiency purposes, we also use an approximation algorithm to compute 2D Morse structures with almost linear time. 
The idea is to compute zero dimensional Morse structures of the dual image, which boils down to a minimum spanning tree computation. Finally, we use the theory of persistent homology (Edelsbrunner et al., 2000; Edelsbrunner & Harer, 2010) to prune spurious Morse structures that are not relevant.\nOur discrete-Morse-theory based loss, called the DMT-loss, can be evaluated efficiently and can effectively train the neural network to achieve high performance in both topological accuracy and per-pixel accuracy. Our method outperforms state-of-the-art methods in multiple topology-relevant metrics (e.g., ARI and VOI) on various 2D and 3D benchmarks. It has superior performance in the Betti number error, which is an exact measurement of the topological fidelity of the segmentation.\nRelated work. Closely related to our method are recent works on persistent-homology-based losses (Hu et al., 2019; Clough et al., 2019). These methods identify a set of critical points of the likelihood function, e.g., saddles and extrema, as topologically critical locations for the neural network to memorize. However, only identifying a sparse set of critical points at every epoch is inefficient in terms of training. Instead, our method identifies a much bigger set of critical locations at each epoch, i.e., 1D or 2D Morse skeletons (curves and patches). This is beneficial in both training efficiency and model performance. Extending the critical location sets from points to 1D curves and 2D patches makes it much more efficient in training. Compared with TopoLoss (Hu et al., 2019), we observe a 3-time speedup in practice. Furthermore, by focusing on more critical locations early, our method is more likely to escape poor local minima of the loss landscape. Thus it achieves better topological accuracy than TopoLoss. The shorter training time may also contribute to better stability of the SGD algorithm, and thus better test accuracy (Hardt et al., 2016).\nAnother topology-aware loss (Mosinska et al., 2018) uses pretrained filters to detect broken connections. However, this method cannot be generalized to unobserved geometry and higher dimensional topology (loops and voids). We also refer to other existing work on topological features and their applications (Adams et al., 2017; Reininghaus et al., 2015; Kusano et al., 2016; Carriere et al., 2017; Ni et al., 2017; Wu et al., 2017). Deep neural networks have also been proposed to learn from topological features directly extracted from data (Hofer et al., 2017; Carrière et al., 2019). Persistent-homology-inspired objective functions have been proposed for graphics (Poulenard et al., 2018) machine learning (Chen et al., 2019; Hofer et al., 2019). Discrete Morse theory has been used to identify skeleton structures from images; e.g., (Delgado-Friedrichs et al., 2015; Robins et al., 2011; Wang et al., 2015). The resulting 1D Morse structure has been used to enhance neural network architecture: e.g., in (Dey et al., 2019) it is used to both pre- and post-process images, while\nin (Banerjee et al., 2020), the 1D Morse skeleton is used as a topological prior (part of input) for an encoder-decoder deep network for semantic segmentation of microscopic neuroanatomical data. 
Our work, in contrast, uses Morse structures (beyond 1D) of the output to strengthen the global structural signal more explicitly in an end-to-end training of a network.\nMany methods have leveraged the power of deep neural networks (Ronneberger et al., 2015; Long et al., 2015; Badrinarayanan et al., 2017; Ding et al., 2019; Kervadec et al., 2019; Karimi & Salcudean, 2019; Mosinska et al., 2018) for fine-scale structure segmentation. One may also enforce connectivity constraints when postprocessing the likelihood map (Han et al., 2003; Le Guyader & Vese, 2008; Sundaramoorthi & Yezzi, 2007; Ségonne, 2008; Wu et al., 2017; Gao et al., 2013; Vicente et al., 2008; Nowozin & Lampert, 2009; Zeng et al., 2008; Chen et al., 2011; Andres et al., 2011; Stuhmer et al., 2013; Oswald et al., 2014; Estrada et al., 2014). However, when the deep neural network itself is topology-agnostic, the likelihood map may be fundamentally flawed and cannot be salvaged topologically. Specific to neuron image segmentation, some methods (Funke et al., 2017; Turaga et al., 2009; Januszewski et al., 2018; Uzunbas et al., 2016; Ye et al., 2019) directly find neuron regions instead of their boundary/membranes. These methods cannot be generalized to other types of data such as satellite images, retinal images, vessel images, etc." }, { "heading": "2 METHOD", "text": "We propose a novel loss to train a topology-aware network end-to-end. It uses global structures captured by discrete Morse theory (DMT) to discover critical topological structures. In particular, through the language of 1- and 2-stable manifolds, DMT helps identify 1D skeletons or 2D sheets (separating 3D regions) that may be critical for structural accuracy. These Morse structures are used to define a DMT-loss that is essentially the cross-entropy loss constrained to these topologically critical structures. As the training continues, the neural network learns to better predict around these critical structures, and eventually achieves better topological accuracy. Please refer to Fig. 2 for an overview of our method." }, { "heading": "2.1 MORSE THEORY", "text": "Morse theory (Milnor, 1963) identifies topologically critical structures from a likelihood map (Fig. 3(a)). In particular, it views the likelihood as a terrain function (Fig. 3(b)) and extracts its landscape features such as mountain ridges and their high-dimensional counterparts. The broken connection in the likelihood map corresponds to a local dip in the mountain ridge of the terrain in Fig. 3(b) and Fig. 3(c). The bottom of this dip is captured by a so-called saddle point (S in Fig. 3(c)) of the likelihood map. The mountain ridge connected to this bottom point captures the main part of missing pixels. Such “mountain ridges” can be captured by the so-called stable manifold w.r.t. the saddle point using the language of Morse theory. By finding the saddle points and the stable manifold of the saddle points on the likelihood map, we can ensure the model learns to “correctly” handle pixels near these structures. We note that an analogous scenario can also happen with such 1D signals (such as blood vessels) as well as 2D signals (such as membranes of cells) in 3D images – they can also be captured by saddles (of different indices) and their stable manifolds.\nIn this paper, we focus on the application of segmenting 2D and 3D images. Specifically, suppose we have a smooth function f : Rd → R to model the likelihood (density) map. Given any point x ∈ Rd, the negative gradient −∇f(x) = −[ ∂f∂x1 , ∂f ∂x2 , . . 
A point $x=(x_1,x_2,\ldots,x_d)$ is critical if the function gradient vanishes at this point (i.e., $\nabla f(x)=0$). For a well-behaved function (more formally, a Morse function) defined on $\mathbb{R}^d$, a critical point can be a minimum, a maximum, or one of $d-1$ types of saddle points. See Fig. 3(c) for an example. For $d=2$, there is only one saddle point type. For $d=3$, there are two saddle point types, referred to as index-1 and index-2 saddles. Formally, taking the eigenvalues of the Hessian matrix at a critical point, its index equals the number of negative eigenvalues.

Intuitively, imagine we put a drop of water on the graph of $f$ (i.e., the terrain in Fig. 3(b)) at the lift of $x$ onto this terrain; then $-\nabla f(x)$ indicates the direction along which the water will flow down. If we track the trajectory of this water drop as it flows down, this gives rise to a so-called integral line (a flow line). Such flow lines can only start and end at critical points¹, where the gradient vanishes.

The stable manifold $S(p)$ of a critical point $p$ is defined as the collection of points whose flow lines end at $p$. For a 2D function $f:\mathbb{R}^2\to\mathbb{R}$ and a saddle $q$, its stable manifold $S(q)$ starts from local maxima (mountain peaks in the terrain) and ends at $q$, tracing out the mountain ridges separating different valleys (Fig. 3(c)). The stable manifold $S(p)$ of a minimum $p$, on the other hand, corresponds to the entire valley around this minimum $p$. See the valley point V and its corresponding stable manifold in Fig. 3(c). For a 3D function $f:\mathbb{R}^3\to\mathbb{R}$, the stable manifold w.r.t. an index-2 saddle connects mountain peaks to saddles, tracing 1D mountain ridges as in the case of a 2D function. The stable manifold w.r.t. an index-1 saddle $q$ consists of flow lines starting at index-2 saddles and ending at $q$. Their union, called the 2-stable manifold of $f$, consists of a collection of 2-manifold pieces.

These stable manifolds indicate important topological structures (graph-like or sheet-like) based on the likelihood of the current neural network. Using these structures, we will propose a novel loss (Sec. 2.3) to improve the topological awareness of the model. In practice, for images, we will leverage the discrete version of Morse theory for both numerical stability and easier simplification.
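Before moving to the discrete setting, the index definition can be made concrete with a small, self-contained NumPy sketch (our illustration; the test function, grid, and tolerance are made up) that locates critical points of a smooth 2D function on a grid and classifies them by the number of negative Hessian eigenvalues:

```python
import numpy as np

# Toy smooth function with minima, maxima, and saddles.
def f(x, y):
    return np.sin(x) * np.sin(y)

def grad(x, y):
    return np.array([np.cos(x) * np.sin(y), np.sin(x) * np.cos(y)])

def hessian(x, y):
    return np.array([[-np.sin(x) * np.sin(y),  np.cos(x) * np.cos(y)],
                     [ np.cos(x) * np.cos(y), -np.sin(x) * np.sin(y)]])

# Scan a grid for points where the gradient nearly vanishes
# (nearby grid points may repeat the same critical point).
names = {0: "minimum", 1: "saddle", 2: "maximum"}
for x in np.linspace(0.1, 2 * np.pi, 200):
    for y in np.linspace(0.1, 2 * np.pi, 200):
        if np.linalg.norm(grad(x, y)) < 0.05:
            index = int((np.linalg.eigvalsh(hessian(x, y)) < 0).sum())
            print(f"({x:.2f}, {y:.2f}): index {index} -> {names[index]}")
```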
Discrete Morse theory. Due to space limitations, we briefly explain discrete Morse theory, leaving technical details to Appendix A.2.1. We view a $d$-dimensional image, $d=2$ or $3$, as a $d$-dimensional cubical complex, meaning it consists of 0-, 1-, 2- and 3-dimensional cells corresponding to vertices (pixels/voxels), edges, squares, and cubes as its building blocks. Discrete Morse theory (DMT), originally introduced in (Forman, 1998; 2002), is a combinatorial version of Morse theory for general cell complexes. In this setting, the analog of a gradient vector is a pair of adjacent cells, called a discrete gradient vector. The analog of an integral line (flow line) is a sequence of such cell pairs (discrete gradient vectors), forming a so-called V-path. Critical points correspond to critical cells, which do not participate in any discrete gradient vectors. A minimum, an index-1 saddle, an index-2 saddle, and a maximum for a 3D domain intuitively correspond to a critical vertex, a critical edge, a critical square, and a critical cube, respectively. A 1-stable manifold in 2D corresponds to a V-path, i.e., a sequence of cells, connecting a critical square (a maximum) and a critical edge (a saddle). See Fig. 3(c) for an illustration. In 3D, it is a V-path connecting a critical cube and a critical square.

¹More precisely, flow lines only tend to critical points in the limit and never reach them." }, { "heading": "2.2 SIMPLIFICATION AND COMPUTATION", "text": "In this section, we describe how we extract discrete Morse structures corresponding to the 1-stable and 2-stable manifolds of the continuous analog. First, we prune unnecessary Morse structures based on the theory of persistent homology (Edelsbrunner et al., 2000; Edelsbrunner & Harer, 2010). Second, we approximate the 2-stable manifold structures using 0-stable manifolds of the dual to achieve high efficiency in practice, because computing them based on the original definition is rather involved.

Persistence-based structure pruning. While Morse structures reveal important structural information, they can be sensitive to noise. Without proper pruning, there can be an excessive number of Morse structures, many of which are spurious and not relevant to the true signal. See Fig. 4(c) for an example. Similar to previous approaches, e.g., (Sousbie, 2011; Delgado-Friedrichs et al., 2015; Wang et al., 2015), we prune these structures using persistent homology.

Persistent homology is one of the most important developments in the field of topological data analysis in the past two decades (Edelsbrunner & Harer, 2010; Edelsbrunner et al., 2000). Intuitively speaking, we grow the complex by starting from the empty set and gradually including more and more cells using a decreasing threshold. Through this course, new topological features can be created upon adding a critical cell, and sometimes a feature is destroyed upon adding another critical cell. The persistence algorithm (Edelsbrunner et al., 2000) pairs up these critical cells; that is, its output is a set of critical cell pairs, where each pair captures the birth and death of a topological feature during this evolution. The persistence of a pair is defined as the difference of the function values of the two critical cells, intuitively measuring how long the topological feature lives in terms of $f$.

Using persistence, we can prune critical cells that are less topologically salient, and thus their corresponding Morse structures. Recall that each 1- and 2-stable Morse structure is constituted by V-paths flowing into a critical cell (corresponding to a saddle in the continuous setting). We then use the persistence associated with this critical cell to determine the saliency of the corresponding Morse structure. If the persistence is below a certain threshold $\epsilon$, we prune the corresponding Morse structure via an operation called Morse cancellation (more details are in Appendix A.2.2).² See Fig. 4(d) for example Morse structures after pruning. We denote by $S_1(\epsilon)$ and $S_2(\epsilon)$ the remaining sets of 1- and 2-stable manifolds after pruning. We use these Morse structures to define the loss (Sec. 2.3).
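For intuition, the following self-contained sketch (an illustration added here, not the paper's implementation, which runs on 2D/3D cubical complexes) computes the 0-dimensional persistence pairs of the super-level set filtration of a 1D signal with a union-find sweep, and prunes pairs whose persistence falls below a threshold $\epsilon$:

```python
import numpy as np

def superlevel_persistence_1d(f):
    """0-dim persistence of super-level sets {x : f(x) >= s} of a 1D array.

    Returns (birth, death) pairs: each local maximum is born at its height and
    dies when its component merges into one with a higher peak (elder rule)."""
    order = np.argsort(-f)               # sweep the threshold s from high to low
    parent = {}                          # union-find over activated indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    birth, pairs = {}, []
    for i in order:
        parent[i], birth[i] = i, f[i]
        for j in (i - 1, i + 1):         # merge with already-activated neighbors
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:
                    lo, hi = (ri, rj) if birth[ri] < birth[rj] else (rj, ri)
                    pairs.append((birth[lo], f[i]))   # the younger peak dies here
                    parent[lo] = hi
    roots = {find(i) for i in parent}
    pairs += [(birth[r], float(f.min())) for r in roots]  # essential component(s)
    return pairs

f = np.array([0.1, 0.9, 0.3, 0.55, 0.5, 0.8, 0.2])
eps = 0.2
for b, d in superlevel_persistence_1d(f):
    print(f"birth {b:.2f}, death {d:.2f}, persistence {b - d:.2f}",
          "-> pruned" if b - d < eps else "-> kept")
```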
Computation. We need an efficient algorithm to compute $S_1(\epsilon)$ and $S_2(\epsilon)$ from a given likelihood $f$, because this computation must be carried out at every epoch. It is significantly more involved to define and compute $S_2(\epsilon)$ in the discrete Morse setting (Delgado-Friedrichs et al., 2015). Furthermore, this also requires the computation of persistent homology up to dimension 2, which takes time $T=O(n^\omega)$ (where $\omega\approx 2.37$ is the exponent of matrix multiplication time, i.e., the time to multiply two $n\times n$ matrices). To this end, we propose to approximate $S_2(\epsilon)$ by $\hat{S}_2(\epsilon)$ (more details can be found in Appendix A.2.3), which intuitively comes from the "boundary" of the stable manifolds of minima. Note that in the smooth case, for a function $f:\mathbb{R}^3\to\mathbb{R}$, the closure of the 2-stable manifolds corresponds exactly to the 2D sheets on the boundary of the stable manifolds of minima.³ This is both conceptually clear and avoids the costly persistence computation. In particular, given a minimum $q$ with persistence greater than the pruning threshold $\epsilon$, the collection of V-paths ending at the minimum $q$ forms a spanning tree $T_q$. In fact, considering all minima $\{q_1,\ldots,q_\ell\}$ with persistence at least $\epsilon$, the trees $\{T_{q_i}\}$ form a maximum spanning forest among all edges with persistence value smaller than $\epsilon$ (Bauer et al., 2012; Dey et al., 2018). Hence it can be computed easily in $O(n\log n)$ time, where $n$ is the image size.

²Technically, not all spurious structures can be pruned/cancelled. But in practice, most of them can.
³This, however, is not always true in the discrete Morse setting.

We then take all the edges incident to nodes from different trees. The dual of these edges, denoted $\hat{S}_2(\epsilon)$, serves as the "boundaries" separating different spanning trees (representing the stable manifolds of the different minima with persistence $\ge\epsilon$). See Fig. 5. Overall, the computation of $\hat{S}_2(\epsilon)$ takes only $O(n\log n)$ time via a maximum spanning tree algorithm.

As for $S_1(\epsilon)$, we use a simplified version of the algorithm of (Dey et al., 2018), which computes $S_1(\epsilon)$ in $O(n\log n)$ time for a 2D image, where $n$ is the image size. For a 3D image, the time is $O(n\log n+T)$, where $T=O(n^\omega)$ is the time to compute persistent homology, with $\omega\approx 2.37$ the exponent of matrix multiplication time.
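The flavor of this computation can be conveyed by a minimal 2D sketch (our simplification for illustration: the paper works with V-paths on the cubical complex and a super-level filtration, whereas here we use a sub-level union-find sweep directly on pixels, and the input is a made-up random image). Basins of low-persistence minima are cancelled, and the dual edges between surviving basins approximate the separating structures:

```python
import numpy as np

def persistence_basins(f, eps):
    """Label each pixel by the eps-persistent basin of the minimum it flows to:
    pixels are swept in increasing value with union-find; a basin whose
    persistence is below eps is merged into its older neighbor."""
    h, w = f.shape
    flat = f.ravel()
    order = np.argsort(flat)
    label = -np.ones(h * w, dtype=int)       # union-find parent; -1 = not yet seen
    birth = {}

    def find(i):
        while label[i] != i:
            label[i] = label[label[i]]       # path halving
            i = label[i]
        return i

    for p in order:
        y, x = divmod(p, w)
        nbrs = [q for q in (p - 1, p + 1, p - w, p + w)
                if 0 <= q < h * w and label[q] >= 0
                and not (q == p - 1 and x == 0) and not (q == p + 1 and x == w - 1)]
        roots = {find(q) for q in nbrs}
        if not roots:
            label[p] = p                     # a new basin is born at a minimum
            birth[p] = flat[p]
            continue
        oldest = min(roots, key=lambda r: birth[r])
        label[p] = oldest
        for r in roots - {oldest}:
            if flat[p] - birth[r] < eps:     # low-persistence basin: cancel it
                label[r] = oldest
    return np.array([find(i) for i in range(h * w)]).reshape(h, w)

rng = np.random.default_rng(0)
f = rng.random((32, 32))
basins = persistence_basins(f, eps=0.5)
# Dual edges between surviving basins approximate the separating 2-stable structures.
boundary = basins[:, 1:] != basins[:, :-1]   # vertical dual edges (horizontal case analogous)
print("surviving basins:", len(np.unique(basins)), "boundary edges:", int(boundary.sum()))
```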
" }, { "heading": "2.3 THE DMT-BASED LOSS FUNCTION AND TRAINING DETAILS", "text": "Our loss has two terms, the cross-entropy term $L_{bce}$ and the DMT-loss $L_{dmt}$: $L(f,g)=L_{bce}(f,g)+\beta L_{dmt}(f,g)$, in which $f$ is the likelihood, $g$ is the ground truth, and $\beta$ is the weight of $L_{dmt}$. Here we focus on a single image, while the actual loss is aggregated over the whole training set.

The DMT-loss enforces correctness at the topologically challenging locations discovered by our algorithm. These locations are the pixels of the (approximations of the) 1- and 2-stable manifolds $S_1(\epsilon)$ and $\hat{S}_2(\epsilon)$ of the likelihood $f$ (Fig. 1(e)). Denote by $M_f$ a binary mask of the union of the pixels of all Morse structures in $S_1(\epsilon)\cup\hat{S}_2(\epsilon)$. We want to enforce that these locations are correctly segmented. We use the cross-entropy between the likelihood map $f$ and the ground truth $g$ restricted to the Morse structures; formally, $L_{dmt}(f,g)=L_{bce}(f\circ M_f,\,g\circ M_f)$, in which $\circ$ is the Hadamard product.

Different topological error types. Recall that the Morse structures are computed over the potentially noisy likelihood function of a neural network, which helps identify two types of structural errors: (1) false negative: a true structure that is incomplete in the segmentation but visible in the Morse structures. These types of false negatives (broken connections, holes in membranes) can be restored, as the additional cross-entropy loss near the Morse structures will force the network to increase its likelihood values on these structures. (2) false positive: phantom structures hallucinated by the network where none exist (spurious branches, membrane pieces). These errors can be eliminated, as the extra cross-entropy loss on these structures will force the network to decrease its likelihood values along them. We provide a more thorough discussion in Appendix A.1.2.

Differentiability. We note that the Morse structures are recomputed at every epoch. The structures, as well as their mask $M_f$, may change with $f$. However, the change is not continuous; the output of the discrete Morse algorithm is a combinatorial solution that does not change continuously with $f$. Instead, it only changes at singularities, i.e., when the function values of $f$ at different pixels/voxels are equal. For a general $f$, the likelihood is real-valued, so it is unlikely that two pixels share the exact same value. In case they do, the persistent homology algorithm by default breaks the tie and chooses one as critical. The mask $M_f$ therefore remains constant within a small neighborhood of the current $f$, so the gradient of $L_{dmt}$ exists and can be computed naturally.

Training details. Although our method is architecture-agnostic, for 2D datasets we select an architecture driven by a 2D U-Net (Ronneberger et al., 2015); for 3D datasets we select an architecture inspired by a 3D U-Net (Çiçek et al., 2016). Both U-Net and 3D U-Net were originally designed for neuron segmentation tasks, capturing the fine structures of images. In practice, we first pretrain the network with only the cross-entropy loss, and then train the network with the combined loss.
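A minimal PyTorch sketch of this loss follows (our illustration of the formula above; the function extracting the Morse-structure mask $M_f$ from the likelihood is treated as a given and stubbed out here as `morse_mask`, since the actual computation is the one described in Sec. 2.2):

```python
import torch
import torch.nn.functional as F

def morse_mask(likelihood: torch.Tensor) -> torch.Tensor:
    """Stub: binary mask of pixels on the pruned Morse structures S1 ∪ Ŝ2.
    The real mask comes from the discrete-Morse computation of Sec. 2.2;
    here we fake one for illustration."""
    return (likelihood > 0.5).float()

def dmt_loss(likelihood, gt, beta=1.0):
    """L(f, g) = L_bce(f, g) + beta * L_bce(f ∘ M_f, g ∘ M_f)."""
    with torch.no_grad():                   # the mask is piecewise constant in f
        m = morse_mask(likelihood)
    l_bce = F.binary_cross_entropy(likelihood, gt)
    l_dmt = F.binary_cross_entropy(likelihood * m, gt * m)
    return l_bce + beta * l_dmt

pred = torch.rand(2, 1, 64, 64, requires_grad=True)        # likelihood in [0, 1]
gt = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = dmt_loss(pred, gt, beta=0.5)
loss.backward()
print(float(loss), pred.grad.abs().mean().item())
```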
" }, { "heading": "3 EXPERIMENTS ON 2D AND 3D DATASETS", "text": "Experiments on 2D datasets. Six natural and biomedical 2D datasets are used: ISBI12 (Arganda-Carreras et al., 2015), ISBI13 (Arganda-Carreras et al., 2013), CREMI, CrackTree (Zou et al., 2012), Road (Mnih, 2013) and DRIVE (Staal et al., 2004). More details about the datasets are included in Appendix A.3.1. For all the experiments, we use 3-fold cross-validation to tune hyperparameters for both the proposed method and the other baselines, and report the mean performance over the validation set. This also holds for the 3D experiments.

Evaluation metrics. We use five different evaluation metrics: pixel-wise accuracy, DICE score, ARI, VOI, and, most importantly, the Betti number error, which directly compares the topology (number of handles/voids) of the segmentation and the ground truth. More details about the evaluation metrics are provided in Appendix A.3.2. The last three metrics are topology-aware.

Baselines. DIVE (Fakhry et al., 2016) is a popular neural network that predicts the probability of a pixel being a membrane (border) pixel or not. U-Net (Ronneberger et al., 2015) is one of the most powerful image segmentation methods trained with cross-entropy loss. Mosin. (Mosinska et al., 2018) uses a topology-aware loss based on the responses of selected filters from a pretrained CNN. TopoLoss (Hu et al., 2019) proposes a novel topological loss to learn to segment with correct topology. For all methods, we generate segmentations by thresholding the predicted likelihood maps at 0.5; this also holds for the 3D experiments.

Quantitative and qualitative results. Table 1 shows quantitative results for four 2D image datasets: ISBI13, CREMI, CrackTree and Road. The results are highlighted when they are significantly better, and statistical significance is determined by t-tests. Results for ISBI12 and DRIVE are included in Appendix A.3.3. The DMT-loss outperforms the others in both DICE score and topological accuracy (ARI, VOI and Betti error). Please note that here the backbone of TopoLoss is the same as in (Hu et al., 2019), a heavily engineered network. The performance of TopoLoss is worse if it is implemented with the same U-Net backbone as the DMT-loss. Comparisons with the same backbones can be found in Appendix A.3.4.

Fig. 6 shows qualitative results. Our method correctly segments fine structures such as membranes, roads and vessels. Our loss is a weighted combination of the cross-entropy and DMT losses. When $\beta=0$, the proposed method degrades to a standard U-Net. The performance improvement over all datasets (the U-Net and DMT rows in Table 1) demonstrates that our DMT-loss helps the deep neural network learn a structurally better segmentation.

Experiments on 3D datasets. We use three different biomedical 3D datasets: ISBI13, CREMI and 3Dircadb (Soler et al., 2010). ISBI13 and CREMI, which have been discussed above, are originally 3D datasets and can be used in both the 2D and 3D evaluations. We also evaluate our method on the open dataset 3Dircadb, which contains 20 enhanced CT volumes with artery annotations.

Evaluation metrics. We use evaluation metrics similar to the 2D case. Note that for 2D images, ARI and VOI are computed per slice and then averaged over slices as the final performance; for 3D images, we compute the performance on the whole volume. For 2D images, we compute the 1D Betti number (number of holes) to obtain the Betti error, while for 3D images, we compute the 2D Betti number (number of voids) to obtain the Betti error.

Baselines. 3D DIVE (Zeng et al., 2017), 3D U-Net (Çiçek et al., 2016) and 3D TopoLoss (Hu et al., 2019) are the 3D versions of DIVE, U-Net and TopoLoss. MALA (Funke et al., 2017) trains the U-Net using a new structured loss function.

Quantitative and qualitative results. Table 2 shows the quantitative results for three different 3D image datasets: ISBI13, CREMI and 3Dircadb. Our method outperforms existing methods in topological accuracy (in all three topology-aware metrics), which demonstrates the effectiveness of the proposed method. More qualitative results for 3D cases are included in Appendix A.3.5.

The benefit of the proposed DMT-loss. Instead of the isolated critical points captured by TopoLoss (Hu et al., 2019), the proposed DMT-loss captures whole V-paths as critical structures. Taking the patch in Fig. 4(a) as an example, TopoLoss identifies ≈80 isolated critical pixels for further training, whereas the critical structures captured by the DMT-loss contain ≈1000 critical pixels (Fig. 4(d)). We compare the efficiency of the DMT-loss and TopoLoss using the same backbone network, evaluated on the CREMI 2D dataset. Both methods start from a reasonable pre-trained likelihood map. TopoLoss achieves 1.113 (Betti error), taking ≈3h to converge, while the DMT-loss achieves 0.956 (Betti error), taking ≈1.2h to converge (the standard U-Net takes ≈0.5h instead). Aside from converging faster, the DMT-loss is also less likely to converge to low-quality local minima. We hypothesize that the loss landscape of the topological loss has more local minima than that of the DMT-loss, even though the global minima of both landscapes may have the same quality.
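Since the Betti number error drives most of the comparisons above, here is a small, self-contained sketch (an illustration added here; the toy input is made up) of how the Betti numbers of a 2D binary segmentation can be computed from the Euler characteristic of its cubical complex together with a connected-component count:

```python
import numpy as np
from scipy import ndimage

def betti_2d(seg):
    """beta_0 and beta_1 of a binary 2D segmentation, via the cubical complex
    built on foreground pixels (4-connectivity) and the Euler characteristic."""
    s = seg.astype(bool)
    V = int(s.sum())                                            # vertices (pixels)
    E = int((s[:, :-1] & s[:, 1:]).sum() + (s[:-1, :] & s[1:, :]).sum())   # edges
    Fc = int((s[:-1, :-1] & s[:-1, 1:] & s[1:, :-1] & s[1:, 1:]).sum())    # squares
    chi = V - E + Fc                                            # chi = beta0 - beta1
    beta0 = ndimage.label(s)[0]                                 # 4-connected components
    return beta0, beta0 - chi

ring = np.zeros((7, 7), dtype=int)
ring[1:6, 1:6] = 1
ring[2:5, 2:5] = 0                                              # one loop: beta0 = 1, beta1 = 1
print(betti_2d(ring))                                           # -> (1, 1)
```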
We hypothesize that the loss landscape of the topological loss has more local minima than that of the DMT-loss, even though the global minima of both landscapes may have the same quality.\nAblation study for the persistence threshold ε. As illustrated in Fig. 4, different persistence thresholds identify different critical structures. This ablation experiment is also conducted on the CREMI 2D dataset. When ε = 0.2 (see Fig. 4(d)), the proposed DMT-loss achieves the best performance, 0.982 (Betti Error). When ε = 0.1 and ε = 0.3, the performance drops to 1.206 and 2.105 (both in Betti Error), respectively. This makes sense for the following reasons: 1) for ε = 0.1, the DMT-loss captures many unnecessary structures which mislead the neural network; 2) for ε = 0.3, the DMT-loss misses many important critical structures, making the performance drop significantly. The threshold ε is chosen via cross-validation. The ablation study for the balancing term β is provided in Appendix A.3.6.\nComparison with other simpler choices. The proposed method essentially highlights geometrically rich locations. To demonstrate its effectiveness, we also compare with two baselines, Canny edge detection and ridge detection, which achieve 2.971 and 2.507 in terms of Betti Error respectively, much worse than our result (Betti Error: 0.982). Although our focus is the Betti error, we also report per-pixel accuracy for reference (see Table 5 in the Appendix for details). From the results we observe that the baseline models cannot fix topological errors, even though they achieve high per-pixel accuracy. Without persistent-homology-based pruning, these baselines generate too many noisy structures, and thus are not as effective as the DMT-loss.\nRobustness of the proposed method. We run another ablation study on images corrupted with Gaussian noise. The experiment is also conducted on the CREMI 2D dataset. From the results (see Table 6 in the Appendix for details), the DMT-loss is fairly robust and maintains good performance even at high noise levels. The reason is that the Morse structures are computed on the likelihood map, which is already robust to noise.\nComparison with a reweighted cross-entropy loss. We run an additional ablation study to compare with a baseline that reweights the FP and FN pixels in the cross-entropy loss (Reweighting CE). The weights of the FP/FN pixels are hyperparameters tuned via cross-validation. The Reweighting CE strategy achieves 2.753 in terms of Betti Error (on CREMI 2D data), and the DMT-loss is better than this baseline. The reason is that the DMT-loss specifically penalizes FP and FN pixels which are topology-critical, whereas Reweighting CE adds weights to FP/FN pixels without discrimination; a majority of these misclassified pixels are not topology-critical, lying near the boundary of the foreground. Please see Fig. 7 for an illustration." }, { "heading": "4 CONCLUSION", "text": "In this paper, we proposed a new DMT-loss to train deep image segmentation networks for better topological accuracy. With the power of discrete Morse theory (DMT), we can identify 1D skeletons and 2D patches which are important for topological accuracy. Trained with the new loss based on these global structures, the networks perform significantly better, especially near topologically challenging locations (such as weak spots of connections and membranes).\nAcknowledgements. The research of Xiaoling Hu and Chao Chen is partially supported by NSF IIS-1909038.
The research of Li Fuxin is partially supported by NSF IIS-1911232. The research of Yusu Wang is partially supported by NSF under grants CCF-1733798, RI-1815697, and by NIH under grants R01EB022899, RF1MH125317. Dimitris Samaras is partially supported by a gift from Adobe, the Partner University Fund, the NSF IUCRC for Visual and Decision Informatics and the SUNY2020 Infrastructure Transportation Security Center." }, { "heading": "A APPENDIX", "text": "In Sec. A.1, we discuss different types of topological errors. In Sec. A.2, we provide more details on the method. In Sec. A.3, more details and results for the experiments are provided." }, { "heading": "A.1 DIFFERENT TYPES OF TOPOLOGICAL ERRORS", "text": "A.1.1 ILLUSTRATION OF 3D TOPOLOGICAL ERRORS\nIn Sec. 2.1 of the main paper, we have already introduced discrete Morse theory with a 2D example. Here, we illustrate 3D topological errors with 3D examples.\nFig. 8 and Fig. 9 illustrate two different types of topological errors for 3D data. Fig. 8 illustrates an index-1 topological error on 3D synthetic data; 3D EM/neuron data exhibit the same type of topological error as the synthetic data. Fig. 9 illustrates an index-2 topological error on 3D vessel data." }, { "heading": "A.1.2 FALSE NEGATIVE AND FALSE POSITIVE ERRORS", "text": "In the main paper, we mentioned that the proposed DMT-loss can capture and fix two different types of topological errors: false negative and false positive. We illustrate these two types in Fig. 10. The two highlighted red rectangles represent the two types of topological errors: 1) the red rectangle on the right shows a sample false negative error, where part of the membrane structure is missing due to a blurred region near the membrane; 2) the red rectangle on the left shows a sample false positive error, in this specific case caused by mitochondria which are not the boundary of neurons.\nIn summary, with the help of the proposed DMT-loss, we can identify both types of topological errors and then force the network to increase/decrease its likelihood values on these structures, so as to segment the images with correct topology." }, { "heading": "A.2 ADDITIONAL DETAILS ON THE METHOD", "text": "" }, { "heading": "A.2.1 DISCRETE MORSE THEORY", "text": "We view a dD image, d = 2 or 3, as a d-dimensional cubical complex, meaning it consists of 0-, 1-, 2- and 3-dimensional cells corresponding to vertices, edges, squares, and voxels (cubes) as its building blocks.\nDiscrete Morse theory (DMT), originally introduced in (Forman, 1998; 2002), is a combinatorial version of Morse theory for general cell complexes. There are many beautiful results established for DMT, analogous to classical Morse theory. We will, however, only briefly introduce the concepts relevant to the present paper, and we describe them in the setting of cubical complexes (instead of simplicial complexes) as it is more suitable for images.\nLet K be a cubical complex. Given a p-cell τ, we write σ < τ if σ is a (p − 1)-dimensional face of τ. A discrete gradient vector (also called a V-pair for simplicity) is a pair (τ, σ) where σ < τ. Now suppose we are given a collection of V-pairs M(K) over the cubical complex K. A sequence of cells π : τ_0^{p+1}, σ_1^p, τ_1^{p+1}, σ_2^p, · · · , σ_k^p, τ_k^{p+1}, σ_{k+1}^p, where the superscript p in α^p stands for the dimension of the cell α, forms a V-path if (τ_i, σ_i) ∈ M(K) for any i ∈ [1, k] and σ_i < τ_{i−1} for any i ∈ [1, k + 1].
A V-path π is acyclic if (τ_0, σ_{k+1}) ∉ M(K). A collection of V-pairs M(K) forms a discrete gradient vector field⁴ if (cond-i) each cell of K appears in at most one pair in M(K), and (cond-ii) all V-paths in M(K) are acyclic. Given a discrete gradient vector field M(K), a simplex σ ∈ K is critical if it is not in any V-pair in M(K). Even though a discrete gradient vector (a V-pair), say (τ, σ), is a combinatorial pair instead of a real vector, it still indicates a “flow” from τ to its face σ. A V-path thus corresponds to a flow path (integral line) in the smooth setting. However, to make a collection of V-pairs a valid analog of a gradient field, (cond-i) says that at each simplex there should be only one “flow” direction, while (cond-ii) is necessary because flow lines traced by the gradient can only go down in function values and thus never come back (hence acyclic).\n⁴We will not introduce the concept of a discrete Morse function, as the discrete gradient vector field is sufficient to define all relevant notions.\nA critical simplex has “vanishing gradient”, as it is not involved in any V-pair in M(K) (i.e., there is no flow at this simplex). Given a 2D cubical complex K and a discrete gradient vector field M(K), we can view critical 0-, 1- and 2-cells as minima, saddle points, and maxima, respectively. If K is 3D, then we can view critical 0-, 1-, 2- and 3-cells as minima, index-1 saddles, index-2 saddles and maxima, respectively.\nHence, a 1-stable manifold in 2D corresponds to a V-path connecting a critical square (a maximum) and a critical edge (a saddle), while in 3D it is a V-path connecting a critical cube and a critical square.\nMorse cancellation. A given discrete gradient field M(K) could be noisy; e.g., there may be shallow valleys whose surrounding mountain ridges should be ignored. Fortunately, discrete Morse theory provides an elegant and purely combinatorial way to cancel pairs of critical simplices (and thus reduce their stable manifolds). In particular, given M(K), a pair of critical simplices ⟨δ^{p+1}, γ^p⟩ is cancellable if there is a unique V-path π : δ = δ_0, γ_1, δ_1, . . . , δ_s, γ_{s+1} = γ from δ to γ. The Morse cancellation operation simply reverses all V-pairs along this path: it removes all V-pairs along the path and adds (δ_{i−1}, γ_i) to M(K) for every i ∈ [1, s + 1]. It is easy to check that after the cancellation neither δ nor γ is critical." }, { "heading": "A.2.2 PERSISTENCE PRUNING", "text": "We can extend the vertex-valued function ρ to a function ρ : K → R by setting ρ(σ) for each cell σ to be the maximum ρ-value over its vertices. How do we obtain a discrete gradient vector field from such a function ρ : K → R? Following the approach developed in (Wang et al., 2015; Dey et al., 2018), we initialize a trivial discrete gradient vector field where all cells are critical. Let ε > 0 be a threshold for simplification. We then run the persistence algorithm (Edelsbrunner et al., 2000) on the super-level set filtration induced by ρ and pair up all cells in K; the resulting set of pairs is denoted by Pρ(K). Persistent homology is one of the most important developments in the field of topological data analysis in the past two decades (Edelsbrunner & Harer, 2010; Edelsbrunner et al., 2000; Zomorodian & Carlsson, 2005). We will not introduce it formally here. Imagine we grow the complex K by starting from the empty set and gradually including more and more cells in decreasing order of ρ values. (More formally, this is the so-called super-level set filtration of K induced by ρ.)
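To make the super-level set filtration concrete before continuing, a toy sketch of 0-dimensional persistence for a function on a path graph, using a union-find over vertices added in decreasing ρ order; this is an illustrative simplification of the cubical-complex algorithm, not the implementation used in the paper:

```python
def persistence_0d(rho):
    """0-dim persistence pairs of a function on a path graph (vertices
    0..n-1, edges i~i+1) under the super-level set filtration."""
    order = sorted(range(len(rho)), key=lambda i: -rho[i])  # decreasing rho
    parent, birth, pairs = {}, {}, []

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for v in order:
        parent[v], birth[v] = v, rho[v]
        for u in (v - 1, v + 1):  # neighbors already added to the complex?
            if u in parent:
                ru, rv = find(u), find(v)
                if ru == rv:
                    continue
                # elder rule: the component born at the lower value dies
                young, old = (ru, rv) if birth[ru] < birth[rv] else (rv, ru)
                pairs.append((birth[young], rho[v]))
                parent[young] = old
    # keep non-trivial pairs; pers = birth - death, one component never dies
    return [(b, d) for (b, d) in pairs if b > d]

# Example: two peaks of height 3 and 2; the lower peak has persistence 1.
print(persistence_0d([0, 3, 1, 2, 0]))  # -> [(2, 1)]
```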
Through this course, new topological features can be created upon adding a simplex σ, and sometimes a feature can be destroyed upon adding a simplex τ. The persistence algorithm (Edelsbrunner et al., 2000) pairs up simplices; that is, its output is a set of pairs of simplices Pρ(K) = {(σ, τ)}, where each pair captures the birth and death of a topological feature during this evolution. The persistence of a pair, say p = (σ, τ), is defined as pers(p) = ρ(σ) − ρ(τ), measuring how long the topological feature captured by p lives in terms of ρ. In this case, we also write pers(σ) = pers(τ) = pers(p); the persistence of a simplex (say σ or τ) can be viewed as the importance of that simplex.\nWith this intuition of the persistence pairings, we next apply the Morse cancellation operation to the pairs (σ, τ) ∈ Pρ(K), in increasing order of their persistence, whenever (i) the persistence pers(σ, τ) < ε (i.e., the pair has low persistence and is thus unimportant), and (ii) the pair (σ, τ) is cancellable.\nLet M_ε(K) be the resulting discrete gradient field after simplifying all low-persistence critical simplices. We then construct the 1-stable and 2-stable manifolds of the remaining (high-persistence, and thus important) saddles (critical 1-cells and 2-cells) from M_ε(K). Let S1(ε) and S2(ε) be the resulting collections of 1- and 2-stable manifolds, respectively. In particular, see the illustration of a V-path (highlighted in black) corresponding to a 1-stable manifold of the green saddle in Fig. 3(c)." }, { "heading": "A.2.3 MORE DETAILS ON THE APPROXIMATION OF S2 VIA Ŝ2.", "text": "We approximate S2 by taking the boundary of the stable manifolds of the minima (basins/valleys in the terrain). This is like a watershed algorithm: growing the basins from all minima until they meet. The stable manifolds of the minima are approximated using spanning trees. This algorithm is inspired by the continuous analog for Morse functions." }, { "heading": "A.3 EXPERIMENTS", "text": "" }, { "heading": "A.3.1 DATASETS", "text": "We conduct experiments on six different 2D datasets. The first three datasets are neuron images (electron microscopy images). The task is to segment membranes and eventually partition the image into neuron regions.\nCREMI contains 125 images. The resolution of each image is 1250x1250.\nISBI12 (Arganda-Carreras et al., 2015) contains 30 images. The resolution of each image is 512x512.\nISBI13 (Arganda-Carreras et al., 2013) contains 100 images. The resolution of each image is 1024x1024.\nThe next three are natural image datasets, and their structures are vital for their functionality.\nCrackTree (Zou et al., 2012) contains 206 images of road cracks. The resolution of each image is 600x800.\nRoad (Mnih, 2013) has 1108 images from the Massachusetts Roads Dataset. The resolution of each image is 1500x1500.\nDRIVE (Staal et al., 2004) is a retinal vessel segmentation dataset with 20 images. The resolution of each image is 584x565." }, { "heading": "A.3.2 EVALUATION METRICS", "text": "We use five different evaluation metrics to evaluate the proposed DMT-loss.\nPixel-wise accuracy: pixel-wise accuracy is one of the most common metrics, measuring the percentage of correctly classified pixels.\nDICE score: the DICE score (also known as the DICE coefficient or DICE similarity index) is the same as the F1 score.\nAdapted Rand Index (ARI): ARI is the maximal F-score of the foreground-restricted Rand index, a measure of similarity between two clusterings.
In this version of the Rand index, we exclude the zero component of the original labels (background pixels of the ground truth).\nVariation of Information (VOI): VOI is a measure of the distance between two clusterings. It is closely related to mutual information; indeed, it is a simple linear expression involving the mutual information.\nBetti number error: the Betti number error directly compares the topology (number of handles) between the segmentation and the ground truth. We randomly sample patches over the segmentation and report the average absolute difference between the Betti numbers of these patches and those of the corresponding ground truth patches." }, { "heading": "A.3.3 QUANTITATIVE RESULTS FOR MORE 2D DATASETS", "text": "Table 3 shows quantitative results for ISBI12 and DRIVE." }, { "heading": "A.3.4 FAIRNESS COMPARISONS WITH SAME BACKBONE NETWORKS", "text": "We copy the TopoLoss numbers from (Hu et al., 2019), which correspond to TopoLoss+DIVE, whereas in this paper we use U-Net as the backbone. With U-Net, TopoLoss performs worse and the gap becomes even bigger. The DIVE network used in (Hu et al., 2019) is more expensive and specifically designed for EM images. We choose U-Net in this manuscript as it is lightweight and easy to generalize to many datasets. We also apply our backbone-agnostic DMT-loss to the DIVE network (Fakhry et al., 2016). All the experiments are conducted on the CREMI 2D dataset. The quantitative results (Betti Error) are shown in Table 4." }, { "heading": "A.3.5 QUALITATIVE RESULTS FOR 3D DATASETS", "text": "Fig. 11 shows qualitative results for the ISBI13 dataset." }, { "heading": "A.3.6 ABLATION STUDY FOR THE BALANCING TERM β", "text": "We conduct another ablation study for the balancing weight β. Note that β is dataset-dependent. We conduct the ablation experiment on the CREMI 2D dataset. When β = 3, the proposed DMT-loss achieves the best performance, 0.982 (Betti Error). When β = 2 and β = 4, the performance drops to 1.074 and 1.181 (both in Betti Error), respectively. The parameter β is also chosen via cross-validation." }, { "heading": "A.3.7 COMPARISON WITH OTHER SIMPLER CHOICES", "text": "Table 5 shows the results compared with Canny edge detection and ridge detection." }, { "heading": "A.3.8 RESULTS WITH DIFFERENT NOISE LEVELS", "text": "In Table 6, 10% denotes the percentage of corrupted pixels, and δ/2δ denotes the standard deviation of the added Gaussian noise. For reference, we note that the performance of the standard U-Net is 3.016 (Betti Error)." } ]
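As a closing reference for the Betti number error described in A.3.2, the sketch below estimates the 1-dimensional Betti number of a binary 2D patch by counting background components that do not touch the patch border (holes), and averages the absolute difference over randomly sampled patches; this is an illustrative approximation assuming scipy, not the evaluation code behind the reported numbers:

```python
import numpy as np
from scipy import ndimage

def betti_1(mask):
    """Approximate 1D Betti number (holes) of a binary 2D mask: count
    background components that do not touch the patch border."""
    bg, n = ndimage.label(mask == 0)
    border = set(np.unique(np.concatenate(
        [bg[0], bg[-1], bg[:, 0], bg[:, -1]])))
    return sum(1 for lab in range(1, n + 1) if lab not in border)

def betti_error(pred, gt, patch=64, n_patches=100, seed=0):
    """Average |betti_1(pred) - betti_1(gt)| over random patches."""
    rng = np.random.default_rng(seed)
    h, w = gt.shape
    errs = []
    for _ in range(n_patches):
        y = rng.integers(0, h - patch + 1)
        x = rng.integers(0, w - patch + 1)
        errs.append(abs(betti_1(pred[y:y + patch, x:x + patch])
                        - betti_1(gt[y:y + patch, x:x + patch])))
    return float(np.mean(errs))
```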
2021
TOPOLOGY-AWARE SEGMENTATION USING DISCRETE MORSE THEORY
SP:9c3d7291bb936e41d94e0357e2085cb1621d4f3a
[ "This paper presents a new unsupervised learning method for learning latent representations for visual RL control domains. The method, Augmented Temporal Contrast (ATC), can be used alone to learn a representation to be combined with an RL algorithm, or as an auxiliary task in an end-to-end system. ATC matches or outperforms comparable end-to-end systems in several environments. The paper provides an extensive experimental study to support its claims." ]
In an effort to overcome limitations of reward-driven feature learning in deep reinforcement learning (RL) from images, we propose decoupling representation learning from policy learning. To this end, we introduce a new unsupervised learning (UL) task, called Augmented Temporal Contrast (ATC), which trains a convolutional encoder to associate pairs of observations separated by a short time difference, under image augmentations and using a contrastive loss. In online RL experiments, we show that training the encoder exclusively using ATC matches or outperforms end-to-end RL in most environments. Additionally, we benchmark several leading UL algorithms by pre-training encoders on expert demonstrations and using them, with weights frozen, in RL agents; we find that agents using ATC-trained encoders outperform all others. We also train multi-task encoders on data from multiple environments and show generalization to different downstream RL tasks. Finally, we ablate components of ATC, and introduce a new data augmentation to enable replay of (compressed) latent images from pre-trained encoders when RL requires augmentation. Our experiments span visually diverse RL benchmarks in DeepMind Control, DeepMind Lab, and Atari, and our complete code is available at hiddenurl.
[]
[ { "authors": [ "Ankesh Anand", "Evan Racah", "Sherjil Ozair", "Yoshua Bengio", "Marc-Alexandre Côté", "R Devon Hjelm" ], "title": "Unsupervised state representation learning in atari", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Marc G Bellemare", "Yavar Naddaf", "Joel Veness", "Michael Bowling" ], "title": "The arcade learning environment: An evaluation platform for general agents", "venue": "Journal of Artificial Intelligence Research,", "year": 2013 }, { "authors": [ "Christopher Berner", "Greg Brockman", "Brooke Chan", "Vicki Cheung", "Przemyslaw Debiak", "Christy Dennison", "David Farhi", "Quirin Fischer", "Shariq Hashme", "Chris Hesse" ], "title": "Dota 2 with large scale deep reinforcement learning", "venue": "arXiv preprint arXiv:1912.06680,", "year": 2019 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": null, "year": 2002 }, { "authors": [ "Karl Cobbe", "Christopher Hesse", "Jacob Hilton", "John Schulman" ], "title": "Leveraging procedural generation to benchmark reinforcement learning", "venue": "arXiv preprint arXiv:1912.01588,", "year": 2019 }, { "authors": [ "Coline Devin", "Pieter Abbeel", "Trevor Darrell", "Sergey Levine" ], "title": "Deep object-centric representations for generalizable robot learning", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Alexey Dosovitskiy", "German Ros", "Felipe Codevilla", "Antonio Lopez", "Vladlen Koltun" ], "title": "Carla: An open urban driving simulator", "venue": "arXiv preprint arXiv:1711.03938,", "year": 2017 }, { "authors": [ "C. Finn", "Xin Yu Tan", "Yan Duan", "T. Darrell", "S. Levine", "P. 
Abbeel" ], "title": "Deep spatial autoencoders for visuomotor learning", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2016 }, { "authors": [ "Jean-Bastien Grill", "Florian Strub", "Florent Altché", "Corentin Tallec", "Pierre H Richemond", "Elena Buchatskaya", "Carl Doersch", "Bernardo Avila Pires", "Zhaohan Daniel Guo", "Mohammad Gheshlaghi Azar" ], "title": "Bootstrap your own latent: A new approach to self-supervised learning", "venue": "arXiv preprint arXiv:2006.07733,", "year": 2020 }, { "authors": [ "Daniel Guo", "Bernardo Avila Pires", "Bilal Piot", "Jean-bastien Grill", "Florent Altché", "Rémi Munos", "Mohammad Gheshlaghi Azar" ], "title": "Bootstrap latent-predictive representations for multitask reinforcement learning", "venue": null, "year": 2004 }, { "authors": [ "Zhaohan Daniel Guo", "Mohammad Gheshlaghi Azar", "Bilal Piot", "Bernardo A Pires", "Rémi Munos" ], "title": "Neural predictive belief representations", "venue": "arXiv preprint arXiv:1811.06407,", "year": 2018 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Jimmy Ba", "Mohammad Norouzi" ], "title": "Dream to control: Learning behaviors by latent imagination", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": null, "year": 1911 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Olivier J Hénaff", "Aravind Srinivas", "Jeffrey De Fauw", "Ali Razavi", "Carl Doersch", "SM Eslami", "Aaron van den Oord" ], "title": "Data-efficient image recognition with contrastive predictive coding", "venue": null, "year": 1905 }, { "authors": [ "Matteo Hessel", "Joseph Modayil", "Hado van Hasselt", "Tom Schaul", "Georg Ostrovski", "Will Dabney", "Dan Horgan", "Bilal Piot", "Mohammad Azar", "David Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Matteo Hessel", "Hubert Soyer", "Lasse Espeholt", "Wojciech Czarnecki", "Simon Schmitt", "Hado van Hasselt" ], "title": "Multi-task deep reinforcement learning with popart", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Max Jaderberg", "Volodymyr Mnih", "Wojciech Marian Czarnecki", "Tom Schaul", "Joel Z Leibo", "David Silver", "Koray Kavukcuoglu" ], "title": 
"Reinforcement learning with unsupervised auxiliary tasks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Max Jaderberg", "Wojciech M Czarnecki", "Iain Dunning", "Luke Marris", "Guy Lever", "Antonio Garcia Castaneda", "Charles Beattie", "Neil C Rabinowitz", "Ari S Morcos", "Avraham Ruderman" ], "title": "Humanlevel performance in 3d multiplayer games with population-based reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Dmitry Kalashnikov", "Alex Irpan", "Peter Pastor", "Julian Ibarz", "Alexander Herzog", "Eric Jang", "Deirdre Quillen", "Ethan Holly", "Mrinal Kalakrishnan", "Vincent Vanhoucke" ], "title": "Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation", "venue": "arXiv preprint arXiv:1806.10293,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Thomas Kipf", "Elise van der Pol", "Max Welling" ], "title": "Contrastive learning of structured world models", "venue": "arXiv preprint arXiv:1911.12247,", "year": 2019 }, { "authors": [ "Ilya Kostrikov", "Denis Yarats", "Rob Fergus" ], "title": "Image augmentation is all you need: Regularizing deep reinforcement learning from pixels", "venue": "arXiv preprint arXiv:2004.13649,", "year": 2020 }, { "authors": [ "Michael Laskin", "Kimin Lee", "Adam Stooke", "Lerrel Pinto", "Pieter Abbeel", "Aravind Srinivas" ], "title": "Reinforcement learning with augmented data", "venue": "arXiv preprint arXiv:2004.14990,", "year": 2020 }, { "authors": [ "Michael Laskin", "Aravind Srinivas", "Pieter Abbeel" ], "title": "Curl: Contrastive unsupervised representations for reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Alex X Lee", "Anusha Nagabandi", "Pieter Abbeel", "Sergey Levine" ], "title": "Stochastic latent actor-critic: Deep reinforcement learning with a latent variable model", "venue": null, "year": 1907 }, { "authors": [ "Kuang-Huei Lee", "Ian Fischer", "Anthony Liu", "Yijie Guo", "Honglak Lee", "John Canny", "Sergio Guadarrama" ], "title": "Predictive information accelerates learning in rl", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Sergey Levine", "Chelsea Finn", "Trevor Darrell", "Pieter Abbeel" ], "title": "End-to-end training of deep visuomotor policies", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Bogdan Mazoure", "Remi Tachet des Combes", "Thang Doan", "Philip Bachman", "R Devon Hjelm" ], "title": "Deep reinforcement and infomax learning", "venue": "arXiv preprint arXiv:2006.07217,", "year": 2020 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, 518(7540):529–533,", "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal 
policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Max Schwarzer", "Ankesh Anand", "Rishab Goel", "R Devon Hjelm", "Aaron Courville", "Philip Bachman" ], "title": "Data-efficient reinforcement learning with momentum predictive representations", "venue": "arXiv preprint arXiv:2007.05929,", "year": 2020 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Greg Wayne", "Chia-Chun Hung", "David Amos", "Mehdi Mirza", "Arun Ahuja", "Agnieszka GrabskaBarwinska", "Jack Rae", "Piotr Mirowski", "Joel Z Leibo", "Adam Santoro" ], "title": "Unsupervised predictive memory in a goal-directed agent", "venue": "arXiv preprint arXiv:1803.10760,", "year": 2018 }, { "authors": [ "Wilson Yan", "Ashwin Vangipuram", "Pieter Abbeel", "Lerrel Pinto" ], "title": "Learning predictive representations for deformable objects using contrastive estimation", "venue": "arXiv preprint arXiv:2003.05436,", "year": 2020 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer", "venue": "arXiv preprint arXiv:1612.03928,", "year": 2016 }, { "authors": [ "Hessel" ], "title": "2019) (see the appendix therein), except we used only empirical returns, computed offline (without bootstrapping). For CPC, we tried training batch shapes, batch× time", "venue": null, "year": 2019 }, { "authors": [ "Jaderberg" ], "title": "2017), again with full empirical returns. We ran each algorithm for up to 1e5 updates, although final ATC results used 5e4 updates. We ran each RL agent with and without observation normalization on the latent image and observed no difference in performance. Pretraining data was 125e3 samples sourced from the replay buffer of DQN agents trained for 15e6", "venue": null, "year": 2017 } ]
[ { "heading": null, "text": "In an effort to overcome limitations of reward-driven feature learning in deep reinforcement learning (RL) from images, we propose decoupling representation learning from policy learning. To this end, we introduce a new unsupervised learning (UL) task, called Augmented Temporal Contrast (ATC), which trains a convolutional encoder to associate pairs of observations separated by a short time difference, under image augmentations and using a contrastive loss. In online RL experiments, we show that training the encoder exclusively using ATC matches or outperforms end-to-end RL in most environments. Additionally, we benchmark several leading UL algorithms by pre-training encoders on expert demonstrations and using them, with weights frozen, in RL agents; we find that agents using ATC-trained encoders outperform all others. We also train multi-task encoders on data from multiple environments and show generalization to different downstream RL tasks. Finally, we ablate components of ATC, and introduce a new data augmentation to enable replay of (compressed) latent images from pre-trained encoders when RL requires augmentation. Our experiments span visually diverse RL benchmarks in DeepMind Control, DeepMind Lab, and Atari, and our complete code is available at hiddenurl." }, { "heading": "1 INTRODUCTION", "text": "Ever since the first fully-learned approach succeeded at playing Atari games from screen images (Mnih et al., 2015), standard practice in deep reinforcement learning (RL) has been to learn visual features and a control policy jointly, end-to-end. Several such deep RL algorithms have matured (Hessel et al., 2018; Schulman et al., 2017; Mnih et al., 2016; Haarnoja et al., 2018) and have been successfully applied to domains ranging from real-world (Levine et al., 2016; Kalashnikov et al., 2018) and simulated robotics (Lee et al., 2019; Laskin et al., 2020a; Hafner et al., 2020) to sophisticated video games (Berner et al., 2019; Jaderberg et al., 2019), and even high-fidelity driving simulators (Dosovitskiy et al., 2017). While the simplicity of end-to-end methods is appealing, relying on the reward function to learn visual features can be severely limiting. For example, it makes features difficult to acquire under sparse rewards, and it can narrow their utility to a single task. Although our intent is broader than to focus on either sparse-reward or multi-task settings, they arise naturally in our studies. We investigate how to learn visual representations which are agnostic to rewards, without degrading the control policy.\nA number of recent works have significantly improved RL performance by introducing auxiliary losses, which are unsupervised tasks that provide feature-learning signal to the convolutional neural network (CNN) encoder, in addition to the RL loss (Jaderberg et al., 2017; van den Oord et al., 2018; Laskin et al., 2020b; Guo et al., 2020; Schwarzer et al., 2020). Meanwhile, in the field of computer vision, recent efforts in unsupervised and self-supervised learning (Chen et al., 2020; Grill et al., 2020; He et al., 2019) have demonstrated that powerful feature extractors can be learned without labels, as evidenced by their usefulness for downstream tasks such as ImageNet classification. Together, these advances suggest that visual features for RL could possibly be learned entirely without rewards, which would grant greater flexibility to improve overall learning performance.
To our knowledge, however, no single unsupervised learning (UL) task has been shown adequate for this purpose in general vision-based environments.\nIn this paper, we demonstrate the first decoupling of representation learning from reinforcement learning that performs as well as or better than end-to-end RL. We update the encoder weights using only UL and train a control policy independently, on the (compressed) latent images. This capability stands in contrast to previous state-of-the-art methods, which trained the UL and RL objectives jointly, and to Laskin et al. (2020b), which observed diminished performance with decoupled encoders.\nOur main enabling contribution is a new unsupervised task tailored to reinforcement learning, which we call Augmented Temporal Contrast (ATC). ATC requires a model to associate observations from nearby time steps within the same trajectory (Anand et al., 2019). Observations are encoded via a convolutional neural network (shared with the RL agent) into a small latent space, where the InfoNCE loss is applied (van den Oord et al., 2018). Within each randomly sampled training batch, the positive observation, ot+k, for every anchor, ot, serves as a negative for all other anchors. For regularization, observations undergo stochastic data augmentation (Laskin et al., 2020b) prior to encoding, namely random shift (Kostrikov et al., 2020), and a momentum encoder (He et al., 2020; Laskin et al., 2020b) is used to process the positives. A learned predictor layer further processes the anchor code (Grill et al., 2020; Chen et al., 2020) prior to contrasting. In summary, our algorithm is a novel combination of elements that enables generic learning of the structure of observations and transitions in MDPs without requiring rewards or actions as input.\nWe include extensive experimental studies establishing the effectiveness of our algorithm in a visually diverse range of common RL environments: DeepMind Control Suite (DMControl; Tassa et al. 2018), DeepMind Lab (DMLab; Beattie et al. 2016), and Atari (Bellemare et al., 2013). Our experiments span discrete and continuous control, 2D and 3D visuals, and both on-policy and off-policy RL algorithms. Complete code for all of our experiments is available at hiddenurl. Our empirical contributions are summarized as follows:\nOnline RL with UL: We find that the convolutional encoder trained solely with the unsupervised ATC objective can fully replace the end-to-end RL encoder without degrading policy performance. ATC achieves nearly equal or greater performance in all DMControl and DMLab environments tested and in 5 of the 8 Atari games tested. In the other 3 Atari games, using ATC as an auxiliary loss or for weight initialization still brings improvements over end-to-end RL.\nEncoder Pre-Training Benchmarks: We pre-train the convolutional encoder to convergence on expert demonstrations, and evaluate it by training an RL agent using the encoder with weights frozen. We find that ATC matches or outperforms all prior UL algorithms across all tested domains, demonstrating that ATC is a state-of-the-art UL algorithm for RL.\nMulti-Task Encoders: An encoder is trained on demonstrations from multiple environments, and is evaluated, with weights frozen, in separate downstream RL agents. A single encoder trained on four DMControl environments generalizes successfully, performing equally to or better than end-to-end RL in four held-out environments.
Similar attempts to generalize across eight diverse Atari games result in mixed performance, confirming some limited feature sharing among games.\nAblations and Encoder Analysis: Components of ATC are ablated, showing their individual effects. Additionally, data augmentation is shown to be necessary in DMControl during RL even when using a frozen encoder. We introduce a new augmentation, subpixel random shift, which matches performance while augmenting the latent images, unlocking computation and memory benefits." }, { "heading": "2 RELATED WORK", "text": "Several recent works have used unsupervised/self-supervised representation learning methods to improve performance in RL. The UNREAL agent (Jaderberg et al., 2017) introduced unsupervised auxiliary tasks to deep RL, including the Pixel Control task, a Q-learning method requiring predictions of screen changes in discrete control environments, which has become a standard in DMLab (Hessel et al., 2019). CPC (van den Oord et al., 2018) applied contrastive losses over multiple time steps as an auxiliary task for the convolutional and recurrent layers of RL agents, and it has been extended with future action-conditioning (Guo et al., 2018). Recently, PBL (Guo et al., 2020) surpassed these methods with an auxiliary loss of forward and backward predictions in the recurrent latent space using partial agent histories. Whereas the trend is toward increasing sophistication in auxiliary recurrent architectures, our algorithm is markedly simpler, requiring only observations, and yet it proves sufficient in partially observed settings (POMDPs).\nST-DIM (Anand et al., 2019) introduced various temporal, contrastive losses, including ones that operate on “local” features from an intermediate layer within the encoder, without data augmentation. CURL (Laskin et al., 2020b) introduced an augmented, contrastive auxiliary task similar to ours, including a momentum encoder but without temporal contrast. Mazoure et al. (2020) provided extensive analysis pertaining to InfoNCE losses on functions of successive time steps in MDPs, including local features in their auxiliary loss (DRIML) similar to ST-DIM, and finally conducted experiments using global temporal contrast of augmented observations in the Procgen (Cobbe et al., 2019) environment. Most recently, MPR (Schwarzer et al., 2020) combined data augmentation with multi-step, convolutional forward modeling and a similarity loss to improve DQN agents in the Atari 100k benchmark. Hafner et al. (2019; 2020); Lee et al. (2019) proposed to leverage world-modeling in a latent space for continuous control. A small number of model-free methods have attempted to decouple encoder training from the RL loss as ablations, but observed reduced performance relative to end-to-end RL (Laskin et al., 2020b; Lee et al., 2020). None have previously been shown effective in as diverse a collection of RL environments as ours (Bellemare et al., 2013; Tassa et al., 2018; Beattie et al., 2016).\nFinn et al. (2016); Ha & Schmidhuber (2018) are example works which pretrained encoder features in advance using image reconstruction losses such as the VAE (Kingma & Welling, 2013). Devin et al. (2018); Kipf et al. (2019) pretrained object-centric representations, the latter learning a forward model by way of contrastive losses; Yan et al. (2020) introduced a similar technique to learn encoders supporting manipulation of deformable objects by traditional control methods.
MERLIN (Wayne et al., 2018) trained a convolutional encoder and a sophisticated memory module online, detached from the RL agent, which learned read-only accesses to memory. It used reconstruction and one-step latent-prediction losses and achieved high performance in DMLab-like environments with extreme partial observability. Our loss function may benefit those settings, as it outperforms similar reconstruction losses in our experiments. Decoupling unsupervised pretraining from downstream tasks is common in computer vision (Hénaff et al., 2019; He et al., 2019; Chen et al., 2020) and has the favorable property of providing task-agnostic features which can be used for training smaller task-specific networks, yielding significant gains in computational efficiency over end-to-end methods." }, { "heading": "3 AUGMENTED TEMPORAL CONTRAST", "text": "Our unsupervised learning task, Augmented Temporal Contrast (ATC), requires a model to associate an observation, ot, with one from a specified, near-future time step, ot+k. Within each training batch, we apply stochastic data augmentation to the observations (Laskin et al., 2020b), namely random shift (Kostrikov et al., 2020), which is simple to implement and provides highly effective regularization in most cases. The augmented observations are encoded into a small latent space where a contrastive loss is applied. This task encourages the learned encoder to extract meaningful elements of the structure of the MDP from observations.\nOur architecture for ATC consists of four learned components: (i) a convolutional encoder, fθ, which processes the anchor observation, ot, into the latent image zt = fθ(AUG(ot)); (ii) a linear global compressor, gφ, to produce a small latent code vector ct = gφ(zt); (iii) a residual predictor MLP, hψ, which acts as an implicit forward model to advance the code, pt = hψ(ct) + ct; and (iv) a contrastive transformation matrix, W. To process the positive observation, ot+k, into the target code c̄t+k = gφ̄(fθ̄(AUG(ot+k))), we use a momentum encoder (He et al., 2019) parameterized as a slowly moving average of the weights of the learned encoder and compressor layer:\nθ̄ ← (1 − τ)θ̄ + τθ ; φ̄ ← (1 − τ)φ̄ + τφ . (1)\nThe complete architecture is shown in Figure 1. The convolutional encoder, fθ, alone is shared with the RL agent.\nWe employ the InfoNCE loss (Gutmann & Hyvärinen, 2010; van den Oord et al., 2018) using logits computed bilinearly, as l = ptWc̄t+k. In our implementation, every anchor in the training batch utilizes the positives corresponding to all other anchors as its negative examples. Denoting an observation indexed from dataset O as oi, and its positive as oi+, the logits can be written as l_{i,j+} = piWc̄j+; our loss function in practice is:\nLATC = −E_O [ log ( exp l_{i,i+} / ∑_{oj∈O} exp l_{i,j+} ) ] . (2)" }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 EVALUATION ENVIRONMENTS AND ALGORITHMS", "text": "We evaluate ATC on three standard, visually diverse RL benchmarks: the DeepMind Control Suite (DMControl; Tassa et al. 2018), Atari games in the Arcade Learning Environment (Bellemare et al., 2013), and DeepMind Lab (DMLab; Beattie et al. 2016). Atari requires discrete control in arcade-style games. DMControl comprises continuous-control robotic locomotion and manipulation tasks. In contrast, DMLab requires the RL agent to reason in more visually complex 3D maze environments with partial observability.\nWe use ATC to enhance both on-policy and off-policy RL algorithms.
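As a concrete reference for Eqs. (1) and (2) above, a minimal sketch of the ATC objective, assuming PyTorch; the encoder, compressor, and predictor modules are placeholders, the random shift augmentation is omitted, and details such as initialization are illustrative assumptions:

```python
import copy
import torch
import torch.nn.functional as F

class ATC(torch.nn.Module):
    def __init__(self, encoder, compressor, predictor, code_dim, tau=0.01):
        super().__init__()
        self.f, self.g, self.h = encoder, compressor, predictor
        self.f_bar = copy.deepcopy(encoder)     # momentum encoder, Eq. (1)
        self.g_bar = copy.deepcopy(compressor)
        for m in (self.f_bar, self.g_bar):      # momentum copies: no gradients
            for p in m.parameters():
                p.requires_grad_(False)
        self.W = torch.nn.Parameter(0.01 * torch.randn(code_dim, code_dim))
        self.tau = tau

    @torch.no_grad()
    def momentum_update(self):
        for m, m_bar in ((self.f, self.f_bar), (self.g, self.g_bar)):
            for p, p_bar in zip(m.parameters(), m_bar.parameters()):
                p_bar.mul_(1 - self.tau).add_(self.tau * p)

    def loss(self, obs, obs_pos):
        c = self.g(self.f(obs))                 # anchor code
        p = self.h(c) + c                       # residual predictor
        with torch.no_grad():
            c_bar = self.g_bar(self.f_bar(obs_pos))  # target code
        logits = p @ self.W @ c_bar.t()         # l[i, j] = p_i W c_bar_j
        labels = torch.arange(obs.shape[0], device=obs.device)
        return F.cross_entropy(logits, labels)  # InfoNCE, Eq. (2)
```

Each anchor's positive serves as a negative for all other anchors, since the off-diagonal logits enter the softmax denominator.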
For DMControl, we use RAD-SAC (Laskin et al., 2020a; Haarnoja et al., 2018) with the augmentation of Kostrikov et al. (2020), which randomly shifts the image in each coordinate (by up to 4 pixels), replicating edge pixel values as necessary to restore the original image size. A difference from prior work is that we use more downsampling in our convolutional network, using strides (2, 2, 2, 1) instead of (2, 1, 1, 1) to reduce the convolution output image by 25x.¹ For both Atari and DMLab, we use PPO (Schulman et al., 2017). In Atari, we use feed-forward agents, sticky actions, and no end-of-life boundaries for RL episodes. In DMLab, we use recurrent LSTM agents receiving only a single time-step image input, the four-layer convolution encoder from Jaderberg et al. (2019), and we tune the entropy bonus for each level. In the online setting, the ATC loss is trained using a small replay buffer of recent experiences.\nWe include all our own baselines for fair comparison and provide complete settings in an appendix. Unless otherwise noted, each curve represents a minimum of 3 random seeds. The bold lines show the average, and the lightly shaded area around each curve represents the maximum extent of the best and worst seeds at each checkpoint." }, { "heading": "4.2 ONLINE RL WITH ATC", "text": "DMControl In the online setting, we found ATC to be capable of training the encoder by itself (i.e., with the encoder fully detached from any RL gradient update), achieving essentially equal or better scores versus end-to-end RL in all six environments we tested (Figure 2). In CARTPOLE-SWINGUP-SPARSE, where rewards are only received once the pole reaches vertical, ATC training enabled the agent to master the task significantly faster. The encoder is trained with one update for every RL update to the policy, using the same batch size, except in CHEETAH-RUN, which required twice the ATC updates.\nDMLab We experimented with two kinds of levels in DMLab: EXPLORE GOAL LOCATIONS, which requires repeatedly navigating a maze whose layout is randomized every episode, and LASERTAG THREE OPPONENTS, which requires fast reflexes to pursue and tag enemies at a distance. We found ATC capable of training fully detached encoders while achieving equal or better performance than end-to-end RL. Results are shown in Figure 3. Both environments exhibit sparsity, which is greater in the “large” version than in the “small” version, and which our algorithm addresses, as discussed next.\n¹For our input image size 84 × 84, the convolution output image is 7 × 7 rather than 35 × 35. Performance remains largely unchanged, except for a small decrease in the HALF-CHEETAH environment, but the experiments run significantly faster and use less GPU memory.\nIn EXPLORE, the goal object is rarely seen, especially early on, making its appearance difficult to learn. We therefore introduced prioritized sampling for ATC, with priorities corresponding to empirical absolute returns: p ∝ 1 + Rabs, where Rabs = ∑_{t=0}^{n} γ^t |rt|, to train more frequently on more informative scenes.² Whereas uniform-ATC performs slightly below RL, prioritized-ATC outperforms RL and nearly matches using ATC (uniform) as an auxiliary task.\nIn LASERTAG, enemies are often seen, but the reward of tagging one is rarely achieved by the random agent.
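A minimal sketch of the return-based prioritized sampling just described, assuming numpy; the segment representation and function names are illustrative:

```python
import numpy as np

def atc_priorities(reward_segments, gamma=0.99):
    """p_i proportional to 1 + R_abs, with R_abs = sum_t gamma^t * |r_t|."""
    p = np.array([1.0 + np.sum(gamma ** np.arange(len(seg)) * np.abs(seg))
                  for seg in reward_segments])
    return p / p.sum()

def sample_for_atc(probs, batch_size, rng):
    # No importance-sampling correction is needed: the encoder is a
    # stand-alone feature extractor, trained separately from the policy.
    return rng.choice(len(probs), size=batch_size, p=probs)

# usage sketch: the segment reaching the goal (+10) is sampled more often
rng = np.random.default_rng(0)
segments = [np.array([0.0, 0.0, 10.0]), np.array([0.0, 0.0, 0.0])]
idx = sample_for_atc(atc_priorities(segments), batch_size=4, rng=rng)
```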
ATC learns the relevant features anyway, boosting performance while the RL-only agent remains at zero average score. We found that increasing the rate of UL training to do twice as many updates³ further improved the score to match the ATC-auxiliary agent, showing flexibility to address the representation-learning bottleneck when opponents are dispersed.\n²In EXPLORE GOAL LOCATIONS, the only reward is +10, earned when reaching the goal object.\n³Since the ATC batch size was 512 but the RL batch size was 1024, performing twice as many UL updates still only consumed the same amount of encoder training data as RL. We did not fine-tune for batch size.\nAtari We tested a diverse subset of eight Atari games, shown in Figure 4. We found detached-encoder training to work as well as end-to-end RL in five games, but performance suffered in BREAKOUT and SPACE INVADERS in particular. Using ATC as an auxiliary task, however, improves performance in these games and others. We found it helpful to anneal the amount of UL training over the course of RL in Atari (details in an appendix). Notably, we found several games, including SPACE INVADERS, to benefit from using ATC only to initialize encoder weights, done using an initial 100k transitions gathered with a uniform random policy. Some of our remaining experiments provide more insights into the challenges of this domain." }, { "heading": "4.3 ENCODER PRE-TRAINING BENCHMARKS", "text": "To benchmark the effectiveness of different UL algorithms for RL, we propose a new evaluation methodology that is similar to how UL pre-training techniques are measured in computer vision (see, e.g., Chen et al. (2020); Grill et al. (2020)): (i) collect a data set composed of expert demonstrations from each environment; (ii) pre-train the CNN encoder with that data offline using UL; (iii) evaluate by using RL to learn a control policy while keeping the encoder weights frozen. This procedure isolates the asymptotic performance of each UL algorithm for RL. For convenience, we drew expert demonstrations from partially trained RL agents, and every UL algorithm trained on the same data set for each environment. Our RL agents used the same post-encoder architectures as in the online experiments. Further details about pre-training by each algorithm are provided in an appendix.\nDMControl We compare ATC against two competing algorithms: Augmented Contrast (AC), from CURL (Laskin et al., 2020b), which uses the same observation for the anchor and the positive, and a VAE (Kingma & Welling, 2013), for which we found better performance by introducing a time delay to the target observation (VAE-T). We found ATC to match or outperform the other algorithms in all four test environments, as shown in Figure 5. Further, ATC is the only one to match or outperform the reference end-to-end RL across all cases.\nDMLab We compare against both Pixel Control (Jaderberg et al., 2017; Hessel et al., 2019) and CPC (van den Oord et al., 2018), which have been shown to bring strong benefits in DMLab. While all algorithms perform similarly well in EXPLORE, ATC performs significantly better in LASERTAG (Figure 6). Our algorithm is simpler than Pixel Control and CPC in the sense that it uses neither actions, deconvolution, nor recurrence.\nAtari We compare against Pixel Control, VAE-T, and a basic inverse model which predicts actions between pairs of observations.
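As a reference for the inverse-model baseline just mentioned (the remaining comparisons continue below), a minimal sketch assuming PyTorch and discrete actions; the head architecture is an illustrative assumption:

```python
import torch
import torch.nn.functional as F

class InverseModel(torch.nn.Module):
    """Predict the action taken between o_t and o_{t+1} from their codes."""
    def __init__(self, encoder, code_dim, n_actions):
        super().__init__()
        self.encoder = encoder
        self.head = torch.nn.Sequential(
            torch.nn.Linear(2 * code_dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, n_actions))

    def loss(self, obs, obs_next, actions):
        z = torch.cat([self.encoder(obs), self.encoder(obs_next)], dim=1)
        return F.cross_entropy(self.head(z), actions)
```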
We also compare against Spatio-Temporal Deep InfoMax (ST-DIM), which uses temporal contrastive losses with “local” features from an intermediate convolution layer to ensure attention to the whole screen; it was shown to produce detailed game-state knowledge when applied to individual frames (Anand et al., 2019). Of the four games shown in Figure 7, ATC is the only UL algorithm to match the end-to-end RL reference in GRAVITAR and BREAKOUT, and it performs best in SPACE INVADERS." }, { "heading": "4.4 MULTI-TASK ENCODERS", "text": "In the offline setting, we conducted initial explorations into the capability of ATC to learn multi-task encoders, simply by pre-training on demonstrations from multiple environments. We evaluate the encoder by using it, with frozen weights, in separate RL agents learning each downstream task.\nDMControl Figure 8 shows our results in DMControl, where we pretrained using only the four environments in the top row. Although the encoder was never trained on the HOPPER, PENDULUM, or FINGER domains, the multi-task encoder supports efficient RL in them. PENDULUM-SWINGUP and CARTPOLE-SWINGUP-SPARSE stand out as challenging environments which benefited from cross-domain and cross-task pre-training, respectively. The pretraining was remarkably efficient, requiring only 20,000 updates to the encoder.\nAtari Atari proved a more challenging domain for learning multi-task encoders. Learning all eight games together (Figure 11 in the appendix) resulted in diminished performance relative to single-game pretraining in three of the eight. The decrease was partially alleviated by widening the encoder with twice as many filters per layer, indicating that representation capacity is a limiting factor. To test generalization, we conducted a seven-game pre-training experiment where we test the encoder on the held-out game. Most games suffered diminished performance (although they still perform significantly better than with a frozen random encoder), confirming the limited extent to which visual features transfer across these games.\nFigure 9: BREAKOUT benefits from contrasting against negatives from several neighboring time steps (learning curves comparing contrastive and similarity losses with sequence lengths T = 1, 4, 16)." }, { "heading": "4.5 ABLATIONS AND ENCODER ANALYSIS", "text": "Random Shift in ATC In offline experiments, we discovered random shift augmentations to be helpful in all domains. To our knowledge, this is the first application of random shift to 3D visual environments as in DMLab. In Atari, we found performance in GRAVITAR to suffer from random shift, but reducing the probability of applying random shift to each observation from 1.0 to 0.1 alleviated the effect while still bringing benefits in other games, so we used this setting in our main experiments. Results are shown in Figure 12 in an appendix.\nRandom Shift in RL In DMControl, we found the best results when using random shift during RL, even when training with a frozen encoder. This is evidence that the augmentation regularizes not only the representation but also the policy, which first processes the latent image into a 50-dimensional vector. To unlock the computation and memory benefits of replaying only the latent images for the RL agent, we attempted to apply data augmentation to the latent image. But we found the smallest possible random shifts to be too extreme.
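For reference, a minimal sketch of the standard integer random shift, assuming PyTorch: pad by replicating edge pixels, then randomly crop back to the original size. On a small latent image, even the minimum one-pixel displacement is large relative to the image, which motivates the alternative introduced next:

```python
import torch
import torch.nn.functional as F

def random_shift(imgs, pad=4):
    """Shift each image by up to `pad` pixels per coordinate, replicating
    edge pixel values to restore the original size."""
    n, c, h, w = imgs.shape
    padded = F.pad(imgs, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(imgs)
    for i in range(n):
        dx = int(torch.randint(0, 2 * pad + 1, (1,)))
        dy = int(torch.randint(0, 2 * pad + 1, (1,)))
        out[i] = padded[i, :, dy:dy + h, dx:dx + w]
    return out
```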
Instead, we introduce a new augmentation, subpixel random shift, which linearly interpolates among neighboring pixels. As shown in Figure 13 in the appendix, this augmentation restores performance when applied to the latent images, allowing a pre-trained encoder to be entirely bypassed during policy training updates.\nTemporal Contrast on Sequences In BREAKOUT alone, we discovered that composing the UL training batch of trajectory segments, rather than individual transitions, gave a significant benefit. Treating all elements of the training batch independently provides “hard” negatives, since the encoder must distinguish between neighboring time steps. This setting had no effect in the other Atari games tested, and we found equal or better performance using individual transitions in DMControl and DMLab. Figure 9 further shows that using a similarity loss (Grill et al., 2020) does not capture the benefit.\nEncoder Analysis We analyzed the learned encoders in BREAKOUT to further study this ablation effect. Similar to Zagoruyko & Komodakis (2016), we compute spatial attention maps by mean-pooling the absolute values of the activations along the channel dimension, followed by a 2-dimensional spatial softmax. Figure 10 shows the attention of four different encoders on the displayed scene. The poorly performing UL encoder heavily utilizes the paddle to distinguish the observation. The UL encoder trained with random shift and sequence data, however, focuses near the ball, as does the fully-trained RL encoder. (The random encoder mostly highlights the bricks, which are less relevant for control.) In an appendix, we include other example encoder analyses from Atari and DMLab which show ATC-trained encoders attending only to key objects on the game screen, while RL-trained encoders additionally attend to potentially distracting features such as the game score." }, { "heading": "5 CONCLUSION", "text": "Reward-free representation learning from images provides flexibility and insights for improving deep RL agents. We have shown a broad range of cases where our new unsupervised learning algorithm can fully replace RL for training convolutional encoders while maintaining or improving online performance.
Overall, we hope that our algorithm and experiments spur further developments leveraging unsupervised learning for reinforcement learning." }, { "heading": "A APPENDIX", "text": "A.1 ALGORITHMS\nAlgorithm 1 Online RL with decoupled ATC encoder (steps distinct from end-to-end RL in blue)\nRequire: θ_ATC, φ_π ▷ ATC model parameters (encoder f_θ through contrast W), policy parameters\n1: S ← {} ▷ replay buffer of observations\n2: θ̄_ATC ← θ_ATC ▷ initialize momentum encoder (conv and linear only)\n3: repeat\n4: Sample environment and policy, through encoder:\n5: for 1 to m do ▷ a minibatch\n6: a ∼ π(·|f_θ(s); φ), s′ ∼ T(s, a), r ∼ R(s, a, s′)\n7: S ← S ∪ {s} ▷ store observations (delete oldest if full)\n8: s ← s′\n9: end for\n10: Update policy by given RL formula: ▷ on- or off-policy\n11: for 1 to n do ▷ given number of RL updates per minibatch\n12: φ_π ← φ_π + RL(s, a, s′, r; φ_π) ▷ stop gradient into encoder\n13: end for\n14: Update encoder (and contrastive model) by ATC:\n15: for 1 to p do\n16: s, s+ ∼ S ▷ sample observations: anchors and positives\n17: θ_ATC ← θ_ATC − λ_ATC ∇_{θ_ATC} L_ATC(s, s+) ▷ ATC gradient update\n18: θ̄_ATC ← (1 − τ)θ̄_ATC + τ θ_ATC ▷ update momentum encoder (conv and linear only)\n19: end for\n20: until converged\n21: return Encoder f_θ and policy π_φ\nA.2 ADDITIONAL FIGURES\nIn subpixel random shift, new pixels are a linearly weighted average of the four nearest pixels to a randomly chosen coordinate location. We used uniformly random horizontal and vertical shifts, and tested maximum displacements in (±) {0.1, 0.25, 0.5, 0.75, 1.0} pixels (with “edge” mode padding ±1). We found 0.5 to work well in all tested domains, restoring the performance of raw image augmentation but eliminating convolutions entirely from the RL training updates.\nFigure 14: Attention maps in BREAKOUT (panels: stacked inputs; random; RL-trained; UL without shift; UL with shift), which show the RL-trained encoder focusing on the game score, whereas the UL ATC encoder focuses properly on the paddle and ball.\nFigure 15: Attention maps in LASERTAG (panels: stacked inputs; random; Pixel Control; RL-trained; ATC (ours)). The UL encoder with pixel control focuses on the score, while the UL encoder with the proposed ATC focuses properly on the coin, similar to the RL-trained encoder.
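A minimal sketch of the subpixel random shift described in A.2 above, implemented as bilinear resampling at a uniformly random sub-pixel offset; the use of affine_grid/grid_sample and 'border' padding (to approximate the “edge” mode) are assumptions about one reasonable implementation, not the exact code used here:

```python
import torch
import torch.nn.functional as F

def subpixel_random_shift(x, max_shift=0.5):
    # x: (N, C, H, W) latent images; per-sample shifts drawn uniformly from
    # [-max_shift, max_shift] pixels per axis, resampled bilinearly so each new
    # pixel is a linearly weighted average of its four nearest neighbors.
    n, _, h, w = x.shape
    tx = (torch.rand(n, device=x.device) * 2 - 1) * max_shift * 2.0 / w
    ty = (torch.rand(n, device=x.device) * 2 - 1) * max_shift * 2.0 / h
    theta = torch.zeros(n, 2, 3, device=x.device)
    theta[:, 0, 0] = 1.0
    theta[:, 1, 1] = 1.0
    theta[:, 0, 2] = tx  # translations in normalized [-1, 1] coordinates
    theta[:, 1, 2] = ty
    grid = F.affine_grid(theta, list(x.shape), align_corners=False)
    return F.grid_sample(x, grid, mode='bilinear',
                         padding_mode='border', align_corners=False)
```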
A.3 RL SETTINGS\nA.4 ONLINE ATC SETTINGS\nA.5 OFFLINE PRE-TRAINING DETAILS\nWe conducted coarse hyperparameter sweeps to tune each competing UL algorithm. In all cases, the best setting is the one shown in our comparisons.\nWhen our VAEs include a time difference between input and reconstruction observations, we include one hidden layer, with the action additionally input, between the encoder and decoder. We tried both 1.0 and 0.1 KL-divergence weights in the VAE loss, and found 0.1 to perform better in both DMControl and Atari.\nDMControl For the VAE, we experimented with 0 and 1 time step difference between input and reconstruction target observations, and with training for either 1e4 or 5e4 updates. The best settings were the 1-step temporal difference and 5e4 updates, with batch size 128. ATC used a 1-step temporal difference, 5e4 updates (although this can be significantly decreased), and batch size 256 (including CHEETAH). The pretraining data set consisted of the first 5e4 transitions from a RAD-SAC agent learning each task, including 5e3 random actions. Within this span, CARTPOLE and BALL IN CUP learned completely, but WALKER and CHEETAH reached average returns of 514 and 630, respectively (collected without the compressive convolution).\nDMLab For Pixel Control, we used the settings from Hessel et al. (2019) (see the appendix therein), except we used only empirical returns, computed offline (without bootstrapping). For CPC, we tried training batch shapes, batch × time, of (64, 8), (32, 16), and (16, 32), and found the setting with rollouts of length 16 to be best. We contrasted all elements of the batch against each other, rather than using only forward contrasts. In all cases we also used 16 steps to warm up the LSTM. For all algorithms we tried learning rates 3e−4 and 1e−3 and both 5e4 and 1.5e5 updates. For ATC and CPC, the lower learning rate and higher number of updates helped in LASERTAG especially. The pretraining data was 125e3 samples from partially trained RL agents receiving average returns of 127 and 6 in EXPLORE GOAL LOCATIONS SMALL and LASERTAG THREE OPPONENTS SMALL, respectively.\nAtari For the VAE, we experimented with 0, 1, and 3 time step differences between input and reconstruction target, and found 3 to work best. For ST-DIM we experimented with 1, 3, and 4 time step differences, batch sizes from 64 to 256, and learning rates 1e−3 and 5e−4. Likewise, the 3-step delay worked best. For the inverse model, we tried 1- and 3-step predictions, with 1-step working better overall, and found random shift augmentation to help. For pixel control, we used the settings in Jaderberg et al. (2017), again with full empirical returns. We ran each algorithm for up to 1e5 updates, although final ATC results used 5e4 updates. We ran each RL agent with and without observation normalization on the latent image and observed no difference in performance. Pretraining data was 125e3 samples sourced from the replay buffer of DQN agents trained for 15e6 steps with epsilon-greedy = 0.1. Evaluation scores were:" } ]
2020
null
SP:9a92f7adeccc0f66836e3ddfb6bd5af67bdf77e4
[ "This paper develops a framework for unsupervised learning of graphs. The goal is to build graph representation using an encoder that is useful for downstream tasks such as graph classification. The representation is computed with an encoder $E$ applied to a graph data $(X,A)$, containing vertex data $X$ and adjacency matrix $A$. Given a graph $(X,A)$ and a perturbed version of its adjacency matrix $(X,\\tilde{A)}$, the decoder $D$ is tasked with minimizing the conditional entropy, $h(\\Delta A \\vert H,\\tilde{H})$, of the perturbation $\\Delta A = A-\\tilde{A}$ when given the two representations $H=E(X,A)$ and $\\tilde{H}=E(X,\\tilde{A})$. " ]
We present Topology Transformation Equivariant Representation (TopoTER) learning, a general paradigm for the unsupervised learning of node representations of graph data that is widely applicable to Graph Convolutional Neural Networks (GCNNs). We formalize the TopoTER from an information-theoretic perspective, by maximizing the mutual information between topology transformations and node representations before and after the transformations. We derive that maximizing such mutual information can be relaxed to minimizing the cross entropy between the applied topology transformation and its estimation from node representations. In particular, we sample a subset of node pairs from the original graph and flip the edge connectivity between each pair to transform the graph topology. Then, we self-train a representation encoder to learn node representations by reconstructing the topology transformations from the feature representations of the original and transformed graphs. In experiments, we apply the TopoTER to downstream node and graph classification tasks, and results show that the TopoTER outperforms the state-of-the-art unsupervised approaches.
[]
[ { "authors": [ "Sami Abu-El-Haija", "Bryan Perozzi", "Amol Kapoor", "Nazanin Alipourfard", "Kristina Lerman", "Hrayr Harutyunyan", "Greg Ver Steeg", "Aram Galstyan" ], "title": "Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing", "venue": "In Proceedings of the 36th International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Bijaya Adhikari", "Yao Zhang", "Naren Ramakrishnan", "B Aditya Prakash" ], "title": "Sub2vec: Feature learning for subgraphs", "venue": "In Pacific-Asia Conference on Knowledge Discovery and Data Mining,", "year": 2018 }, { "authors": [ "Karsten M Borgwardt", "Hans-Peter Kriegel" ], "title": "Shortest-path kernels on graphs", "venue": "In IEEE International Conference on Data Mining (ICDM), pp. 8–pp. IEEE,", "year": 2005 }, { "authors": [ "Michael M Bronstein", "Joan Bruna", "Yann LeCun", "Arthur Szlam", "Pierre Vandergheynst" ], "title": "Geometric deep learning: going beyond euclidean data", "venue": "IEEE Signal Processing Magazine,", "year": 2017 }, { "authors": [ "Shaosheng Cao", "Wei Lu", "Qiongkai Xu" ], "title": "Deep neural networks for learning graph representations", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI),", "year": 2016 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Armand Joulin", "Matthijs Douze" ], "title": "Deep clustering for unsupervised learning of visual features", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Taco Cohen", "Max Welling" ], "title": "Group equivariant convolutional networks", "venue": "In Proceedings of the 33rd International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Nicola De Cao", "Thomas Kipf" ], "title": "MolGAN: An implicit generative model for small molecular graphs", "venue": "In International Conference on Machine Learning Workshop on Theoretical Foundations and Applications of Deep Generative Models,", "year": 2018 }, { "authors": [ "Pim de Haan", "Maurice Weiler", "Taco Cohen", "Max Welling" ], "title": "Gauge equivariant mesh cnns: Anisotropic convolutions on geometric graphs", "venue": "arXiv preprint arXiv:2003.05425,", "year": 2020 }, { "authors": [ "Sander Dieleman", "Kyle W Willett", "Joni Dambre" ], "title": "Rotation-invariant convolutional neural networks for galaxy morphology prediction", "venue": "Monthly Notices of the Royal Astronomical Society,", "year": 2015 }, { "authors": [ "Sander Dieleman", "Jeffrey De Fauw", "Koray Kavukcuoglu" ], "title": "Exploiting cyclic symmetry in convolutional neural networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Fabian B Fuchs", "Daniel E Worrall", "Volker Fischer", "Max Welling" ], "title": "Se (3)-transformers: 3d rototranslation equivariant attention networks", "venue": "arXiv preprint arXiv:2006.10503,", "year": 2020 }, { "authors": [ "Xiang Gao", "Wei Hu", "Guo-Jun Qi" ], "title": "GraphTER: Unsupervised learning of graph transformation equivariant representations via auto-encoding node-wise transformations", "venue": "In Proceedings of IEEE/CVF Conferences on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Thomas Gärtner", "Peter Flach", "Stefan Wrobel" ], "title": "On graph kernels: Hardness results and efficient alternatives", "venue": "In Learning Theory and Kernel Machines,", "year": 2003 }, { "authors": [ "Robert Gens", "Pedro M Domingos" ], "title": "Deep 
symmetry networks", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2014 }, { "authors": [ "Aditya Grover", "Jure Leskovec" ], "title": "node2vec: Scalable feature learning for networks", "venue": "In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "Geoffrey E Hinton", "Alex Krizhevsky", "Sida D Wang" ], "title": "Transforming auto-encoders", "venue": "In International Conference on Artificial Neural Networks (ICANN),", "year": 2011 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Variational graph auto-encoders", "venue": "In Proceedings of the NIPS Workshop on Bayesian Deep Learning,", "year": 2016 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Jyri J Kivinen", "Christopher KI Williams" ], "title": "Transformation equivariant boltzmann machines", "venue": "In International Conference on Artificial Neural Networks (ICANN),", "year": 2011 }, { "authors": [ "Risi Kondor", "Horace Pan" ], "title": "The multiscale laplacian graph kernel", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2016 }, { "authors": [ "Karel Lenc", "Andrea Vedaldi" ], "title": "Understanding image representations by measuring their equivariance and equivalence", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "Jan Eric Lenssen", "Matthias Fey", "Pascal Libuschewski" ], "title": "Group equivariant capsule networks", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2018 }, { "authors": [ "Xiao Liu", "Fanjin Zhang", "Zhenyu Hou", "Zhaoyu Wang", "Li Mian", "Jing Zhang", "Jie Tang" ], "title": "Selfsupervised learning: Generative or contrastive", "venue": null, "year": 2006 }, { "authors": [ "Tengfei Ma", "Jie Chen", "Cao Xiao" ], "title": "Constrained generation of semantically valid graphs via regularizing variational autoencoders", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2018 }, { "authors": [ "Federico Monti", "Davide Boscaini", "Jonathan Masci", "Emanuele Rodola", "Jan Svoboda", "Michael M Bronstein" ], "title": "Geometric deep learning on graphs and manifolds using mixture model cnns", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Annamalai Narayanan", "Mahinthan Chandramohan", "Rajasekar Venkatesan", "Lihui Chen", "Yang Liu", "Shantanu Jaiswal" ], "title": "graph2vec: Learning distributed representations of graphs", "venue": "In Proceedings of the 13th International Workshop on Mining and Learning with Graphs (MLG),", "year": 2017 }, { "authors": [ "Zhen Peng", "Wenbing Huang", "Minnan Luo", "Qinghua Zheng", "Yu Rong", "Tingyang Xu", "Junzhou Huang" ], "title": "Graph representation learning via graphical mutual information maximization", "venue": "In Proceedings of The Web Conference,", "year": 2020 }, { 
"authors": [ "Bryan Perozzi", "Rami Al-Rfou", "Steven Skiena" ], "title": "Deepwalk: Online learning of social representations", "venue": "In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2014 }, { "authors": [ "Guo-Jun Qi", "Liheng Zhang", "Chang Wen Chen", "Qi Tian" ], "title": "AVT: Unsupervised learning of transformation equivariant representations by autoencoding variational transformations", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Guo-Jun Qi", "Liheng Zhang", "Xiao Wang" ], "title": "Learning generalized transformation equivariant representations via autoencoding transformations", "venue": "arXiv preprint arXiv:1906.08628,", "year": 2019 }, { "authors": [ "Jiezhong Qiu", "Qibin Chen", "Yuxiao Dong", "Jing Zhang", "Hongxia Yang", "Ming Ding", "Kuansan Wang", "Jie Tang" ], "title": "Gcc: Graph contrastive coding for graph neural network pre-training", "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2020 }, { "authors": [ "Uwe Schmidt", "Stefan Roth" ], "title": "Learning rotation-aware features: From invariant priors to equivariant descriptors", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2012 }, { "authors": [ "Prithviraj Sen", "Galileo Namata", "Mustafa Bilgic", "Lise Getoor", "Brian Galligher", "Tina Eliassi-Rad" ], "title": "Collective classification in network data", "venue": "AI magazine,", "year": 2008 }, { "authors": [ "Nino Shervashidze", "SVN Vishwanathan", "Tobias Petri", "Kurt Mehlhorn", "Karsten Borgwardt" ], "title": "Efficient graphlet kernels for large graph comparison", "venue": "In Artificial Intelligence and Statistics,", "year": 2009 }, { "authors": [ "Nino Shervashidze", "Pascal Schweitzer", "Erik Jan Van Leeuwen", "Kurt Mehlhorn", "Karsten M Borgwardt" ], "title": "Weisfeiler-lehman graph kernels", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Martin Simonovsky", "Nikos Komodakis" ], "title": "Graphvae: Towards generation of small graphs using variational autoencoders", "venue": "In International Conference on Artificial Neural Networks (ICANN),", "year": 2018 }, { "authors": [ "Henrik Skibbe" ], "title": "Spherical Tensor Algebra for Biomedical Image Analysis", "venue": "PhD thesis, Verlag nicht ermittelbar,", "year": 2013 }, { "authors": [ "Kihyuk Sohn", "Honglak Lee" ], "title": "Learning invariant representations with local transformations", "venue": "In International Conference on Machine Learning (ICML),", "year": 2012 }, { "authors": [ "Fan-Yun Sun", "Jordan Hoffmann", "Vikas Verma", "Jian Tang" ], "title": "Infograph: Unsupervised and semisupervised graph-level representation learning via mutual information maximization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Ke Sun", "Zhouchen Lin", "Zhanxing Zhu" ], "title": "Multi-stage self-supervised learning for graph convolutional networks on graphs with few labeled nodes", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI),", "year": 2020 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Lio", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { 
"authors": [ "Petar Velickovic", "William Fedus", "William L Hamilton", "Pietro Liò", "Yoshua Bengio", "R Devon Hjelm" ], "title": "Deep graph infomax", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Daixin Wang", "Peng Cui", "Wenwu Zhu" ], "title": "Structural deep network embedding", "venue": "In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "Jiayu Wang", "Wengang Zhou", "Guo-Jun Qi", "Zhongqian Fu", "Qi Tian", "Houqiang Li" ], "title": "Transformation gan for unsupervised image synthesis and representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "WOK Asiri Suranga Wijesinghe", "Qing Wang" ], "title": "Dfnets: Spectral cnns for graphs with feedbacklooped filters", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2019 }, { "authors": [ "Felix Wu", "Tianyi Zhang", "Amauri Holanda de Souza Jr.", "Christopher Fifty", "Tao Yu", "Kilian Q Weinberger" ], "title": "Simplifying graph convolutional networks", "venue": "In Proceedings of the 36th International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Zonghan Wu", "Shirui Pan", "Fengwen Chen", "Guodong Long", "Chengqi Zhang", "S Yu Philip" ], "title": "A comprehensive survey on graph neural networks", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Bingbing Xu", "Huawei Shen", "Qi Cao", "Yunqi Qiu", "Xueqi Cheng" ], "title": "Graph wavelet neural network", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Pinar Yanardag", "SVN Vishwanathan" ], "title": "Deep graph kernels", "venue": "In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2015 }, { "authors": [ "Jiaxuan You", "Rex Ying", "Xiang Ren", "William L Hamilton", "Jure Leskovec" ], "title": "Graphrnn: Generating realistic graphs with deep auto-regressive models", "venue": "In Proceedings of the 35th International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Liheng Zhang", "Guo-Jun Qi", "Liqiang Wang", "Jiebo Luo" ], "title": "AET vs. AED: Unsupervised representation learning by auto-encoding transformations rather than data", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Graphs provide a natural and efficient representation for non-Euclidean data, such as brain networks, social networks, citation networks, and 3D point clouds. Graph Convolutional Neural Networks (GCNNs) (Bronstein et al., 2017) have been proposed to generalize the CNNs to learn representations from non-Euclidean data, which has made significant advances in various applications such as node classification (Kipf & Welling, 2017; Veličković et al., 2018; Xu et al., 2019a) and graph classification (Xu et al., 2019b). However, most existing GCNNs are trained in a supervised fashion, requiring a large amount of labeled data for network training. This limits the applications of the GCNNs since it is often costly to collect adequately labeled data, especially on large-scale graphs. Hence, this motivates the proposed research to learn graph feature representations in an unsupervised fashion, which enables the discovery of intrinsic graph structures and thus adapts to various downstream tasks.\nAuto-Encoders (AEs) and Generative Adversarial Networks (GANs) are two most representative unsupervised learning methods. Based on the AEs and GANs, many approaches have sought to learn transformation equivariant representations (TERs) to further improve the quality of unsupervised representation learning. It assumes that the learned representations equivarying to transformations are able to encode the intrinsic structures of data such that the transformations can be reconstructed from the representations before and after transformations (Qi et al., 2019b). Learning TERs traces back to Hinton’s seminal work on learning transformation capsules (Hinton et al., 2011), and embodies a variety of methods developed for Euclidean data (Kivinen & Williams, 2011; Sohn & Lee, 2012; Schmidt & Roth, 2012; Skibbe, 2013; Lenc & Vedaldi, 2015; Gens & Domingos, 2014; Dieleman et al., 2015; 2016; Zhang et al., 2019; Qi et al., 2019a). Further, Gao et al. (2020) extend transformation equivariant representation learning to non-Euclidean domain, which formalizes Graph Transformation Equivariant Representation (GraphTER) learning by auto-encoding nodewise transformations in an unsupervised fashion. Nevertheless, only transformations on node features are explored, while the underlying graph may vary implicitly. The graph topology has not been fully explored yet, which however is crucial in unsupervised graph representation learning.\nTo this end, we propose the Topology Transformation Equivariant Representation (TopoTER) learning to infer unsupervised graph feature representations by estimating topology transformations. In-\nstead of transforming node features as in the GraphTER, the proposed TopoTER studies the transformation equivariant representation learning by transforming the graph topology, i.e., adding or removing edges to perturb the graph structure. Then the same input signals are attached to the resultant graph topologies, resulting in different graph representations. This provides an insight into how the same input signals associated with different graph topologies would lead to equivariant representations enabling the fusion of node feature and graph topology in GCNNs. Formally, we propose the TopoTER from an information-theoretic perspective, aiming to maximize the mutual information between topology transformations and feature representations with respect to the original and transformed graphs. 
We derive that maximizing such mutual information can be relaxed to minimizing the cross entropy between the applied topology transformations and their estimation from the learned representations of the graph data under those transformations.\nSpecifically, given an input graph and its associated node features, we first sample a subset of node pairs from the graph and flip the edge connectivity between each pair at a perturbation rate, leading to a transformed graph with attached node features. Then, we design a graph-convolutional auto-encoder architecture, where the encoder learns the node-wise representations over the original and transformed graphs respectively, and the decoder predicts the topology transformations of edge connectivity from both representations by minimizing the cross entropy between the applied and estimated transformations. Experimental results demonstrate that the proposed TopoTER model outperforms the state-of-the-art unsupervised models, and at times even achieves results comparable to (semi-)supervised approaches in node classification and graph classification tasks.\nOur main contributions are summarized as follows. • We propose Topology Transformation Equivariant Representation (TopoTER) learning to infer expressive node feature representations in an unsupervised fashion, which can characterize the intrinsic structures of graphs and the associated features by exploring graph transformations of the connectivity topology. • We formulate the TopoTER from an information-theoretic perspective, by maximizing the mutual information between feature representations and topology transformations, which can be relaxed to cross entropy minimization between the applied transformations and their prediction in an end-to-end graph-convolutional auto-encoder architecture. • Experiments demonstrate that the proposed TopoTER model outperforms state-of-the-art unsupervised methods in both node classification and graph classification." }, { "heading": "2 RELATED WORK", "text": "Graph Auto-Encoders. Graph Auto-Encoders (GAEs) are among the most representative unsupervised methods. GAEs encode graph data into a feature space via an encoder and reconstruct the input graph data from the encoded feature representations via a decoder. GAEs are often used to learn network embeddings and graph generative distributions (Wu et al., 2020). For network embedding learning, GAEs learn the feature representations of each node by reconstructing graph structural information, such as the graph adjacency matrix (Kipf & Welling, 2016) and the positive pointwise mutual information (PPMI) matrix (Cao et al., 2016; Wang et al., 2016). For graph generation, some methods generate nodes and edges of a graph alternately (You et al., 2018), while other methods output an entire graph (Simonovsky & Komodakis, 2018; Ma et al., 2018; De Cao & Kipf, 2018).\nGraph Contrastive Learning. An important paradigm called contrastive learning aims to train an encoder to be contrastive between the representations of positive samples and negative samples. Recent contrastive learning frameworks can be divided into two categories (Liu et al., 2020): context-instance contrast and context-context contrast. Context-instance contrast focuses on modeling the relationships between the local feature of a sample and its global context representation.
Deep InfoMax (DIM) (Hjelm et al., 2018) first proposes to maximize the mutual information between a local patch and its global context through a contrastive learning task. Deep Graph InfoMax (DGI) (Velickovic et al., 2019) proposes to learn node-level feature representations by extending DIM to graph-structured data, while InfoGraph (Sun et al., 2020a) uses mutual information maximization for unsupervised representation learning on entire graphs. Peng et al. (2020) propose a Graphical Mutual Information (GMI) approach to maximize the mutual information of both features and edges between inputs and outputs. In contrast to context-instance methods, context-context contrast studies the relationships between the global representations of different samples. M3S (Sun et al., 2020b) adopts a self-supervised pre-training paradigm as in DeepCluster (Caron et al., 2018) for better semi-supervised prediction in GCNNs. Graph Contrastive Coding (GCC) (Qiu et al., 2020) designs the pre-training task as subgraph instance discrimination within and across networks to empower graph neural networks to learn intrinsic structural representations.\nTransformation Equivariant Representation Learning. Many approaches have sought to learn transformation equivariant representations, which has been advocated since Hinton’s seminal work on learning transformation capsules. Following this, a variety of approaches have been proposed to learn transformation equivariant representations (Gens & Domingos, 2014; Dieleman et al., 2015; 2016; Cohen & Welling, 2016; Lenssen et al., 2018). To generalize to generic transformations, Zhang et al. (2019) propose to learn unsupervised feature representations via Auto-Encoding Transformations (AET) by estimating transformations from the learned feature representations of both the original and transformed images, while Qi et al. (2019a) extend AET from an information-theoretic perspective by maximizing the lower bound of the mutual information between transformations and representations. Wang et al. (2020) extend the AET to Generative Adversarial Networks (GANs) for unsupervised image synthesis and representation learning. Gao et al. (2020) introduce the GraphTER model that extends AET to graph-structured data, formalized by auto-encoding node-wise transformations in an unsupervised manner. de Haan et al. (2020) propose Gauge Equivariant Mesh CNNs, which generalize GCNNs to apply anisotropic gauge equivariant kernels. Fuchs et al. (2020) introduce a self-attention mechanism specifically for 3D point cloud data which adheres to equivariance constraints, improving robustness to nuisance transformations." }, { "heading": "3 METHOD", "text": "" }, { "heading": "3.1 PRELIMINARY", "text": "We consider an undirected graph G = {V, E, A} composed of a node set V of cardinality |V| = N and an edge set E connecting nodes, of cardinality |E| = M. A is a real symmetric N × N matrix that encodes the graph structure, where a_{i,j} = 1 if there exists an edge (i, j) between nodes i and j, and a_{i,j} = 0 otherwise. A graph signal refers to data that reside on the nodes of a graph G, denoted by X ∈ R^{N×C}, with the i-th row representing the C-dimensional graph signal on the i-th node of V." }, { "heading": "3.2 TOPOLOGY TRANSFORMATION", "text": "We define the topology transformation t as adding or removing edges from the original edge set E in graph G.
This can be done by sampling, i.i.d., a switch parameter σ_{i,j} as in (Velickovic et al., 2019), which determines whether to modify edge (i, j) in the adjacency matrix. Assuming a Bernoulli distribution B(p), where p denotes the probability of each edge being modified, we draw a random matrix Σ = {σ_{i,j}}_{N×N} from B(p), i.e., Σ ∼ B(p). We then acquire the perturbed adjacency matrix as\nÃ = A ⊕ Σ, (1)\nwhere ⊕ is the exclusive OR (XOR) operation. This strategy produces a transformed graph through the topology transformation t, i.e., Ã = t(A). Here, an edge perturbation probability of p = 0 corresponds to a non-transformed adjacency matrix, which is a special case of an identity transformation of A.\nThe transformed adjacency matrix Ã can also be written as the sum of the original adjacency matrix A and a topology perturbation matrix ∆A:\nÃ = A + ∆A, (2)\nwhere ∆A = {δa_{i,j}}_{N×N} encodes the perturbation of edges, with δa_{i,j} ∈ {−1, 0, 1}. As shown in Fig. 1, when δa_{i,j} = 0, the edge between node i and node j stays unchanged (i.e., black solid lines); when δa_{i,j} = −1 or 1, it means removing (i.e., orange dotted lines) or adding (i.e., blue solid lines) the edge between node i and node j, respectively." }, { "heading": "3.3 THE FORMULATION OF TOPOTER", "text": "Definition 1 Given a pair of graph signal and adjacency matrix (X, A), and a pair of graph signal and transformed adjacency matrix (X, Ã) obtained by a topology transformation t(·), a function E(·) is transformation equivariant if it satisfies\nE(X, Ã) = E(X, t(A)) = ρ(t)[E(X, A)], (3)\nwhere ρ(t)[·] is a homomorphism of the transformation t in the representation space.\nLet us denote H = E(X, A) and H̃ = E(X, Ã). We seek to learn an encoder E : (X, A) ↦ H; (X, Ã) ↦ H̃ that maps both the original and transformed samples to representations {H, H̃} equivariant to the sampled transformation t, whose information can thus be inferred from the representations via a decoder D : (H̃, H) ↦ ∆̂A as much as possible. From an information-theoretic perspective, this requires that (H, ∆A) jointly contain all the necessary information about H̃.\nA natural choice to formalize topology transformation equivariance is then the mutual information I(H, ∆A; H̃) between (H, ∆A) and H̃. The larger the mutual information is, the more knowledge about ∆A can be inferred from the representations {H, H̃}. Hence, we propose to maximize the mutual information to learn topology transformation equivariant representations:\nmax_θ I(H, ∆A; H̃), (4)\nwhere θ denotes the parameters of the auto-encoder network.\nNevertheless, it is difficult to compute the mutual information directly. Instead, we derive that maximizing the mutual information can be relaxed to minimizing a cross entropy, as described in the following theorem.\nTheorem 1 The maximization of the mutual information I(H, ∆A; H̃) can be relaxed to the minimization of the cross entropy H(p ‖ q) between the probability distributions p(∆A, H̃, H) and q(∆̂A | H̃, H):\nmin_θ H(p(∆A, H̃, H) ‖ q(∆̂A | H̃, H)) ≜ −E_{p(∆A, H̃, H)} log q(∆̂A | H̃, H). (5)
(5)\nProof By using the chain rule of mutual information, we have\nI(H,∆A; H̃) = I(∆A; H̃|H) + I(H; H̃) ≥ I(∆A; H̃|H).\nThus the mutual information I(∆A; H̃|H) is the lower bound of the mutual information I(H,∆A; H̃) that attains its minimum value when I(H; H̃) = 0.\nTherefore, we relax the objective to maximizing the lower bound mutual information I(∆A; H̃|H) between the transformed representation H̃ and the topology transformation ∆A:\nI(∆A; H̃|H) = H(∆A|H)−H(∆A|H̃,H),\nwhere H(·) denotes the conditional entropy. Since ∆A and H are independent, we have H(∆A|H) = H(∆A). Hence, maximizing I(∆A; H̃|H) becomes\nmin θ\nH(∆A|H̃,H). (6)\nAccording to the chain rule of conditional entropy, we have\nH(∆A|H̃,H) = H(∆A, H̃,H)−H(H̃,H) ≤ H(∆A, H̃,H),\nwhere the conditional entropy H(∆A|H̃,H) is upper bounded by the joint entropy H(∆A, H̃,H). Thus, the minimization problem in Eq. (6) becomes\nmin θ H(∆A, H̃,H). (7)\nWe next introduce a conditional probability distribution q(∆̂A|H̃,H) to approximate the intractable posterior q̃(∆A|H̃,H) with an estimated ∆̂A. According to the definition of the Kullback-Leibler divergence, we have\nH(∆A, H̃,H) = H(p) = H(p ‖ q)−DKL(p ‖ q) ≤ H(p ‖ q), where DKL(p ‖ q) denotes the Kullback-Leibler divergence of p and q that is non-negative, and H(p ‖ q) is the cross entropy between p and q. Thus, Eq. (6) is converted to minimizing the cross entropy as the upper bound:\nmin θ\nH ( p(∆A, H̃,H) ‖ q(∆̂A|H̃,H) ) , − E\np(∆A,H̃,H) log q(∆̂A|H̃,H).\nHence, we relax the maximization problem in Eq. (4) to the optimization in Eq. (5).\nBased on Theorem 1, we train the decoder D to learn the distribution q(∆̂A|H̃,H) so as to estimate the topology transformation ∆̂A from the encoded {H̃,H}, where the input pairs of original and transformed graph representations {H̃,H} as well as the ground truth target ∆A can be sampled tractably from the factorization of p(∆A, H̃,H) , p(∆A)p(H)p(H̃|∆A,H). This allows us to minimize the cross entropy between p(∆A, H̃,H) and q(∆̂A|H̃,H) as in (5) with the training triplets (H̃,H; ∆A) drawn from the tractable factorization of p(∆A, H̃,H). Hence, we formulate the TopoTER as the joint optimization of the representation encoder E and the transformation decoder D." }, { "heading": "3.4 THE ALGORITHM", "text": "We design a graph-convolutional auto-encoder network for the TopoTER learning, as illustrated in Fig. 2. Given a graph signal X associated with a graph G = {V, E ,A}, the proposed unsupervised learning algorithm for the TopoTER consists of three steps: 1) topology transformation, which samples and perturbs some edges from E to acquire a transformed adjacency matrix Ã; 2) representation encoding, which extracts the feature representations of graph signals before and after the topology transformation; 3) transformation decoding, which estimates the topology transformation parameters from the learned feature representations. We elaborate on the three steps as follows.\nTopology Transformation. We randomly sample a subset of edges from E for topology perturbation—adding or removing edges, which not only enables to characterize local graph structures at various scales, but also reduces the number of edge transformation parameters to estimate for computational efficiency. In practice, in each iteration of training, we sample all the node pairs with connected edges S1, and randomly sample a subset of disconnected node pairs S0, i.e.,\nS0 = { (i, j) ∣∣ai,j = 0} ,S1 = {(i, j)∣∣ai,j = 1} , (8)\nwhere |S0| = |S1| = M . 
Next, we randomly split S0 and S1 into two disjoint sets each, i.e.,\nS_i = {S_i^{(1)}, S_i^{(2)} | S_i^{(1)} ∩ S_i^{(2)} = ∅, S_i^{(1)} ∪ S_i^{(2)} = S_i, |S_i^{(1)}| = r · |S_i|}, i ∈ {0, 1}, (9)\nwhere r is the edge perturbation rate. Then, for each node pair (i, j) in S_0^{(1)} and S_1^{(1)}, we flip the corresponding entry in the original graph adjacency matrix. That is, if a_{i,j} = 0, then we set ã_{i,j} = 1; otherwise, we set ã_{i,j} = 0. For each node pair (i, j) in S_0^{(2)} and S_1^{(2)}, we keep the original connectivity unchanged, i.e., ã_{i,j} = a_{i,j}.\nThis leads to the transformed adjacency matrix Ã, as well as the sampled transformation parameters obtained by accessing ∆A at position (i, j) from S0 and S1. Also, we can categorize the sampled topology transformation parameters into four types:\n1. add an edge to a disconnected node pair, i.e., {t : a_{i,j} = 0 ↦ ã_{i,j} = 1, (i, j) ∈ S_0^{(1)}};\n2. delete the edge between a connected node pair, i.e., {t : a_{i,j} = 1 ↦ ã_{i,j} = 0, (i, j) ∈ S_1^{(1)}};\n3. keep the disconnection between node pairs in S_0^{(2)}, i.e., {t : a_{i,j} = 0 ↦ ã_{i,j} = 0, (i, j) ∈ S_0^{(2)}};\n4. keep the connection between node pairs in S_1^{(2)}, i.e., {t : a_{i,j} = 1 ↦ ã_{i,j} = 1, (i, j) ∈ S_1^{(2)}}.\nThus, we cast the problem of estimating the transformation parameters in ∆A from (H̃, H) as the problem of classifying the transformation parameter types. The proportions of these four types are r : r : (1 − r) : (1 − r).\nRepresentation Encoder. We train an encoder E : (X, A) ↦ E(X, A) to encode the feature representations of each node in the graph. As demonstrated in Fig. 2, we leverage GCNNs with shared weights to extract the feature representations of each node in the graph signal. Taking the GCN (Kipf & Welling, 2017) as an example, the graph convolution in the GCN is defined as\nH = E(X, A) = D^{-1/2}(A + I)D^{-1/2} X W, (10)\nwhere D is the degree matrix of A + I, W ∈ R^{C×F} is a learnable parameter matrix, and H = [h_1, ..., h_N]^T ∈ R^{N×F} denotes the node-wise feature matrix with F output channels. Similarly, the node features of the transformed counterpart are computed as follows, with the shared weights W:\nH̃ = E(X, Ã) = D̃^{-1/2}(Ã + I)D̃^{-1/2} X W = D̃^{-1/2}(A + I)D̃^{-1/2} X W + D̃^{-1/2} ∆A D̃^{-1/2} X W. (11)\nWe thus acquire the feature representations H and H̃ of graph signals before and after topology transformations.\nTransformation Decoder. Comparing Eq. (10) and Eq. (11), the prominent difference between H̃ and H lies in the second term of Eq. (11) featuring ∆A. This enables us to train a decoder D : (H̃, H) ↦ ∆̂A to estimate the topology transformation from the joint representations before and after the transformation, as sketched in the code below. We first take the difference between the extracted feature representations before and after transformation along the feature channels,\n∆H = H̃ − H = [δh_1, ..., δh_N]^T ∈ R^{N×F}. (12)\nThus, we can predict the topology transformation between node i and node j through the node-wise feature difference ∆H by constructing the edge representation as\ne_{i,j} = exp{−(δh_i − δh_j) ⊙ (δh_i − δh_j)} / ‖exp{−(δh_i − δh_j) ⊙ (δh_i − δh_j)}‖_1 ∈ R^F, ∀(i, j) ∈ S0 ∪ S1, (13)\nwhere ⊙ denotes the Hadamard product of two vectors to capture the feature representation, and ‖·‖_1 is the ℓ1-norm of a vector for normalization. The edge representation e_{i,j} of nodes i and j is then fed into several linear layers for the prediction of the topology transformation,\nŷ_{i,j} = softmax(linear(e_{i,j})), ∀(i, j) ∈ S0 ∪ S1, (14)\nwhere softmax(·) is an activation function.
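A minimal PyTorch sketch of this transformation decoder (Eqs. 12-14); the single linear layer, the pair-index tensor layout, and the use of logits with a cross-entropy loss (which applies the softmax of Eq. 14 internally) are illustrative assumptions:

```python
import torch
import torch.nn as nn

class TransformationDecoder(nn.Module):
    def __init__(self, feat_dim, num_types=4):
        super().__init__()
        self.classifier = nn.Linear(feat_dim, num_types)

    def forward(self, H, H_t, pairs):
        # H, H_t: (N, F) node features before/after the transformation;
        # pairs: (M', 2) long tensor of sampled node pairs from S0 and S1.
        dH = H_t - H                                          # Eq. (12)
        d = dH[pairs[:, 0]] - dH[pairs[:, 1]]
        e = torch.exp(-d * d)                                 # Hadamard self-product, Eq. (13)
        e = e / e.sum(dim=1, keepdim=True).clamp(min=1e-12)   # l1 normalization
        return self.classifier(e)                             # per-pair logits over 4 types

# usage: loss = nn.CrossEntropyLoss()(decoder(H, H_t, pairs), labels)   # Eq. (15)
```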
According to Eq. (5), the entire auto-encoder network is trained by minimizing the cross entropy\nL = −E_{(i,j)∈S0∪S1} Σ_{f=0}^{3} y_{i,j}^{(f)} log ŷ_{i,j}^{(f)}, (15)\nwhere f denotes the transformation type (f ∈ {0, 1, 2, 3}), and y is the ground-truth binary indicator (0 or 1) for each transformation parameter type." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 NODE CLASSIFICATION", "text": "Datasets. We adopt three citation networks to evaluate our model: Cora, Citeseer, and Pubmed (Sen et al., 2008), where nodes correspond to documents and edges represent citations. We follow the standard train/test split in (Kipf & Welling, 2017) to conduct the experiments.\nImplementation Details. In this task, the auto-encoder network is trained via the Adam optimizer, and the learning rate is set to 10^{-4}. We use the same early stopping strategy as DGI (Velickovic et al., 2019) on the observed training loss, with a patience of 20 epochs. We deploy one Simple Graph Convolution (SGC) layer (Wu et al., 2019) as our encoder, and the order of the adjacency matrix is set to 2; we study the order of the adjacency matrix in Appendix A. The LeakyReLU activation function with a negative slope of 0.1 is employed after the SGC layer. Similar to DGI (Velickovic et al., 2019), we set the number of output channels to F = 512 for the Cora and Citeseer datasets, and to 256 for the Pubmed dataset due to memory limitations. After the encoder, we use one linear layer to classify the transformation types. We set the edge perturbation rate in Eq. (9) to r = {0.7, 0.4, 0.7} for Cora, Citeseer, and Pubmed, respectively. The analysis of the edge perturbation rate is presented in Appendix B.\nDuring the training procedure of the classifier, the SGC layer in the encoder is used to extract graph feature representations with the weights frozen. After the SGC layer, we apply one linear layer to map the features to classification scores.\nExperimental Results. We compare the proposed method with five unsupervised methods: one node embedding method, DeepWalk; two graph auto-encoders, GAE and VGAE (Kipf & Welling, 2016); and two contrastive learning methods, DGI (Velickovic et al., 2019) and GMI (Peng et al., 2020). Additionally, we report the results of Raw Features and DeepWalk+Features (Perozzi et al., 2014) under the same settings. For a fair comparison, the results of all other unsupervised methods are reproduced using the same encoder architecture as the TopoTER, except for DeepWalk and Raw Features. We report the mean classification accuracy (with standard deviation) on the test nodes for all methods after 50 runs of training. As reported in Tab. 1, the TopoTER outperforms all other competing unsupervised methods on all three datasets. Further, the proposed unsupervised method also achieves performance comparable to semi-supervised results. This significantly closes the gap between unsupervised approaches and semi-supervised methods.\nMoreover, we compare the proposed TopoTER with the two contrastive learning methods DGI and GMI in terms of model complexity, as reported in Tab. 2. The number of parameters in our model is smaller than that of DGI, and even less than half of that of GMI, which further shows that the TopoTER model is lightweight." }, { "heading": "4.2 GRAPH CLASSIFICATION", "text": "Datasets. We conduct graph classification experiments on six well-known graph benchmark datasets (Yanardag & Vishwanathan, 2015): MUTAG, PTC, REDDIT-BINARY, REDDIT-MULTI-5K, IMDB-BINARY, and IMDB-MULTI.\nImplementation Details.
In this task, the entire network is trained via the Adam optimizer with a batch size of 64, and the learning rate is set to 10^{-3}. For the encoder architecture, we follow the same encoder settings as in the released code of InfoGraph (Sun et al., 2020a), i.e., three Graph Isomorphism Network (GIN) layers (Xu et al., 2019b) with batch normalization. We also use one linear layer to classify the transformation types. We set the sampling rate to r = 0.5 for all datasets.\nDuring the evaluation stage, the entire encoder is frozen to extract node-level feature representations, which go through a global add-pooling layer to acquire global features. We then use LIBSVM to map these global features to classification scores. We adopt the same procedure as previous works (Sun et al., 2020a) to make a fair comparison, use 10-fold cross-validation accuracy to report the classification performance, and repeat the experiments five times.\nExperimental Results. We take six graph kernel approaches for comparison: Random Walk (RW) (Gärtner et al., 2003), Shortest Path Kernel (SP) (Borgwardt & Kriegel, 2005), Graphlet Kernel (GK) (Shervashidze et al., 2009), Weisfeiler-Lehman Sub-tree Kernel (WL) (Shervashidze et al., 2011), Deep Graph Kernels (DGK) (Yanardag & Vishwanathan, 2015), and Multi-Scale Laplacian Kernel (MLG) (Kondor & Pan, 2016). Aside from graph kernel methods, we also compare with three unsupervised graph-level representation learning methods, node2vec (Grover & Leskovec, 2016), sub2vec (Adhikari et al., 2018), and graph2vec (Narayanan et al., 2017), and one contrastive learning method, InfoGraph (Sun et al., 2020a). The experimental results of unsupervised graph classification are presented in Tab. 3. The proposed TopoTER outperforms all unsupervised baseline methods on the first five datasets, and achieves comparable results on the remaining dataset. Also, the proposed approach at times reaches the performance of supervised methods, thus validating the effectiveness of the TopoTER model." }, { "heading": "5 CONCLUSION", "text": "We propose Topology Transformation Equivariant Representation (TopoTER) learning for unsupervised representations of graph data. By maximizing the mutual information between topology transformations and feature representations before and after transformations, the TopoTER enforces the encoder to learn intrinsic graph feature representations that contain sufficient information about structures under the applied topology transformations. We apply the TopoTER model to node classification and graph classification tasks, and results demonstrate that the TopoTER outperforms state-of-the-art unsupervised approaches and at times reaches the performance of supervised methods." }, { "heading": "A EXPERIMENTS ON DIFFERENT ORDERS OF THE ADJACENCY MATRIX", "text": "As presented in Sec. 3.2, we perturb the 1-hop neighborhoods via the proposed topology transformations, leading to possibly significant changes in the graph topology. This increases the difficulty of predicting the topology transformations when using a one-layer GCN (Kipf & Welling, 2017), which aggregates only 1-hop neighborhood information. Therefore, we employ one Simple Graph Convolution (SGC) layer (Wu et al., 2019) with order k as our encoder E(·), so that the output feature representations aggregate multi-hop neighborhood information.
Formally, the SGC layer is defined as\nH = E(X, A) = (D^{-1/2}(A + I)D^{-1/2})^k X W, (16)\nwhere D is the degree matrix of A + I, W ∈ R^{C×F} is a learnable parameter matrix, and k is the order of the normalized adjacency matrix.\nTo study the influence of different orders of the adjacency matrix, we adopt five orders, from 1 to 5, to train five models on the node classification task. Fig. 3 presents the node classification accuracy under different orders of the adjacency matrix for the TopoTER and DGI, respectively. As we can see, the proposed TopoTER achieves the best classification performance at k = {4, 2, 3} on the three datasets, respectively. When k = 1, our model still achieves reasonable results, although it is difficult to predict the topology transformations from 1-hop neighborhood information alone; when k > 1, our proposed TopoTER outperforms DGI by a large margin on the Cora and Pubmed datasets, and achieves results comparable to DGI on the Citeseer dataset. This is because DGI adopts feature shuffling to generate negative samples, which is insufficient for learning contrastive feature representations when aggregating multi-hop neighborhood information, while the TopoTER takes advantage of multi-hop neighborhood information to predict the topology transformations, leading to improved performance." }, { "heading": "B EXPERIMENTS ON DIFFERENT EDGE PERTURBATION RATES", "text": "Further, we evaluate the influence of the edge perturbation rate in Eq. (9) on the node classification task. We choose 11 edge perturbation rates from 0.0 to 1.0, at an interval of 0.1, to train the proposed TopoTER. We use one SGC layer as our encoder E(·), where the order of the adjacency matrix is set to 1. As presented in Fig. 4, the blue solid line with error bars shows the classification accuracy of our TopoTER under different edge perturbation rates. We also provide the classification accuracy on feature representations of graphs from a randomly initialized encoder E(·), denoted as Random Init., which serves as the lower bound of the performance.\nAs we can see, the classification performance is best when the graph is perturbed at a reasonable edge perturbation rate, e.g., r = {0.6, 0.5, 0.6} for the Cora, Citeseer, and Pubmed datasets, respectively. When the edge perturbation rate is r = 0.0, the unsupervised training task of the TopoTER degenerates to link prediction, which cannot take advantage of the proposed method of predicting the topology transformations; when the edge perturbation rate is r = 1.0, our TopoTER still achieves reasonable classification results, which shows the stability of our model under high edge perturbation rates. At the same time, we observe that the proposed TopoTER outperforms Random Init. by a large margin, which validates the effectiveness of the proposed unsupervised training strategy." } ]
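For reference, a minimal sketch of the order-k SGC propagation of Eq. (16) used as the encoder above; dense tensors and the function name are assumptions for exposition:

```python
import torch

def sgc_encode(X, A, W, k=2):
    # Eq. (16): H = (D^{-1/2} (A + I) D^{-1/2})^k X W, with D the degree matrix of A + I.
    A_hat = A + torch.eye(A.shape[0], device=A.device)
    d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)
    S = d_inv_sqrt.unsqueeze(1) * A_hat * d_inv_sqrt.unsqueeze(0)  # normalized adjacency
    H = X @ W                   # feature transform; propagation commutes with W
    for _ in range(k):
        H = S @ H               # aggregate one further hop per iteration
    return H
```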
2020
null
SP:ce42fb1c2d8241aeb60250b7a0229411a2dcfa81
[ "This paper presents a framework for performing both differentiable physics simulations and differentiable rendering. This fully differentiable simulation and rendering pipeline is then employed to perform system identification tasks, directly from video frames, being able to match or outperform both visual-based and state-based baselines. Moreover, the potential of this framework to be applied for visuomotor control is also demonstrated." ]
We consider the problem of estimating an object’s physical properties such as mass, friction, and elasticity directly from video sequences. Such a system identification problem is fundamentally ill-posed due to the loss of information during image formation. Current solutions require precise 3D labels which are labor-intensive to gather, and infeasible to create for many systems such as deformable solids or cloth. We present ∇Sim, a framework that overcomes the dependence on 3D supervision by leveraging differentiable multiphysics simulation and differentiable rendering to jointly model the evolution of scene dynamics and image formation. This novel combination enables backpropagation from pixels in a video sequence through to the underlying physical attributes that generated them. Moreover, our unified computation graph – spanning from the dynamics and through the rendering process – enables learning in challenging visuomotor control tasks, without relying on state-based (3D) supervision, while obtaining performance competitive to or better than techniques that rely on precise 3D labels.
[ { "affiliations": [], "name": "Krishna Murthy Jatavallabhula" }, { "affiliations": [], "name": "Miles Macklin" }, { "affiliations": [], "name": "Florian Golemo" }, { "affiliations": [], "name": "Vikram Voleti" }, { "affiliations": [], "name": "Linda Petrini" }, { "affiliations": [], "name": "Martin Weiss" }, { "affiliations": [], "name": "Breandan Considine" }, { "affiliations": [], "name": "Jérôme Parent-Lévesque" }, { "affiliations": [], "name": "Kevin Xie" }, { "affiliations": [], "name": "Kenny Erleben" }, { "affiliations": [], "name": "Liam Paull" }, { "affiliations": [], "name": "Florian Shkurti" }, { "affiliations": [], "name": "Derek Nowrouzezahrai" }, { "affiliations": [], "name": "Sanja Fidler" } ]
[ { "authors": [ "Vanhoucke", "Vijay Vasudevan", "Fernanda Viégas", "Oriol Vinyals", "Pete Warden", "Martin Wattenberg", "Martin Wicke", "Yuan Yu", "Xiaoqiang Zheng" ], "title": "TensorFlow: Large-scale machine learning on heterogeneous systems, 2015", "venue": "URL http://tensorflow.org/. Software available from tensorflow.org", "year": 2015 }, { "authors": [ "Pulkit Agrawal", "Ashvin Nair", "Pieter Abbeel", "Jitendra Malik", "Sergey Levine" ], "title": "Learning to poke by poking: Experiential learning of intuitive physics", "venue": "Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Hassan Abu Alhaija", "Siva Karthik Mustikovela", "Andreas Geiger", "Carsten Rother" ], "title": "Geometric image synthesis", "venue": "In Proceedings of Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Martin Asenov", "Michael Burke", "Daniel Angelov", "Todor Davchev", "Kartic Subr", "Subramanian Ramamoorthy" ], "title": "Vid2Param: Modelling of dynamics parameters from video", "venue": "IEEE Robotics and Automation Letters,", "year": 2019 }, { "authors": [ "Peter W. Battaglia", "Jessica B. Hamrick", "Joshua B. Tenenbaum" ], "title": "Simulation as an engine of physical scene understanding", "venue": "Proceedings of the National Academy of Sciences,", "year": 2013 }, { "authors": [ "Kiran S Bhat", "Steven M Seitz", "Jovan Popović", "Pradeep K Khosla" ], "title": "Computing the physical parameters of rigid-body motion from video", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 2002 }, { "authors": [ "Kiran S Bhat", "Christopher D Twigg", "Jessica K Hodgins", "Pradeep Khosla", "Zoran Popovic", "Steven M Seitz" ], "title": "Estimating cloth simulation parameters from video", "venue": "In ACM SIGGRAPH/Eurographics Symposium on Computer Animation,", "year": 2003 }, { "authors": [ "James Bradbury", "Roy Frostig", "Peter Hawkins", "Matthew James Johnson", "Chris Leary", "Dougal Maclaurin", "Skye Wanderman-Milne" ], "title": "JAX: composable transformations of Python+NumPy programs, 2018", "venue": "URL http://github.com/google/jax", "year": 2018 }, { "authors": [ "Robert Bridson", "Sebastian Marino", "Ronald Fedkiw" ], "title": "Simulation of clothing with folds and wrinkles", "venue": "In ACM SIGGRAPH 2005 Courses,", "year": 2005 }, { "authors": [ "Marcus Brubaker", "David Fleet", "Aaron Hertzmann" ], "title": "Physics-based person tracking using the anthropomorphic walker", "venue": "International Journal of Computer Vision, 87:140–155,", "year": 2010 }, { "authors": [ "Marcus A Brubaker", "Leonid Sigal", "David J Fleet" ], "title": "Estimating contact dynamics", "venue": "In Proceedings of International Conference on Computer Vision,", "year": 2009 }, { "authors": [ "Arunkumar Byravan", "Dieter Fox" ], "title": "SE3-Nets: Learning rigid body motion using deep neural networks", "venue": "IEEE International Conference on Robotics and Automation (ICRA),", "year": 2017 }, { "authors": [ "Angel X Chang", "Thomas Funkhouser", "Leonidas Guibas", "Pat Hanrahan", "Qixing Huang", "Zimo Li", "Silvio Savarese", "Manolis Savva", "Shuran Song", "Hao Su" ], "title": "ShapeNet: An information-rich 3d model repository", "venue": "arXiv preprint arXiv:1512.03012,", "year": 2015 }, { "authors": [ "Michael B. Chang", "Tomer Ullman", "Antonio Torralba", "Joshua B. 
Tenenbaum" ], "title": "A compositional object-based approach to learning physical dynamics", "venue": "International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Tian Qi Chen", "Yulia Rubanova", "Jesse Bettencourt", "David K Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Wenzheng Chen", "Jun Gao", "Huan Ling", "Edward Smith", "Jaakko Lehtinen", "Alec Jacobson", "Sanja Fidler" ], "title": "Learning to predict 3d objects with an interpolation-based differentiable renderer", "venue": "Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets", "venue": "Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Zhengdao Chen", "Jianyu Zhang", "Martin Arjovsky", "Léon Bottou" ], "title": "Symplectic recurrent neural networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Zhiqin Chen", "Hao Zhang" ], "title": "Learning implicit fields for generative shape modeling", "venue": "Proceedings of Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Erwin Coumans", "Yunfei Bai" ], "title": "PyBullet, a python module for physics simulation for games, robotics and machine learning", "venue": "http://pybullet.org, 2016–2019", "year": 2019 }, { "authors": [ "Kyle Cranmer", "Johann Brehmer", "Gilles Louppe" ], "title": "The frontier of simulation-based inference", "venue": "In National Academy of Sciences (NAS),", "year": 2020 }, { "authors": [ "Miles Cranmer", "Sam Greydanus", "Stephan Hoyer", "Peter Battaglia", "David Spergel", "Shirley Ho" ], "title": "Lagrangian neural networks", "venue": "In ICLR Workshops,", "year": 2020 }, { "authors": [ "Filipe de Avila Belbute-Peres", "Kevin Smith", "Kelsey Allen", "Josh Tenenbaum", "J. Zico Kolter" ], "title": "End-to-end differentiable physics for learning and control", "venue": "In Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jonas Degrave", "Michiel Hermans", "Joni Dambre", "Francis Wyffels" ], "title": "A differentiable physics engine for deep learning in robotics", "venue": "Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Sébastien Ehrhardt", "Aron Monszpart", "Niloy J. Mitra", "Andrea Vedaldi" ], "title": "Learning a physical long-term predictor. arXiv, 2017", "venue": null, "year": 2017 }, { "authors": [ "Sébastien Ehrhardt", "Aron Monszpart", "Niloy J. Mitra", "Andrea Vedaldi" ], "title": "Unsupervised intuitive physics from visual observations", "venue": "Asian Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Kiana Ehsani", "Shubham Tulsiani", "Saurabh Gupta", "Ali Farhadi", "Abhinav Gupta" ], "title": "Use the Force, Luke! learning to predict physical forces by simulating effects", "venue": "In Proceedings of Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Tom Erez", "Yuval Tassa", "Emanuel Todorov" ], "title": "Simulation tools for model-based robotics: Comparison of Bullet, Havok, MuJoCo, ODE, and PhysX", "venue": "In IEEE International Conference on Robotics and Automation (ICRA),", "year": 2015 }, { "authors": [ "S.M. Ali Eslami", "Danilo Jimenez Rezende", "Frederic Besse", "Fabio Viola", "Ari S. 
Morcos", "Marta Garnelo", "Avraham Ruderman", "Andrei A. Rusu", "Ivo Danihelka", "Karol Gregor", "David P. Reichert", "Lars Buesing", "Theophane Weber", "Oriol Vinyals", "Dan Rosenbaum", "Neil Rabinowitz", "Helen King", "Chloe Hillier", "Matt Botvinick", "Daan Wierstra", "Koray Kavukcuoglu", "Demis Hassabis" ], "title": "Neural scene representation and rendering. Science, 2018", "venue": null, "year": 2018 }, { "authors": [ "Katerina Fragkiadaki", "Pulkit Agrawal", "Sergey Levine", "Jitendra Malik" ], "title": "Learning visual predictive models of physics for playing billiards", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Sam Greydanus", "Misko Dzamba", "Jason Yosinski" ], "title": "Hamiltonian neural networks", "venue": "In Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Andreas Griewank", "Andrea Walther" ], "title": "Introduction to automatic differentiation", "venue": "PAMM,", "year": 2003 }, { "authors": [ "Thibault Groueix", "Matthew Fisher", "Vladimir G. Kim", "Bryan C. Russell", "Mathieu Aubry" ], "title": "Atlasnet: A papier-mâché approach to learning 3d surface generation", "venue": "In Proceedings of Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Radek Grzeszczuk", "Demetri Terzopoulos", "Geoffrey Hinton" ], "title": "Neuroanimator: Fast neural network emulation and control of physics-based models", "venue": "In Proceedings of the 25th annual conference on Computer graphics and interactive techniques,", "year": 1998 }, { "authors": [ "Vincent Le Guen", "Nicolas Thome" ], "title": "Disentangling physical dynamics from unknown factors for unsupervised video prediction", "venue": "In Proceedings of Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Abhinav Gupta", "Alexei A. Efros", "Martial Hebert" ], "title": "Blocks world revisited: Image understanding using qualitative geometry and mechanics", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 2010 }, { "authors": [ "Ernst Hairer", "Christian Lubich", "Gerhard Wanner" ], "title": "Geometric numerical integration: structurepreserving algorithms for ordinary differential equations, volume 31", "venue": "Springer Science & Business Media,", "year": 2006 }, { "authors": [ "Eric Heiden", "David Millard", "Hejia Zhang", "Gaurav S. Sukhatme" ], "title": "Interactive differentiable simulation", "venue": "In arXiv,", "year": 2019 }, { "authors": [ "Philipp Henzler", "Niloy J. Mitra", "Tobias Ritschel" ], "title": "Escaping plato’s cave using adversarial training: 3d shape from unstructured 2d image collections", "venue": "In Proceedings of International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Yuanming Hu", "Yu Fang", "Ziheng Ge", "Ziyin Qu", "Yixin Zhu", "Andre Pradhana", "Chenfanfu Jiang" ], "title": "A moving least squares material point method with displacement discontinuity and two-way rigid body coupling", "venue": "ACM Transactions on Graphics,", "year": 2018 }, { "authors": [ "Yuanming Hu", "Jiancheng Liu", "Andrew Spielberg", "Joshua B. Tenenbaum", "William T. 
Freeman", "Jiajun Wu", "Daniela Rus", "Wojciech Matusik" ], "title": "Chainqueen: A real-time differentiable physical simulator for soft robotics", "venue": "In IEEE International Conference on Robotics and Automation (ICRA),", "year": 2019 }, { "authors": [ "Yuanming Hu", "Luke Anderson", "Tzu-Mao Li", "Qi Sun", "Nathan Carr", "Jonathan Ragan-Kelley", "Frédo Durand" ], "title": "DiffTaichi: Differentiable programming for physical simulation", "venue": "International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Carlo Innamorati", "Bryan Russell", "Danny Kaufman", "Niloy Mitra" ], "title": "Neural re-simulation for generating bounces in single images", "venue": "In Proceedings of International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Michael Janner", "Sergey Levine", "William T. Freeman", "Joshua B. Tenenbaum", "Chelsea Finn", "Jiajun Wu" ], "title": "Reasoning about physical interactions with object-oriented prediction and planning", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Miguel Jaques", "Michael Burke", "Timothy M. Hospedales" ], "title": "Physics-as-inverse-graphics: Joint unsupervised learning of objects and physics from video", "venue": "International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Krishna Murthy Jatavallabhula", "Edward Smith", "Jean-Francois Lafleche", "Clement Fuji Tsang", "Artem Rozantsev", "Wenzheng Chen", "Tommy Xiang", "Rev Lebaredian", "Sanja Fidler" ], "title": "Kaolin: A pytorch library for accelerating 3d deep learning research", "venue": "In arXiv,", "year": 2019 }, { "authors": [ "Hiroharu Kato", "Yoshitaka Ushiku", "Tatsuya Harada" ], "title": "Neural 3d mesh renderer", "venue": "In Proceedings of Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "David Kirk" ], "title": "Nvidia cuda software and gpu parallel computing architecture", "venue": "In ISMM,", "year": 2007 }, { "authors": [ "Krzysztof Kozlowski" ], "title": "Modelling and Identification in Robotics", "venue": "Advances in Industrial Control. Springer, London,", "year": 1998 }, { "authors": [ "T.D. Kulkarni", "P. Kohli", "J.B. Tenenbaum", "V. Mansinghka" ], "title": "Picture: A probabilistic programming language for scene perception", "venue": "In Proceedings of Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Tzu-Mao Li", "Miika Aittala", "Frédo Durand", "Jaakko Lehtinen" ], "title": "Differentiable monte carlo ray tracing through edge sampling", "venue": "SIGGRAPH Asia,", "year": 2018 }, { "authors": [ "Yunzhu Li", "Jiajun Wu", "Russ Tedrake", "Joshua B Tenenbaum", "Antonio Torralba" ], "title": "Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yunzhu Li", "Toru Lin", "Kexin Yi", "Daniel Bear", "Daniel L.K. Yamins", "Jiajun Wu", "Joshua B. 
Tenenbaum", "Antonio Torralba" ], "title": "Visual grounding of learned physical models", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Junbang Liang", "Ming Lin", "Vladlen Koltun" ], "title": "Differentiable cloth simulation for inverse problems", "venue": "In Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yiyi Liao", "Katja Schwarz", "Lars Mescheder", "Andreas Geiger" ], "title": "Towards unsupervised learning of generative models for 3d controllable image synthesis", "venue": "In Proceedings of Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "C Karen Liu", "Aaron Hertzmann", "Zoran Popović" ], "title": "Learning physics-based motion style with nonlinear inverse optimization", "venue": "ACM Transactions on Graphics (TOG),", "year": 2005 }, { "authors": [ "Shichen Liu", "Tianye Li", "Weikai Chen", "Hao Li" ], "title": "Soft rasterizer: A differentiable renderer for image-based 3d reasoning", "venue": "Proceedings of International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Matthew M. Loper", "Michael J. Black" ], "title": "Opendr: An approximate differentiable renderer", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 2014 }, { "authors": [ "Miles Macklin", "Matthias Müller", "Nuttapong Chentanez", "Tae-Yong Kim" ], "title": "Unified particle physics for real-time applications", "venue": "ACM Transactions on Graphics (TOG),", "year": 2014 }, { "authors": [ "Jeffrey Mahler", "Jacky Liang", "Sherdil Niyaz", "Michael Laskey", "Richard Doan", "Xinyu Liu", "Juan Aparicio Ojea", "Ken Goldberg" ], "title": "Dex-net 2.0: Deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics", "venue": "In Robotics Science and Systems,", "year": 2017 }, { "authors": [ "Richard Mann", "Allan Jepson", "Jeffrey Mark Siskind" ], "title": "The computational perception of scene dynamics", "venue": "Computer Vision and Image Understanding,", "year": 1997 }, { "authors": [ "C Charles" ], "title": "Margossian. A review of automatic differentiation and its efficient implementation", "venue": "Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery,", "year": 2019 }, { "authors": [ "Viraj Mehta", "Ian Char", "Willie Neiswanger", "Youngseog Chung", "Andrew Oakleigh Nelson", "Mark D Boyer", "Egemen Kolemen", "Jeff Schneider" ], "title": "Neural dynamical systems: Balancing structure and flexibility in physical prediction", "venue": "ICLR Workshops,", "year": 2020 }, { "authors": [ "Lars Mescheder", "Michael Oechsle", "Michael Niemeyer", "Sebastian Nowozin", "Andreas Geiger" ], "title": "Occupancy networks: Learning 3d reconstruction in function space", "venue": "In Proceedings of Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Mateusz Michalkiewicz", "Jhony K. Pontes", "Dominic Jack", "Mahsa Baktashmotlagh", "Anders Eriksson" ], "title": "Implicit surface representations as layers in neural networks", "venue": "In Proceedings of International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Ben Mildenhall", "Pratul P. Srinivasan", "Rodrigo Ortiz-Cayon", "Nima Khademi Kalantari", "Ravi Ramamoorthi", "Ren Ng", "Abhishek Kar" ], "title": "Local light field fusion: Practical view synthesis with prescriptive sampling guidelines", "venue": "ACM Transactions on Graphics (TOG),", "year": 2019 }, { "authors": [ "Ben Mildenhall", "Pratul P. Srinivasan", "Matthew Tancik", "Jonathan T. 
Barron", "Ravi Ramamoorthi", "Ren Ng" ], "title": "NeRF: Representing scenes as neural radiance fields for view synthesis", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Roozbeh Mottaghi", "Hessam Bagherinezhad", "Mohammad Rastegari", "Ali Farhadi" ], "title": "Newtonian image understanding: Unfolding the dynamics of objects in static images", "venue": "Proceedings of Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Roozbeh Mottaghi", "Mohammad Rastegari", "Abhinav Gupta", "Ali Farhadi" ], "title": "what happens if...\" learning to predict the effect of forces in images", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "D.J. Murray-Smith" ], "title": "The inverse simulation approach: a focused review of methods and applications", "venue": "Mathematics and Computers in Simulation,", "year": 2000 }, { "authors": [ "Thu Nguyen-Phuoc", "Chuan Li", "Stephen Balaban", "Yong-Liang Yang" ], "title": "Rendernet: A deep convolutional network for differentiable rendering from 3d shapes", "venue": "Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Michael Niemeyer", "Lars Mescheder", "Michael Oechsle", "Andreas Geiger" ], "title": "Differentiable volumetric rendering: Learning implicit 3d representations without 3d supervision", "venue": "In Proceedings of Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Merlin Nimier-David", "Delio Vicini", "Tizian Zeltner", "Wenzel Jakob" ], "title": "Mitsuba 2: A retargetable forward and inverse renderer", "venue": "Transactions on Graphics (Proceedings of SIGGRAPH Asia),", "year": 2019 }, { "authors": [ "Jeong Joon Park", "Peter Florence", "Julian Straub", "Richard A. 
Newcombe", "Steven Lovegrove" ], "title": "Deepsdf: Learning continuous signed distance functions for shape representation", "venue": "In Proceedings of Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Despoina Paschalidou", "Ali Osman Ulusoy", "Carolin Schmitt", "Luc van Gool", "Andreas Geiger" ], "title": "Raynet: Learning volumetric 3d reconstruction with ray potentials", "venue": "In Proceedings of Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga", "Alban Desmaison", "Andreas Kopf", "Edward Yang", "Zachary DeVito", "Martin Raison", "Alykhan Tejani", "Sasank Chilamkurthy", "Benoit Steiner", "Lu Fang", "Junjie Bai", "Soumith Chintala" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Matt Pharr", "Wenzel Jakob", "Greg Humphreys" ], "title": "Physically Based Rendering: From Theory to Implementation", "venue": "ISBN 0128006455", "year": 2016 }, { "authors": [ "Yi-Ling Qiao", "Junbang Liang", "Vladlen Koltun", "Ming C Lin" ], "title": "Scalable differentiable physics for learning and control", "venue": "International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Fabio Ramos", "Rafael Carvalhaes Possas", "Dieter Fox" ], "title": "Bayessim: adaptive domain randomization via probabilistic inference for robotics simulators", "venue": "Robotics Science and Systems,", "year": 2019 }, { "authors": [ "Nikhila Ravi", "Jeremy Reizenstein", "David Novotny", "Taylor Gordon", "Wan-Yen Lo", "Justin Johnson", "Georgia Gkioxari" ], "title": "Accelerating 3d deep learning with pytorch3d", "venue": "arXiv preprint arXiv:2007.08501,", "year": 2020 }, { "authors": [ "Danilo Jimenez Rezende", "SM Ali Eslami", "Shakir Mohamed", "Peter Battaglia", "Max Jaderberg", "Nicolas Heess" ], "title": "Unsupervised learning of 3d structure from images", "venue": "In Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Mathieu Salzmann", "Raquel Urtasun" ], "title": "Physically-based motion models for 3d tracking: A convex formulation", "venue": "In Proceedings of International Conference on Computer Vision,", "year": 2011 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Victor Bapst", "Kyle Cranmer", "Peter Battaglia" ], "title": "Hamiltonian graph networks with ode integrators", "venue": "In arXiv,", "year": 2019 }, { "authors": [ "Connor Schenck", "Dieter Fox" ], "title": "Spnets: Differentiable fluid dynamics for deep neural networks", "venue": "In International Conference on Robot Learning,", "year": 2018 }, { "authors": [ "Eftychios Sifakis", "Jernej Barbic" ], "title": "Fem simulation of 3d deformable solids: a practitioner’s guide to theory, discretization and model reduction", "venue": "In ACM SIGGRAPH 2012 courses,", "year": 2012 }, { "authors": [ "Breannan Smith", "Fernando De Goes", "Theodore Kim" ], "title": "Stable neo-hookean flesh simulation", "venue": "ACM Transactions on Graphics,", "year": 2018 }, { "authors": [ "Edward Smith", "Scott Fujimoto", "David Meger" ], "title": "Multi-view silhouette and depth decomposition for high resolution 3d object representation", "venue": "In Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Edward J. 
Smith", "Scott Fujimoto", "Adriana Romero", "David Meger" ], "title": "Geometrics: Exploiting geometric structure for graph-encoded", "venue": "objects. International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Changkyu Song", "Abdeslam Boularias" ], "title": "Identifying mechanical models through differentiable simulations", "venue": "In Learning for Dynamical Systems and Control (L4DC),", "year": 2020 }, { "authors": [ "Changkyu Song", "Abdeslam Boularias" ], "title": "Learning to slide unknown objects with differentiable physics simulations", "venue": "In Robotics Science and Systems,", "year": 2020 }, { "authors": [ "Jos Stam" ], "title": "Stable fluids", "venue": "Proceedings of the 26th annual conference on Computer graphics and interactive techniques,", "year": 1999 }, { "authors": [ "Trevor Standley", "Ozan Sener", "Dawn Chen", "Silvio Savarese" ], "title": "image2mass: Estimating the mass of an object from its image", "venue": "In International Conference on Robot Learning,", "year": 2017 }, { "authors": [ "Giovanni Sutanto", "Austin S. Wang", "Yixin Lin", "Mustafa Mukadam", "Gaurav S. Sukhatme", "Akshara Rai", "Franziska Meier" ], "title": "Encoding physical constraints in differentiable newton-euler algorithm. In Learning for Dynamical systems and Control (L4DC), 2020", "venue": null, "year": 2020 }, { "authors": [ "Mingxing Tan", "Quoc Le" ], "title": "EfficientNet: Rethinking model scaling for convolutional neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Emanuel Todorov" ], "title": "Convex and analytically-invertible dynamics with contacts and constraints: Theory and implementation in mujoco", "venue": "In IEEE International Conference on Robotics and Automation (ICRA),", "year": 2014 }, { "authors": [ "Peter Toth", "Danilo Jimenez Rezende", "Andrew Jaegle", "Sébastien Racanière", "Aleksandar Botev", "Irina Higgins" ], "title": "Hamiltonian generative networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Marc Toussaint", "Kelsey Allen", "Kevin Smith", "Joshua Tenenbaum" ], "title": "Differentiable physics and stable modes for tool-use and manipulation planning", "venue": "In Robotics Science and Systems,", "year": 2018 }, { "authors": [ "Bart van Merriënboer", "Alexander B Wiltschko", "Dan Moldovan" ], "title": "Tangent: automatic differentiation using source code transformation in python", "venue": "In Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Bin Wang", "Paul G. Kry", "Yuanmin Deng", "Uri M. Ascher", "Hui Huang", "Baoquan Chen" ], "title": "Neural material: Learning elastic constitutive material and damping models from sparse data. 
arXiv, 2018a", "venue": null, "year": 2018 }, { "authors": [ "Kun Wang", "Mridul Aanjaneya", "Kostas Bekris" ], "title": "A first principles approach for data-efficient system identification of spring-rod systems via differentiable physics engines", "venue": "In arXiv,", "year": 2020 }, { "authors": [ "Nanyang Wang", "Yinda Zhang", "Zhuwen Li", "Yanwei Fu", "Wei Liu", "Yu-Gang Jiang" ], "title": "Pixel2mesh: Generating 3d mesh models from single rgb images", "venue": "In Proceedings of the European Conference on Computer Vision,", "year": 2018 }, { "authors": [ "Nicholas Watters", "Daniel Zoran", "Theophane Weber", "Peter Battaglia", "Razvan Pascanu", "Andrea Tacchetti" ], "title": "Visual interaction networks: Learning a physics simulator from video", "venue": "In Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "P.M. Wensing", "S. Kim", "J.E. Slotine" ], "title": "Linear matrix inequalities for physically consistent inertial parameter identification: A statistical perspective on the mass distribution", "venue": "IEEE Robotics and Automation Letters,", "year": 2018 }, { "authors": [ "Ronald J. Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine Learning,", "year": 1992 }, { "authors": [ "Jiajun Wu", "Ilker Yildirim", "Joseph J Lim", "William T Freeman", "Joshua B Tenenbaum" ], "title": "Galileo: Perceiving physical object properties by integrating a physics engine with deep learning", "venue": "In Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Jiajun Wu", "Joseph J Lim", "Hongyi Zhang", "Joshua B Tenenbaum", "William T Freeman" ], "title": "Physics 101: Learning physical object properties from unlabeled videos", "venue": "In British Machine Vision Conference,", "year": 2016 }, { "authors": [ "Jiajun Wu", "Erika Lu", "Pushmeet Kohli", "William T Freeman", "Joshua B Tenenbaum" ], "title": "Learning to see physics via visual de-animation", "venue": "In Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jiajun Wu", "Joshua B Tenenbaum", "Pushmeet Kohli" ], "title": "Neural scene de-rendering", "venue": "In Proceedings of Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Qiangeng Xu", "Weiyue Wang", "Duygu Ceylan", "Radomir Mech", "Ulrich Neumann" ], "title": "Disn: Deep implicit surface network for high-quality single-view 3d reconstruction", "venue": "In Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Zhenjia Xu", "Jiajun Wu", "Andy Zeng", "Joshua B. Tenenbaum", "Shuran Song" ], "title": "Densephysnet: Learning dense physical object representations via multi-step dynamic interactions", "venue": "In Robotics Science and Systems,", "year": 2019 }, { "authors": [ "Ilker Yildirim", "Tejas Kulkarni", "Winrich Freiwald", "Joshua Tenenbaum" ], "title": "Efficient analysis-bysynthesis in vision: A computational framework, behavioral tests, and comparison with neural representations", "venue": "In CogSci,", "year": 2015 }, { "authors": [ "Ilker Yildirim", "Michael Janner", "Mario Belledonne", "Christian Wallraven", "W.A. Freiwald", "Joshua B. Tenenbaum" ], "title": "Causal and compositional generative models in online perception", "venue": "In CogSci,", "year": 2017 }, { "authors": [ "Ilker Yildirim", "Mario Belledonne", "Winrich Freiwald", "Josh Tenenbaum" ], "title": "Efficient inverse graphics in biological face processing", "venue": "Science Advances,", "year": 2020 }, { "authors": [ "L. Yu", "N. 
Duncan", "S. Yeung" ], "title": "Fill and transfer: A simple physics-based approach for containability reasoning", "venue": "In Proceedings of International Conference on Computer Vision,", "year": 2015 }, { "authors": [ "Yaofeng Desmond Zhong", "Biswadip Dey", "Amit Chakraborty" ], "title": "Symplectic ode-net: Learning hamiltonian dynamics with control", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Qian-Yi Zhou", "Jaesik Park", "Vladlen Koltun" ], "title": "Open3D: A modern library for 3D data processing", "venue": null, "year": 2018 }, { "authors": [ "Paszke" ], "title": "2018), program transformation approaches such as DiffTaichi, and Google Tangent Hu et al", "venue": null, "year": 2018 }, { "authors": [], "title": "We also include a lift/drag model that approximates the effect of the surrounding air on the surface of mesh. Rigid Bodies: We represent the state of a 3D rigid body as qb = [x, r] consisting of a position x ∈ R3, and a quaternion r ∈ R4. The generalized velocity of a body is ub = [v, ω] and the dynamics", "venue": null, "year": 2005 }, { "authors": [ "Hairer" ], "title": "For this reason, many real-time physics engines employ a semiimplicit (symplectic) Euler integration scheme Erez et al. (2015), due to its ease of implementation and numerical stability in most meaningful scenarios (conserves energy for systems where the Hamiltonian is time-invariant)", "venue": null, "year": 2006 }, { "authors": [ "Liu" ], "title": "For solids, we use a tetrahedral FEM model as illustrated in Figure 8b. Both these models include a per-element activation parameter α, which for thin-shells, allows us to control the relative dihedral angle between two connected faces", "venue": null, "year": 2019 }, { "authors": [ "de Avila Belbute-Peres" ], "title": "We also implement simple and double pendula, as toy examples of well-behaved and chaotic systems respectively, and estimate the parameters of the system (i.e., the length(s) of the rod(s) and initial angular displacement(s)), by comparing the rendered videos (assuming uniformly random initial guesses) with the true videos. As pendula have extensively been studied in the context of differentiable physics simulation", "venue": "Degrave et al", "year": 2020 }, { "authors": [ "Rezende" ], "title": "vertices, for faster collision detection times. technique to acquire an approximate gradient through the otherwise non-differentiable environment. The implementation was inspired by Wu et al", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Accurately predicting the dynamics and physical characteristics of objects from image sequences is a long-standing challenge in computer vision. This end-to-end reasoning task requires a fundamental understanding of both the underlying scene dynamics and the imaging process. Imagine watching a short video of a basketball bouncing off the ground and ask: “Can we infer the mass and elasticity of the ball, predict its trajectory, and make informed decisions, e.g., how to pass and shoot?” These seemingly simple questions are extremely challenging to answer even for modern computer vision models. The underlying physical attributes of objects and the system dynamics need to be modeled and estimated, all while accounting for the loss of information during 3D to 2D image formation.\nDepending on the assumptions on the scene structre and dynamics, three types of solutions exist: black, grey, or white box. Black box methods (Watters et al., 2017; Xu et al., 2019b; Janner et al., 2019; Chang et al., 2016) model the state of a dynamical system (such as the basketball’s trajectory in time) as a learned embedding of its states or observations. These methods require few prior assumptions about the system itself, but lack interpretability due to entangled variational factors (Chen et al., 2016) or due to the ambiguities in unsupervised learning (Greydanus et al., 2019; Cranmer et al., 2020b). Recently, grey box methods (Mehta et al., 2020) leveraged partial knowledge about the system dynamics to improve performance. In contrast, white box methods (Degrave et al., 2016; Liang et al., 2019; Hu et al., 2020; Qiao et al., 2020) impose prior knowledge by employing explicit dynamics models, reducing the space of learnable parameters and improving system interpretability.\n∗Equal contribution\nMost notably in our context, all of these approaches require precise 3D labels – which are laborintensive to gather, and infeasible to generate for many systems such as deformable solids or cloth.\nWe eliminate the dependence of white box dynamics methods on 3D supervision by coupling explicit (and differentiable) models of scene dynamics with image formation (rendering)1.\nExplicitly modeling the end-to-end dynamics and image formation underlying video observations is challenging, even with access to the full system state. This problem has been treated in the vision, graphics, and physics communities (Pharr et al., 2016; Macklin et al., 2014), leading to the development of robust forward simulation models and algorithms. These simulators are not readily usable for solving inverse problems, due in part to their non-differentiability. As such, applications of black-box forward processes often require surrogate gradient estimators such as finite differences or REINFORCE (Williams, 1992) to enable any learning. Likelihood-free inference for black-box forward simulators (Ramos et al., 2019; Cranmer et al., 2020a; Kulkarni et al., 2015; Yildirim et al., 2017; 2015; 2020; Wu et al., 2017b) has led to some improvements here, but remains limited in terms of data efficiency and scalability to high dimensional parameter spaces. 
Recent progress in differentiable simulation further improves the learning dynamics; however, we still lack a method for end-to-end differentiation through the entire simulation process (i.e., from video pixels to physical attributes), a prerequisite for effective learning from video frames alone.
We present ∇Sim, a versatile end-to-end differentiable simulator that adopts a holistic, unified view of differentiable dynamics and image formation (cf. Figs. 1 and 2). Existing differentiable physics engines only model time-varying dynamics and require supervision in state space (usually 3D tracking). We additionally model a differentiable image formation process, thus only requiring target information specified in image space. This enables us to backpropagate (Griewank & Walther, 2003) training signals from video pixels all the way to the underlying physical and dynamical attributes of a scene.
Our main contributions are:
• ∇Sim, a differentiable simulator that demonstrates the ability to backpropagate from video pixels to the underlying physical attributes (cf. Fig. 2).
• We demonstrate recovering many physical properties exclusively from video observations, including friction, elasticity, deformable material parameters, and visuomotor controls (sans 3D supervision).
• A PyTorch framework facilitating interoperability with existing machine learning modules.
We evaluate ∇Sim’s effectiveness on parameter identification tasks for rigid, deformable, and thin-shell bodies, and demonstrate performance that is competitive, or in some cases superior, to current physics-only differentiable simulators. Additionally, we demonstrate the effectiveness of the gradients provided by ∇Sim on challenging visuomotor control tasks involving deformable solids and cloth." }, { "heading": "2 RELATED WORK", "text": "Differentiable physics simulators have seen significant attention and activity, with efforts centered around embedding physics structure into autodifferentiation frameworks. This has enabled differentiation through contact and friction models (Toussaint et al., 2018; de Avila Belbute-Peres et al., 2018; Song & Boularias, 2020b;a; Degrave et al., 2016; Wu et al., 2017a; Research, 2020 (accessed May 15, 2020)), latent state models (Guen & Thome, 2020; Schenck & Fox, 2018; Jaques et al., 2020; Heiden et al., 2019), volumetric soft bodies (Hu et al., 2019; 2018; Liang et al., 2019; Hu et al., 2020), as well as particle dynamics (Schenck & Fox, 2018; Li et al., 2019; 2020; Hu et al., 2020). In contrast, ∇Sim addresses a superset of simulation scenarios, by coupling the physics simulator with a differentiable rendering pipeline.
It also supports tetrahedral FEM-based hyperelasticity models to simulate deformable solids and thin-shells.
Recent work on physics-based deep learning injects structure in the latent space of the dynamics using Lagrangian and Hamiltonian operators (Greydanus et al., 2019; Chen et al., 2020; Toth et al., 2020; Sanchez-Gonzalez et al., 2019; Cranmer et al., 2020b; Zhong et al., 2020), by explicitly conserving physical quantities, or with ground truth supervision (Asenov et al., 2019; Wu et al., 2016; Xu et al., 2019b).
Sensor readings have been used to predict the effects of forces applied to an object in models of learned (Fragkiadaki et al., 2016; Byravan & Fox, 2017) and intuitive physics (Ehsani et al., 2020; Mottaghi et al., 2015; 2016; Gupta et al., 2010; Ehrhardt et al., 2018; Yu et al., 2015; Battaglia et al., 2013; Mann et al., 1997; Innamorati et al., 2019; Standley et al., 2017). This also includes approaches that learn to model multi-object interactions (Watters et al., 2017; Xu et al., 2019b; Janner et al., 2019; Ehrhardt et al., 2017; Chang et al., 2016; Agrawal et al., 2016). In many cases, intuitive physics approaches are limited in their prediction horizon and treatment of complex scenes, as they do not model the 3D geometry or the object properties with sufficient accuracy. System identification based on parameterized physics models (Salzmann & Urtasun, 2011; Brubaker et al., 2010; Kozlowski, 1998; Wensing et al., 2018; Brubaker et al., 2009; Bhat et al., 2003; 2002; Liu et al., 2005; Grzeszczuk et al., 1998; Sutanto et al., 2020; Wang et al., 2020; 2018a) and inverse simulation (Murray-Smith, 2000) are closely related areas.
There is a rich literature on neural image synthesis, but we focus on methods that model the 3D scene structure, including voxels (Henzler et al., 2019; Paschalidou et al., 2019; Smith et al., 2018b; Nguyen-Phuoc et al., 2018), meshes (Smith et al., 2020; Wang et al., 2018b; Groueix et al., 2018; Alhaija et al., 2018), and implicit shapes (Xu et al., 2019a; Chen & Zhang, 2019; Michalkiewicz et al., 2019; Niemeyer et al., 2020; Park et al., 2019; Mescheder et al., 2019). Generative models condition the rendering process on samples of the 3D geometry (Liao et al., 2019). Latent factors determining 3D structure have also been learned in generative models (Chen et al., 2016; Eslami et al., 2018). Additionally, implicit neural representations that leverage differentiable rendering have been proposed (Mildenhall et al., 2020; 2019) for realistic view synthesis. Many of these representations have become easy to manipulate through software frameworks like Kaolin (Jatavallabhula et al., 2019), Open3D (Zhou et al., 2018), and PyTorch3D (Ravi et al., 2020).
Differentiable rendering allows for image gradients to be computed w.r.t. the scene geometry, camera, and lighting inputs. Variants based on the rasterization paradigm (NMR (Kato et al., 2018), OpenDR (Loper & Black, 2014), SoftRas (Liu et al., 2019)) blur the edges of scene triangles prior to image projection to remove discontinuities in the rendering signal. DIB-R (Chen et al., 2019) applies this idea to background pixels and proposes an interpolation-based rasterizer for foreground pixels.
More sophisticated differentiable renderers can treat physics-based light transport processes (Li et al., 2018; Nimier-David et al., 2019) by ray tracing, and more readily support higher-order effects such as shadows, secondary light bounces, and global illumination.
3 ∇Sim: A UNIFIED DIFFERENTIABLE SIMULATION ENGINE
Typically, physics estimation and rendering have been treated as disjoint, mutually exclusive tasks. In this work, we take a unified view of simulation in general, composing physics estimation and rendering. Formally, simulation is a function Sim : R^P × [0, 1] ↦ R^H × R^W; Sim(p, t) = I. Here p ∈ R^P is a vector representing the simulation state and parameters (objects, their physical properties, their geometries, etc.), and t denotes the time of simulation (conveniently reparameterized to be in the interval [0, 1]). Given initial conditions p0, the simulation function produces an image I of height H and width W at each timestep t. If this function Sim were differentiable, then the gradient of Sim(p, t) with respect to the simulation parameters p provides the change in the output of the simulation from I to I + ∇Sim(p, t)δp due to an infinitesimal perturbation of p by δp. This construct enables a gradient-based optimizer to estimate physical parameters from video, by defining a loss function over the image space L(I, ·), and descending this loss landscape along a direction parallel to −∇Sim(·). To realize this, we turn to the paradigms of computational graphs and differentiable programming.
∇Sim comprises two main components: a differentiable physics engine that computes the physical states of the scene at each time instant, and a differentiable renderer that renders the scene to a 2D image. Contrary to existing differentiable physics (Toussaint et al., 2018; de Avila Belbute-Peres et al., 2018; Song & Boularias, 2020b;a; Degrave et al., 2016; Wu et al., 2017a; Research, 2020 (accessed May 15, 2020); Hu et al., 2020; Qiao et al., 2020) or differentiable rendering (Loper & Black, 2014; Kato et al., 2018; Liu et al., 2019; Chen et al., 2019) approaches, we adopt a holistic view and construct a computational graph spanning them both." }, { "heading": "3.1 DIFFERENTIABLE PHYSICS ENGINE", "text": "Under Lagrangian mechanics, the state of a physical system can be described in terms of generalized coordinates q, generalized velocities q̇ = u, and design/model parameters θ. For the purpose of exposition, we make no distinction between rigid bodies, deformable solids, and thin-shell models of cloth. Although the specific choices of coordinates and parameters vary, the simulation procedure is virtually unchanged. We denote the combined state vector by s(t) = [q(t), u(t)].
The dynamic evolution of the system is governed by second-order differential equations (ODEs) of the form M(s, θ)ṡ = f(s, θ), where M is a mass matrix that depends on the state and parameters. The forces on the system may be parameterized by design parameters (e.g., Young’s modulus). Solutions to these ODEs may be obtained through black box numerical integration methods, and their derivatives calculated through the continuous adjoint method (Chen et al., 2018). However, we instead consider our physics engine as a differentiable operation that provides an implicit relationship between a state vector s− = s(t) at the start of a time step, and the updated state at the end of the time step s+ = s(t + ∆t). An arbitrary discrete time integration scheme can then be abstracted as the function g(s−, s+, θ) = 0, relating the initial and final system state and the model parameters θ.
Gradients through this dynamical system can be computed by graph-based autodiff frameworks (Paszke et al., 2019; Abadi et al., 2015; Bradbury et al., 2018), or by program transformation approaches (Hu et al., 2020; van Merriënboer et al., 2018). Our framework is agnostic to the specifics of the differentiable physics engine; however, in Appendices A through D we detail an efficient approach based on the source-code transformation of parallel kernels, similar to DiffTaichi (Hu et al., 2020). In addition, we describe extensions to this framework to support mesh-based tetrahedral finite-element models (FEMs) for deformable and thin-shell solids. This is important since we require surface meshes to perform differentiable rasterization as described in the following section." },
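To build intuition for this abstraction, the sketch below writes one explicit instance of such a stepping scheme (semi-implicit Euler, the scheme detailed in Appendix B) directly in differentiable tensor operations, so that a tape-based autodiff framework can backpropagate a loss through an entire rollout. This is an illustrative reading only: `force_fn`, the state layout, and all names are hypothetical placeholders, whereas ∇Sim's engine instead generates adjoint C++/CUDA kernels via source-code transformation.

```python
import torch

# Illustrative semi-implicit Euler stepping written in differentiable
# tensor ops, so autograd tracks gradients through the whole rollout.
# `force_fn`, `mass`, and the state layout are hypothetical placeholders.

def semi_implicit_euler_step(q, u, mass, force_fn, dt):
    # Update velocities with current forces, then positions with the
    # *new* velocities (the symplectic update of Appendix B).
    u_next = u + dt * force_fn(q, u) / mass
    q_next = q + dt * u_next
    return q_next, u_next

def rollout(q0, u0, mass, force_fn, dt=1e-3, steps=100):
    q, u = q0, u0
    positions = []
    for _ in range(steps):
        q, u = semi_implicit_euler_step(q, u, mass, force_fn, dt)
        positions.append(q)
    # A loss on `positions` backpropagates to q0, u0, mass, and any
    # parameters inside force_fn, through the entire trajectory.
    return torch.stack(positions)
```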
{ "heading": "3.2 DIFFERENTIABLE RENDERING ENGINE", "text": "A renderer expects scene description inputs and generates color image outputs, all according to a sequence of image formation stages defined by the forward graphics pipeline. The scene description includes a complete geometric descriptor of scene elements, their associated material/reflectance properties, light source definitions, and virtual camera parameters. The rendering process is not generally differentiable, as visibility and occlusion events introduce discontinuities. Most interactive renderers, such as those used in real-time applications, employ a rasterization process to project 3D geometric primitives onto 2D pixel coordinates, resolving these visibility events with non-differentiable operations.
Our experiments employ two differentiable alternatives to traditional rasterization, SoftRas (Liu et al., 2019) and DIB-R (Chen et al., 2019), both of which replace discontinuous triangle mesh edges with smooth sigmoids. This has the effect of blurring triangle edges into semi-transparent boundaries, thereby removing the non-differentiable discontinuity of traditional rasterization. DIB-R distinguishes between foreground pixels (associated with the principal object being rendered in the scene) and background pixels (for all other objects, if any). The latter are rendered using the same technique as SoftRas, while the former are rendered by bilinearly sampling a texture using differentiable UV coordinates.
∇Sim performs differentiable physics simulation and rendering at independent and adjustable rates, allowing us to trade computation for accuracy by rendering fewer frames than dynamics updates." },
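To illustrate the sigmoid-based edge softening that SoftRas and DIB-R build on, the following toy sketch computes a soft coverage mask for a single 2D triangle: each hard half-plane (edge) test is replaced by a sigmoid, so the mask, and hence any image-space loss, is differentiable with respect to the vertex positions. This is a simplified stand-in written for this document, not the actual SoftRas/DIB-R formulation (which aggregates per-face probabilities, depth, and color).

```python
import torch

def soft_triangle_mask(verts, H=64, W=64, sigma=1e-3):
    """verts: (3, 2) triangle vertices in [0, 1]^2, counter-clockwise.
    Returns an (H, W) soft coverage mask."""
    ys, xs = torch.meshgrid(torch.linspace(0, 1, H),
                            torch.linspace(0, 1, W), indexing="ij")
    p = torch.stack([xs, ys], dim=-1)  # (H, W, 2) pixel centers
    inside = torch.ones(H, W)
    for i in range(3):
        a, b = verts[i], verts[(i + 1) % 3]
        e = b - a
        # Signed distance of each pixel to the edge line (positive on the
        # interior side for counter-clockwise triangles).
        d = (e[0] * (p[..., 1] - a[1]) - e[1] * (p[..., 0] - a[0])) / e.norm()
        inside = inside * torch.sigmoid(d / sigma)  # soft half-plane test
    return inside  # differentiable w.r.t. verts

verts = torch.tensor([[0.2, 0.2], [0.8, 0.3], [0.5, 0.9]], requires_grad=True)
mask = soft_triangle_mask(verts)
mask.sum().backward()  # gradients flow back to the vertex positions
```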
{ "heading": "4 EXPERIMENTS", "text": "We conducted multiple experiments to test the efficacy of ∇Sim on physical parameter identification from video and visuomotor control, to address the following questions:
• Can we accurately identify physical parameters by backpropagating from video pixels, through the simulator? (Ans: Yes, very accurately, cf. 4.1)
• What is the performance gap associated with using ∇Sim (2D supervision) vs. differentiable physics-only engines (3D supervision)? (Ans: ∇Sim is competitive/superior, cf. Tables 1, 2, 3)
• How do loss landscapes differ across differentiable simulators (∇Sim) and their non-differentiable counterparts? (Ans: Loss landscapes for ∇Sim are smooth, cf. 4.1.3)
• Can we use ∇Sim for visuomotor control tasks? (Ans: Yes, without any 3D supervision, cf. 4.2)
• How sensitive is ∇Sim to modeling assumptions at the system level? (Ans: Moderately, cf. Table 4)
Each of our experiments comprises an environment E that applies a particular set of physical forces and/or constraints, a (differentiable) loss function L that implicitly specifies an objective, and an initial guess θ0 of the physical state of the simulation. The goal is to recover optimal physics parameters θ∗ that minimize L, by backpropagating through the simulator." }, { "heading": "4.1 PHYSICAL PARAMETER ESTIMATION FROM VIDEO", "text": "First, we assess the capabilities of ∇Sim to accurately identify a variety of physical attributes, such as mass, friction, and elasticity, from image/video observations. To the best of our knowledge, ∇Sim is the first study to jointly infer such fine-grained parameters from video observations. We also implement a set of competitive baselines that use strictly more information on the task.
4.1.1 RIGID BODIES (RIGID)
Our first environment–rigid–evaluates the accuracy of estimating physical and material attributes of rigid objects from videos. We curate a dataset of 10000 simulated videos generated from variations of 14 objects, comprising primitive shapes such as boxes, cones, and cylinders, as well as non-convex shapes from ShapeNet (Chang et al., 2015) and DexNet (Mahler et al., 2017). With uniformly sampled initial dimensions, poses, velocities, and physical properties (density, elasticity, and friction parameters), we apply a known impulse to the object and record a video of the resultant trajectory. Inference with ∇Sim is done by guessing an initial mass (uniformly random in the range [2, 12] kg/m3), unrolling a differentiable simulation using this guess, comparing the rendered video with the true video (pixelwise mean-squared error, MSE), and performing gradient descent updates; a sketch of this loop follows below. We refer the interested reader to the appendix (Sec. G) for more details.
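The following is a minimal sketch of that estimation loop, assuming a differentiable `simulate(mass)` callable that unrolls the physics and renders a video tensor; the function names and the log-space parameterization are illustrative choices, not ∇Sim's actual API.

```python
import torch

def estimate_mass(simulate, target_video, n_iters=100, lr=0.05):
    # Initial guess, uniformly random in [2, 12] as in the experiments.
    log_mass = torch.log(2.0 + 10.0 * torch.rand(1))
    log_mass.requires_grad_(True)  # optimize in log-space for positivity
    optim = torch.optim.Adam([log_mass], lr=lr)
    for _ in range(n_iters):
        optim.zero_grad()
        video = simulate(mass=log_mass.exp())  # differentiable rollout + render
        loss = torch.nn.functional.mse_loss(video, target_video)  # pixelwise MSE
        loss.backward()   # gradients flow from pixels back to the mass guess
        optim.step()
    return log_mass.exp().detach()
```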
Table 1 shows the results for predicting the mass of an object from video, with a known impulse applied to it. We use EfficientNet (B0) (Tan & Le, 2019) and resize input frames to 64 × 64. Feature maps at a resolution of 4 × 4 × 32 are concatenated for all frames and fed to an MLP with 4 linear layers, trained with an MSE loss. We compare ∇Sim with three other baselines: PyBullet + REINFORCE (Ehsani et al., 2020; Wu et al., 2015), diff. physics only (requiring 3D supervision), and a ConvLSTM baseline adopted from Xu et al. (2019b) but with a stronger backbone. The diffphysics baseline is a strict subset of ∇Sim; it involves only the differentiable physics engine. However, it needs precise 3D states as supervision, which is the primary factor for its superior performance. Nevertheless, ∇Sim is able to very precisely estimate mass from video, to an absolute relative error of 9.01e-5, nearly two orders of magnitude better than the ConvLSTM baseline. Two other baselines are also used: the “Average” baseline always predicts the dataset mean, and the “Random” baseline predicts a random parameter value from the test distribution. All baselines and training details can be found in Sec. H of the appendix.
To investigate whether analytical differentiability is required, our PyBullet + REINFORCE baseline applies black-box gradient estimation (Williams, 1992) through a non-differentiable simulator (Coumans & Bai, 2016–2019), similar to Ehsani et al. (2020). We find this baseline particularly sensitive to several simulation parameters, and thus worse-performing. In Table 2, we jointly estimate friction and elasticity parameters of our compliant contact model from video observations alone. The trend is similar to Table 1, and ∇Sim is able to precisely recover the parameters of the simulation. A few examples can be seen in Fig. 3.
4.1.2 DEFORMABLE BODIES (DEFORMABLE)
We conduct a series of experiments to investigate the ability of ∇Sim to recover physical parameters of deformable solids and thin-shell solids (cloth). Our physical model is parameterized by the per-particle mass and the Lamé elasticity parameters, as described in Appendix C.1. Fig. 3 illustrates the recovery of the elasticity parameters of a beam hanging under gravity, by matching the deformation given by an input video sequence. We find that our method is able to accurately recover the parameters of 100 instances of deformable objects (cloth, balls, beams), as reported in Table 3 and Fig. 3.
4.1.3 SMOOTHNESS OF THE LOSS LANDSCAPE IN ∇Sim
Since ∇Sim is a complex combination of differentiable non-linear components, we analyze the loss landscape to verify the validity of gradients through the system. Fig. 4 illustrates the loss landscape when optimizing for the mass of a rigid body when all other physical properties are known. We examine the image-space mean-squared error (MSE) of a unit-mass cube (1 kg) for a range of initializations (0.1 kg to 5 kg). Notably, the loss landscape of ∇Sim is well-behaved and conducive to momentum-based optimizers. Applying MSE to the first and last frames of the predicted and true videos provides the best gradients. However, for a naive gradient estimator applied to a non-differentiable simulator (PyBullet + REINFORCE), multiple local minima exist, resulting in a very narrow region of convergence. This explains ∇Sim’s superior performance in Tables 1, 2, 3." },
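The landscape in Fig. 4 can be reproduced conceptually with a simple parameter sweep; the sketch below assumes the same hypothetical `simulate` callable as above and scores first/last-frame MSE at each mass guess.

```python
import torch

def scan_loss_landscape(simulate, target_video, n_points=50):
    # Sweep the mass guess over [0.1, 5.0] kg for a unit-mass target and
    # record the image-space MSE at each point, as in Sec. 4.1.3.
    losses = []
    for mass in torch.linspace(0.1, 5.0, n_points):
        with torch.no_grad():  # no gradients needed for a landscape scan
            video = simulate(mass=mass)
            # MSE on the first and last frames, the paper's best variant.
            loss = (torch.nn.functional.mse_loss(video[0], target_video[0]) +
                    torch.nn.functional.mse_loss(video[-1], target_video[-1]))
        losses.append(loss.item())
    return losses  # plot against the mass sweep to visualize smoothness
```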
{ "heading": "4.2 VISUOMOTOR CONTROL", "text": "To investigate whether the gradients computed by ∇Sim are meaningful for vision-based tasks, we conduct a range of visuomotor control experiments involving the actuation of deformable objects towards a visual target pose (a single image). In all cases, we evaluate against diffphysics, which uses a goal specification and a reward, both defined over the 3D state-space.
4.2.1 DEFORMABLE SOLIDS (CONTROL-WALKER, CONTROL-FEM)
The first example (control-walker) involves a 2D walker model. Our goal is to train a neural network (NN) control policy to actuate the walker to reach a target pose on the right-hand side of an image. Our NN consists of one fully connected layer and a tanh activation; the network input is a set of 8 time-varying sinusoidal signals, and the output is a scalar activation value per tetrahedron (a sketch of this policy appears after this section). ∇Sim is able to solve this environment within three iterations of gradient descent, by minimizing a pixelwise MSE between the last frame of the rendered video and the goal image, as shown in Fig. 5 (lower left).
In our second test, we formulate a more challenging 3D control problem (control-fem) where the goal is to actuate a soft-body FEM object (a gear) consisting of 1152 tetrahedral elements to move to a target position, as shown in Fig. 5 (center). We use the same NN architecture as in the 2D walker example, and use the Adam (Kingma & Ba, 2015) optimizer to minimize a pixelwise MSE loss. We also train a privileged baseline (diffphysics) that uses strong supervision and minimizes the MSE between the target position and the precise 3D location of the center-of-mass (COM) of the FEM model at each time step (i.e., a dense reward). We test both diffphysics and ∇Sim against a naive baseline that generates random activations, and plot convergence behaviors in Fig. 6a. While diffphysics appears to be a strong performer on this task, it is important to note that it uses explicit 3D supervision at each timestep (i.e., 30 FPS). In contrast, ∇Sim uses a single image as an implicit target, and yet manages to achieve the goal state, albeit requiring a larger number of iterations.
4.2.2 CLOTH (CONTROL-CLOTH)
We design an experiment to control a piece of cloth by optimizing the initial velocity such that it reaches a pre-specified target. In each episode, a random cloth is spawned, comprising between 64 and 2048 triangles, and a new start/goal combination is chosen.
In this challenging setup, we notice that state-based MPC (diffphysics) is often unable to accurately reach the target. We believe this is due to the underdetermined nature of the problem, since, for objects such as cloth, the COM by itself does not uniquely determine the configuration of the object. Visuomotor control, on the other hand, provides a more well-defined problem. An illustration of the task is presented in Fig. 5 (column 3), and the convergence of the methods is shown in Fig. 6b.
(a) Results of various approaches on the control-fem environment (6 random seeds; each seed corresponds to a different goal configuration). While diffphysics performs well, it assumes strong 3D supervision. In contrast, ∇Sim is able to solve the task by using just a single image of the target configuration. (b) Results on the control-cloth environment (5 random seeds; each controls the dimensions and initial/target poses of the cloth). diffphysics converges to a suboptimal solution due to ambiguity in specifying the pose of a cloth via its center-of-mass. ∇Sim solves the environment using a single target image.
Figure 6: Convergence Analysis: Performance of ∇Sim on visuomotor control using image-based supervision, 3D supervision, and random policies." },
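Below is a minimal sketch of the control setup from Sec. 4.2.1, assuming a differentiable `simulate(policy)` rollout that returns rendered frames; the module and training loop are illustrative reconstructions of the description above (one fully connected layer plus tanh, 8 sinusoidal inputs, one activation per tetrahedron, Adam on a pixelwise MSE), not the released code.

```python
import torch
import torch.nn as nn

class SinusoidalPolicy(nn.Module):
    """One fully connected layer + tanh: 8 time-varying sinusoidal inputs
    mapped to one activation value per tetrahedron."""
    def __init__(self, n_tets, n_signals=8):
        super().__init__()
        self.fc = nn.Linear(n_signals, n_tets)
        self.register_buffer("freqs", torch.arange(1, n_signals + 1).float())

    def forward(self, t):
        signals = torch.sin(self.freqs * t)   # 8 sinusoids at time t
        return torch.tanh(self.fc(signals))   # per-tetrahedron activation

def train_policy(simulate, goal_image, n_tets, iters=100, lr=1e-2):
    policy = SinusoidalPolicy(n_tets)
    optim = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(iters):
        optim.zero_grad()
        frames = simulate(policy)  # differentiable physics + rendering
        # Image-space objective: MSE between last rendered frame and goal.
        loss = torch.nn.functional.mse_loss(frames[-1], goal_image)
        loss.backward()
        optim.step()
    return policy
```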
{ "heading": "4.3 IMPACT OF IMPERFECT DYNAMICS AND RENDERING MODELS", "text": "Since ∇Sim is a white box method, its performance relies on the choice of dynamics and rendering models employed. An immediate question that arises is “how would the performance of ∇Sim be impacted (if at all) by such modeling choices?” We conduct multiple experiments targeted at investigating modeling errors and summarize them in Table 4 (left).
We choose a dataset comprising 90 objects, equally representing rigid, deformable, and cloth types. By not modeling specific dynamics and rendering phenomena, we create the following 5 variants of our simulator:
1. Unmodeled friction: We model all collisions as being frictionless.
2. Unmodeled elasticity: We model all collisions as perfectly elastic.
3. Rigid-as-deformable: All rigid objects in the dataset are modeled as deformable objects.
4. Deformable-as-rigid: All deformable objects in the dataset are modeled as rigid objects.
5. Photorealistic render: We employ a photorealistic renderer—as opposed to ∇Sim’s differentiable rasterizers—in generating the target images.
In all cases, we evaluate the accuracy with which the mass of the target object is estimated from a target video sequence devoid of modeling discrepancies. In general, we observe that imperfect dynamics models (i.e., unmodeled friction and elasticity, or modeling a rigid object as deformable or vice-versa) have a more profound impact on parameter identification compared to imperfect renderers." }, { "heading": "4.3.1 UNMODELED DYNAMICS PHENOMENA", "text": "From Table 4 (left), we observe a noticeable performance drop when dynamics effects go unmodeled. As expected, the repercussions of incorrect object type modeling (Rigid-as-deformable, Deformable-as-rigid) are more severe compared to unmodeled contact parameters (friction, elasticity). Modeling a deformable body as a rigid body results in irrecoverable deformation parameters and has the most severe impact on the recovered parameter set." }, { "heading": "4.3.2 UNMODELED RENDERING PHENOMENA", "text": "We also independently investigate the impact of unmodeled rendering effects (assuming perfect dynamics). We render ground-truth images and object foreground masks with a photorealistic renderer (Pharr et al., 2016), and perform physical parameter estimation from video against these photorealistic targets. We notice that the performance obtained under this setting is superior to that obtained under dynamics model imperfections." }, { "heading": "4.3.3 IMPACT OF SHADING AND TEXTURE CUES", "text": "Although our work does not attempt to bridge the reality gap, we show early prototypes to assess phenomena such as shading and texture. Fig. 7 shows the accuracy over time for mass estimation from video. We evaluate three variants of the renderer: “Only color”, “Shading”, and “Texture”. The “Only color” variant renders each mesh element in the same color regardless of the position and orientation of the light source. The “Shading” variant implements a Phong shading model and can model specular and diffuse reflections. The “Texture” variant also applies a non-uniform texture sampled from ShapeNet (Chang et al., 2015). We notice that shading and texture cues significantly improve convergence speed. This is expected, as vertex colors often provide very few appearance cues inside the object boundaries, leading to poor correspondences between the rendered and ground-truth images. Furthermore, textures seem to offer slight improvements in convergence speed over shaded models, as highlighted by the inset (log scale) plot in Fig. 7.
4.3.4 TIMING ANALYSIS
Table 4 (right) shows simulation rates for the forward and backward passes of each module. We report forward and backward pass rates separately for the differentiable physics (DP) and the differentiable rendering (DR) modules. The time complexity of ∇Sim is a function of the number of tetrahedra and/or triangles. We illustrate the arguably more complex case of deformable object simulation for varying numbers of tetrahedra (ranging from 100 to 10000). Even in the case of 10000 tetrahedra—enough to construct complex mesh models of multiple moving objects—∇Sim enables faster-than-real-time simulation (1500 steps/second)." }, { "heading": "5 CONCLUSION", "text": "We presented ∇Sim, a versatile differentiable simulator that enables system identification from videos by differentiating through the physical processes governing dynamics and image formation. We demonstrated the benefits of such a holistic approach by estimating physical attributes for time-evolving scenes with complex dynamics and deformations, all from raw video observations. We also demonstrated the applicability of this efficient and accurate estimation scheme on end-to-end visuomotor control tasks. The latter case highlights ∇Sim’s efficient integration with PyTorch, facilitating interoperability with existing machine learning modules. Interesting avenues for future work include extending our differentiable simulation to contact-rich motion, articulated bodies, and higher-fidelity physically-based renderers – doing so takes us closer to operating in the real world."
}, { "heading": "ACKNOWLEDGEMENTS", "text": "KM and LP thank the IVADO fundamental research project grant for funding. FG thanks CIFAR for project funding under the Catalyst program. FS and LP acknowledge partial support from NSERC." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A DIFFERENTIABLE PHYSICS ENGINE", "text": "Under Lagrangian mechanics, the state of a physical system can be described in terms of generalized coordinates q, generalized velocities q̇ = u, and design, or model parameters θ. For the purposes of exposition, we make no distinction between rigid-bodies, deformable solids, or thin-shell models of cloth and other bodies. Although the specific choices of coordinates and parameters vary, the simulation procedure is virtually unchanged. We denote the combined state vector by s(t) = [q(t),u(t)].\nThe dynamic evolution of the system is governed by a second order differential equations (ODE) of the form Ms̈ = f(s), where M is a mass matrix that may also depend on our state and design parameters θ. Solutions to ODEs of this type may be obtained through black box numerical integration methods, and their derivatives calculated through the continuous adjoint method Chen et al. (2018). However, we instead consider our physics engine as a differentiable operation that provides an implicit relationship between a state vector s− = s(t) at the start of a time step, and the updated state at the end of the time step s+ = s(t + ∆t). An arbitrary discrete time integration scheme can be then be abstracted as the function g(s−, s+, θ) = 0, relating the initial and final system state and the model parameters θ. By the implicit function theorem, if we can specify a loss function l at the output of the simulator, we can compute ∂l∂s− as c T ∂g ∂s− , where c is the solution to the linear system\n∂g ∂s+ T c = − ∂l∂s+ T , and likewise for the model parameters θ.\nWhile the partial derivatives ∂g∂s− , ∂g ∂s+ , ∂g ∂θ can be computed by graph-based automatic differentation frameworks Paszke et al. (2019); Abadi et al. (2015); Bradbury et al. (2018), program transformation approaches such as DiffTaichi, and Google Tangent Hu et al. (2020); van Merriënboer et al. (2018) are particularly well-suited to simulation code. We use an embedded subset of Python syntax, which computes the adjoint of each simulation kernel at runtime, and generates C++/CUDA Kirk et al. (2007) code. Kernels are wrapped as custom autograd operations on PyTorch tensors, which allows users to focus on the definition of physical models, and leverage the PyTorch tape-based autodiff to track the overall program flow. While this formulation is general enough to represent explicit, multi-step, or fully implicit time-integration schemes, we employ semi-implicit Euler integration, which is the preferred integration scheme for most simulators Erez et al. (2015).\nA.1 PHYSICAL MODELS\nWe now discuss some of the physical models available in∇Sim. Deformable Solids: In contrast with existing simulators that use grid-based methods for differentiable soft-body simulation Hu et al. (2019; 2020), we adopt a finite element (FEM) model with constant strain tetrahedral elements common in computer graphics Sifakis & Barbic (2012). We use the stable Neo-Hookean constitutive model of Smith et al. Smith et al. 
(2018a) that derives per-element forces from the following strain energy density:

Ψ(q, θ) = (µ/2)(I_C − 3) + (λ/2)(J − α)² − (µ/2) log(I_C + 1), (1)

where I_C and J are invariants of strain, θ = [µ, λ] are the Lamé parameters, and α is a per-element actuation value that allows the element to expand and contract.

Numerically integrating the energy density over each tetrahedral mesh element with volume V_e gives the total elastic potential energy, U(q, θ) = Σ_e V_e Ψ_e. The forces due to this potential, f_e(s, θ) = −∇_q U(q, θ), can be computed analytically, and their gradients obtained using the adjoint method (cf. Section 3.1).

Deformable Thin-Shells: To model thin-shells such as clothing, we use constant-strain triangular elements embedded in 3D. The Neo-Hookean constitutive model above is applied to model in-plane elastic deformation, with the addition of a bending energy f_b(s, θ) = k_b sin(φ/2 + α) d, where k_b is the bending stiffness, φ is the dihedral angle between two triangular faces, α is a per-edge actuation value that allows the mesh to flex inwards or outwards, and d is the force direction given by Bridson et al. (2005). We also include a lift/drag model that approximates the effect of the surrounding air on the surface of the mesh.

Rigid Bodies: We represent the state of a 3D rigid body as q_b = [x, r], consisting of a position x ∈ R³ and a quaternion r ∈ R⁴. The generalized velocity of a body is u_b = [v, ω], and the dynamics of each body is given by the Newton-Euler equations,

[m 0; 0 I] [v̇; ω̇] = [f; τ] − [0; ω × Iω], (2)

where the mass m and inertia matrix I (expressed at the center of mass) are considered design parameters θ.

Contact: We adopt a compliant contact model that associates elastic and damping forces with each nodal contact point. The model is parameterized by four scalars θ = [k_e, k_d, k_f, µ], corresponding to elastic stiffness, damping, frictional stiffness, and friction coefficient, respectively. To prevent interpenetration we use a proportional penalty-based force, f_n(s, θ) = −n[k_e C(q) + k_d Ċ(u)], where n is a contact normal, and C is a gap function measuring overlap, projected onto R₊. We model friction using a relaxed Coulomb model Todorov (2014), f_f(s, θ) = −D[min(µ|f_n|, k_f u_s)], where D is a basis of the contact plane, and u_s = Dᵀu is the sliding velocity at the contact point. While these forces are only C⁰-continuous, we found that this was sufficient for optimization over a variety of objectives.

More physical simulations: We also implement a number of other differentiable simulations such as pendula, mass-springs, and incompressible fluids Stam (1999). We note that these systems have already been demonstrated in prior art, and thus focus on the more challenging systems in our paper." }, { "heading": "B DISCRETE ADJOINT METHOD", "text": "Above, we presented a formulation of time integration using the discrete adjoint method that represents an arbitrary time-stepping scheme through the implicit relation

g(s−, s+, θ) = 0. (3)

This formulation is general enough to represent both explicit and implicit time-stepping methods. While explicit methods are often simple to implement, they may require extremely small time steps for stability, which is problematic for reverse-mode automatic differentiation frameworks that must explicitly store the input state for each discrete timestep invocation of the integration routine. On the other hand, implicit methods can introduce computational overhead or unwanted numerical dissipation Hairer et al. (2006).
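Independent of the integration scheme, the adjoint solve above can be expressed in a few lines. The following NumPy sketch is illustrative only (the function name and the dense-Jacobian representation are our assumptions, not ∇Sim's implementation):

import numpy as np

# Hypothetical sketch: given dense Jacobians of the step relation
# g(s_prev, s_next, theta) = 0 and the loss gradient dl/ds_next,
# recover dl/ds_prev via the implicit function theorem.
def adjoint_step_gradient(dg_ds_prev, dg_ds_next, dl_ds_next):
    # Solve (dg/ds+)^T c = -(dl/ds+)^T for the adjoint variables c.
    c = np.linalg.solve(dg_ds_next.T, -dl_ds_next)
    # Back-propagated gradient: dl/ds- = c^T (dg/ds-).
    return c @ dg_ds_prev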
Because explicit methods require small steps and implicit methods add overhead, many real-time physics engines employ a semi-implicit (symplectic) Euler integration scheme Erez et al. (2015), due to its ease of implementation and numerical stability in most meaningful scenarios (it conserves energy for systems where the Hamiltonian is time-invariant).

We now give a concrete example of the discrete adjoint method applied to semi-implicit Euler. For the state variables defined above, the integration step may be written as follows,

g(s−, s+, θ) = [u+ − u− − ∆t M⁻¹ f(s−); q+ − q− − ∆t u+] = 0. (4)

Note that in general, the mass matrix M is a function of q and θ. For conciseness we only consider the dependence on θ, although the overall procedure is unchanged in the general case. We provide a brief sketch of computing the gradients of g(s−, s+, θ). In the case of semi-implicit integration above, these are given by the following equations:

∂g/∂s− = [−∆t M⁻¹ ∂f/∂q, −I − ∆t M⁻¹ ∂f/∂u; −I, 0], ∂g/∂s+ = [0, I; I, −∆t I], ∂g/∂θ = [−∆t (∂M⁻¹/∂θ) f(s−); 0], (5)

where ∂f/∂q and ∂f/∂u are evaluated at s−. In the case of semi-implicit Euler, the triangular structure of these Jacobians allows the adjoint variables to be computed explicitly. For fully implicit methods such as backwards Euler, the Jacobians may create a linear system that must first be solved to generate the adjoint variables." }, { "heading": "C PHYSICAL MODELS", "text": "We now undertake a more detailed discussion of the physical models implemented in ∇Sim.

C.1 FINITE ELEMENT METHOD

As described in section 3.2 (\"Physical models\"), we use a hyperelastic constitutive model based on the neo-Hookean model of Smith et al. (2018a):

Ψ(q, θ) = (µ/2)(I_C − 3) + (λ/2)(J − α)² − (µ/2) log(I_C + 1). (6)

The Lamé parameters, λ and µ, control the element's resistance to shearing and volumetric strains. These may be specified on a per-element basis, allowing us to represent heterogeneous materials. In contrast to other work using particle-based models Hu et al. (2020), we adopt a mesh-based discretization for deformable shells and solids. For thin-shells, such as cloth, the surface is represented by a triangle mesh as in Figure 8a, enabling straightforward integration with our triangle mesh-based differentiable rasterizer Liu et al. (2019); Chen et al. (2019). For solids, we use a tetrahedral FEM model as illustrated in Figure 8b. Both these models include a per-element activation parameter α, which, for thin-shells, allows us to control the relative dihedral angle between two connected faces. For tetrahedral meshes, this enables changing the element's volume, enabling locomotion, as in the control-fem example.

C.2 CONTACT

Implicit contact methods based on linear complementarity formulations (LCPs) of contact may be used to maintain hard non-penetration constraints de Avila Belbute-Peres et al. (2018). However, we found relaxed models of contact, as used in typical physics engines Erez et al. (2015), to be sufficient for our experiments. In this approach, contact forces are derived from a one-sided quadratic potential, giving rise to penalty forces of the form shown in Figure 9a. While Coulomb friction may also be modeled as an LCP, we use a relaxed model where the stick regime is represented by a stiff quadratic potential around the origin, and a linear portion in the slip regime, as shown in Figure 9b.
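As a concrete illustration of this relaxed contact model, here is a minimal NumPy sketch; the function name is illustrative, and the magnitude clamp is one common way of realizing the min(·, ·) in the friction formula rather than ∇Sim's exact implementation:

import numpy as np

# n: contact normal (3,); gap: C(q); gap_rate: Cdot(u); v: contact-point
# velocity (3,); D: 3x2 basis of the contact plane; theta = (ke, kd, kf, mu).
def contact_force(n, gap, gap_rate, v, D, ke, kd, kf, mu):
    # One-sided quadratic potential: proportional penalty force along the normal.
    f_n = -n * (ke * gap + kd * gap_rate)
    # Relaxed Coulomb friction: stiff in the stick regime, clamped in
    # magnitude by mu * |f_n| in the slip regime.
    u_s = D.T @ v                      # sliding velocity in the contact plane
    slip = kf * u_s
    cap = mu * np.linalg.norm(f_n)
    scale = min(1.0, cap / (np.linalg.norm(slip) + 1e-9))
    f_f = -D @ (slip * scale)
    return f_n + f_f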
To generate contacts, we test each vertex of a mesh against a collision plane and introduce a contact within some distance threshold d.

C.3 PENDULA

We also implement simple and double pendula, as toy examples of well-behaved and chaotic systems respectively, and estimate the parameters of the system (i.e., the length(s) of the rod(s) and initial angular displacement(s)) by comparing the rendered videos (assuming uniformly random initial guesses) with the true videos. As pendula have been studied extensively in the context of differentiable physics simulation Degrave et al. (2016); de Avila Belbute-Peres et al. (2018); Cranmer et al. (2020b); Toth et al. (2020); Greydanus et al. (2019); Sanchez-Gonzalez et al. (2019), we focus on more challenging systems which have not been studied in prior art.

C.4 INCOMPRESSIBLE FLUIDS

As an example of incompressible fluid simulation, we implement a smoke simulator following the popular semi-Lagrangian advection scheme of Stam (1999). At 2:20 in our supplementary video attachment, we show an experiment which optimizes the initial velocities of smoke particles to form a desired pattern. Similar schemes have already been realized differentiably, e.g. in DiffTaichi Hu et al. (2020) and autograd Maclaurin et al. (2015)." }, { "heading": "D SOURCE-CODE TRANSFORMATION FOR AUTOMATIC DIFFERENTIATION", "text": "The discrete adjoint method requires computing gradients of physical quantities with respect to state and design parameters. To do so, we adopt a source-code transformation approach to perform reverse-mode automatic differentiation Hu et al. (2020); Margossian (2019). We use a domain-specific subset of the Python syntax extended with primitives for representing vectors, matrices, and quaternions. Each type includes functions for acting on it, along with the corresponding adjoint methods. An example simulation kernel is then defined as follows:

@kernel
def integrate_particles(
    x : tensor(float3),
    v : tensor(float3),
    f : tensor(float3),
    w : tensor(float),
    gravity : tensor(float3),
    dt : float,
    x_new : tensor(float3),
    v_new : tensor(float3)
):

    # Get thread ID
    thread_id = tid()

    # Load state variables and parameters
    x0 = load(x, thread_id)
    v0 = load(v, thread_id)
    f0 = load(f, thread_id)
    inv_mass = load(w, thread_id)

    # Load external forces
    g = load(gravity, 0)

    # Semi-implicit Euler: update velocity first, then position
    v1 = v0 + (f0 * inv_mass - g * step(inv_mass)) * dt
    x1 = x0 + v1 * dt

    # Store results
    store(x_new, thread_id, x1)
    store(v_new, thread_id, v1)

Listing 1: Particle Integration Kernel

At runtime, the kernel's abstract syntax tree (AST) is parsed using Python's built-in ast module. We then generate C++ kernel code for forward and reverse mode, which may be compiled to a CPU or GPU executable using the PyTorch torch.utils.cpp_extension mechanism.

This approach allows writing imperative code, with fine-grained indexing and implicit operator fusion (since all operations in a kernel execute as one GPU kernel launch). Each kernel is wrapped as a PyTorch autograd operation so that it fits natively into the larger computational graph." }, { "heading": "E MPC CONTROLLER ARCHITECTURE", "text": "For our model predictive control examples, we use a simple 3-layer neural network architecture illustrated in Figure 10. With simulation time t as input, we generate N phase-shifted sinusoidal signals which are passed to a fully-connected layer (zero-bias), and a final activation layer.
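A minimal PyTorch sketch of this controller follows; the number of phases, output width, and the tanh output activation are illustrative assumptions, not the exact configuration of Figure 10:

import torch

class SinusoidController(torch.nn.Module):
    def __init__(self, n_phases=8, n_elements=100):
        super().__init__()
        # N fixed phase shifts for the sinusoidal input features.
        self.register_buffer("phases", torch.linspace(0.0, 2.0 * torch.pi, n_phases))
        # Zero-bias fully-connected layer mapping signals to activations.
        self.linear = torch.nn.Linear(n_phases, n_elements, bias=False)

    def forward(self, t):
        # Phase-shifted sinusoids evaluated at simulation time t.
        signals = torch.sin(t + self.phases)
        # Final activation layer producing bounded per-element activations.
        return torch.tanh(self.linear(signals))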
The output is a vector of per-element activation values as described in the previous section." }, { "heading": "F LOSS LANDSCAPES FOR PARAMETER ESTIMATION OF DEFORMABLE SOLIDS", "text": "∇Sim integrates several functional blocks, many of which contain nonlinear operations. Furthermore, we employ a pixelwise mean-squared error (MSE) loss function for estimating physical parameters from video. To demonstrate whether the gradients obtained from ∇Sim are relevant for the task of physical parameter estimation, in Figure 2 of the main paper, we present an analysis of the MSE loss landscape for mass estimation.

F.1 ELASTICITY PARAMETER

We now present a similar analysis for elasticity parameter estimation in deformable solids. Figure 11a shows the loss landscape when optimizing for the Lamé parameters of a deformable solid FEM. In this case, both parameters λ and µ are set to 1000. As can be seen in the plot, the loss landscape has a unique, dominant minimum at 1000. We believe the well-behaved nature of our loss landscape is a key contributing factor to the precise physical-parameter estimation ability of ∇Sim.

F.2 LOSS LANDSCAPE IN PYBULLET (REINFORCE)

Figure 11 shows how optimization using REINFORCE can introduce complications. As the simulation becomes unstable with masses close to zero, poor local optima can arise near the mean of the current estimated mass. This illustrates that optimization through REINFORCE is only possible after careful tuning of the step size, sampling noise, and sampling range, which reduces the utility of this method in a realistic setting where these hyperparameters are not known a priori.

F.3 IMPACT OF THE LENGTH OF A VIDEO SEQUENCE

To assess the impact of the length of a video on the quality of our solution, we plot the loss landscapes for videos of varying lengths in Fig. 12. We find that shorter videos tend to have steeper loss landscapes compared to longer ones. The frame rate also has an impact on the steepness of the landscape. In all cases though, the loss landscape is smooth and has the same unique minimum." }, { "heading": "G DATASET DETAILS", "text": "For the rigid-body task of physical parameter estimation from video, we curated a dataset comprising 14 meshes, as shown in Fig. 13. The objects include a combination of primitive shapes, fruits and vegetables, animals, office objects, and airplanes. For each experiment, we select an object at random, and sample its physical attributes from a predefined range: densities from the range [2, 12] kg/m³, contact parameters k_e, k_d, k_f from the range [1, 500], and a coefficient of friction µ from the range [0.2, 1.0]. The positions, orientations, (anisotropic) scale factors, and initial velocities are sampled uniformly at random from a cube of side length 13 m centered on the camera. Across all rigid-body experiments, we use 800 objects for training and 200 objects for testing." }, { "heading": "H BASELINES", "text": "In this section, we present implementation details of the baselines used in our experiments.

H.1 PYBULLET + REINFORCE

To explore whether existing non-differentiable simulators can be employed for physical parameter estimation, we take PyBullet Coumans & Bai (2016–2019) – a popular physics engine – and make it trivially differentiable by gradient estimation. We employ the REINFORCE Williams (1992) technique to acquire an approximate gradient through the otherwise non-differentiable environment. The implementation was inspired by Wu et al. (2015) and Rezende et al. (2016).
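A rough sketch of the resulting score-function gradient estimate is given below; the Gaussian sampling distribution, sample count, and all names are illustrative assumptions, and the reward normalization and reward decay described below are simplified:

import torch

def reinforce_mass_step(mass_mean, log_sigma, simulate_reward, n_samples=16):
    # mass_mean and log_sigma are scalar tensors with requires_grad=True.
    sigma = log_sigma.exp()
    with torch.no_grad():
        masses = mass_mean + sigma * torch.randn(n_samples)
    # Roll out the (non-differentiable) simulator for each sampled mass;
    # simulate_reward returns the negated sum of per-frame L2 losses.
    rewards = torch.tensor([simulate_reward(m.item()) for m in masses])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    # Score-function (REINFORCE) surrogate: -E[reward * log p(mass)].
    log_prob = -0.5 * ((masses - mass_mean) / sigma) ** 2 - log_sigma
    loss = -(rewards * log_prob).mean()
    loss.backward()  # populates .grad for mass_mean and log_sigma
    return loss.item()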
In concurrent work, a similar idea was explored in Ehsani et al. (2020).

In PyBullet, the mass parameter of the object is randomly initialized in the range [0, N_v], where N_v is the number of vertices; the object is set to the same starting position and orientation as in the dataset, and the camera parameters are identical to those used in the dataset. This configuration ensures that if the mass were correct, the video frames rendered out by PyBullet would perfectly align with those generated by ∇Sim. Each episode is rolled out for the same duration as in the dataset (60 frames, corresponding to 2 seconds of motion). In PyBullet this is achieved by running the simulation at 240 Hz and skipping 7 frames between observations. The REINFORCE reward is calculated by summing the individual L2 losses between ground-truth frames and PyBullet frames, then multiplying each by −1 to establish a global maximum at the correct mass, in contrast with a global minimum as in ∇Sim. When all individual frame rewards have been calculated, all trajectory rewards are normalized before calculating the loss. This ensures that the reward is scaled correctly with respect to REINFORCE's negative sample log-likelihood, but when the mass value approaches the local optimum, this leads to instability in the optimization process. To mitigate this instability, we introduce reward decay, which is a hyperparameter that slowly decreases the reward values as optimization progresses, in a similar manner to learning rate decay. Before each optimization step, all normalized frame reward values are multiplied by reward_decay. After the optimization step, the decay is updated by reward_decay = reward_decay * decay_factor. The hyperparameters used in this baseline can be found in Table 5.

H.2 CNN FOR DIRECT PARAMETER ESTIMATION

In the rigid-body parameter estimation experiments, we train a ConvNet baseline, building on the EfficientNet-B0 architecture Tan & Le (2019). The ConvNet consists of two convolutional layers with parameters (PyTorch convention): (1280, 128, 1), (128, 32, 1), followed by linear layers and ReLU activations with sizes [7680, 1024, 100, 100, 100, 5]. No activation is applied over the output of the ConvNet. We train the model to minimize the mean-squared error between the estimated and the true parameters, and use the Adam optimizer Kingma & Ba (2015) with a learning rate of 0.0001. Each model was trained for 100 epochs on a V100 GPU. The input image frames were preprocessed by resizing them to 64 × 64 pixels (to reduce GPU memory consumption) and the features were extracted with a pretrained EfficientNet-B0." }, { "heading": "I COMPUTE AND TIMING DETAILS", "text": "Most of the models presented in ∇Sim can be trained and evaluated on modern laptops equipped with graphics processing units (GPUs). We find that, on a laptop with an Intel i7 processor and a GeForce GTX 1060 GPU, parameter estimation experiments for rigid/nonrigid bodies can be run in 5-20 minutes per object on the CPU and in under 1 minute on the GPU. The visuomotor control experiments (control-fem, control-cloth) take about 30 minutes per episode on the CPU and under 5 minutes per episode on the GPU." }, { "heading": "J OVERVIEW OF AVAILABLE DIFFERENTIABLE SIMULATIONS", "text": "Table 6 presents an overview of the differentiable simulations implemented in ∇Sim, and the optimizable parameters therein."
}, { "heading": "K LIMITATIONS", "text": "While providing a wide range of previously inaccessible capabilities,∇Sim has a few limitations that we discuss in this section. These shortcomings also form interesting avenues for subsequent research.\n• ∇Sim (and equivalently∇PyBullet) are inept at handling tiny masses (100g and less). Optimizing for physical parameters for such objects requires a closer look at the design of physics engine and possibly, numerical stability.\n• Articulated bodies are not currently implemented in ∇Sim. Typically, articulated bodies are composed of multiple prismatic joints which lend additional degrees of freedom to the system.\n• While capable of modeling contacts with simple geometries (such as between arbitrary triangle meshes and planar surfaces), ∇Sim has limited capability to handle contact-rich motion that introduces a large number of discontinuities. One way to handle contacts differentiably could be to employ more sophisticated contact detection techniques and solve a linear complementarity problem (LCP) at each step, as done in de Avila Belbute-Peres et al. (2018).\n• Aside from the aforementioned drawbacks, we note that physics engines are adept at modeling phenomena which can be codified. However, there are several unmodeled physical phenomena that occur in real-world videos which must be studied in order for∇Sim to evolve as a scalable framework capable of operating in the wild." }, { "heading": "L BROADER IMPACT", "text": "Much progress has been made on end-to-end learning in visual domains. If successful, image and video understanding promises far-reaching applications from safer autonomous vehicles to more realistic computer graphics, but relying on these tools for planning and control poses substantial risk.\nNeural information processing systems have shown experimentally promising results on visuomotor tasks, yet fail in unpredictable and unintuitive ways when deployed in real-world applications. If embodied learning agents are to play a broader role in the physical world, they must be held to a higher standard of interpretability. Establishing trust requires not just empirical, but explanatory evidence in the form of physically grounded models.\nOur work provides a bridge between gradient- and model-based optimization. Explicitly modeling visual dynamics using well-understood physical principles has important advantages for human explainability and debuggability.\nUnlike end-to-end neural architectures which distribute bias across a large set of parameters, ∇Sim trades their flexibility for physical interpretability. This does not eliminate the risk of bias in simulation, but allows us to isolate bias to physically grounded variables. Where discrepancy occurs, users can probe the model to obtain end-to-end gradients with respect to variation in physical orientation and material properties, or pixelwise differences. Differentiable simulators like ∇Sim afford a number of opportunities for use and abuse. 
We envision the following scenarios.

• A technician could query a trained model, \"What physical parameters is the steering controller most sensitive to?\", or \"What happens if friction were slightly lower on that stretch of roadway?\"

• An energy-conscious organization could use ∇Sim to accelerate convergence of reinforcement learning models, reducing the energy consumption required for training.

• Using differentiable simulation, an adversary could efficiently construct a physically plausible scene causing the model to produce an incorrect prediction or take an unsafe action.

Video understanding is a world-building exercise with inherent modeling bias. Using physically well-studied models makes those modeling choices explicit; however, mitigating the risk of bias still requires active human participation in the modeling process. While a growing number of physically-based rendering and animation efforts are currently underway, our approach does require a high upfront engineering cost in simulation infrastructure. To operationalize these tools, we anticipate practitioners will need to devote significant effort to identifying and replicating unmodeled dynamics from real-world trajectories. Differentiable simulation offers a computationally tractable and physically interpretable pathway for doing so, allowing users to estimate physical trajectories and the properties which govern them." } ]
2021
∇Sim: DIFFERENTIABLE SIMULATION FOR SYSTEM IDENTIFICATION AND VISUOMOTOR CONTROL https://gradsim.github.io
SP:f958cd0237ec397729161f29cab903af40716fd3
[ "In this paper, the authors analyze the convergence of a proximal gradient descent ascent (GDA) method when applied to non-convex strongly concave functions. To establish convergence results, the authors show that proximal-GDA admits a novel Lyapunov function that monotonically decreases at every iteration. Along with KL-parametrized local geometry, the Lyapunov function was used to establish the convergence of decision variables to a critical point. Moreover, the rate of convergence of the algorithm was computed for various ranges of KL-parameter. " ]
The gradient descent-ascent (GDA) algorithm has been widely applied to solve minimax optimization problems. In order to achieve convergent policy parameters for minimax optimization, it is important that GDA generates convergent variable sequences rather than convergent sequences of function values or gradient norms. However, the variable convergence of GDA has been proved only under convexity geometries, and such understanding is lacking for general nonconvex minimax optimization. This paper fills this gap by studying the convergence of a more general proximal-GDA for regularized nonconvex-strongly-concave minimax optimization. Specifically, we show that proximal-GDA admits a novel Lyapunov function, which monotonically decreases in the minimax optimization process and drives the variable sequence to a critical point. By leveraging this Lyapunov function and the KŁ geometry that parameterizes the local geometries of general nonconvex functions, we formally establish the variable convergence of proximal-GDA to a critical point x∗, i.e., xt → x∗, yt → y∗(x∗). Furthermore, over the full spectrum of the KŁ-parameterized geometry, we show that proximal-GDA achieves different types of convergence rates ranging from sublinear convergence up to finite-step convergence, depending on the geometry associated with the KŁ parameter. This is the first theoretical result on the variable convergence for nonconvex minimax optimization.
[ { "affiliations": [], "name": "KŁ GEOMETRY" }, { "affiliations": [], "name": "Ziyi Chen" }, { "affiliations": [], "name": "Yi Zhou" }, { "affiliations": [], "name": "Tengyu Xu" }, { "affiliations": [], "name": "Yingbin Liang" } ]
[ { "authors": [ "L. Adolphs", "H. Daneshmand", "A. Lucchi", "T. Hofmann" ], "title": "Local saddle point optimization: A curvature exploitation approach", "venue": "Proc. International Conference on Artificial Intelligence and Statistics (AISTATS), pages 486–495.", "year": 2019 }, { "authors": [ "H. Attouch", "J. Bolte" ], "title": "On the convergence of the proximal algorithm for nonsmooth functions involving analytic features", "venue": "Mathematical Programming, 116(1-2):5–16.", "year": 2009 }, { "authors": [ "H. Attouch", "J. Bolte", "B.F. Svaiter" ], "title": "Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward–backward splitting, and regularized gauss–seidel methods", "venue": "Mathematical Programming, 137(1-2):91–129.", "year": 2013 }, { "authors": [ "P. Bernhard", "A. Rapaport" ], "title": "On a theorem of danskin with an application to a theorem of von neumann-sion", "venue": "Nonlinear Analysis: Theory, Methods & Applications, 24(8):1163–1181.", "year": 1995 }, { "authors": [ "J. Bolte", "A. Daniilidis", "A. Lewis" ], "title": "The Łojasiewicz inequality for nonsmooth subanalytic functions with applications to subgradient dynamical systems", "venue": "SIAM Journal on Optimization, 17:1205–1223.", "year": 2007 }, { "authors": [ "J. Bolte", "S. Sabach", "M. Teboulle" ], "title": "Proximal alternating linearized minimization for nonconvex and nonsmooth problems", "venue": "Mathematical Programming, 146(1-2):459–494.", "year": 2014 }, { "authors": [ "R.I. Boţ", "A. Böhm" ], "title": "Alternating proximal-gradient steps for (stochastic) nonconvexconcave minimax problems", "venue": "ArXiv:2007.13605.", "year": 2020 }, { "authors": [ "A. Cherukuri", "B. Gharesifard", "J. Cortes" ], "title": "Saddle-point dynamics: conditions for asymptotic stability of saddle points", "venue": "SIAM Journal on Control and Optimization, 55(1):486–511.", "year": 2017 }, { "authors": [ "C. Daskalakis", "I. Panageas" ], "title": "The limit points of (optimistic) gradient descent in min-max optimization", "venue": "Proc. Advances in Neural Information Processing Systems (NeurIPS), pages 9236–9246.", "year": 2018 }, { "authors": [ "S.S. Du", "W. Hu" ], "title": "Linear convergence of the primal-dual gradient method for convexconcave saddle point problems without strong convexity", "venue": "Proc. International Conference on Artificial Intelligence and Statistics (AISTATS), pages 196–205.", "year": 2019 }, { "authors": [ "M.A.M. Ferreira", "M. Andrade", "M.C.P. Matos", "J.A. Filipe", "M.P. Coelho" ], "title": "Minimax theorem and nash equilibrium", "venue": null, "year": 2012 }, { "authors": [ "P. Frankel", "G. Garrigos", "J. Peypouquet" ], "title": "Splitting methods with variable metric for Kurdyka–Łojasiewicz functions and general convergence rates", "venue": "Journal of Optimization Theory and Applications, 165(3):874–900.", "year": 2015 }, { "authors": [ "I. Goodfellow", "J. Pouget-Abadie", "M. Mirza", "B. Xu", "D. Warde-Farley", "S. Ozair", "A. Courville", "Y. Bengio" ], "title": "Generative adversarial nets", "venue": "Proc. Advances in Neural Information Processing Systems (NeurIPS), pages 2672–2680.", "year": 2014 }, { "authors": [ "J. Ho", "S. Ermon" ], "title": "Generative adversarial imitation learning", "venue": "Proc. Advances in Neural Information Processing Systems (NeurIPS), pages 4565–4573.", "year": 2016 }, { "authors": [ "F. Huang", "S. Gao", "J. Pei", "H. 
Huang" ], "title": "Accelerated zeroth-order momentum methods from mini to minimax optimization", "venue": "ArXiv:2008.08170.", "year": 2020 }, { "authors": [ "C. Jin", "P. Netrapalli", "M.I. Jordan" ], "title": "What is local optimality in nonconvex-nonconcave minimax optimization", "venue": null, "year": 2020 }, { "authors": [ "H. Karimi", "J. Nutini", "M. Schmidt" ], "title": "Linear Convergence of Gradient and ProximalGradient Methods Under the Polyak-Łojasiewicz Condition, pages 795–811", "venue": null, "year": 2016 }, { "authors": [ "A.Y. Kruger" ], "title": "On fréchet subdifferentials", "venue": "Journal of Mathematical Sciences, 116(3):3325– 3358.", "year": 2003 }, { "authors": [ "Q. Li", "Y. Zhou", "Y. Liang", "P.K. Varshney" ], "title": "Convergence analysis of proximal gradient with momentum for nonconvex optimization", "venue": "Proc. International Conference on Machine Learning (ICML), volume 70, pages 2111–2119.", "year": 2017 }, { "authors": [ "T. Lin", "C. Jin", "M.I. Jordan" ], "title": "On gradient descent ascent for nonconvex-concave minimax problems", "venue": null, "year": 2020 }, { "authors": [ "Lions", "P.-L.", "B. Mercier" ], "title": "Splitting algorithms for the sum of two nonlinear operators", "venue": "SIAM Journal on Numerical Analysis, 16(6):964–979.", "year": 1979 }, { "authors": [ "S. Łojasiewicz" ], "title": "A topological property of real analytic subsets", "venue": "Coll. du CNRS, Les equations aux derivees partielles, page 87–89.", "year": 1963 }, { "authors": [ "S. Lu", "I. Tsaknakis", "M. Hong", "Y. Chen" ], "title": "Hybrid block successive approximation for one-sided non-convex min-max problems: algorithms and applications", "venue": "IEEE Transactions on Signal Processing.", "year": 2020 }, { "authors": [ "A. Mokhtari", "A. Ozdaglar", "S. Pattathil" ], "title": "A unified analysis of extra-gradient and optimistic gradient methods for saddle point problems: Proximal point approach", "venue": "Proc. International Conference on Artificial Intelligence and Statistics (AISTATS), pages 1497–1507.", "year": 2020 }, { "authors": [ "A. Nedić", "A. Ozdaglar" ], "title": "Subgradient methods for saddle-point problems", "venue": "Journal of optimization theory and applications, 142(1):205–228.", "year": 2009 }, { "authors": [ "Neumann", "J. v." ], "title": "Zur theorie der gesellschaftsspiele", "venue": "Mathematische annalen, 100(1):295–320.", "year": 1928 }, { "authors": [ "D. Noll", "A. Rondepierre" ], "title": "Convergence of linesearch and trust-region methods using the Kurdyka–Łojasiewicz inequality", "venue": "Proc. Computational and Analytical Mathematics, pages 593–611.", "year": 2013 }, { "authors": [ "M. Nouiehed", "M. Sanjabi", "T. Huang", "J.D. Lee", "M. Razaviyayn" ], "title": "Solving a class of non-convex min-max games using iterative first order methods", "venue": "Proc. Advances in Neural Information Processing Systems (NeurIPS), pages 14934–14942.", "year": 2019 }, { "authors": [ "S. Qiu", "Z. Yang", "X. Wei", "J. Ye", "Z. Wang" ], "title": "Single-timescale stochastic nonconvexconcave optimization for smooth nonlinear td learning", "venue": "ArXiv:2008.10103.", "year": 2020 }, { "authors": [ "J. Robinson" ], "title": "An iterative method of solving a game", "venue": "Annals of mathematics, 54(2):296–301.", "year": 1951 }, { "authors": [ "R.T. Rockafellar" ], "title": "Convex analysis", "venue": "Number 28. Princeton university press.", "year": 1970 }, { "authors": [ "R.T. Rockafellar", "Wets", "R.J.-B." 
], "title": "Variational analysis, volume 317", "venue": "Springer Science & Business Media.", "year": 2009 }, { "authors": [ "A. Sinha", "H. Namkoong", "J.C. Duchi" ], "title": "Certifying some distributional robustness with principled adversarial training", "venue": "Proc. International Conference on Learning Representations (ICLR).", "year": 2017 }, { "authors": [ "J. Song", "H. Ren", "D. Sadigh", "S. Ermon" ], "title": "Multi-agent generative adversarial imitation learning", "venue": "Proc. Advances in Neural Information Processing Systems (NeurIPS), pages 7461– 7472.", "year": 2018 }, { "authors": [ "G. Xie", "L. Luo", "Y. Lian", "Z. Zhang" ], "title": "Lower complexity bounds for finite-sum convexconcave minimax optimization problems", "venue": null, "year": 2020 }, { "authors": [ "T. Xu", "Z. Wang", "Y. Liang", "H.V. Poor" ], "title": "Enhanced first and zeroth order variance reduced algorithms for min-max optimization", "venue": "ArXiv:2006.09361.", "year": 2020 }, { "authors": [ "Z. Xu", "H. Zhang", "Y. Xu", "G. Lan" ], "title": "A unified single-loop alternating gradient projection algorithm for nonconvex-concave and convex-nonconcave minimax problems", "venue": "ArXiv:2006.02032.", "year": 2020 }, { "authors": [ "J. Yang", "N. Kiyavash", "N. He" ], "title": "Global convergence and variance reduction for a class of nonconvex-nonconcave minimax problems", "venue": null, "year": 2020 }, { "authors": [ "M. Yue", "Z. Zhou", "M. So" ], "title": "On the Quadratic Convergence of the Cubic Regularization Method under a Local Error Bound Condition", "venue": "ArXiv:1801.09387v1.", "year": 2018 }, { "authors": [ "G. Zhang", "Y. Wang" ], "title": "On the suboptimality of negative momentum for minimax optimization", "venue": "ArXiv:2008.07459.", "year": 2020 }, { "authors": [ "Y. Zhou", "Y. Liang" ], "title": "Characterization of Gradient Dominance and Regularity Conditions for Neural Networks", "venue": "ArXiv:1710.06910v2.", "year": 2017 }, { "authors": [ "Y. Zhou", "Y. Liang", "Y. Yu", "W. Dai", "E.P. Xing" ], "title": "Distributed Proximal Gradient Algorithm for Partially Asynchronous Computer Clusters", "venue": "Journal of Machine Learning Research (JMLR), 19(19):1–32.", "year": 2018 }, { "authors": [ "Y. Zhou", "Z. Wang", "K. Ji", "Y. Liang", "V. Tarokh" ], "title": "Proximal gradient algorithm with momentum and flexible parameter restart for nonconvex optimization", "venue": "Proc. International Joint Conference on Artificial Intelligence, IJCAI-20, pages 1445–1451.", "year": 2020 }, { "authors": [ "Y. Zhou", "Z. Wang", "Y. Liang" ], "title": "Convergence of cubic regularization for nonconvex optimization under kl property", "venue": "Proc. Advances in Neural Information Processing Systems (NeurIPS), pages 3760–3769.", "year": 2018 }, { "authors": [ "Y. Zhou", "Y. Yu", "W. Dai", "Y. Liang", "P. Xing" ], "title": "On convergence of model parallel proximal gradient algorithm for stale synchronous parallel system", "venue": "Proc. International Conference on Artificial Intelligence and Statistics (AISTATS, volume 51, pages 713–722.", "year": 2016 }, { "authors": [ "Y. Zhou", "H. Zhang", "Y. Liang" ], "title": "Geometrical properties and accelerated gradient solvers of non-convex phase retrieval", "venue": "Proc. 54th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 331–335.", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Minimax optimization is a classical optimization framework that has been widely applied in various modern machine learning applications, including game theory Ferreira et al. (2012), generative adversarial networks (GANs) Goodfellow et al. (2014), adversarial training Sinha et al. (2017), reinforcement learning Qiu et al. (2020), imitation learning Ho and Ermon (2016); Song et al. (2018), etc. A typical minimax optimization problem is shown below, where f is a differentiable function.\nmin x∈X max y∈Y f(x, y).\nA popular algorithm for solving the above minimax problem is gradient descent-ascent (GDA), which performs a gradient descent update on the variable x and a gradient ascent update on the variable y alternatively in each iteration. Under the alternation between descent and ascent updates, it is much desired that GDA generates sequences of variables that converge to a certain optimal point, i.e., the minimax players obtain convergent optimal policies. In the existing literature, many studies have established the convergence of GDA-type algorithms under various global geometries of the objective function, e.g., convex-concave geometry (f is convex in x and concave in y) Nedić and Ozdaglar (2009), bi-linear geometry Neumann (1928); Robinson (1951) and Polyak-Łojasiewicz (PŁ) geometry Nouiehed et al. (2019); Yang et al. (2020). Some other work studied GDA under stronger global geometric conditions of f such as convex-strongly-concave geometry Du and Hu (2019) and strongly-convex-strongly-concave geometry Mokhtari et al. (2020); Zhang and Wang (2020), under which GDA is shown to generate convergent variable sequences. However, these special global function geometries do not hold for modern machine learning problems that usually have complex models and nonconvex geometry.\nRecently, many studies characterized the convergence of GDA in nonconvex minimax optimization, where the objective function is nonconvex in x. Specifically, Lin et al. (2020); Nouiehed et al. (2019); Xu et al. (2020b); Boţ and Böhm (2020) studied the convergence of GDA in the nonconvex-concave setting and Lin et al. (2020); Xu et al. (2020b) studied the nonconvex-strongly-concave setting. In these general nonconvex settings, it has been shown that GDA converges to a certain stationary point at a sublinear rate, i.e., ‖G(xt)‖ ≤ t−α for some α > 0, where G(xt) corresponds to a certain notion of gradient. Although such a gradient convergence result implies the stability of the algorithm, namely, limt→∞ ‖xt+1 − xt‖ = 0, it does not guarantee the convergence of the variable sequences {xt}t, {yt}t generated by GDA. So far, the variable convergence of GDA has not been established for nonconvex problems, but only under (strongly) convex function geometries that are mentioned previously Du and Hu (2019); Mokhtari et al. (2020); Zhang and Wang (2020). Therefore, we want to ask the following fundamental question:\n• Q1: Does GDA have guaranteed variable convergence in nonconvex minimax optimization? If so, where do they converge to?\nIn fact, proving the variable convergence of GDA in the nonconvex setting is highly nontrivial due to the following reasons: 1) the algorithm alternates between a minimization step and a maximization step; 2) It is well understood that strong global function geometry leads to the convergence of GDA. However, in general nonconvex setting, the objective functions typically do not have an amenable global geometry. 
Instead, they may satisfy different types of local geometries around the critical points. Hence, it is natural and much desired to exploit the local geometries of functions in analyzing the convergence of GDA. The Kurdyka-Łojasiewicz (KŁ) geometry provides a broad characterization of such local geometries for nonconvex functions.

The Kurdyka-Łojasiewicz (KŁ) geometry (see Section 2 for details) Bolte et al. (2007; 2014) parameterizes a broad spectrum of the local nonconvex geometries and has been shown to hold for a broad class of practical functions. Moreover, it also generalizes other global geometries such as strong convexity and the PŁ geometry. In the existing literature, the KŁ geometry has been exploited extensively to analyze the convergence rate of various gradient-based algorithms in nonconvex optimization, e.g., gradient descent Attouch and Bolte (2009); Li et al. (2017) and its accelerated version Zhou et al. (2020) as well as the distributed version Zhou et al. (2016a). Hence, we are highly motivated to study the rate of variable convergence of GDA in nonconvex minimax optimization under the KŁ geometry. In particular, we want to address the following question:

• Q2: How does the local function geometry captured by the KŁ parameter affect the variable convergence rate of GDA?

In this paper, we provide comprehensive answers to these questions. We develop a new analysis framework to study the variable convergence of GDA in nonconvex-strongly-concave minimax optimization under the KŁ geometry. We also characterize the convergence rates of GDA in the full spectrum of the parameterization of the KŁ geometry." }, { "heading": "1.1 OUR CONTRIBUTIONS", "text": "We consider the following regularized nonconvex-strongly-concave minimax optimization problem

min_{x∈R^m} max_{y∈Y} f(x, y) + g(x) − h(y), (P)

where f is a differentiable and nonconvex-strongly-concave function, g is a general nonconvex regularizer and h is a convex regularizer. Both g and h can possibly be nonsmooth. To solve the above regularized minimax problem, we study a proximal-GDA algorithm that leverages the forward-backward splitting update Lions and Mercier (1979); Attouch et al. (2013).

We study the variable convergence property of proximal-GDA in solving the minimax problem (P). Specifically, we show that proximal-GDA admits a novel Lyapunov function H(x, y) (see Proposition 2), which is monotonically decreasing along the trajectory of proximal-GDA, i.e., H(x_{t+1}, y_{t+1}) < H(x_t, y_t). Based on the monotonicity of this Lyapunov function, we show that every limit point of the variable sequences generated by proximal-GDA is a critical point of the objective function.

Moreover, by exploiting the ubiquitous KŁ geometry of the Lyapunov function, we prove that the entire variable sequence of proximal-GDA has a unique limit point, or equivalently speaking, it converges to a certain critical point x∗, i.e., x_t → x∗, y_t → y∗(x∗) (see the definition of y∗ in Section 2). To the best of our knowledge, this is the first variable convergence result of GDA-type algorithms in nonconvex minimax optimization.

Furthermore, we characterize the asymptotic convergence rates of both the variable sequences and the function values of proximal-GDA in different parameterization regimes of the KŁ geometry. Depending on the value of the KŁ parameter θ, we show that proximal-GDA achieves different types of convergence rates ranging from sublinear convergence up to finite-step convergence, as we summarize in Table 1 below."
}, { "heading": "1.2 RELATED WORK", "text": "Deterministic GDA algorithms: Yang et al. (2020) studied an alternating gradient descent-ascent (AGDA) algorithm in which the gradient ascent step uses the current variable xt+1 instead of xt. Boţ and Böhm (2020) extended the AGDA algorithm to an alternating proximal-GDA (APGDA) algorithm for a regularized minimax optimization. Xu et al. (2020b) studied an alternating gradient projection algorithm which applies `2 regularizer to the local objective function of GDA followed by projection onto the constraint sets. Daskalakis and Panageas (2018); Mokhtari et al. (2020); Zhang and Wang (2020) analyzed optimistic gradient descent-ascent (OGDA) which applies negative momentum to accelerate GDA. Mokhtari et al. (2020) also studied an extra-gradient algorithm which applies two-step GDA in each iteration. Nouiehed et al. (2019) studied multi-step GDA where multiple gradient ascent steps are performed, and they also studied the momentum-accelerated version. Cherukuri et al. (2017); Daskalakis and Panageas (2018); Jin et al. (2020) studied GDA in continuous time dynamics using differential equations. Adolphs et al. (2019) analyzed a second-order variant of the GDA algorithm.\nStochastic GDA algorithms: Lin et al. (2020); Yang et al. (2020); Boţ and Böhm (2020) analyzed stochastic GDA, stochastic AGDA and stochastic APGDA, which are direct extensions of GDA, AGDA and APGDA to the stochastic setting respectively. Variance reduction techniques have been applied to stochastic minimax optimization, including SVRG-based Du and Hu (2019); Yang et al. (2020), SPIDER-based Xu et al. (2020a), STORM Qiu et al. (2020) and its gradient free version Huang et al. (2020). Xie et al. (2020) studied the complexity lower bound of first-order stochastic algorithms for finite-sum minimax problem.\nKŁ geometry: The KŁ geometry was defined in Bolte et al. (2007). The KŁ geometry has been exploited to study the convergence of various first-order algorithms for solving minimization problems, including gradient descent Attouch and Bolte (2009), alternating gradient descent Bolte et al. (2014), distributed gradient descent Zhou et al. (2016a; 2018a), accelerated gradient descent Li et al. (2017). It has also been exploited to study the convergence of second-order algorithms such as Newton’s method Noll and Rondepierre (2013); Frankel et al. (2015) and cubic regularization method Zhou et al. (2018b)." }, { "heading": "2 PROBLEM FORMULATION AND KŁ GEOMETRY", "text": "In this section, we introduce the problem formulation, technical assumptions and the KurdykaŁojasiewicz (KŁ) geometry. We consider the following regularized minimax optimization problem.\nmin x∈Rm max y∈Y\nf(x, y) + g(x)− h(y), (P)\nwhere f : Rm × Rn → R is a differentiable and nonconvex-strongly-concave loss function, Y ⊂ Rn is a compact and convex set, and g, h are the regularizers that are possibly non-smooth. In particular, define Φ(x) := maxy∈Y f(x, y)− h(y), and then the problem (P) is equivalent to the minimization problem minx∈Rm Φ(x) + g(x).\nThroughout the paper, we adopt the following standard assumptions on the problem (P). Assumption 1. The objective function of the problem (P) satisfies:\n1. Function f(·, ·) is L-smooth and function f(x, ·) is µ-strongly concave; 2. Function (Φ + g)(x) is bounded below, i.e., infx∈Rm(Φ + g)(x) > −∞; 3. For any α ∈ R, the sub-level set {x : (Φ + g)(x) ≤ α} is compact; 4. 
Function h is proper and convex, and function g is proper and lower semi-continuous.

To elaborate, item 1 considers the class of nonconvex-strongly-concave functions f that has been widely studied in the existing literature Lin et al. (2020); Jin et al. (2020); Xu et al. (2020b;a); Lu et al. (2020). Items 2 and 3 guarantee that the minimax problem (P) has at least one solution, and that the variable sequences generated by the proximal-GDA algorithm (see Algorithm 1) are bounded. Item 4 requires the regularizer h to be convex (possibly non-smooth), which includes many popular norm-based regularizers such as ℓ_p (p ≥ 1), elastic net, nuclear norm, spectral norm, etc. On the other hand, the other regularizer g can be nonconvex but lower semi-continuous, which includes all the aforementioned convex regularizers, ℓ_p (0 ≤ p < 1), the Schatten-p norm, rank, etc. Hence, our formulation of the problem (P) covers a rich class of nonconvex objective functions and regularizers and is more general than the existing nonconvex minimax formulation in Lin et al. (2020), which does not consider any regularizer. Remark 1. We note that the strong concavity of f(x, ·) in item 1 can be relaxed to concavity, provided that the regularizer h(y) is µ-strongly convex. In this case, we can add −(µ/2)‖y‖² to both f(x, y) and h(y) such that Assumption 1 still holds. For simplicity, we will omit the discussion on this case.

By strong concavity of f(x, ·), it is clear that the mapping y∗(x) := arg max_{y∈Y} f(x, y) − h(y) is uniquely defined for every x ∈ R^m. In particular, if x∗ is the desired minimizer of Φ(x), then (x∗, y∗(x∗)) is the desired solution of the minimax problem (P).

Next, we present some important properties regarding the function Φ(x) and the mapping y∗(x). The following proposition from Boţ and Böhm (2020) generalizes Lemma 4.3 of Lin et al. (2020) to the regularized setting. The proof can be found in Appendix A. Throughout, we denote κ = L/µ as the condition number and denote ∇1f(x, y), ∇2f(x, y) as the gradients with respect to the first and the second input argument, respectively. For example, with this notation, ∇1f(x, y∗(x)) denotes the gradient of f(x, y∗(x)) with respect to only the first input argument x, and the x in the second input argument y∗(x) is treated as a constant. Proposition 1 (Lipschitz continuity of y∗(x) and ∇Φ(x)). Let Assumption 1 hold. Then, the mapping y∗(x) and the function Φ(x) satisfy

1. Mapping y∗(x) is κ-Lipschitz continuous;
2. Function Φ(x) is L(1 + κ)-smooth with ∇Φ(x) = ∇1f(x, y∗(x)).

As an intuitive explanation of Proposition 1, since the function f(x, y) − h(y) is L-smooth with respect to x, both the maximizer y∗(x) and the corresponding maximum function value Φ(x) should not change substantially with regard to a small change of x.

Recall that the minimax problem (P) is equivalent to the standard minimization problem min_{x∈R^m} Φ(x) + g(x), which, according to item 2 of Proposition 1, includes a smooth nonconvex function Φ(x) and a lower semi-continuous regularizer g(x). Hence, we can define the optimization goal of the minimax problem (P) as finding a critical point x∗ of the nonconvex function Φ(x) + g(x) that satisfies the necessary optimality condition 0 ∈ ∂(Φ + g)(x∗) for minimizing nonconvex functions. Here, ∂ denotes the notion of subdifferential as we elaborate below.

Definition 1.
(Subdifferential and critical point, Rockafellar and Wets (2009)) The Fréchet subdifferential ∂̂h of function h at x ∈ dom h is the set of u ∈ R^d defined as

∂̂h(x) := { u : liminf_{z≠x, z→x} [h(z) − h(x) − uᵀ(z − x)] / ‖z − x‖ ≥ 0 },

and the limiting subdifferential ∂h at x ∈ dom h is the graphical closure of ∂̂h defined as

∂h(x) := {u : ∃ x_k → x, h(x_k) → h(x), u_k ∈ ∂̂h(x_k), u_k → u}.

The set of critical points of h is defined as crit h := {x : 0 ∈ ∂h(x)}.

Throughout, we refer to the limiting subdifferential as the subdifferential. We note that the subdifferential is a generalization of the gradient (when h is differentiable) and the subgradient (when h is convex) to the nonconvex setting. In particular, any local minimizer of h must be a critical point.

Next, we introduce the Kurdyka-Łojasiewicz (KŁ) geometry of a function h. Throughout, the point-to-set distance is denoted as dist_Ω(x) := inf_{u∈Ω} ‖x − u‖. Definition 2 (KŁ geometry, Bolte et al. (2014)). A proper and lower semi-continuous function h is said to have the KŁ geometry if for every compact set Ω ⊂ dom h on which h takes a constant value h_Ω ∈ R, there exist ε, λ > 0 such that for all x̄ ∈ Ω and all x ∈ {z ∈ R^m : dist_Ω(z) < ε, h_Ω < h(z) < h_Ω + λ}, the following condition holds:

ϕ′(h(x) − h_Ω) · dist_{∂h(x)}(0) ≥ 1, (1)

where ϕ′ is the derivative of the function ϕ : [0, λ) → R_+, which takes the form ϕ(t) = (c/θ) t^θ for a certain universal constant c > 0 and KŁ parameter θ ∈ (0, 1].

The KŁ geometry characterizes the local geometry of a nonconvex function around the set of critical points. To explain, consider the case where h is a differentiable function so that ∂h(x) = {∇h(x)}. Then, the KŁ inequality in eq. (1) becomes h(x) − h_Ω ≤ O(‖∇h(x)‖^(1/(1−θ))), which generalizes the Polyak-Łojasiewicz (PL) condition h(x) − h_Ω ≤ O(‖∇h(x)‖²) Łojasiewicz (1963); Karimi et al. (2016) (i.e., KŁ parameter θ = 1/2). Moreover, the KŁ geometry has been shown to hold for a large class of functions including sub-analytic functions, logarithm and exponential functions and semi-algebraic functions. These function classes cover most of the nonconvex objective functions encountered in practical machine learning applications Zhou et al. (2016b); Yue et al. (2018); Zhou and Liang (2017); Zhou et al. (2018b).

The KŁ geometry has been exploited extensively to analyze the convergence of various first-order algorithms, e.g., gradient descent Attouch and Bolte (2009); Li et al. (2017), alternating minimization Bolte et al. (2014) and distributed gradient methods Zhou et al. (2016a). It has also been exploited to study the convergence of second-order algorithms such as cubic regularization Zhou et al. (2018b). In these works, it has been shown that the variable sequences generated by these algorithms converge to a desired critical point in nonconvex optimization, and the convergence rates critically depend on the parameterization θ of the KŁ geometry. In the subsequent sections, we provide a comprehensive understanding of the convergence and convergence rate of proximal-GDA under the KŁ geometry." }, { "heading": "3 PROXIMAL-GDA AND GLOBAL CONVERGENCE ANALYSIS", "text": "In this section, we study the following proximal-GDA algorithm that leverages the forward-backward splitting updates Lions and Mercier (1979); Attouch et al. (2013) to solve the regularized minimax problem (P) and analyze its global convergence properties. In particular, the proximal-GDA algorithm is a generalization of the GDA Du and Hu (2019) and projected GDA Nedić and Ozdaglar (2009) algorithms.
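As a preview of the update rule formalized in Algorithm 1 below, the following is a minimal NumPy sketch of one instantiation of proximal-GDA; for concreteness it takes g(x) = λ‖x‖₁ (whose proximal operator is soft-thresholding), h = 0, and Y = R^n, so the y-step reduces to plain gradient ascent. All names and these regularizer choices are illustrative assumptions:

import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gda(grad1_f, grad2_f, x0, y0, eta_x, eta_y, lam, T):
    x, y = x0.copy(), y0.copy()
    for _ in range(T):
        # Both updates use the current pair (x_t, y_t).
        x_next = soft_threshold(x - eta_x * grad1_f(x, y), eta_x * lam)
        # With h = 0 and Y = R^n, the proximal step on y is plain gradient
        # ascent (for a compact Y, one would project onto Y here).
        y = y + eta_y * grad2_f(x, y)
        x = x_next
    return x, y

For a general g and a compact Y, one would replace the soft-thresholding step with the proximal map in eq. (2) below and project the y-update onto Y as in eq. (3).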
The algorithm update rule is specified in Algorithm 1, where the two proximal gradient steps are formally defined as

prox_{η_x g}(x_t − η_x ∇1f(x_t, y_t)) :∈ argmin_{u∈R^m} { g(u) + (1/(2η_x)) ‖u − x_t + η_x ∇1f(x_t, y_t)‖² }, (2)

prox_{η_y h}(y_t + η_y ∇2f(x_t, y_t)) := argmin_{v∈Y} { h(v) + (1/(2η_y)) ‖v − y_t − η_y ∇2f(x_t, y_t)‖² }, (3)

Algorithm 1 Proximal-GDA
Input: Initialization x_0, y_0, learning rates η_x, η_y.
for t = 0, 1, 2, . . . , T − 1 do
    x_{t+1} ∈ prox_{η_x g}(x_t − η_x ∇1f(x_t, y_t)),
    y_{t+1} = prox_{η_y h}(y_t + η_y ∇2f(x_t, y_t)).
end
Output: x_T, y_T.

Recall that our goal is to obtain a critical point of the minimization problem min_{x∈R^m} Φ(x) + g(x). Unlike the gradient descent algorithm, which generates a sequence of monotonically decreasing function values, the function value (Φ + g)(x_t) along the variable sequence generated by proximal-GDA is generally oscillating due to the alternation between the gradient descent and gradient ascent steps. Hence, it seems that proximal-GDA is less stable than gradient descent. However, our next result shows that, for the problem (P), proximal-GDA admits a special Lyapunov function that monotonically decreases in the optimization process. The proof of Proposition 2 is in Appendix B.

Proposition 2. Let Assumption 1 hold and define the Lyapunov function H(z) := Φ(x) + g(x) + (1 − 1/(4κ²))‖y − y∗(x)‖² with z := (x, y). Choose the learning rates such that η_x ≤ 1/(κ³(L+3)²), η_y ≤ 1/L. Then, the variables z_t = (x_t, y_t) generated by proximal-GDA satisfy, for all t = 0, 1, 2, ...

H(z_{t+1}) ≤ H(z_t) − 2‖x_{t+1} − x_t‖² − (1/(4κ²))(‖y_{t+1} − y∗(x_{t+1})‖² + ‖y_t − y∗(x_t)‖²). (4)

We first explain how this Lyapunov function is introduced in the proof. By eq. (19) in the supplementary material, we established a recursive inequality on the objective function (Φ + g)(x_{t+1}). One can see that the right-hand side of eq. (19) contains a negative term −‖x_{t+1} − x_t‖² and an undesired positive term ‖y∗(x_t) − y_t‖². Hence, the objective function (Φ + g)(x_{t+1}) may be oscillating and cannot serve as a proper Lyapunov function. In the subsequent analysis, we break this positive term into a difference of two terms, ‖y∗(x_t) − y_t‖² − ‖y∗(x_{t+1}) − y_{t+1}‖², by leveraging the update of y_{t+1} for solving the strongly concave maximization problem. After proper rearranging, this difference term contributes to the quadratic term in the Lyapunov function.

We note that the Lyapunov function H(z) is the objective function Φ(x) + g(x) regularized by the additional quadratic term (1 − 1/(4κ²))‖y − y∗(x)‖², and such a Lyapunov function clearly characterizes our optimization goal. To elaborate, consider a desired case where the sequence x_t converges to a certain critical point x∗ and the sequence y_t converges to the corresponding point y∗(x∗). In this case, it can be seen that the Lyapunov function H(z_t) converges to the desired function value (Φ + g)(x∗). Hence, solving the minimax problem (P) is equivalent to minimizing the Lyapunov function. More importantly, Proposition 2 shows that the Lyapunov function value sequence {H(z_t)}_t is monotonically decreasing in the optimization process of proximal-GDA, implying that the algorithm continuously makes optimization progress. We also note that the coefficient (1 − 1/(4κ²)) in the Lyapunov function is chosen in a way so that eq. (4) can be proven to be strictly decreasing. This monotonic property is the core of our analysis of proximal-GDA.

Based on Proposition 2, we obtain the following asymptotic properties of the variable sequences generated by proximal-GDA.
The proof can be found in Appendix C.

Corollary 1. Based on Proposition 2, the sequences {x_t, y_t}_t generated by proximal-GDA satisfy

lim_{t→∞} ‖x_{t+1} − x_t‖ = 0, lim_{t→∞} ‖y_{t+1} − y_t‖ = 0, lim_{t→∞} ‖y_t − y∗(x_t)‖ = 0.

The above result shows that the variable sequences generated by proximal-GDA in solving the problem (P) are asymptotically stable. In particular, the last two equations show that y_t asymptotically approaches the corresponding maximizer y∗(x_t) of the objective function f(x_t, y) + g(x_t) − h(y). Hence, if x_t converges to a certain critical point, y_t will converge to the corresponding maximizer.

Discussion: We note that the monotonicity property in Proposition 2 further implies the convergence rate result min_{0≤k≤t} ‖x_{k+1} − x_k‖ ≤ O(t^(−1/2)) (by telescoping over t). When there is no regularizer, this convergence rate result can be shown to further imply that min_{0≤k≤t} ‖∇Φ(x_k)‖ ≤ O(t^(−1/2)), which reduces to Theorem 4.4 of Lin et al. (2020). However, such a convergence rate result does not imply the convergence of the variable sequences {x_t}_t, {y_t}_t. To explain, we can apply the convergence rate result ‖x_{t+1} − x_t‖ ≤ O(t^(−1/2)) to bound the trajectory norm as ‖x_T‖ ≤ ‖x_0‖ + Σ_{t=0}^{T−1} ‖x_{t+1} − x_t‖ ≈ √T, which diverges to +∞ as T → ∞. Therefore, such a type of convergence rate does not even imply the boundedness of the trajectory. In this paper, our focus is to establish the convergence of the variable sequences generated by proximal-GDA.

All the results in Corollary 1 imply that the alternating proximal gradient descent & ascent updates of proximal-GDA can achieve stationary points, which we show below to be critical points. Theorem 1 (Global convergence). Let Assumption 1 hold and choose the learning rates η_x ≤ 1/(κ³(L+3)²), η_y ≤ 1/L. Then, proximal-GDA satisfies the following properties.

1. The function value sequence {(Φ + g)(x_t)}_t converges to a finite limit H∗ > −∞;
2. The sequences {x_t}_t, {y_t}_t are bounded and have compact sets of limit points. Moreover, (Φ + g)(x∗) ≡ H∗ for any limit point x∗ of {x_t}_t;
3. Every limit point of {x_t}_t is a critical point of (Φ + g)(x).

The proof of Theorem 1 is presented in Appendix D. The above theorem establishes the global convergence property of proximal-GDA. Specifically, item 1 shows that the function value sequence {(Φ + g)(x_t)}_t converges to a finite limit H∗, which is also the limit of the Lyapunov function sequence {H(z_t)}_t. Moreover, items 2 & 3 further show that all the converging subsequences of {x_t}_t converge to critical points of the problem, at which the function Φ + g achieves the constant value H∗. These results show that proximal-GDA can properly find critical points of the minimax problem (P). Furthermore, based on these results, the variable sequences generated by proximal-GDA are guaranteed to enter a local parameter region where the Kurdyka-Łojasiewicz geometry holds, which we exploit in the next section to establish stronger convergence results of the algorithm." }, { "heading": "4 VARIABLE CONVERGENCE OF PROXIMAL-GDA UNDER KŁ GEOMETRY", "text": "We note that Theorem 1 only shows that every limit point of {x_t}_t is a critical point, and the sequences {x_t, y_t}_t may not necessarily be convergent. In this section, we exploit the local KŁ geometry of the Lyapunov function to formally prove the convergence of these sequences. Throughout this section, we adopt the following assumption. Assumption 2.
Assumption 2. Regarding the mapping $y^*(x)$, the function $\|y^*(x) - y\|^2$ has a non-empty subdifferential, i.e., $\partial_x\big(\|y^*(x) - y\|^2\big) \neq \emptyset$.

Note that in many practical scenarios $y^*(x)$ is sub-differentiable. In addition, Assumption 2 ensures the sub-differentiability of the Lyapunov function $H(z) := \Phi(x) + g(x) + \big(1 - \frac{1}{4\kappa^2}\big)\|y - y^*(x)\|^2$. We obtain the following variable convergence result of proximal-GDA under the KŁ geometry. The proof is presented in Appendix E.

Theorem 2 (Variable convergence). Let Assumptions 1 & 2 hold and assume that $H$ has the KŁ geometry. Choose the learning rates $\eta_x \leq \frac{1}{\kappa^3 (L+3)^2}$ and $\eta_y \leq \frac{1}{L}$. Then, the sequence $\{(x_t, y_t)\}_t$ generated by proximal-GDA converges to a certain critical point $(x^*, y^*(x^*))$ of $(\Phi + g)(x)$, i.e.,
$$x_t \xrightarrow{t} x^*, \quad y_t \xrightarrow{t} y^*(x^*).$$

Theorem 2 formally shows that proximal-GDA is guaranteed to converge to a certain critical point $(x^*, y^*(x^*))$ of the minimax problem (P), provided that the Lyapunov function belongs to the large class of KŁ functions. To the best of our knowledge, this is the first variable convergence result of GDA-type algorithms in nonconvex minimax optimization. The proof logic of Theorem 2 can be summarized as the following two key steps.

Step 1: By leveraging the monotonicity property of the Lyapunov function in Proposition 2, we first show that the variable sequences of proximal-GDA eventually enter a local region where the KŁ geometry holds;

Step 2: Then, combining the KŁ inequality in eq. (1) and the monotonicity property of the Lyapunov function in eq. (4), we show that the variable sequences of proximal-GDA are Cauchy sequences and hence converge to a certain critical point." }, { "heading": "5 CONVERGENCE RATE OF PROXIMAL-GDA UNDER KŁ GEOMETRY", "text": "In this section, we exploit the parameterization of the KŁ geometry to establish various types of asymptotic convergence rates of proximal-GDA.

We obtain the following asymptotic convergence rates of proximal-GDA under different parameter regimes of the KŁ geometry. The proof is presented in Appendix F. In the sequel, we denote $t_0$ as a sufficiently large positive integer, denote $c > 0$ as the constant in Definition 2 and also define
$$M := \max\Big\{\frac{1}{2}\Big(\frac{1}{\eta_x} + (L + 4\kappa^2)(1 + \kappa)\Big)^2,\; 4\kappa^2 (L + 4\kappa)^2\Big\}. \quad (5)$$

Theorem 3 (Function value convergence rate). Under the same conditions as those of Theorem 2, the Lyapunov function value sequence $\{H(z_t)\}_t$ converges to the limit $H^*$ at the following rates.
1. If the KŁ geometry holds with $\theta = 1$, then $H(z_t) \downarrow H^*$ within a finite number of iterations;
2. If the KŁ geometry holds with $\theta \in (\frac{1}{2}, 1)$, then $H(z_t) \downarrow H^*$ super-linearly as
$$H(z_t) - H^* \leq (2Mc^2)^{-\frac{1}{2\theta - 1}} \exp\Big(-\Big(\frac{1}{2(1-\theta)}\Big)^{t - t_0}\Big), \quad \forall t \geq t_0; \quad (6)$$
3. If the KŁ geometry holds with $\theta = \frac{1}{2}$, then $H(z_t) \downarrow H^*$ linearly as
$$H(z_t) - H^* \leq \Big(1 + \frac{1}{2Mc^2}\Big)^{t_0 - t}\big(H(z_{t_0}) - H^*\big), \quad \forall t \geq t_0; \quad (7)$$
4. If the KŁ geometry holds with $\theta \in (0, \frac{1}{2})$, then $H(z_t) \downarrow H^*$ sub-linearly as
$$H(z_t) - H^* \leq \big[C(t - t_0)\big]^{-\frac{1}{1 - 2\theta}}, \quad \forall t \geq t_0, \quad (8)$$
where $C = \min\big[\frac{1 - 2\theta}{8Mc^2},\; d_{t_0}^{-(1 - 2\theta)}\big(1 - 2^{-(1 - 2\theta)}\big)\big] > 0$.

It can be seen from the above theorem that the convergence rate of the Lyapunov function of proximal-GDA is determined by the KŁ parameter $\theta$. A larger $\theta$ implies that the local geometry of $H$ is 'sharper', and hence the corresponding convergence rate is orderwise faster. In particular, the algorithm converges at a linear rate when the KŁ geometry holds with $\theta = \frac{1}{2}$ (see item 3), which is a generalization of the Polyak-Łojasiewicz (PL) geometry.
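To make the KŁ parameterization concrete, here is a toy check of our own (not from the paper): for $h(x) = x^2$ with minimum $0$, the KŁ inequality holds with $\theta = \frac{1}{2}$ and $\varphi(s) = c\,s^{1/2}$ for any $c \geq 1$, since $\varphi'(h(x))\,|h'(x)| = \frac{c}{2}(x^2)^{-1/2} \cdot 2|x| = c$. The sketch below verifies this numerically, and also shows why $\theta = \frac{1}{2}$ is exactly the regime of a constant per-step contraction, i.e., a linear rate as in item 3.

```python
import numpy as np

# KL inequality (Definition 2) for h(x) = x**2, h* = 0, with
# phi(s) = c * s**(1/2)  (theta = 1/2): phi'(h(x)) * |h'(x)| = c >= 1.
c = 1.0
for x in np.linspace(-2, 2, 401):
    if x == 0.0:
        continue
    s = x ** 2                       # h(x) - h*
    lhs = (c / 2) * s ** (-0.5) * abs(2 * x)
    assert abs(lhs - c) < 1e-9       # holds with equality: theta = 1/2

# theta = 1/2 yields a geometric decay of the gap: gradient descent on h
# with step 0.25 contracts the gap by the constant factor (1 - 0.5)**2.
x, gaps = 1.5, []
for _ in range(20):
    x = x - 0.25 * 2 * x
    gaps.append(x ** 2)
print([gaps[i + 1] / gaps[i] for i in range(3)])   # constant ratio 0.25
```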
As a comparison, in the existing analysis of GDA, such a linear convergence result is established under stronger geometries, e.g., convex-strongly-concave (Du and Hu, 2019), strongly-convex-strongly-concave (Mokhtari et al., 2020; Zhang and Wang, 2020) and the two-sided PL condition (Yang et al., 2020). In summary, the above theorem provides a full characterization of the fast convergence rates of proximal-GDA in the full spectrum of the KŁ geometry.

Moreover, we also obtain the following asymptotic convergence rates of the variable sequences that are generated by proximal-GDA under different parameterizations of the KŁ geometry. The proof is presented in Appendix G.

Theorem 4 (Variable convergence rate). Under the same conditions as those of Theorem 2, the sequences $\{x_t, y_t\}_t$ converge to their limits $x^*, y^*(x^*)$ respectively at the following rates.
1. If the KŁ geometry holds with $\theta = 1$, then $(x_t, y_t) \to (x^*, y^*(x^*))$ within a finite number of iterations;
2. If the KŁ geometry holds with $\theta \in (\frac{1}{2}, 1)$, then $(x_t, y_t) \to (x^*, y^*(x^*))$ super-linearly as
$$\max\big\{\|x_t - x^*\|, \|y_t - y^*(x^*)\|\big\} \leq O\Big(\exp\Big(-\Big(\frac{1}{2(1-\theta)}\Big)^{t - t_0}\Big)\Big), \quad \forall t \geq t_0; \quad (9)$$
3. If the KŁ geometry holds with $\theta = \frac{1}{2}$, then $(x_t, y_t) \to (x^*, y^*(x^*))$ linearly as
$$\max\big\{\|x_t - x^*\|, \|y_t - y^*(x^*)\|\big\} \leq O\Big(\Big(\min\Big\{2,\, 1 + \frac{1}{2Mc^2}\Big\}\Big)^{(t_0 - t)/2}\Big), \quad \forall t \geq t_0; \quad (10)$$
4. If the KŁ geometry holds with $\theta \in (0, \frac{1}{2})$, then $(x_t, y_t) \to (x^*, y^*(x^*))$ sub-linearly as
$$\max\big\{\|x_t - x^*\|, \|y_t - y^*(x^*)\|\big\} \leq O\big((t - t_0)^{-\frac{\theta}{1 - 2\theta}}\big), \quad \forall t \geq t_0. \quad (11)$$

To the best of our knowledge, this is the first characterization of the variable convergence rates of proximal-GDA in the full spectrum of the KŁ geometry. It can be seen that, similar to the convergence rate results of the function value sequence, the convergence rate of the variable sequences is also affected by the parameterization of the KŁ geometry." }, { "heading": "6 CONCLUSION", "text": "In this paper, we develop a new analysis framework for the proximal-GDA algorithm in nonconvex-strongly-concave optimization. Our key observation is that proximal-GDA has an intrinsic Lyapunov function that monotonically decreases in the minimax optimization process. Such a property demonstrates the stability of the algorithm. Moreover, we establish the formal variable convergence of proximal-GDA to a critical point of the objective function under the ubiquitous KŁ geometry. Our results fully characterize the impact of the parameterization of the KŁ geometry on the convergence rate of the algorithm. In future work, we will leverage this analysis framework to explore the convergence of stochastic GDA algorithms and their variance-reduced variants." }, { "heading": "ACKNOWLEDGEMENT", "text": "The work of T. Xu and Y. Liang was supported partially by the U.S. National Science Foundation under the grants CCF-1900145 and CCF-1909291." }, { "heading": "A PROOF OF PROPOSITION 1", "text": "Proposition 1 (Lipschitz continuity of $y^*(x)$ and $\nabla\Phi(x)$). Let Assumption 1 hold. Then, the mapping $y^*(x)$ and the function $\Phi(x)$ satisfy

1. the mapping $y^*(x)$ is $\kappa$-Lipschitz continuous;
2. the function $\Phi(x)$ is $L(1+\kappa)$-smooth with $\nabla\Phi(x) = \nabla_1 f(x, y^*(x))$.

Proof. We first prove item 1. Since $f(x, y)$ is strongly concave in $y$ for every $x$ and $h(y)$ is convex, the mapping $y^*(x) = \operatorname*{argmax}_{y \in Y} f(x, y) - h(y)$ is uniquely defined. We first show that $y^*(x)$ is a Lipschitz mapping. Consider two arbitrary points $x_1, x_2$.
The optimality conditions of $y^*(x_1)$ and $y^*(x_2)$ imply that
$$\langle y - y^*(x_1), \nabla_2 f(x_1, y^*(x_1)) - u_1 \rangle \leq 0, \quad \forall y \in Y,\; u_1 \in \partial h(y^*(x_1)), \quad (12)$$
$$\langle y - y^*(x_2), \nabla_2 f(x_2, y^*(x_2)) - u_2 \rangle \leq 0, \quad \forall y \in Y,\; u_2 \in \partial h(y^*(x_2)). \quad (13)$$
Setting $y = y^*(x_2)$ in eq. (12), $y = y^*(x_1)$ in eq. (13) and summing up the two inequalities, we obtain that
$$\langle y^*(x_2) - y^*(x_1), \nabla_2 f(x_1, y^*(x_1)) - \nabla_2 f(x_2, y^*(x_2)) - u_1 + u_2 \rangle \leq 0. \quad (14)$$
Since $\partial h$ is a monotone operator (by convexity), we know that $\langle u_2 - u_1, y^*(x_2) - y^*(x_1) \rangle \geq 0$. Hence, the above inequality further implies that
$$\langle y^*(x_2) - y^*(x_1), \nabla_2 f(x_1, y^*(x_1)) - \nabla_2 f(x_2, y^*(x_2)) \rangle \leq 0. \quad (15)$$
Next, by strong concavity of $f(x_1, \cdot)$, we have that
$$\langle y^*(x_2) - y^*(x_1), \nabla_2 f(x_1, y^*(x_2)) - \nabla_2 f(x_1, y^*(x_1)) \rangle + \mu \|y^*(x_1) - y^*(x_2)\|^2 \leq 0. \quad (16)$$
Adding up the above two inequalities yields that
$$\mu \|y^*(x_1) - y^*(x_2)\|^2 \leq \langle y^*(x_2) - y^*(x_1), \nabla_2 f(x_2, y^*(x_2)) - \nabla_2 f(x_1, y^*(x_2)) \rangle \leq \|y^*(x_2) - y^*(x_1)\| \|\nabla_2 f(x_2, y^*(x_2)) - \nabla_2 f(x_1, y^*(x_2))\| \leq L \|y^*(x_2) - y^*(x_1)\| \|x_2 - x_1\|.$$
The above inequality shows that $\|y^*(x_1) - y^*(x_2)\| \leq \kappa \|x_2 - x_1\|$, and item 1 is proved. Next, we prove item 2.

Consider $A_n = \{y^*(x) : x \in \mathbb{R}^m, \|x\| \leq n\} \subset Y$. Since $h$ is proper and convex, $h(y_0) < +\infty$ for some $y_0 \in Y$. Since $f$ is $L$-smooth, its value is finite everywhere. Hence, for any $x \in \mathbb{R}^m$, $\Phi(x) = \max_{y \in Y} f(x, y) - h(y) \geq f(x, y_0) - h(y_0) > -\infty$, so $h(y^*(x)) = f(x, y^*(x)) - \Phi(x) < +\infty$. Therefore, based on Corollary 10.1.1 of Rockafellar (1970), $h(y)$ is continuous on $A_n$ and thus $f(x, y) - h(y)$ is continuous in $(x, y) \in \mathbb{R}^m \times A_n$. Also, $\nabla_1 f(x, y)$ is continuous in $(x, y) \in \mathbb{R}^m \times A_n$ since $f$ is $L$-smooth. For any sequence $\{x_k\}$ such that $\|x_k\| \leq n$ and $y^*(x_k) \to y \in Y$, we have $y = y^*(x')$ for any limit point $x'$ of $\{x_k\}$ (there is at least one such limit point since $\|x_k\| \leq n$), because we have proved that $y^*$ is continuous. As $\|x'\| \leq n$, $y \in A_n$. Hence, $A_n$ is closed. As $A_n$ is included in the bounded set $Y$, $A_n$ is compact. Therefore, based on the Danskin theorem (Bernhard and Rapaport, 1995), the function $\Phi_n(x) := \max_{y \in A_n} f(x, y) - h(y)$ is differentiable with $\nabla\Phi_n(x) = \nabla_1 f(x, y^*(x))$. On one hand, $\Phi_n(x) \leq \Phi(x)$ since $A_n \subset Y$. On the other hand, when $\|x\| \leq n$, $y^*(x) \in A_n$, so $\Phi(x) = f(x, y^*(x)) - h(y^*(x)) \leq \Phi_n(x)$. Hence, when $\|x\| \leq n$, $\Phi(x) = \Phi_n(x)$ and thus $\nabla\Phi(x) = \nabla\Phi_n(x) = \nabla_1 f(x, y^*(x))$. Since $n$ can be arbitrarily large, $\nabla\Phi(x) = \nabla_1 f(x, y^*(x))$ for any $x \in \mathbb{R}^m$. Next, consider any $x_1, x_2 \in \mathbb{R}^m$; we obtain that
$$\|\nabla\Phi(x_2) - \nabla\Phi(x_1)\| = \|\nabla_1 f(x_2, y^*(x_2)) - \nabla_1 f(x_1, y^*(x_1))\| \leq L\|x_2 - x_1\| + L\|y^*(x_2) - y^*(x_1)\| \leq L\|x_2 - x_1\| + L\kappa\|x_2 - x_1\| = L(1+\kappa)\|x_2 - x_1\|,$$
which implies that $\Phi(x)$ is $L(1+\kappa)$-smooth." }, { "heading": "B PROOF OF PROPOSITION 2", "text": "Proposition 2. Let Assumption 1 hold and define the Lyapunov function $H(z) := \Phi(x) + g(x) + \big(1 - \frac{1}{4\kappa^2}\big)\|y - y^*(x)\|^2$ with $z := (x, y)$. Choose the learning rates such that $\eta_x \leq \frac{1}{\kappa^3 (L+3)^2}$, $\eta_y \leq \frac{1}{L}$. Then, the variables $z_t = (x_t, y_t)$ generated by proximal-GDA satisfy, for all $t = 0, 1, 2, \dots$,
$$H(z_{t+1}) \leq H(z_t) - 2\|x_{t+1} - x_t\|^2 - \frac{1}{4\kappa^2}\big(\|y_{t+1} - y^*(x_{t+1})\|^2 + \|y_t - y^*(x_t)\|^2\big). \quad (4)$$

Proof. Consider the $t$-th iteration of proximal-GDA. By smoothness of $\Phi$ we obtain that
$$\Phi(x_{t+1}) \leq \Phi(x_t) + \langle x_{t+1} - x_t, \nabla\Phi(x_t) \rangle + \frac{L(1+\kappa)}{2}\|x_{t+1} - x_t\|^2. \quad (17)$$
On the other hand, by the definition of the proximal gradient step of $x_t$, we have
$$g(x_{t+1}) + \frac{1}{2\eta_x}\|x_{t+1} - x_t + \eta_x \nabla_1 f(x_t, y_t)\|^2 \leq g(x_t) + \frac{1}{2\eta_x}\|\eta_x \nabla_1 f(x_t, y_t)\|^2,$$
which further simplifies to
$$g(x_{t+1}) \leq g(x_t) - \frac{1}{2\eta_x}\|x_{t+1} - x_t\|^2 - \langle x_{t+1} - x_t, \nabla_1 f(x_t, y_t) \rangle. \quad (18)$$
Adding up eq. (17) and eq. (18) yields that
$$\begin{aligned}
\Phi(x_{t+1}) + g(x_{t+1}) &\leq \Phi(x_t) + g(x_t) - \Big(\frac{1}{2\eta_x} - \frac{L(1+\kappa)}{2}\Big)\|x_{t+1} - x_t\|^2 + \langle x_{t+1} - x_t, \nabla\Phi(x_t) - \nabla_1 f(x_t, y_t) \rangle \\
&\leq \Phi(x_t) + g(x_t) - \Big(\frac{1}{2\eta_x} - \frac{L(1+\kappa)}{2}\Big)\|x_{t+1} - x_t\|^2 + \|x_{t+1} - x_t\|\,\|\nabla\Phi(x_t) - \nabla_1 f(x_t, y_t)\| \\
&= \Phi(x_t) + g(x_t) - \Big(\frac{1}{2\eta_x} - \frac{L(1+\kappa)}{2}\Big)\|x_{t+1} - x_t\|^2 + \|x_{t+1} - x_t\|\,\|\nabla_1 f(x_t, y^*(x_t)) - \nabla_1 f(x_t, y_t)\| \\
&\leq \Phi(x_t) + g(x_t) - \Big(\frac{1}{2\eta_x} - \frac{L(1+\kappa)}{2}\Big)\|x_{t+1} - x_t\|^2 + L\|x_{t+1} - x_t\|\,\|y^*(x_t) - y_t\| \\
&\leq \Phi(x_t) + g(x_t) - \Big(\frac{1}{2\eta_x} - \frac{L(1+\kappa)}{2} - \frac{L^2\kappa^2}{2}\Big)\|x_{t+1} - x_t\|^2 + \frac{1}{2\kappa^2}\|y^*(x_t) - y_t\|^2, \quad (19)
\end{aligned}$$
where the second step uses the Cauchy-Schwarz inequality and the last step uses Young's inequality. Next, consider the term $\|y^*(x_t) - y_t\|$ in the above inequality. Note that $y^*(x_t)$ is the unique maximizer of the strongly concave function $f(x_t, y) - h(y)$, and $y_{t+1}$ is obtained by applying one proximal gradient step on it starting from $y_t$. Hence, by the convergence rate of the proximal gradient ascent algorithm under strong concavity, we conclude that with $\eta_y \leq \frac{1}{L}$,
$$\|y_{t+1} - y^*(x_t)\|^2 \leq \big(1 - \kappa^{-1}\big)\|y_t - y^*(x_t)\|^2. \quad (20)$$
Hence, we further obtain that
$$\|y^*(x_{t+1}) - y_{t+1}\|^2 \leq \big(1 + \kappa^{-1}\big)\|y_{t+1} - y^*(x_t)\|^2 + (1 + \kappa)\|y^*(x_{t+1}) - y^*(x_t)\|^2 \leq \big(1 - \kappa^{-2}\big)\|y_t - y^*(x_t)\|^2 + \kappa^2(1+\kappa)\|x_{t+1} - x_t\|^2. \quad (21)$$
Adding eqs. (19) & (21), we obtain
$$\Phi(x_{t+1}) + g(x_{t+1}) \leq \Phi(x_t) + g(x_t) - \Big(\frac{1}{2\eta_x} - \frac{L(1+\kappa)}{2} - \frac{L^2\kappa^2}{2} - \kappa^2(1+\kappa)\Big)\|x_{t+1} - x_t\|^2 + \Big(1 - \frac{1}{2\kappa^2}\Big)\|y^*(x_t) - y_t\|^2 - \|y^*(x_{t+1}) - y_{t+1}\|^2.$$
Rearranging the equation above and recalling the definition of the Lyapunov function $H(z) := \Phi(x) + g(x) + \big(1 - \frac{1}{4\kappa^2}\big)\|y - y^*(x)\|^2$, we have
$$H(z_{t+1}) \leq H(z_t) - \Big(\frac{1}{2\eta_x} - \frac{L(1+\kappa)}{2} - \frac{L^2\kappa^2}{2} - \kappa^2(1+\kappa)\Big)\|x_{t+1} - x_t\|^2 - \frac{1}{4\kappa^2}\big(\|y^*(x_t) - y_t\|^2 + \|y^*(x_{t+1}) - y_{t+1}\|^2\big). \quad (22)$$
When $\eta_x < \kappa^{-3}(L+3)^{-2}$, using $\kappa \geq 1$ yields that
$$\frac{1}{2\eta_x} - \frac{L(1+\kappa)}{2} - \frac{L^2\kappa^2}{2} - \kappa^2(1+\kappa) \geq \frac{1}{2}\kappa^3(L+3)^2 - \frac{L}{2}(2\kappa)\kappa^2 - \frac{L^2\kappa^3}{2} - \kappa^2(2\kappa) = \frac{1}{2}\kappa^3\big[(L+3)^2 - 2L - L^2 - 4\big] = \frac{1}{2}\kappa^3(4L + 5) > 2. \quad (23)$$
As a result, eq. (4) can be concluded by substituting eq. (23) into eq. (22)." }, { "heading": "C PROOF OF COROLLARY 1", "text": "Corollary 1. Based on Proposition 2, the sequences $\{x_t, y_t\}_t$ generated by proximal-GDA satisfy
$$\lim_{t \to \infty} \|x_{t+1} - x_t\| = 0, \quad \lim_{t \to \infty} \|y_{t+1} - y_t\| = 0, \quad \lim_{t \to \infty} \|y_t - y^*(x_t)\| = 0.$$

Proof. To prove the first and third items of Corollary 1, summing the inequality of Proposition 2 over $t = 0, 1, \dots, T-1$, we obtain that for all $T \geq 1$,
$$\sum_{t=0}^{T-1}\Big[2\|x_{t+1} - x_t\|^2 + \frac{1}{4\kappa^2}\big(\|y_{t+1} - y^*(x_{t+1})\|^2 + \|y_t - y^*(x_t)\|^2\big)\Big] \leq H(z_0) - H(z_T) \leq H(z_0) - \big[\Phi(x_T) + g(x_T)\big] \leq H(z_0) - \inf_{x \in \mathbb{R}^m}\big(\Phi(x) + g(x)\big) < +\infty.$$
Letting $T \to \infty$, we conclude that
$$\sum_{t=0}^{\infty}\Big[2\|x_{t+1} - x_t\|^2 + \frac{1}{4\kappa^2}\big(\|y_{t+1} - y^*(x_{t+1})\|^2 + \|y_t - y^*(x_t)\|^2\big)\Big] < +\infty.$$
Therefore, we must have $\lim_{t \to \infty}\|x_{t+1} - x_t\| = \lim_{t \to \infty}\|y_t - y^*(x_t)\| = 0$. To prove the second item, note that
$$\|y_{t+1} - y_t\| \leq \|y_{t+1} - y^*(x_t)\| + \|y_t - y^*(x_t)\| \overset{\text{eq. (20)}}{\leq} \big(\sqrt{1 - \kappa^{-1}} + 1\big)\|y_t - y^*(x_t)\| \xrightarrow{t} 0." }, { "heading": "D PROOF OF THEOREM 1", "text": "Theorem 1 (Global convergence). Let Assumption 1 hold and choose the learning rates $\eta_x \leq \frac{1}{\kappa^3 (L+3)^2}$, $\eta_y \leq \frac{1}{L}$. Then, proximal-GDA satisfies the following properties.
1. The function value sequence $\{(\Phi + g)(x_t)\}_t$ converges to a finite limit $H^* > -\infty$;
2. The sequences $\{x_t\}_t, \{y_t\}_t$ are bounded and have compact sets of limit points. Moreover, $(\Phi + g)(x^*) \equiv H^*$ for any limit point $x^*$ of $\{x_t\}_t$;
3. Every limit point of $\{x_t\}_t$ is a critical point of $(\Phi + g)(x)$.

Proof. We first prove some useful results on the Lyapunov function $H(z)$. By Assumption 1 we know that $\Phi + g$ is bounded below and has compact sub-level sets, and we first show that $H(z)$ also satisfies these conditions. First, note that $H(z) = \Phi(x) + g(x) + \big(1 - \frac{1}{4\kappa^2}\big)\|y - y^*(x)\|^2 \geq \Phi(x) + g(x)$. Taking the infimum over $x, y$ on both sides, we obtain that $\inf_{x,y} H(z) \geq \inf_x \Phi(x) + g(x) > -\infty$.
This shows that $H(z)$ is bounded below. Second, consider the sub-level set $Z_\alpha := \{z = (x, y) : H(z) \leq \alpha\}$ for any $\alpha \in \mathbb{R}$. This set equals $\{(x, y) : \Phi(x) + g(x) + \big(1 - \frac{1}{4\kappa^2}\big)\|y - y^*(x)\|^2 \leq \alpha\}$. For any point $(x, y) \in Z_\alpha$, the $x$ part is included in the compact set $\{x : \Phi(x) + g(x) \leq \alpha\}$. Therefore, the $x$ in this set must lie in a compact set. The $y$ in this set is also bounded, because the function $\|y - y^*(x)\|^2$ is coercive in $y$ and $y^*(x)$ ranges over a bounded set. Hence, we have shown that $H(z)$ is bounded below and has compact sub-level sets.

We first show that $\{(\Phi + g)(x_t)\}_t$ has a finite limit. We have shown in Proposition 2 that $\{H(z_t)\}_t$ is monotonically decreasing. Since $H(z)$ is bounded below, we conclude that $\{H(z_t)\}_t$ has a finite limit $H^* > -\infty$, i.e., $\lim_{t\to\infty} (\Phi + g)(x_t) + \big(1 - \frac{1}{4\kappa^2}\big)\|y_t - y^*(x_t)\|^2 = H^*$. Moreover, since $\|y_t - y^*(x_t)\| \xrightarrow{t} 0$, we further conclude that $\lim_{t\to\infty} (\Phi + g)(x_t) = H^*$.

Next, we prove the second item. Since $\{H(z_t)\}_t$ is monotonically decreasing and $H(z)$ has compact sub-level sets, we conclude that $\{x_t\}_t, \{y_t\}_t$ are bounded and hence have compact sets of limit points. Next, we derive a bound on the subdifferential. By the optimality condition of the proximal gradient update of $x_t$ and the summation rule of subdifferentials in Corollary 1.12.2 of Kruger (2003), we have
$$0 \in \partial g(x_{t+1}) + \frac{1}{\eta_x}\big(x_{t+1} - x_t + \eta_x \nabla_1 f(x_t, y_t)\big).$$
Then, we obtain that
$$\frac{1}{\eta_x}\big(x_t - x_{t+1}\big) - \nabla_1 f(x_t, y_t) + \nabla\Phi(x_{t+1}) \in \partial(\Phi + g)(x_{t+1}), \quad (24)$$
which further implies that
$$\begin{aligned}
\operatorname{dist}_{\partial(\Phi+g)(x_{t+1})}(0) &\leq \frac{1}{\eta_x}\|x_{t+1} - x_t\| + \|\nabla_1 f(x_t, y_t) - \nabla\Phi(x_{t+1})\| \\
&= \frac{1}{\eta_x}\|x_{t+1} - x_t\| + \|\nabla_1 f(x_t, y_t) - \nabla_1 f(x_{t+1}, y^*(x_{t+1}))\| \\
&\leq \frac{1}{\eta_x}\|x_{t+1} - x_t\| + L\big(\|x_{t+1} - x_t\| + \|y^*(x_{t+1}) - y_t\|\big) \\
&\leq \Big(\frac{1}{\eta_x} + L\Big)\|x_{t+1} - x_t\| + L\big(\|y^*(x_{t+1}) - y^*(x_t)\| + \|y^*(x_t) - y_t\|\big) \\
&\leq \Big(\frac{1}{\eta_x} + L(1+\kappa)\Big)\|x_{t+1} - x_t\| + L\|y^*(x_t) - y_t\|.
\end{aligned}$$
Since we have shown that $\|x_{t+1} - x_t\| \xrightarrow{t} 0$ and $\|y^*(x_t) - y_t\| \xrightarrow{t} 0$, we conclude from the above inequality that $\operatorname{dist}_{\partial(\Phi+g)(x_t)}(0) \xrightarrow{t} 0$. Therefore, we have shown that
$$\frac{1}{\eta_x}\big(x_{t-1} - x_t\big) - \nabla_1 f(x_{t-1}, y_{t-1}) + \nabla\Phi(x_t) \in \partial(\Phi + g)(x_t), \quad \text{and} \quad \frac{1}{\eta_x}\big(x_{t-1} - x_t\big) - \nabla_1 f(x_{t-1}, y_{t-1}) + \nabla\Phi(x_t) \xrightarrow{t} 0. \quad (25)$$
Now consider any limit point $x^*$ of $\{x_t\}_t$, so that $x_{t(j)} \xrightarrow{j} x^*$ along a subsequence. By the proximal update of $x_{t(j)}$, we have
$$g(x_{t(j)}) + \frac{1}{2\eta_x}\|x_{t(j)} - x_{t(j)-1}\|^2 + \langle x_{t(j)} - x_{t(j)-1}, \nabla_1 f(x_{t(j)-1}, y_{t(j)-1}) \rangle \leq g(x^*) + \frac{1}{2\eta_x}\|x^* - x_{t(j)-1}\|^2 + \langle x^* - x_{t(j)-1}, \nabla_1 f(x_{t(j)-1}, y_{t(j)-1}) \rangle.$$
Taking the limsup on both sides of the above inequality, and noting that $\{x_t\}_t, \{y_t\}_t$ are bounded, $\nabla f$ is Lipschitz, $\|x_{t+1} - x_t\| \xrightarrow{t} 0$ and $x_{t(j)} \to x^*$, we conclude that $\limsup_j g(x_{t(j)}) \leq g(x^*)$. Since $g$ is lower-semicontinuous, we know that $\liminf_j g(x_{t(j)}) \geq g(x^*)$. Combining these two inequalities yields that $\lim_j g(x_{t(j)}) = g(x^*)$. By continuity of $\Phi$, we further conclude that $\lim_j (\Phi + g)(x_{t(j)}) = (\Phi + g)(x^*)$. Since we have shown that the entire sequence $\{(\Phi + g)(x_t)\}_t$ converges to a certain finite limit $H^*$, we conclude that $(\Phi + g)(x^*) \equiv H^*$ for all limit points $x^*$ of $\{x_t\}_t$.

Next, we prove the third item. To this end, we have shown that for every subsequence $x_{t(j)} \xrightarrow{j} x^*$, we have $(\Phi + g)(x_{t(j)}) \xrightarrow{j} (\Phi + g)(x^*)$, and there exists $u_t \in \partial(\Phi + g)(x_t)$ such that $u_t \xrightarrow{t} 0$ (by eq. (25)). Recalling the definition of the limiting sub-differential, we conclude that every limit point $x^*$ of $\{x_t\}_t$ is a critical point of $(\Phi + g)(x)$, i.e., $0 \in \partial(\Phi + g)(x^*)$." }, { "heading": "E PROOF OF THEOREM 2", "text": "Theorem 2 (Variable convergence). Let Assumptions 1 & 2 hold and assume that $H$ has the KŁ geometry. Choose the learning rates $\eta_x \leq \frac{1}{\kappa^3 (L+3)^2}$ and $\eta_y \leq \frac{1}{L}$.
Then, the sequence $\{(x_t, y_t)\}_t$ generated by proximal-GDA converges to a certain critical point $(x^*, y^*(x^*))$ of $(\Phi + g)(x)$, i.e.,
$$x_t \xrightarrow{t} x^*, \quad y_t \xrightarrow{t} y^*(x^*).$$

Proof. We first derive a bound on $\partial H(z)$. Recall that $H(z) = \Phi(x) + g(x) + \big(1 - \frac{1}{4\kappa^2}\big)\|y - y^*(x)\|^2$, and that $\|y^*(x) - y\|^2$ has a non-empty subdifferential $\partial_x(\|y^*(x) - y\|^2)$. We therefore have
$$\partial_x H(z) \supset \partial(\Phi + g)(x) + \Big(1 - \frac{1}{4\kappa^2}\Big)\partial_x\big(\|y^*(x) - y\|^2\big), \qquad \nabla_y H(z) = -\Big(2 - \frac{1}{2\kappa^2}\Big)\big(y^*(x) - y\big),$$
where the first inclusion follows from the scalar multiplication rule and sum rule of sub-differentials; see Propositions 1.11 & 1.12 of Kruger (2003). Next, we derive upper bounds on these sub-differentials. Based on Definition 1, we can take any $u \in \hat\partial_x(\|y^*(x) - y\|^2)$ and obtain that
$$\begin{aligned}
0 &\leq \liminf_{z \neq x,\, z \to x} \frac{\|y^*(z) - y\|^2 - \|y^*(x) - y\|^2 - u^\top(z - x)}{\|z - x\|} \\
&\leq \liminf_{z \neq x,\, z \to x} \frac{[y^*(z) - y^*(x)]^\top[y^*(z) + y^*(x) - 2y] - u^\top(z - x)}{\|z - x\|} \\
&\leq \liminf_{z \neq x,\, z \to x} \frac{\|y^*(z) - y^*(x)\|\,\|y^*(z) + y^*(x) - 2y\| - u^\top(z - x)}{\|z - x\|} \\
&\overset{(i)}{\leq} \liminf_{z \neq x,\, z \to x} \Big[\kappa\|y^*(z) + y^*(x) - 2y\| - \frac{u^\top(z - x)}{\|z - x\|}\Big] \\
&\overset{(ii)}{=} 2\kappa\|y^*(x) - y\| - \limsup_{z \neq x,\, z \to x} \frac{u^\top(z - x)}{\|z - x\|} \overset{(iii)}{=} 2\kappa\|y^*(x) - y\| - \|u\|, \quad (26)
\end{aligned}$$
where (i) and (ii) use the fact that $y^*$ is $\kappa$-Lipschitz based on Proposition 1, and the limsup in (iii) is achieved by letting $z = x + \sigma u$ with $\sigma \to 0^+$ in (ii). Hence, we conclude that $\|u\| \leq 2\kappa\|y^*(x) - y\|$. Since $\partial_x(\|y^*(x) - y\|^2)$ is the graphical closure of $\hat\partial_x(\|y^*(x) - y\|^2)$, we have that
$$\operatorname{dist}_{\partial_x(\|y^*(x) - y\|^2)}(0) \leq 2\kappa\|y^*(x) - y\|.$$
Then, utilizing the characterization of $\partial(\Phi + g)(x)$ in eq. (24), we obtain that
$$\begin{aligned}
\operatorname{dist}_{\partial H(z_{t+1})}(0) &\leq \operatorname{dist}_{\partial_x H(z_{t+1})}(0) + \|\nabla_y H(z_{t+1})\| \\
&\leq \operatorname{dist}_{\partial(\Phi+g)(x_{t+1})}(0) + \Big(1 - \frac{1}{4\kappa^2}\Big)\operatorname{dist}_{\partial_x(\|y^*(x_{t+1}) - y_{t+1}\|^2)}(0) + \Big(2 - \frac{1}{2\kappa^2}\Big)\|y^*(x_{t+1}) - y_{t+1}\| \\
&\leq \frac{1}{\eta_x}\|x_{t+1} - x_t\| + \|\nabla_1 f(x_t, y_t) - \nabla\Phi(x_{t+1})\| + \Big(2 - \frac{1}{2\kappa^2}\Big)\big(1 + \kappa\big)\|y^*(x_{t+1}) - y_{t+1}\| \\
&\overset{(i)}{\leq} \Big(\frac{1}{\eta_x} + L\Big)\|x_{t+1} - x_t\| + L\|y^*(x_{t+1}) - y_t\| + 2\big(1 + \kappa\big)\|y^*(x_{t+1}) - y_{t+1}\| \\
&\overset{(ii)}{\leq} \Big(\frac{1}{\eta_x} + L(1+\kappa)\Big)\|x_{t+1} - x_t\| + L\|y^*(x_t) - y_t\| + 2\big(1 + \kappa\big)\Big[\sqrt{1 - \kappa^{-2}}\,\|y^*(x_t) - y_t\| + \kappa\sqrt{1+\kappa}\,\|x_{t+1} - x_t\|\Big] \\
&\overset{(iii)}{\leq} \Big(\frac{1}{\eta_x} + (L + 4\kappa^2)(1+\kappa)\Big)\|x_{t+1} - x_t\| + (L + 4\kappa)\|y^*(x_t) - y_t\|, \quad (27)
\end{aligned}$$
where (i) uses Proposition 1 (i.e., $\nabla\Phi(x_{t+1}) = \nabla_1 f(x_{t+1}, y^*(x_{t+1}))$ and $y^*$ being $\kappa$-Lipschitz), (ii) uses eq. (21) and the inequality $\sqrt{a + b} \leq \sqrt{a} + \sqrt{b}$ ($a, b \geq 0$), and (iii) uses $\kappa \geq 1$.

Next, we prove the convergence of the sequence under the assumption that $H(z)$ is a KŁ function. Recall that we have shown in the proof of Theorem 1 that: 1) $\{H(z_t)\}_t$ decreases monotonically to the finite limit $H^*$; 2) for any limit point $x^*, y^*$ of $\{x_t\}_t, \{y_t\}_t$, $H(x^*, y^*)$ has the constant value $H^*$. Hence, the KŁ inequality (see Definition 2) holds after a sufficiently large number of iterations, i.e., there exists $t_0 \in \mathbb{N}_+$ such that for all $t \geq t_0$,
$$\varphi'(H(z_t) - H^*)\operatorname{dist}_{\partial H(z_t)}(0) \geq 1.$$
Rearranging the above inequality and utilizing eq. (27), we obtain that for all $t \geq t_0$,
$$\varphi'(H(z_t) - H^*) \geq \frac{1}{\operatorname{dist}_{\partial H(z_t)}(0)} \geq \Big[\Big(\frac{1}{\eta_x} + (L + 4\kappa^2)(1+\kappa)\Big)\|x_t - x_{t-1}\| + (L + 4\kappa)\|y^*(x_{t-1}) - y_{t-1}\|\Big]^{-1}. \quad (28)$$
By concavity of the function $\varphi$ (see Definition 2), we know that
$$\begin{aligned}
\varphi(H(z_t) - H^*) - \varphi(H(z_{t+1}) - H^*) &\geq \varphi'(H(z_t) - H^*)\big(H(z_t) - H(z_{t+1})\big) \\
&\overset{(i)}{\geq} \frac{\|x_{t+1} - x_t\|^2 + \frac{1}{4\kappa^2}\|y_t - y^*(x_t)\|^2}{\big(\frac{1}{\eta_x} + (L + 4\kappa^2)(1+\kappa)\big)\|x_t - x_{t-1}\| + (L + 4\kappa)\|y^*(x_{t-1}) - y_{t-1}\|} \quad (29) \\
&\overset{(ii)}{\geq} \frac{\frac{1}{2}\big[\|x_{t+1} - x_t\| + \frac{1}{2\kappa}\|y_t - y^*(x_t)\|\big]^2}{\big(\frac{1}{\eta_x} + (L + 4\kappa^2)(1+\kappa)\big)\|x_t - x_{t-1}\| + (L + 4\kappa)\|y^*(x_{t-1}) - y_{t-1}\|},
\end{aligned}$$
where (i) uses Proposition 2 and eq. (28), and (ii) uses the inequality $a^2 + b^2 \geq \frac{1}{2}(a + b)^2$.
Rearranging the above inequality yields that
$$\begin{aligned}
\Big[\|x_{t+1} - x_t\| + \frac{1}{2\kappa}\|y_t - y^*(x_t)\|\Big]^2 &\leq 2\big[\varphi(H(z_t) - H^*) - \varphi(H(z_{t+1}) - H^*)\big]\Big[\Big(\frac{1}{\eta_x} + (L + 4\kappa^2)(1+\kappa)\Big)\|x_t - x_{t-1}\| + (L + 4\kappa)\|y^*(x_{t-1}) - y_{t-1}\|\Big] \\
&\leq \Big[C\big[\varphi(H(z_t) - H^*) - \varphi(H(z_{t+1}) - H^*)\big] + \frac{1}{C}\Big(\frac{1}{\eta_x} + (L + 4\kappa^2)(1+\kappa)\Big)\|x_t - x_{t-1}\| + \frac{1}{C}(L + 4\kappa)\|y^*(x_{t-1}) - y_{t-1}\|\Big]^2,
\end{aligned}$$
where the final step uses the inequality $2ab \leq (Ca + \frac{b}{C})^2$ for any $a, b \geq 0$ and $C > 0$ (the value of $C$ will be assigned later). Taking the square root of both sides of the above inequality and telescoping over $t = t_0, \dots, T-1$, we obtain that
$$\begin{aligned}
\sum_{t=t_0}^{T-1}\|x_{t+1} - x_t\| + \frac{1}{2\kappa}\sum_{t=t_0}^{T-1}\|y_t - y^*(x_t)\| &\leq C\varphi[H(z_{t_0}) - H^*] - C\varphi[H(z_T) - H^*] + \frac{1}{C}\Big(\frac{1}{\eta_x} + (L + 4\kappa^2)(1+\kappa)\Big)\sum_{t=t_0}^{T-1}\|x_t - x_{t-1}\| + \frac{1}{C}(L + 4\kappa)\sum_{t=t_0}^{T-1}\|y^*(x_{t-1}) - y_{t-1}\| \\
&\leq \frac{Cc}{\theta}[H(z_{t_0}) - H^*]^\theta + \frac{1}{C}\Big(\frac{1}{\eta_x} + (L + 4\kappa^2)(1+\kappa)\Big)\sum_{t=t_0-1}^{T-2}\|x_{t+1} - x_t\| + \frac{1}{C}(L + 4\kappa)\sum_{t=t_0-1}^{T-2}\|y^*(x_t) - y_t\|,
\end{aligned}$$
where the final step uses $\varphi(s) = \frac{c}{\theta}s^\theta$ and the fact that $H(z_T) - H^* \geq 0$. Since the value of $C > 0$ is arbitrary, we can select $C$ large enough such that $\frac{1}{C}\big(\frac{1}{\eta_x} + (L + 4\kappa^2)(1+\kappa)\big) < \frac{1}{2}$ and $\frac{1}{C}(L + 4\kappa) < \frac{1}{2\kappa}$. Hence, the inequality above further implies that
$$\frac{1}{2}\sum_{t=t_0}^{T-1}\|x_{t+1} - x_t\| \leq \frac{Cc}{\theta}[H(z_{t_0}) - H^*]^\theta + \frac{1}{2}\|x_{t_0} - x_{t_0-1}\| + \frac{1}{2\kappa}\|y^*(x_{t_0-1}) - y_{t_0-1}\| < +\infty.$$
Letting $T \to \infty$, we conclude that
$$\sum_{t=1}^{\infty}\|x_{t+1} - x_t\| < +\infty.$$
Moreover, this implies that $\{x_t\}_t$ is a Cauchy sequence and therefore converges to a certain limit, i.e., $x_t \xrightarrow{t} x^*$. We have shown in Theorem 1 that any such limit point must be a critical point of $\Phi + g$. Hence, we conclude that $\{x_t\}_t$ converges to a certain critical point $x^*$ of $(\Phi + g)(x)$. Also, note that $\|y^*(x_t) - y_t\| \xrightarrow{t} 0$, $x_t \xrightarrow{t} x^*$ and $y^*$ is a Lipschitz mapping, so we conclude that $\{y_t\}_t$ converges to $y^*(x^*)$." }, { "heading": "F PROOF OF THEOREM 3", "text": "Theorem 3 (Function value convergence rate). Under the same conditions as those of Theorem 2, the Lyapunov function value sequence $\{H(z_t)\}_t$ converges to the limit $H^*$ at the following rates.
1. If the KŁ geometry holds with $\theta = 1$, then $H(z_t) \downarrow H^*$ within a finite number of iterations;
2. If the KŁ geometry holds with $\theta \in (\frac{1}{2}, 1)$, then $H(z_t) \downarrow H^*$ super-linearly as
$$H(z_t) - H^* \leq (2Mc^2)^{-\frac{1}{2\theta - 1}}\exp\Big(-\Big(\frac{1}{2(1-\theta)}\Big)^{t - t_0}\Big), \quad \forall t \geq t_0; \quad (6)$$
3. If the KŁ geometry holds with $\theta = \frac{1}{2}$, then $H(z_t) \downarrow H^*$ linearly as
$$H(z_t) - H^* \leq \Big(1 + \frac{1}{2Mc^2}\Big)^{t_0 - t}\big(H(z_{t_0}) - H^*\big), \quad \forall t \geq t_0; \quad (7)$$
4. If the KŁ geometry holds with $\theta \in (0, \frac{1}{2})$, then $H(z_t) \downarrow H^*$ sub-linearly as
$$H(z_t) - H^* \leq \big[C(t - t_0)\big]^{-\frac{1}{1 - 2\theta}}, \quad \forall t \geq t_0, \quad (8)$$
where $C = \min\big[\frac{1 - 2\theta}{8Mc^2},\; d_{t_0}^{-(1 - 2\theta)}\big(1 - 2^{-(1 - 2\theta)}\big)\big] > 0$.

Proof. Note that eq. (27) implies that
$$\operatorname{dist}_{\partial H(z_{t+1})}(0)^2 \leq 2\Big(\frac{1}{\eta_x} + (L + 4\kappa^2)(1+\kappa)\Big)^2\|x_{t+1} - x_t\|^2 + 2(L + 4\kappa)^2\|y^*(x_t) - y_t\|^2. \quad (30)$$
Recall that we have shown that for all $t \geq t_0$, the KŁ property holds and we have
$$\big[\varphi'(H(z_t) - H^*)\big]^2\operatorname{dist}^2_{\partial H(z_t)}(0) \geq 1.$$
Throughout the rest of the proof, we assume $t \geq t_0$. Substituting eq. (30) into the above bound yields that
$$1 \leq 2\big[\varphi'(H(z_t) - H^*)\big]^2\Big[\Big(\frac{1}{\eta_x} + (L + 4\kappa^2)(1+\kappa)\Big)^2\|x_t - x_{t-1}\|^2 + (L + 4\kappa)^2\|y^*(x_{t-1}) - y_{t-1}\|^2\Big] \leq 2M\big[\varphi'(H(z_t) - H^*)\big]^2\Big[2\|x_t - x_{t-1}\|^2 + \frac{1}{4\kappa^2}\|y^*(x_{t-1}) - y_{t-1}\|^2\Big], \quad (31)$$
where the second inequality uses the definition of $M$ in eq. (5).

Substituting eq. (4) and $\varphi'(s) = cs^{\theta - 1}$ ($c > 0$) into eq. (31) and rearranging, we further obtain that
$$\big[c(H(z_t) - H^*)^{\theta - 1}\big]^{-2} \leq 2M\big[H(z_{t-1}) - H(z_t)\big].$$
Defining $d_t = H(z_t) - H^*$, the above inequality further becomes
$$d_{t-1} - d_t \geq \frac{1}{2Mc^2}d_t^{2(1-\theta)}. \quad (32)$$
Next, we prove the convergence rates case by case.

(Case 1) If $\theta = 1$, then eq. (32) implies that $d_{t-1} - d_t \geq \frac{1}{2Mc^2} > 0$ whenever $d_t > 0$.
Hence, $d_t$ achieves $0$ (i.e., $H(z_t)$ achieves $H^*$) within a finite number of iterations.

(Case 2) If $\theta \in (\frac{1}{2}, 1)$, since $d_t \geq 0$, eq. (32) implies that
$$d_{t-1} \geq \frac{1}{2Mc^2}d_t^{2(1-\theta)}, \quad (33)$$
which is equivalent to
$$(2Mc^2)^{\frac{1}{2\theta - 1}}d_t \leq \big[(2Mc^2)^{\frac{1}{2\theta - 1}}d_{t-1}\big]^{\frac{1}{2(1-\theta)}}. \quad (34)$$
Since $d_t \downarrow 0$, $(2Mc^2)^{\frac{1}{2\theta - 1}}d_{t_1} \leq e^{-1}$ for sufficiently large $t_1 \in \mathbb{N}_+$ with $t_1 \geq t_0$. Hence, eq. (34) implies that for $t \geq t_1$,
$$(2Mc^2)^{\frac{1}{2\theta - 1}}d_t \leq \big[(2Mc^2)^{\frac{1}{2\theta - 1}}d_{t_1}\big]^{\big[\frac{1}{2(1-\theta)}\big]^{t - t_1}} \leq \exp\Big\{-\Big[\frac{1}{2(1-\theta)}\Big]^{t - t_1}\Big\}.$$
Note that $\theta \in (\frac{1}{2}, 1)$ implies that $\frac{1}{2(1-\theta)} > 1$, and thus the inequality above implies that $H(z_t) \downarrow H^*$ at the super-linear rate given by eq. (6).

(Case 3) If $\theta = \frac{1}{2}$,
$$d_{t-1} - d_t \geq \frac{1}{2Mc^2}d_t, \quad (35)$$
which implies that $d_t \leq \big(1 + \frac{1}{2Mc^2}\big)^{-1}d_{t-1}$. Therefore, $d_t \downarrow 0$ (i.e., $H(z_t) \downarrow H^*$) at the linear rate given by eq. (7).

(Case 4) If $\theta \in (0, \frac{1}{2})$, consider the following two subcases.

If $d_{t-1} \leq 2d_t$, denote $\psi(s) = \frac{1}{1 - 2\theta}s^{-(1 - 2\theta)}$; then
$$\psi(d_t) - \psi(d_{t-1}) = \int_{d_t}^{d_{t-1}} -\psi'(s)\,ds = \int_{d_t}^{d_{t-1}} s^{-2(1-\theta)}\,ds \overset{(i)}{\geq} d_{t-1}^{-2(1-\theta)}(d_{t-1} - d_t) \overset{(ii)}{\geq} \frac{1}{2Mc^2}\Big(\frac{d_t}{d_{t-1}}\Big)^{2(1-\theta)} \geq \frac{1}{2^{3 - 2\theta}Mc^2} \geq \frac{1}{8Mc^2}, \quad (36)$$
where (i) uses $d_t \leq d_{t-1}$ and $-2(1-\theta) < -1$, and (ii) uses eq. (32).

If $d_{t-1} > 2d_t$,
$$\psi(d_t) - \psi(d_{t-1}) = \frac{1}{1 - 2\theta}\big(d_t^{-(1 - 2\theta)} - d_{t-1}^{-(1 - 2\theta)}\big) \geq \frac{1}{1 - 2\theta}\big(d_t^{-(1 - 2\theta)} - (2d_t)^{-(1 - 2\theta)}\big) \geq \frac{1 - 2^{-(1 - 2\theta)}}{1 - 2\theta}d_t^{-(1 - 2\theta)} \geq \frac{1 - 2^{-(1 - 2\theta)}}{1 - 2\theta}d_{t_0}^{-(1 - 2\theta)}, \quad (37)$$
where we use $-(1 - 2\theta) < 0$, $d_{t-1} > 2d_t$ and $d_t \leq d_{t_0}$. Hence,
$$\psi(d_t) - \psi(d_{t-1}) \geq \min\Big[\frac{1}{8Mc^2},\; \frac{1 - 2^{-(1 - 2\theta)}}{1 - 2\theta}d_{t_0}^{-(1 - 2\theta)}\Big] = \frac{C}{1 - 2\theta} > 0, \quad (38)$$
which implies that
$$\psi(d_t) \geq \psi(d_{t_0}) + \frac{C}{1 - 2\theta}(t - t_0) \geq \frac{C}{1 - 2\theta}(t - t_0).$$
By substituting the definition of $\psi$, the inequality above implies that $H(z_t) \downarrow H^*$ at the sub-linear rate given by eq. (8)." }, { "heading": "G PROOF OF THEOREM 4", "text": "Theorem 4 (Variable convergence rate). Under the same conditions as those of Theorem 2, the sequences $\{x_t, y_t\}_t$ converge to their limits $x^*, y^*(x^*)$ respectively at the following rates.
1. If the KŁ geometry holds with $\theta = 1$, then $(x_t, y_t) \to (x^*, y^*(x^*))$ within a finite number of iterations;
2. If the KŁ geometry holds with $\theta \in (\frac{1}{2}, 1)$, then $(x_t, y_t) \to (x^*, y^*(x^*))$ super-linearly as
$$\max\big\{\|x_t - x^*\|, \|y_t - y^*(x^*)\|\big\} \leq O\Big(\exp\Big(-\Big(\frac{1}{2(1-\theta)}\Big)^{t - t_0}\Big)\Big), \quad \forall t \geq t_0; \quad (9)$$
3. If the KŁ geometry holds with $\theta = \frac{1}{2}$, then $(x_t, y_t) \to (x^*, y^*(x^*))$ linearly as
$$\max\big\{\|x_t - x^*\|, \|y_t - y^*(x^*)\|\big\} \leq O\Big(\Big(\min\Big\{2,\, 1 + \frac{1}{2Mc^2}\Big\}\Big)^{(t_0 - t)/2}\Big), \quad \forall t \geq t_0; \quad (10)$$
4. If the KŁ geometry holds with $\theta \in (0, \frac{1}{2})$, then $(x_t, y_t) \to (x^*, y^*(x^*))$ sub-linearly as
$$\max\big\{\|x_t - x^*\|, \|y_t - y^*(x^*)\|\big\} \leq O\big((t - t_0)^{-\frac{\theta}{1 - 2\theta}}\big), \quad \forall t \geq t_0. \quad (11)$$

Proof. (Case 1) If $\theta = 1$, then based on the first case of Appendix F, $H(z_t) \equiv H^*$ after a finite number of iterations. Hence, for large enough $t$, Proposition 2 yields that
$$2\|x_{t+1} - x_t\|^2 + \frac{1}{4\kappa^2}\big(\|y_{t+1} - y^*(x_{t+1})\|^2 + \|y_t - y^*(x_t)\|^2\big) \leq H(z_t) - H(z_{t+1}) = 0, \quad (39)$$
which implies that $x_{t+1} = x_t$ and $y_t = y^*(x_t)$ for large enough $t$. Hence, $x_t \to x^*$ and $y_t \to y^*(x^*)$ within a finite number of iterations.

(Case 2) If $\theta \in (\frac{1}{2}, 1)$, denote $A_t = \|x_{t+1} - x_t\| + \frac{1}{2\kappa}\|y_t - y^*(x_t)\|$. Then, based on the definition of $M$ in eq. (5), we have
$$\Big(\frac{1}{\eta_x} + (L + 4\kappa^2)(1+\kappa)\Big)\|x_t - x_{t-1}\| + (L + 4\kappa)\|y^*(x_{t-1}) - y_{t-1}\| \leq \sqrt{2M}A_{t-1}. \quad (40)$$
Hence, eqs. (28) & (40) and $\varphi'(s) = cs^{\theta - 1}$ imply that
$$c(H(z_t) - H^*)^{\theta - 1} \geq \big(\sqrt{2M}A_{t-1}\big)^{-1},$$
which along with $\theta - 1 < 0$ implies
$$H(z_t) - H^* \leq \big(c\sqrt{2M}A_{t-1}\big)^{\frac{1}{1-\theta}}. \quad (41)$$
Then, eqs. (29) & (40) imply that
$$\varphi(H(z_t) - H^*) - \varphi(H(z_{t+1}) - H^*) \geq \frac{\|x_{t+1} - x_t\|^2 + \frac{1}{4\kappa^2}\|y_t - y^*(x_t)\|^2}{2\sqrt{2M}A_{t-1}}.$$
Using the inequality $a^2 + b^2 \geq \frac{1}{2}(a + b)^2$ and recalling the definition of $A_t$ and $\varphi(s) = \frac{c}{\theta}s^\theta$, the above inequality further implies that
$$\frac{c}{\theta}(H(z_t) - H^*)^\theta - \frac{c}{\theta}(H(z_{t+1}) - H^*)^\theta \geq \frac{A_t^2}{4\sqrt{2M}A_{t-1}}. \quad (42)$$
Substituting eq. (41) into eq. (42) and using $H(z_{t+1}) - H^* \geq 0$ yield that
$$A_t^2 \leq \frac{4}{\theta}\big(c\sqrt{2M}A_{t-1}\big)^{\frac{1}{1-\theta}},$$
which is equivalent to
$$C_1 A_t \leq (C_1 A_{t-1})^{\frac{1}{2(1-\theta)}}, \quad (43)$$
where
$$C_1 = (4/\theta)^{\frac{1-\theta}{2\theta-1}}\big(c\sqrt{2M}\big)^{\frac{1}{2\theta-1}}.$$
Note that eq. (43) holds for $t \geq t_0$. Since $A_t \to 0$, there exists $t_1 \geq t_0$ such that $C_1 A_{t_1} \leq e^{-1}$. Hence, by iterating eq. (43) from $t = t_1 + 1$, we obtain
$$C_1 A_t \leq \exp\Big[-\Big(\frac{1}{2(1-\theta)}\Big)^{t - t_1}\Big], \quad \forall t \geq t_1 + 1.$$
Hence, for any $t \geq t_1 + 1$,
$$\begin{aligned}
\sum_{s=t}^{\infty} A_s &\leq \frac{1}{C_1}\sum_{s=t}^{\infty}\exp\Big[-\Big(\frac{1}{2(1-\theta)}\Big)^{s - t_1}\Big] = \frac{1}{C_1}\exp\Big[-\Big(\frac{1}{2(1-\theta)}\Big)^{t - t_1}\Big]\sum_{s=t}^{\infty}\exp\Big\{\Big(\frac{1}{2(1-\theta)}\Big)^{t - t_1}\Big[1 - \Big(\frac{1}{2(1-\theta)}\Big)^{s - t}\Big]\Big\} \\
&\overset{(i)}{\leq} \frac{1}{C_1}\exp\Big[-\Big(\frac{1}{2(1-\theta)}\Big)^{t - t_1}\Big]\sum_{s=0}^{\infty}\exp\Big[1 - \Big(\frac{1}{2(1-\theta)}\Big)^{s}\Big] \overset{(ii)}{\leq} O\Big\{\exp\Big[-\Big(\frac{1}{2(1-\theta)}\Big)^{t - t_1}\Big]\Big\}, \quad (44)
\end{aligned}$$
where (i) uses the inequalities that $\frac{1}{2(1-\theta)} > 1$ and that $s \geq t \geq t_1 + 1$, and (ii) uses the fact that $\sum_{s=0}^{\infty}\exp\big[1 - \big(\frac{1}{2(1-\theta)}\big)^s\big] < +\infty$ is a positive constant independent of $t$. Therefore, the convergence rate (9) can be directly derived as follows:
$$\|x_t - x^*\| = \limsup_{T \to \infty}\|x_t - x_T\| \leq \limsup_{T \to \infty}\sum_{s=t}^{T-1}\|x_{s+1} - x_s\| \leq \limsup_{T \to \infty}\sum_{s=t}^{T-1}A_s \leq O\Big\{\exp\Big[-\Big(\frac{1}{2(1-\theta)}\Big)^{t - t_1}\Big]\Big\}, \quad (45)$$
and
$$\|y_t - y^*(x^*)\| \leq \|y_t - y^*(x_t)\| + \|y^*(x_t) - y^*(x^*)\| \overset{(i)}{\leq} 2\kappa A_t + \kappa\|x_t - x^*\| \leq 2\kappa\sum_{s=t}^{\infty}A_s + \kappa\|x_t - x^*\| \overset{(ii)}{\leq} O\Big\{\exp\Big[-\Big(\frac{1}{2(1-\theta)}\Big)^{t - t_1}\Big]\Big\},$$
where (i) uses the Lipschitz property of $y^*$ in Proposition 1, and (ii) uses eqs. (44) & (45).

(Cases 3 & 4) Notice that eq. (42) still holds if $\theta \in (0, \frac{1}{2}]$. Hence, if $A_t \geq \frac{1}{2}A_{t-1}$, then eq. (42) implies that
$$A_t \leq \frac{8c\sqrt{2M}}{\theta}\big[(H(z_t) - H^*)^\theta - (H(z_{t+1}) - H^*)^\theta\big].$$
Otherwise, $A_t \leq \frac{1}{2}A_{t-1}$. Combining these two inequalities yields that
$$A_t \leq \frac{8c\sqrt{2M}}{\theta}\big[(H(z_t) - H^*)^\theta - (H(z_{t+1}) - H^*)^\theta\big] + \frac{1}{2}A_{t-1}.$$
Notice that the inequality above holds whenever $t \geq t_0$. Hence, telescoping the inequality above yields
$$\sum_{s=t}^{T}A_s \leq \frac{8c\sqrt{2M}}{\theta}\big[(H(z_t) - H^*)^\theta - (H(z_{T+1}) - H^*)^\theta\big] + \frac{1}{2}\sum_{s=t-1}^{T-1}A_s, \quad \forall t \geq t_0, \quad (46)$$
which along with $A_T \geq 0$, $H(z_{T+1}) - H^* \geq 0$ implies that
$$\frac{1}{2}\sum_{s=t}^{T}A_s \leq \frac{8c\sqrt{2M}}{\theta}(H(z_t) - H^*)^\theta + \frac{1}{2}A_{t-1}.$$
Letting $t = t_0$ and $T \to \infty$ in the above inequality yields that $\sum_{s=t_0}^{\infty}A_s < +\infty$. Hence, by letting $T \to \infty$ and denoting $S_t = \sum_{s=t}^{\infty}A_s$ in eq. (46), we obtain that
$$S_t \leq \frac{8c\sqrt{2M}}{\theta}(H(z_t) - H^*)^\theta + \frac{1}{2}S_{t-1}, \quad \forall t \geq t_0,$$
which further implies that
$$S_t \leq \frac{1}{2^{t - t_0}}S_{t_0} + \frac{8c\sqrt{2M}}{\theta}\sum_{s=t_0+1}^{t}\frac{1}{2^{t-s}}(H(z_s) - H^*)^\theta. \quad (47)$$
(Case 3) If $\theta = 1/2$, eq. (7) holds. Substituting eq. (7) and $\theta = 1/2$ into eq. (47) yields that
$$S_t \leq \frac{1}{2^{t - t_0}}S_{t_0} + 8c\sqrt{2M}\big[H(z_{t_0}) - H^*\big]^{1/2}\sum_{s=t_0+1}^{t}\frac{1}{2^{t-s}}\Big(1 + \frac{1}{2Mc^2}\Big)^{(t_0 - s)/2} \leq \frac{1}{2^{t - t_0}}S_{t_0} + \frac{C_2}{2^t}\sum_{s=t_0+1}^{t}\Big(\frac{1}{4} + \frac{1}{8Mc^2}\Big)^{-s/2}, \quad (48)$$
where
$$C_2 = 8c\sqrt{2M}\big[H(z_{t_0}) - H^*\big]^{1/2}\Big(1 + \frac{1}{2Mc^2}\Big)^{t_0/2} \quad (49)$$
is a positive constant independent of $t$. Notice that when $\frac{1}{4} + \frac{1}{8Mc^2} \geq 1$,
$$\sum_{s=t_0+1}^{t}\Big(\frac{1}{4} + \frac{1}{8Mc^2}\Big)^{-s/2} \leq t - t_0,$$
and when $\frac{1}{4} + \frac{1}{8Mc^2} < 1$,
$$\sum_{s=t_0+1}^{t}\Big(\frac{1}{4} + \frac{1}{8Mc^2}\Big)^{-s/2} = \Big(\frac{1}{4} + \frac{1}{8Mc^2}\Big)^{-t/2}\frac{1 - \big(\frac{1}{4} + \frac{1}{8Mc^2}\big)^{(t - t_0)/2}}{1 - \big(\frac{1}{4} + \frac{1}{8Mc^2}\big)^{1/2}} \leq O\Big[\Big(\frac{1}{4} + \frac{1}{8Mc^2}\Big)^{-t/2}\Big].$$
Since either of the two inequalities above holds, combining them yields that
$$\sum_{s=t_0+1}^{t}\Big(\frac{1}{4} + \frac{1}{8Mc^2}\Big)^{-s/2} \leq O\Big\{\max\Big[t - t_0,\; \Big(\frac{1}{4} + \frac{1}{8Mc^2}\Big)^{-t/2}\Big]\Big\}.$$
Substituting the above inequality into eq. (48) yields that
$$S_t \leq \frac{1}{2^{t - t_0}}S_{t_0} + O\Big\{\max\Big[2^{-t}(t - t_0),\; \Big(1 + \frac{1}{2Mc^2}\Big)^{-t/2}\Big]\Big\} \leq O\Big\{\Big[\min\Big(2,\, 1 + \frac{1}{2Mc^2}\Big)\Big]^{-t/2}\Big\}.$$
Hence,
$$\|x_t - x^*\| \overset{(i)}{\leq} \sum_{s=t}^{\infty}A_s = S_t \leq O\Big\{\Big[\min\Big(2,\, 1 + \frac{1}{2Mc^2}\Big)\Big]^{-t/2}\Big\},$$
where (i) comes from eq. (45). Then,
$$\|y_t - y^*(x^*)\| \leq \|y_t - y^*(x_t)\| + \|y^*(x_t) - y^*(x^*)\| \leq 2\kappa A_t + \kappa\|x_t - x^*\| \leq 2\kappa S_t + \kappa\|x_t - x^*\| \leq O\Big\{\Big[\min\Big(2,\, 1 + \frac{1}{2Mc^2}\Big)\Big]^{-t/2}\Big\}.$$
The two inequalities above yield the linear convergence rate (10).

(Case 4) If $\theta \in (0, \frac{1}{2})$, then eq. (8) holds. Substituting eq. (8) into eq. (47) yields that for some constant $C_3 > 0$,
$$\begin{aligned}
S_t &\leq \frac{1}{2^{t - t_0}}S_{t_0} + \frac{8c\sqrt{2M}}{\theta}\sum_{s=t_0+1}^{t}\frac{C_3}{2^{t-s}}(s - t_0)^{-\frac{\theta}{1 - 2\theta}} \leq \frac{1}{2^{t - t_0}}S_{t_0} + \frac{8cC_3\sqrt{2M}}{2^{t - t_0}\theta}\sum_{s=1}^{t - t_0}2^s s^{-\frac{\theta}{1 - 2\theta}} \\
&\overset{(i)}{=} \frac{1}{2^{t - t_0}}S_{t_0} + \frac{8cC_3\sqrt{2M}}{2^{t - t_0}\theta}\sum_{s=1}^{t_1}2^s s^{-\frac{\theta}{1 - 2\theta}} + \frac{8cC_3\sqrt{2M}}{2^{t - t_0}\theta}\sum_{s=t_1+1}^{t - t_0}2^s s^{-\frac{\theta}{1 - 2\theta}} \\
&\leq \frac{1}{2^{t - t_0}}S_{t_0} + \frac{8cC_3\sqrt{2M}}{2^{t - t_0}\theta}\sum_{s=1}^{t_1}2^s + \frac{8cC_3\sqrt{2M}}{2^{t - t_0}\theta}\sum_{s=t_1+1}^{t - t_0}2^s\Big(\frac{t - t_0}{2}\Big)^{-\frac{\theta}{1 - 2\theta}} \\
&\overset{(ii)}{\leq} \frac{1}{2^{t - t_0}}S_{t_0} + \frac{8cC_3\sqrt{2M}}{2^{t - t_0}\theta}\,2^{t_1+1} + \frac{8cC_3\sqrt{2M}}{2^{t - t_0}\theta}\Big(\frac{t - t_0}{2}\Big)^{-\frac{\theta}{1 - 2\theta}}2^{t - t_0 + 1} \\
&= O\Big[\frac{1}{2^{t - t_0}} + \frac{1}{2^{(t - t_0)/2}} + (t - t_0)^{-\frac{\theta}{1 - 2\theta}}\Big] = O\Big[(t - t_0)^{-\frac{\theta}{1 - 2\theta}}\Big], \quad (50)
\end{aligned}$$
where (i) denotes $t_1 = \lfloor(t - t_0)/2\rfloor$, and (ii) uses the inequality $\sum_{s=t_1+1}^{t - t_0}2^s < \sum_{s=0}^{t - t_0}2^s < 2^{t - t_0 + 1}$. Therefore, the sub-linear convergence rate eq. (11) follows from the following inequalities:
$$\|x_t - x^*\| \leq S_t \leq O\big[(t - t_0)^{-\frac{\theta}{1 - 2\theta}}\big],$$
and
$$\|y_t - y^*(x^*)\| \leq \|y_t - y^*(x_t)\| + \|y^*(x_t) - y^*(x^*)\| \leq 2\kappa A_t + \kappa\|x_t - x^*\| \leq 2\kappa S_t + \kappa\|x_t - x^*\| \leq O\big[(t - t_0)^{-\frac{\theta}{1 - 2\theta}}\big]." } ]
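As a closing sanity check (ours, not part of the paper), the linear regime of Theorem 4 is easy to observe numerically: on the quadratic instance $f(x, y) = xy - y^2/2$ with $g \equiv 0$, $h \equiv 0$ (so $y^*(x) = x$, $\Phi(x) = x^2/2$, $x^* = 0$, and the Lyapunov function is quadratic near $(0, 0)$, i.e., $\theta = \frac{1}{2}$), the iterates of Algorithm 1 contract $\|x_t - x^*\|$ at an essentially constant geometric ratio.

```python
import numpy as np

# Quadratic instance: f(x,y) = x*y - y**2/2, g = h = 0.
# Then y*(x) = x, Phi(x) = x**2/2, x* = 0; mu = 1, L ~ 1.62 (Hessian norm),
# and the Lyapunov function is quadratic around (0,0), so theta = 1/2.
L = 1.62
kappa = L
eta_x, eta_y = 1 / 128.0, 1 / L   # eta_x <= 1/(kappa**3*(L+3)**2) ~ 0.011

x, y, errs = 2.0, -1.0, []
for t in range(200):
    x, y = x - eta_x * y, y + eta_y * (x - y)   # GDA (prox of 0 = identity)
    errs.append(abs(x))                         # ||x_t - x*||
print([errs[t + 1] / errs[t] for t in range(150, 160)])
# nearly constant ratio < 1, i.e., a linear rate as in item 3 of Theorem 4
```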
2021
null
SP:79b19c6490c2ea5cab56666927520888191a83a7
[ "This paper shows that transformer models can be used to accurately learn advanced mathematical computations from millions of examples. The problems are drawn from the fields of differential equations and control theory. The selected problems are ones that are solvable using known algorithms; however, these algorithms involve a sequence of advanced mathematical operations (e.g., differentiation, calculating the rank of a matrix, calculating the eigenvalues of a matrix, etc), for which no known simple shortcuts exist. For the experiments in this paper, for each problem a large number (50 million) of training examples are randomly generated, and are then used to train a transformer model. Across these problems, the paper shows that the neural network is able to solve these problems at high accuracy (96-99.7% accuracy)." ]
Using transformers over large generated datasets, we train models to learn mathematical properties of differential systems, such as local stability, behavior at infinity and controllability. We achieve near perfect prediction of qualitative characteristics, and good approximations of numerical features of the system. This demonstrates that neural networks can learn to perform complex computations, grounded in advanced theory, from examples, without built-in mathematical knowledge.
[ { "affiliations": [], "name": "François Charton" }, { "affiliations": [], "name": "Amaury Hayat" } ]
[ { "authors": [ "Forough Arabshahi", "Sameer Singh", "Animashree Anandkumar" ], "title": "Combining symbolic expressions and black-box function evaluations for training neural programs", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Forough Arabshahi", "Sameer Singh", "Animashree Anandkumar" ], "title": "Towards solving differential equations through neural programming", "venue": null, "year": 2018 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "Hajer Bahouri", "Jean-Yves Chemin", "Raphaël Danchin" ], "title": "Fourier analysis and nonlinear partial differential equations, volume 343", "venue": "Springer Science & Business Media,", "year": 2011 }, { "authors": [ "Pierre Bernhard", "Marc Deschamps" ], "title": "Kalman on dynamics and contro, linear system theory, optimal control, and filter", "venue": "Technical report,", "year": 2017 }, { "authors": [ "Kevin Clark", "Urvashi Khandelwal", "Omer Levy", "Christopher D Manning" ], "title": "What does bert look at? an analysis of bert’s attention", "venue": null, "year": 1906 }, { "authors": [ "Jean-Michel Coron" ], "title": "Control and nonlinearity, volume 136 of Mathematical Surveys and Monographs", "venue": "American Mathematical Society, Providence, RI,", "year": 2007 }, { "authors": [ "George Cybenko" ], "title": "Approximation by superpositions of a sigmoidal function", "venue": "Mathematics of control, signals and systems,", "year": 1989 }, { "authors": [ "Lawrence C Evans" ], "title": "Partial differential equations, volume 19", "venue": "American Mathematical Soc.,", "year": 2010 }, { "authors": [ "Richard Evans", "David Saxton", "David Amos", "Pushmeet Kohli", "Edward Grefenstette" ], "title": "Can neural networks understand logical entailment", "venue": "arXiv preprint arXiv:1802.08535,", "year": 2018 }, { "authors": [ "Joseph Funke", "Matthew Brown", "Stephen M Erlien", "J Christian Gerdes" ], "title": "Collision avoidance and stabilization for autonomous vehicles in emergency scenarios", "venue": "IEEE Transactions on Control Systems Technology,", "year": 2016 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Kurt Hornik" ], "title": "Approximation capabilities of multilayer feedforward networks", "venue": "Neural networks,", "year": 1991 }, { "authors": [ "Kurt Hornik", "Maxwell Stinchcombe", "Halbert White" ], "title": "Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks", "venue": "Neural networks,", "year": 1990 }, { "authors": [ "Armand Joulin", "Edouard Grave", "Piotr Bojanowski", "Tomas Mikolov" ], "title": "Bag of tricks for efficient text classification", "venue": "arXiv preprint arXiv:1607.01759,", "year": 2016 }, { "authors": [ "Rudolf E. Kalman", "Yu-Chi Ho", "Kumpati S. 
Narendra" ], "title": "Controllability of linear dynamical systems", "venue": "Contributions to Differential Equations,", "year": 1963 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "David Kleinman" ], "title": "An easy way to stabilize a linear constant system", "venue": "IEEE Transactions on Automatic Control,", "year": 1970 }, { "authors": [ "Isaac E Lagaris", "Aristidis Likas", "Dimitrios I Fotiadis" ], "title": "Artificial neural networks for solving ordinary and partial differential equations", "venue": "IEEE transactions on neural networks,", "year": 1998 }, { "authors": [ "Isaac E Lagaris", "Aristidis C Likas", "Dimitris G Papageorgiou" ], "title": "Neural-network methods for boundary value problems with irregular boundaries", "venue": "IEEE Transactions on Neural Networks,", "year": 2000 }, { "authors": [ "Guillaume Lample", "François Charton" ], "title": "Deep learning for symbolic mathematics", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Hyuk Lee", "In Seok Kang" ], "title": "Neural algorithm for solving differential equations", "venue": "Journal of Computational Physics,", "year": 1990 }, { "authors": [ "Dahlard L Lukes" ], "title": "Stabilizability and optimal control", "venue": "Funkcial. Ekvac,", "year": 1968 }, { "authors": [ "Anthony Scopatz" ], "title": "Sympy: symbolic computing in python", "venue": "PeerJ Computer Science,", "year": 2017 }, { "authors": [ "Nicolas Minorsky" ], "title": "Automatic steering tests", "venue": "Journal of the American Society for Naval Engineers,", "year": 1930 }, { "authors": [ "Philipp Petersen", "Felix Voigtlaender" ], "title": "Optimal approximation of piecewise smooth functions using deep relu neural networks", "venue": "Neural Networks,", "year": 2018 }, { "authors": [ "Allan Pinkus" ], "title": "Approximation theory of the mlp model in neural networks", "venue": "Acta numerica,", "year": 1999 }, { "authors": [ "Michael Polanyi", "Amartya Sen" ], "title": "The Tacit Dimension", "venue": null, "year": 2009 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": null, "year": 2019 }, { "authors": [ "Keith Rudd" ], "title": "Solving partial differential equations using artificial neural networks", "venue": "PhD thesis,", "year": 2013 }, { "authors": [ "David Saxton", "Edward Grefenstette", "Felix Hill", "Pushmeet Kohli" ], "title": "Analysing mathematical reasoning abilities of neural models", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Daniel Selsam", "Matthew Lamm", "Benedikt Bünz", "Percy Liang", "Leonardo de Moura", "David L Dill" ], "title": "Learning a sat solver from single-bit supervision", "venue": "arXiv preprint arXiv:1802.03685,", "year": 2018 }, { "authors": [ "Justin Sirignano", "Konstantinos Spiliopoulos" ], "title": "Dgm: A deep learning algorithm for solving partial differential equations", "venue": "Journal of Computational Physics,", "year": 2018 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V Le" ], "title": "Sequence to sequence learning with neural networks", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Kai Sheng Tai", "Richard Socher", "Christopher D Manning" ], "title": 
"Improved semantic representations from tree-structured long short-term memory networks", "venue": "arXiv preprint arXiv:1503.00075,", "year": 2015 }, { "authors": [ "Andrew Trask", "Felix Hill", "Scott E Reed", "Jack Rae", "Chris Dyer", "Phil Blunsom" ], "title": "Neural arithmetic logic units", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Cimrman", "Ian Henriksen", "E.A. Quintero", "Charles R Harris", "Anne M. Archibald", "Antônio H. Ribeiro", "Fabian Pedregosa", "Paul van" ], "title": "Mulbregt, and SciPy 1. 0 Contributors. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python", "venue": "Nature Methods,", "year": 2020 }, { "authors": [ "Eugene P Wigner" ], "title": "The unreasonable effectiveness of mathematics in the natural sciences", "venue": "communications on pure and applied mathematics,", "year": 1960 }, { "authors": [ "Gal Yehuda", "Moshe Gabel", "Assaf Schuster" ], "title": "It’s not what machines can learn, it’s what we cannot teach", "venue": "arXiv preprint arXiv:2002.09398,", "year": 2020 }, { "authors": [ "Wojciech Zaremba", "Ilya Sutskever" ], "title": "Learning to execute", "venue": "arXiv preprint arXiv:1410.4615,", "year": 2014 }, { "authors": [ "Lample", "Charton" ], "title": "2020) provide the following formula to calculate the number of functions with m operators", "venue": null, "year": 2020 } ]
[ { "heading": "1 Introduction", "text": "Scientists solve problems of mathematics by applying rules and computational methods to the data at hand. These rules are derived from theory, they are taught in schools or implemented in software libraries, and guarantee that a correct solution will be found. Over time, mathematicians have developed a rich set of computational tools that can be applied to many problems, and have been said to be “unreasonably effective” (Wigner, 1960).\nDeep learning, on the other hand, learns from examples and solves problems by improving a random initial solution, without relying on domain-related theory and computational rules. Deep networks have proven to be extremely efficient for a large number of tasks, but struggle on relatively simple, rule-driven arithmetic problems (Saxton et al., 2019; Trask et al., 2018; Zaremba and Sutskever, 2014).\nYet, recent studies show that deep learning models can learn complex rules from examples. In natural language processing, models learn to output grammatically correct sentences without prior knowledge of grammar and syntax (Radford et al., 2019), or to automatically map one language into another (Bahdanau et al., 2014; Sutskever et al., 2014). In mathematics, deep learning models have been trained to perform logical inference (Evans et al., 2018), SAT solving (Selsam et al., 2018) or basic arithmetic (Kaiser and Sutskever, 2015). Lample and Charton (2020) showed that transformers can be trained from generated data to perform symbol manipulation tasks, such as function integration and finding formal solutions of ordinary differential equations.\nIn this paper, we investigate the use of deep learning models for complex mathematical tasks involving both symbolic and numerical computations. We show that models can predict the qualitative and quantitative properties of mathematical objects, without built-in mathematical knowledge. We consider three advanced problems of mathematics: the local stability and controllability of differential systems, and the existence and behavior at infinity of solutions of partial differential equations. All three problems have been widely researched and have many applications outside of pure mathematics. They have known solutions that rely on advanced symbolic and computational techniques, from formal differentiation, Fourier transform, algebraic full-rank conditions, to function evaluation, matrix inversion, and computation of complex eigenvalues. We find that neural networks can solve these problems with a very high accuracy, by simply looking at instances of problems and their solutions, while being totally unaware of the underlying theory. In one of the quantitative problems\n∗ Equal contribution, names in alphabetic order.\nwhere several solutions are possible (predicting control feedback matrix), neural networks are even able to predict different solutions that those generated with the mathematical algorithms we used for training.\nAfter reviewing prior applications of deep learning to related areas we introduce the three problems we consider, describe how we generate datasets, and detail how we train our models. Finally, we present our experiments and discuss their results." }, { "heading": "2 Related work", "text": "Applications of neural networks to differential equations have mainly focused on two themes: numerical approximation and formal resolution. 
Whereas most differential systems and partial differential equations cannot be solved explicitly, their solutions can be approximated numerically, and neural networks have been used for this purpose (Lagaris et al., 1998; 2000; Lee and Kang, 1990; Rudd, 2013; Sirignano and Spiliopoulos, 2018). This approach relies on the universal approximation theorem, which states that any continuous function can be approximated by a neural network with one hidden layer over a wide range of activation functions (Cybenko, 1989; Hornik et al., 1990; Hornik, 1991; Petersen and Voigtlaender, 2018; Pinkus, 1999). This has proven to be especially efficient for high dimensional problems.

For formal resolution, Lample and Charton (2020) proposed several approaches to generate arbitrarily large datasets of functions with their integrals, and ordinary differential equations with their solutions. They found that a transformer model (Vaswani et al., 2017) trained on millions of examples could outperform state-of-the-art symbolic frameworks such as Mathematica or MATLAB (Wolfram-Research, 2019; MathWorks, 2019) on a particular subset of equations. Their model was used to guess solutions, while verification (arguably a simpler task) was left to a symbolic framework (Meurer et al., 2017). Arabshahi et al. (2018a;b) proposed to use neural networks to verify the solutions of differential equations, and found that Tree-LSTMs (Tai et al., 2015) were better than sequential LSTMs (Hochreiter and Schmidhuber, 1997) at generalizing beyond the training distribution.

Other approaches investigated the capacity of neural networks to perform arithmetic operations (Kaiser and Sutskever, 2015; Saxton et al., 2019; Trask et al., 2018) or to run short computer programs (Zaremba and Sutskever, 2014). More recently, Saxton et al. (2019) found that neural networks were good at solving arithmetic problems or at performing operations such as differentiation or polynomial expansion, but struggled on tasks like prime number decomposition or primality tests that require a significant number of steps to compute. Unlike the questions considered here, most of those problems can be solved by simple algorithmic computations." }, { "heading": "3 Differential systems and their stability", "text": "A differential system of degree $n$ is a system of $n$ equations in $n$ variables $x_1(t), \dots, x_n(t)$,
$$\frac{dx_i(t)}{dt} = f_i\big(x_1(t), x_2(t), \dots, x_n(t)\big), \quad \text{for } i = 1 \dots n,$$
or, in vector form, with $x \in \mathbb{R}^n$ and $f : \mathbb{R}^n \to \mathbb{R}^n$,
$$\frac{dx(t)}{dt} = f\big(x(t)\big).$$
Many problems can be set as differential systems. Special cases include $n$-th order ordinary differential equations (letting $x_1 = y$, $x_2 = y'$, ..., $x_n = y^{(n-1)}$), systems of coupled differential equations, and some particular partial differential equations (separable equations or equations with characteristics). Differential systems are one of the most studied areas of mathematical sciences. They are found in physics, mechanics, chemistry, biology, and economics as well as in pure mathematics. Most differential systems have no explicit solution. Therefore, mathematicians have studied the properties of their solutions, and first and foremost their stability, a notion of paramount importance in many engineering applications." }, { "heading": "3.1 Local stability", "text": "Let $x_e \in \mathbb{R}^n$ be an equilibrium point, that is, $f(x_e) = 0$. If all solutions $x(t)$ converge to $x_e$ when their initial positions $x(0)$ at $t = 0$ are close enough, the equilibrium is said to be locally stable (see Appendix B for a proper mathematical definition).
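As an illustration of this definition — our own toy sketch, not an example from the paper — consider the damped pendulum $\ddot y = -\sin y - 0.5\,\dot y$, rewritten as a first-order system via $x_1 = y$, $x_2 = y'$ (the reduction mentioned in Section 3); integrating from several initial positions near the equilibrium $x_e = (0, 0)$ shows every trajectory converging to it:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped pendulum y'' = -sin(y) - 0.5*y', as a first-order system with
# x1 = y, x2 = y' (the standard reduction mentioned in Section 3).
def f(t, x):
    return [x[1], -np.sin(x[0]) - 0.5 * x[1]]

x_e = np.zeros(2)                                 # equilibrium: f(x_e) = 0
for x0 in ([0.3, 0.0], [-0.2, 0.1], [0.1, -0.3]):
    sol = solve_ivp(f, (0.0, 50.0), x0)
    print(x0, "->", np.round(sol.y[:, -1], 4))    # all end near (0, 0)
```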
This problem is well known: if $f$ is differentiable in $x_e$, an answer is provided by the Spectral Mapping Theorem (SMT) (Coron, 2007, Theorem 10.10).

Theorem 3.1. Let $J(f)(x_e)$ be the Jacobian matrix of $f$ in $x_e$ (the matrix of its partial derivatives relative to its variables). Let $\lambda$ be the largest real part of its eigenvalues. If $\lambda$ is positive, $x_e$ is an unstable equilibrium. If $\lambda$ is negative, then $x_e$ is a locally stable equilibrium.

Predicting the stability of a given system at a point $x_e$ is our first problem. We will also predict $\lambda$, which represents the speed of convergence when negative, in a second experiment. Therefore, to apply the SMT, we need to:

1. differentiate each function with respect to each variable, to obtain the formal Jacobian $J(x)$:
$$f(x) = \begin{pmatrix} \cos(x_2) - 1 - \sin(x_1) \\ x_1^2 - \sqrt{1 + x_2} \end{pmatrix}, \qquad J(x) = \begin{pmatrix} -\cos(x_1) & -\sin(x_2) \\ 2x_1 & -\big(2\sqrt{1 + x_2}\big)^{-1} \end{pmatrix};$$
2. evaluate $J(x_e)$, the Jacobian in $x_e$ (a real or complex matrix):
$$x_e = (0.1, \dots, 0.1) \in \mathbb{R}^n, \qquad J(x_e) = \begin{pmatrix} -\cos(0.1) & -\sin(0.1) \\ 0.2 & -\big(2\sqrt{1.1}\big)^{-1} \end{pmatrix};$$
3. calculate the eigenvalues $\lambda_i$, $i = 1 \dots n$, of $J(x_e)$:
$$\lambda_1 = -1.031, \quad \lambda_2 = -0.441;$$
4. compute $\lambda = -\max(\operatorname{Re}(\lambda_i))$ and return the stability (resp. $\lambda$, the speed of convergence):
$$\lambda = 0.441 > 0 \;\to\; \text{locally stable with decay rate } 0.441." }, { "heading": "3.2 Control theory", "text": "One of the lessons of the spectral mapping theorem is that instability is very common. In fact, unstable systems are plenty in nature (Lagrange points, epidemics, satellite orbits, etc.), and the idea of trying to control them through external variables comes naturally. This is the controllability problem. It has a lot of practical applications, including space launch and the landing on the moon, the US Navy automated pilot, or recently autonomous cars (Bernhard et al., 2017; Minorsky, 1930; Funke et al., 2016). Formally, we are given a system
$$\frac{dx}{dt} = f\big(x(t), u(t)\big), \quad (1)$$
where $x \in \mathbb{R}^n$ is the state of the system. We want to find a function $u(t) \in \mathbb{R}^m$, the control action, such that, beginning from a position $x_0$ at $t = 0$, we can reach a position $x_1$ at $t = T$ (see Appendix B). The first rigorous mathematical analysis of this problem was given by Maxwell (1868), but a turning point was reached in 1963, when Kalman gave a precise condition for a linear system (Kalman et al., 1963), later adapted to nonlinear systems.

Theorem 3.2 (Kalman condition). Let $A = \partial_x f(x_e, u_e)$ and $B = \partial_u f(x_e, u_e)$. If
$$\operatorname{Span}\{A^i B u : u \in \mathbb{R}^m,\, i \in \{0, \dots, n-1\}\} = \mathbb{R}^n, \quad (2)$$
then the system is locally controllable around $x = x_e$, $u = u_e$.

When this condition holds, a solution to the control problem that makes the system locally stable in $x_e$ is $u(t) = u_e + K(x(t) - x_e)$ (cf. Coron (2007); Kleinman (1970); Lukes (1968) and Appendix B.4 for key steps of the proof), where $K$ is the $m \times n$ control feedback matrix:
$$K = -B^\top\Big(e^{-AT}\Big[\int_0^T e^{-At} B B^\top e^{-A^\top t}\,dt\Big]e^{-A^\top T}\Big)^{-1}. \quad (3)$$
In the non-autonomous case, where $f = f(x, u, t)$ (and $A$ and $B$ depend on $t$), condition (2) can be replaced by
$$\operatorname{Span}\{D_i u : u \in \mathbb{R}^m,\, i \in \{0, \dots, 2n-1\}\} = \mathbb{R}^n, \quad (4)$$
where $D_0(t) = B(t)$ and $D_{i+1}(t) = D_i'(t) - A(t)D_i(t)$. All these theorems make use of advanced mathematical results, such as the Cayley-Hamilton theorem or LaSalle's invariance principle. Learning them, by predicting controllability and computing the control feedback matrix $K$, is our second problem. To measure whether the system is controllable at a point $x_e$, we need to:

1. differentiate the system with respect to its internal variables, to obtain $A(x, u)$;
2. differentiate the system with respect to its control variables, to obtain $B(x, u)$;
3. evaluate $A$ and $B$ at $(x_e, u_e)$;
4. calculate the controllability matrix $C$ with (2) (resp. (4) if non-autonomous);
5. calculate the rank $d$ of $C$; if $d = n$, the system is controllable;
6. (optionally) if $d = n$, compute the control feedback matrix $K$ with (3).

In: $f(x, u) = \begin{pmatrix} \sin(x_1^2) + \log(1 + x_2) + \frac{\operatorname{atan}(u x_1)}{1 + x_2} \\ x_2 - e^{x_1 x_2} \end{pmatrix}$, $x_e = [0.1]$, $u_e = 1$. Out: $n - d = 0$, the system is controllable, $K = (-22.8 \;\; 44.0)$.

A step by step derivation of this example is given in Section A of the appendix." }, { "heading": "3.3 Stability of partial differential equations using Fourier Transform", "text": "Partial Differential Equations (PDEs) naturally appear when studying continuous phenomena (e.g., sound, electromagnetism, gravitation). Over such problems, ordinary differential systems are not sufficient. Like differential systems, PDEs seldom have explicit solutions, and studying their stability has many practical applications. It is also a much more difficult subject, where few general theorems exist. We consider linear PDEs of the form
$$\partial_t u(t, x) + \sum_{|\alpha| \leq k} a_\alpha \partial_x^\alpha u(t, x) = 0, \quad (5)$$
where $t \in \mathbb{R}$, $x \in \mathbb{R}^n$, and $u(t, x)$ are time, position, and state, $\alpha = (\alpha_1, \dots, \alpha_n)$ is a multi-index and the $a_\alpha$ are constants. Famous examples of such problems include the heat equation, transport equations or the Schrodinger equation (Evans, 2010). We want to determine whether a solution $u(t, x)$ of (5) exists for a given initial condition $u(0, x) = u_0$, and if it tends to zero as $t \to +\infty$. This is mathematically answered (see Appendix B.4, and Evans (2010); Bahouri et al. (2011) for similar arguments) by:

Proposition 3.1. Given $u_0 \in \mathcal{S}'(\mathbb{R}^n)$, the space of tempered distributions, there exists a solution $u \in \mathcal{S}'(\mathbb{R}^n)$ if there exists a constant $C$ such that
$$\forall \xi \in \mathbb{R}^n, \quad \tilde u_0(\xi) = 0 \;\text{ or }\; \operatorname{Re}(f(\xi)) > C, \quad (6)$$
where $\tilde u_0$ is the Fourier transform of $u_0$ and $f(\xi)$ is the Fourier polynomial associated with the differential operator $D_x = \sum_{|\alpha| \leq k} a_\alpha \partial_x^\alpha$. In addition, if $C > 0$, this solution $u(t, x)$ goes to zero when $t \to +\infty$.

Learning this proposition and predicting, given an input $D_x$ and $u_0$, whether a solution $u$ exists and, if so, whether it vanishes at infinite time, will be our third and last problem.

To predict whether our PDE has a solution under given initial conditions, and determine its behavior at infinity, we need to: find the Fourier polynomial $f(\xi)$ associated with $D_x$; find the Fourier transform $\tilde u_0(\xi)$ of $u_0$; minimize $f(\xi)$ over $F$; output $(0, 0)$ if this minimum is infinite, $(1, 0)$ if it is finite and negative, and $(1, 1)$ if it is finite and positive. Optionally, output $F$. A step by step example is given in Appendix A.

In: $D_x = 2\partial^2_{x_0} + 0.5\partial^2_{x_1} + \partial^4_{x_2} - 7\,\partial_{x_0}\partial_{x_1} - 1.5\,\partial_{x_1}\partial^2_{x_2}$. Out: $(1, 0)$ → there exists a solution $u$; it does not vanish as $t \to +\infty$." }, { "heading": "4 Datasets and models", "text": "To generate datasets, we randomly sample problems and compute their solutions with mathematical software (Virtanen et al., 2020; Meurer et al., 2017), using the techniques described in Section 3. For stability and controllability, we generate differential systems with $n$ equations and $n + q$ variables (i.e., $n$ random functions, with $q > 0$ for controllability).

Following Lample and Charton (2020), we generate random functions by sampling unary-binary trees, and randomly selecting operators, variables and integers for their internal nodes and leaves. We use $+, -, \times, /, \exp, \log, \sqrt{\cdot}, \sin, \cos, \tan, \sin^{-1}, \cos^{-1}, \tan^{-1}$ as operators, and integers between $-10$ and $10$ as leaves.
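Since the labels of these datasets are produced by exactly the pipelines of Sections 3.1 and 3.2, a minimal sketch of such label computations may be useful. The following is our own illustration on a made-up two-dimensional system (not the authors' generation code); in step 6 it uses Ackermann's pole-placement formula as a stand-in for the integral formula (3) — any $K$ making $A + BK$ Hurwitz is a valid answer.

```python
import numpy as np
import sympy as sp

x1, x2, u = sp.symbols('x1 x2 u')
# A made-up 2-equation system dx/dt = f(x, u), for illustration only.
f = sp.Matrix([sp.sin(x1**2) + sp.log(1 + x2) + u,
               x2 - sp.exp(x1 * x2) + u * x1])
at = {x1: 0.1, x2: 0.1, u: 1.0}                     # the point (x_e, u_e)

# Steps 1-3: formal Jacobians, evaluated at the equilibrium point.
A = np.array(f.jacobian([x1, x2]).subs(at).tolist(), dtype=float)
B = np.array(f.jacobian([u]).subs(at).tolist(), dtype=float)

# Section 3.1 label: lambda = -max Re(eigenvalues of A).
lam = -max(np.linalg.eigvals(A).real)
print('stable' if lam > 0 else 'unstable', '| lambda =', round(lam, 3))

# Steps 4-5: Kalman condition (2), rank of [B, AB, ..., A^(n-1) B].
n = A.shape[0]
C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
print('controllable:', np.linalg.matrix_rank(C) == n)

# Step 6: one feedback K via Ackermann's formula, with poles -1 and -2
# (n = 2 here, so the selector row is [0, 1]).
phiA = (A + 1.0 * np.eye(n)) @ (A + 2.0 * np.eye(n))
K = -(np.array([[0.0, 1.0]]) @ np.linalg.inv(C) @ phiA)
print('A+BK stable:', max(np.linalg.eigvals(A + B @ K).real) < 0)
```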
When generating functions with $n + q$ variables, we build trees with up to $2(n + q + 1)$ operators.

Generated trees are enumerated in prefix order (normal Polish notation) and converted into sequences of tokens compatible with our models. Integers and floating point reals are also represented as sequences: 142 as [INT+, 1, 4, 2], and 0.314 as [FLOAT+, 3, DOT, 1, 4, E, INT-, 1] (a toy re-implementation of this encoding is sketched at the end of this section). A derivation of the size of the problem space is provided in Appendix D.4.

Local stability. Datasets for local stability include systems with 2 to 6 equations (in equal proportion). Functions that are not differentiable at the equilibrium $x_e$ and degenerate systems are discarded. Since many of the operators we use are undefined at zero, setting $x_e = 0$ would bias the dataset by reducing the frequency of operators like division, square root, or logarithm. Instead, we select $x_e$ with all coordinates equal to 0.01 (denoted as $x_e = [0.01]$). This is, of course, strictly equivalent mathematically to sampling systems with an equilibrium at the origin or at any other point.

When predicting overall stability, since stable systems become exponentially rare as dimension increases, we use rejection sampling to build a balanced dataset with 50% stable systems. When predicting convergence speed, we work from a uniform (i.e., unbalanced) sample. The value of $\lambda$ at $x_e$ is expressed as a floating point decimal rounded to 4 significant digits. For this problem, we generate two datasets with over 50 million systems each.

Control theory. Datasets for autonomous control include systems with 3 to 6 equations, and 4 to 9 variables (1 to 3 control variables). In the non-autonomous case, we generate systems with 2 or 3 equations. As above, we discard undefined or degenerate systems. We also skip functions with complex Jacobians at $x_e$ (since the Jacobian represents local acceleration, one expects its coordinates to be real). We have $x_e = [0.5]$ or $[0.9]$.

In the autonomous case, more than 95% of the systems are controllable. When predicting controllability, we use rejection sampling to create a balanced dataset. In the non-autonomous case, we use a uniform sample with 83% controllable cases. Finally, to predict feedback matrices, we restrict generation to controllable systems and express the matrix as a sequence of floating point decimals. All 3 datasets have more than 50 million examples each.

Stability of partial differential equations using Fourier Transform. We generate a differential operator (a polynomial in the $\partial_{x_i}$) and an initial condition $u_0$. $u_0$ is the product of $n$ functions $f(a_j x_j)$ with known Fourier transforms, and $d$ operators $\exp(i b_k x_k)$, with $0 \leq d \leq 2n$ and $a_j, b_k \in \{-100, \dots, 100\}$. We calculate the existence of solutions, their behavior when $t \to +\infty$, and the set of frequencies, and express these three values as a sequence of 2 Booleans and floating point decimals. Our dataset has over 50 million examples.

Models and evaluation. In all experiments, we use a transformer architecture with 8 attention heads. We vary the dimension from 64 to 1024, and the number of layers from 1 to 8. We train our models with the Adam optimizer (Kingma and Ba, 2014), a learning rate of $10^{-4}$ and the learning rate scheduler of Vaswani et al. (2017), over mini-batches of 1024 examples. Additional information can be found in Appendix D.1. Training is performed on 8 V100 GPUs with float16 operations.
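Circling back to the token encoding described at the start of this section, here is a possible reconstruction (ours — the paper's exact token inventory and rounding may differ): integers are signed digit sequences, and floats are a sign, mantissa digits around a DOT, and a signed base-10 exponent, so that 0.314 becomes [FLOAT+, 3, DOT, 1, 4, E, INT-, 1], i.e., $3.14 \times 10^{-1}$.

```python
def encode_int(n: int) -> list:
    # 142 -> ['INT+', '1', '4', '2']
    sign = 'INT+' if n >= 0 else 'INT-'
    return [sign] + list(str(abs(n)))

def encode_float(x: float, digits: int = 4) -> list:
    # 0.314 -> ['FLOAT+', '3', 'DOT', '1', '4', 'E', 'INT-', '1'],
    # with `digits` significant digits (4, as in the datasets above).
    sign = 'FLOAT+' if x >= 0 else 'FLOAT-'
    mant, exp = f'{abs(x):.{digits - 1}e}'.split('e')
    head, tail = mant[0], mant[2:].rstrip('0') or '0'
    return [sign, head, 'DOT'] + list(tail) + ['E'] + encode_int(int(exp))

print(encode_int(142))      # ['INT+', '1', '4', '2']
print(encode_float(0.314))  # ['FLOAT+', '3', 'DOT', '1', '4', 'E', 'INT-', '1']
```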
Our qualitative models (predicting stability, controllability and existence of solutions) were trained for about 12 hours, but accuracies close to the optimal values were reached after about 6 hours. Learning curves for this problem can be found in appendix D.3. On quantitative models, more training time and examples were needed: 76 hours for convergence speed, 73 hours for control matrices.

Evaluation is performed on held-out validation and test sets of 10000 examples. We ensure that validation and test examples are never seen during training (given the size of the problem space, this never happens in practice). Model output is evaluated either by comparing it with the reference solution or using a problem-specific metric." }, { "heading": "5 Experiments", "text": "" }, { "heading": "5.1 Predicting qualitative properties of differential systems", "text": "In these experiments, the model is given n functions f : R^{n+p} → R (n ∈ {2, . . . , 6}, p = 0 for stability, p > 0 for controllability) and is trained to predict whether the corresponding system is stable, resp. controllable, at a given point x_e. This is a classification problem.

To provide a baseline for our results, we use fastText (Joulin et al., 2016), a state-of-the-art text classification tool, which estimates, using a bag of words model, the probability of a qualitative feature (stability) conditional on the distribution of tokens and of small fixed sequences (N-grams of up to five tokens). Such a model can detect simple correlations between inputs and outputs, such as the impact on stability of the presence of a given operator, or of the number of equations in the system. It would also uncover trivial solutions arising from the specifics of one problem or from glitches in the data generator. FastText was trained over 2 million examples from our dataset (training over larger sets does not improve accuracy).

A 6-layer transformer with 512 dimensions correctly predicts the system stability in 96.4% of the cases. Since the dataset is balanced, random guessing would achieve 50%. FastText achieves 60.6%, demonstrating that whereas some easy cases can be learnt by simple text classifiers, no trivial general solution exists for this dataset. Prediction accuracy decreases with the degree, but remains high even for large systems (Table 1).

For autonomous controllability over a balanced dataset, a 6-layer transformer with 512 dimensions correctly predicts 97.4% of the cases. The FastText baseline is 70.5%, above the 50% chance level. Whereas accuracy increases with model size (dimension and number of layers), even very small models (dimension 64 and only 1 or 2 layers) achieve performance over 80%, above the FastText baseline (Table 2).

For non-autonomous systems, our dataset features systems of degree 2 and 3, 83% of which are controllable. FastText achieves 85.3%, barely above the 83% chance level. This shows that text classifiers struggle with harder problems like this one, even in low dimension. Our model achieves 99.7% accuracy. Again, small models, that would be unsuitable for natural language processing, achieve near perfect accuracy (Table 3)." }, { "heading": "5.2 Predicting numerical properties of differential systems", "text": "Speed of convergence. In these experiments, the model is trained to predict λ, the convergence speed to the equilibrium, up to a certain precision. Here, we consider predictions to be correct when they fall within 10% of the ground truth. Further experiments with different levels of precision (2, 3 or 4 decimal digits) are provided in Appendix C.
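For reference, the ground-truth λ can be computed directly from the Jacobian at the equilibrium. The sketch below is our own illustration of this: following Definition B.2 and the convention visible in Appendix A.3.1 (positive λ means locally stable), λ is minus the largest real part of the Jacobian eigenvalues.

```python
import numpy as np

def convergence_speed(jacobian: np.ndarray) -> float:
    # Exponential convergence rate at the equilibrium (Definition B.2):
    # minus the largest real part of the Jacobian eigenvalues.
    return -np.linalg.eigvals(jacobian).real.max()

J = np.array([[-2.0, 1.0], [0.0, -0.5]])
print(convergence_speed(J))   # 0.5 > 0 -> locally stable
```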
A model with 8 layers and a dimension of 1024 predicts convergence speed with an accuracy of 86.6% overall. While reasonably good results can be achieved with smaller models, accuracy decreases quickly when model size falls below a certain value, unlike in the qualitative experiments. Table 4 summarizes the results.

Control feedback matrices. In these experiments, we train the model (6 layers, 512 dimensions) to predict a feedback matrix ensuring stability of an autonomous system. We use two metrics to evaluate accuracy:

1) prediction within 10% of all coefficients in the target matrix K given by (3) and provided in the training set,

2) verifying that the model outputs a correct feedback matrix K1, i.e. that all eigenvalues of A + BK1 have negative real parts. This makes more mathematical sense, as it verifies that the model provides an actual solution to the control problem (like a differential equation, a feedback control problem can have many different solutions).

Using the first metric, 15.8% of target matrices K are predicted with less than 10% error. Accuracy is 50.0% for systems with 3 equations, but drops quickly as systems become larger. These results are very low, although well above chance level (<0.0001%). With the second metric (i.e. the one that actually matters mathematically), we achieve 66.5% accuracy, a much better result. Accuracy decreases with system size, but even degree 6 systems, with 1 × 6 to 3 × 6 feedback matrices, are correctly predicted 41.5% of the time. Therefore, while the model fails to approximate K to a satisfactory level, it does learn to predict correct solutions to the control problem in 66.5% of the cases. This result is very surprising, as it suggests that a mathematical property characterizing feedback matrices might have been learned." }, { "heading": "5.3 Predicting qualitative properties of PDEs", "text": "In this setting, the model is given a differential operator D_x and an initial condition u_0. It is trained to predict if a solution to ∂_t u + D_x u = 0 exists and, if so, whether it converges to 0 when t → +∞. The space dimension (i.e. the dimension of x) is between 2 and 6. In a first series of experiments the model is only trained to predict the existence and convergence of solutions. Overall accuracy is 98.4%. In a second series, we introduce an auxiliary task by adding to the output the frequency bounds F of u_0. We observe that it significantly contributes to the stability of the model with respect to hyper-parameters. In particular, without the auxiliary task, the model is very sensitive to the learning rate scheduling and often fails to converge to something better than random guessing. However, in case of convergence, the model reaches the same overall accuracy, with or without the auxiliary task. Table 6 details the results." }, { "heading": "6 Discussion", "text": "We studied five problems of advanced mathematics from widely researched areas of mathematical analysis. In three of them, we predict qualitative and theoretical features of differential systems. In two, we perform numerical computations. According to mathematical theory, solving these problems requires a combination of advanced techniques, symbolic and numerical, that seem unlikely to be learnable from examples.
Yet, our model achieves more than 95% accuracy on all qualitative tasks, and between 65 and 85% on numerical computations.

When working from synthetic data, a question naturally arises about the impact of data generation on the results of experiments. In particular, one might wonder whether the model is exploiting a defect in the generator or a trivial property of the problems that allows for easier solutions. We believe this is very unlikely. First, because our results are consistent over different problems, using datasets generated with different techniques. Second, because a trivial solution would be found by the bag of words model we use as a baseline. And finally, because we build our datasets by directly sampling problems from a distribution that includes all possible functions (up to the basis operators and the random number generator). This eliminates the biases that can result from sampling special instances or solutions (Yehuda et al., 2020). It also means that the training set is an extremely tiny sample of the whole problem space (over the 50 million examples generated, we did not get a single duplicate).

Learning from very large samples often raises questions about overfitting and generalization out of the training distribution. Due to the size of the problem space, it is very unlikely that the model could memorize a large number of cases and interpolate between them. Note that because the space of functions from R^n to R^n has infinite dimension, the universal approximation theorem does not apply here. Note also that for some of our problems (e.g. local stability), mathematical theory states that solutions cannot be obtained by simple interpolation. To investigate out-of-distribution generalization, we modified our data generator to produce 10 new test sets for stability prediction. We changed the distribution of operators and variables, and experimented with systems with longer expressions and more equations. Table 7 (see Appendix C.2 for a detailed analysis) summarizes our key results.

Changes in the distribution of operators and variables have very little impact on accuracy, demonstrating that the model can generalize out of the training distribution. Our trained model also performs well on systems with longer expressions than the training data. This is interesting because generalizing to longer sequences is a known limitation of many sequence to sequence architectures. Finally, a model trained on systems with 2 to 5 equations predicts the stability of systems of 6 equations with high accuracy (78%). Being able to generalize to a larger problem space, with one additional variable, is a very surprising result, which tends to confirm that some mathematical properties of differential systems have been learned.

It seems unlikely that the model follows the same mathematical procedure as human solvers. For instance, problems involving more computational steps, such as non-autonomous controllability, do not result in lower accuracy. Also, providing at train time intermediate results that would help a human calculator (frequencies for PDEs, or Jacobians for stability) does not improve performance. Understanding how the model finds solutions would be very interesting, as no simpler solutions than the classical mathematical steps are known.

To this end, we tried to analyze model behavior by looking at the attention heads and the tokens the model focuses on when it predicts a specific sequence (following Clark et al. (2019)).
Unfortunately, we were not able to extract specific patterns, and found that each head in the model, from the first layer onwards, attends to many more tokens than in usual natural language tasks (i.e. attention weights tend to be uniformly distributed). This makes interpretation very difficult.

These results open many perspectives for transformers in fields that need both symbolic and numerical computations. There is even hope that our models could help solve mathematical problems that are still open. On a more practical level, they sometimes provide fast alternatives to classical solvers. The algorithmic complexity of transformer inference and of the classical algorithms for the problems we consider here is discussed in appendix E.1. In practice, however, for the problems in our dataset, the simpler and parallelizable computations used by transformers allow for 10 to 100 times shorter evaluation times (see Appendix E.2)." }, { "heading": "7 Conclusion", "text": "In this paper, we show that by training transformers over generated datasets of mathematical problems, advanced and complex computations can be learned, and qualitative and numerical tasks performed with high accuracy. Our models have no built-in mathematical knowledge, and learn from examples only. However, solving problems with high accuracy does not mean that our models have learned the techniques we use to compute their solutions. Problems such as non-autonomous control involve long and complex chains of computations, which some of the smaller models we used could probably not handle.

Most probably, our models learn shortcuts that allow them to solve specific problems, without having to learn or understand their theoretical background. Such a situation is common in everyday life. Most of us learn and use language without understanding its rules. On many practical subjects, we have tacit knowledge and know more than we can tell (Polanyi and Sen, 2009). This may be the way neural networks learn advanced mathematics. Understanding what these shortcuts are, how neural networks discover them, and how they can impact mathematical practice, is a subject for future research." }, { "heading": "A Examples of computations", "text": "A.1 Step by step example: autonomous control

To measure whether the system

dx_1(t)/dt = sin(x_1^2) + log(1 + x_2) + atan(u x_1)/(1 + x_2)
dx_2(t)/dt = x_2 − e^{x_1 x_2},

is controllable at a point x_e, with asymptotic control u_e, using the Kalman condition, we need to:

1. differentiate the system with respect to its internal variables to obtain the Jacobian A(x, u):

A(x, u) = \begin{pmatrix} 2x_1 cos(x_1^2) + \frac{u (1 + x_2)^{−1}}{1 + u^2 x_1^2} & (1 + x_2)^{−1} − \frac{atan(u x_1)}{(1 + x_2)^2} \\ −x_2 e^{x_1 x_2} & 1 − x_1 e^{x_1 x_2} \end{pmatrix}

2. differentiate the system with respect to its control variables to obtain a matrix B(x, u):

B(x, u) = \begin{pmatrix} x_1 ((1 + u^2 x_1^2)(1 + x_2))^{−1} \\ 0 \end{pmatrix}

3. evaluate A and B in x_e = [0.5], u_e = 1:

A(x_e, u_e) = \begin{pmatrix} 1.50 & 0.46 \\ −0.64 & 0.36 \end{pmatrix},  B(x_e, u_e) = \begin{pmatrix} 0.27 \\ 0 \end{pmatrix}

4. calculate the controllability matrix given by (2):

C = [B, AB](x_e, u_e) = \left[ \begin{pmatrix} 0.27 \\ 0 \end{pmatrix}, \begin{pmatrix} 1.50 & 0.46 \\ −0.64 & 0.36 \end{pmatrix} \begin{pmatrix} 0.27 \\ 0 \end{pmatrix} \right] = \begin{pmatrix} 0.27 & 0.40 \\ 0 & −0.17 \end{pmatrix}

5. output n − d, with d the rank of the controllability matrix; the system is controllable if n − d = 0:

n − rank(C) = 2 − 2 = 0: the system is controllable in (x_e = [0.5], u_e = 1)

6. (optionally) if n − d = 0, compute the control feedback matrix K as in (3):

K = ( −22.8  44.0 ).
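The numbers above are easy to verify numerically. The sketch below is our own illustration, using the matrices from step 3: it checks the Kalman rank condition (2) and then verifies the feedback matrix K against the eigenvalue criterion used as the second metric in Section 5.2.

```python
import numpy as np

# Jacobians of the example, evaluated at xe = [0.5], ue = 1 (step 3).
A = np.array([[1.50, 0.46], [-0.64, 0.36]])
B = np.array([[0.27], [0.0]])
n = A.shape[0]

# Kalman controllability matrix C = [B, AB, ..., A^(n-1)B], equation (2).
C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
print(n - np.linalg.matrix_rank(C))          # 0 -> controllable

# A feedback matrix K is valid if all eigenvalues of A + BK have
# negative real parts (the closed-loop system is then locally stable).
K = np.array([[-22.8, 44.0]])
print(np.linalg.eigvals(A + B @ K).real.max() < 0)   # True
```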
A.2 Step by step example: stability of linear PDE

To find the existence and the behavior at infinite time of a solution, given a differential operator D_x and an initial condition u_0, we proceed as follows:

1. find the Fourier polynomial f(ξ) associated to D_x:

D_x = 2∂_{x_0}^2 + 0.5∂_{x_1}^2 + ∂_{x_2}^4 − 7∂^2_{x_0 x_1} − 1.5∂_{x_1}∂_{x_2}^2,
f(ξ) = −4πξ_0^2 − πξ_1^2 + 2πξ_2^4 + 14πξ_0 ξ_1 + 3iπξ_1 ξ_2^2

2. find the Fourier transform ũ_0(ξ) of u_0:

u_0(x) = e^{−3ix_2} x_0^{−1} sin(x_0) e^{2.5ix_1} e^{−x_2^2},
ũ_0(ξ) = π^{3/2} 1_{[−(2π)^{−1}, (2π)^{−1}]}(ξ_0) δ_0(ξ_1 − 2.5(2π)^{−1}) e^{−π^2 (ξ_2 + 3(2π)^{−1})^2}

3. find the set F of frequencies ξ for which ũ_0(ξ) ≠ 0:

F = [−(2π)^{−1}, (2π)^{−1}] × {2.5(2π)^{−1}} × (−∞, +∞)

4. minimize f(ξ) on F:

min_F f(ξ) = −22.6

5. output (0, 0) if this minimum is infinite, (1, 0) if it is finite and negative, (1, 1) if it is finite and positive; (optionally) output F:

Out = (1, 0): there exists a solution u; it does not vanish at t → +∞" }, { "heading": "A.3 Examples of inputs and outputs", "text": "" }, { "heading": "A.3.1 Local stability", "text": "Each example below lists a system and its speed of convergence λ at x_e = [0.01] (entries with λ > 0 are locally stable).

System 1 (λ = −1250):
dx_0/dt = −x_1 atan(8 x_0 x_2) + 0.01 atan(0.0008)
dx_1/dt = −cos(9 x_0) + cos(0.09)
dx_2/dt = x_0 − √(x_1 + x_2) − 0.01 + 0.1√2

System 2 (λ = −0.445):
dx_0/dt = −2x_2 / (x_0 − 2x_2(x_1 − 5)) + 0.182
dx_1/dt = (x_1 + (x_2 − e^{x_1})(tan(x_0) + 3))(log(3) + iπ) + 3.0 log(3) + 3.0iπ
dx_2/dt = asin(x_0 log(−4/x_1)) − asin(0.06 + 0.01iπ)

System 3 (λ = 6.0, locally stable):
dx_0/dt = e^{x_1 + e^{−sin(x_0 − e^2)}} − 1.01 e^{e^{−sin(0.01 − e^2)}}
dx_1/dt = 0.06 − 6x_1
dx_2/dt = −201 + (x_0 + 2)/(x_0^2 x_2)

System 4 (λ = −0.0384):
dx_0/dt = x_2 e^{−x_1} sin(x_1) − 9.9·10^{−5}
dx_1/dt = 7.75·10^{−4} − e^{x_2} atan(atan(x_1)) / (4e^{x_2} + 9)
dx_2/dt = (x_1 − asin(9)) e^{−x_0/(log(3) + iπ)} − (0.01 − asin(9)) e^{−0.01/(log(3) + iπ)}

System 5 (λ = 3.52·10^{−11}, locally stable):
dx_0/dt = −x_0(7 − 7^{1/4}√i)/9 − x_1 + 0.0178 − 0.00111·7^{1/4}√i
dx_1/dt = −0.000379 + e^{−63/(cos((x_2 − 9) atan(x_1)) + 7)}
dx_2/dt = −x_0 − x_1 + asin(cos(x_0) + x_2/x_0) − 1.55 + 1.32i" }, { "heading": "A.3.2 Controllability: autonomous systems", "text": "Each example below lists an autonomous system and the dimension of its uncontrollable space at x_e = [0.5], u_e = [0.5].

System 1 (dimension 0, controllable):
dx_0/dt = −asin(x_1/9 − 4 tan(cos(10))/9) − asin(4 tan(cos(10))/9 − 0.0556)
dx_1/dt = u − x_2 + log(10 + tan(x_1)/(u + x_0)) − 2.36
dx_2/dt = 2x_1 + x_2 − 1.5

System 2 (dimension 1):
dx_0/dt = u − asin(x_0) − 0.5 + π/6
dx_1/dt = x_0 − x_1 + 2x_2 + atan(x_0) − 1.46
dx_2/dt = 5x_2 / cos(x_2) − 2.85

System 3 (dimension 2):
dx_0/dt = 6u + 6x_0 − 6x_1/x_0
dx_1/dt = 0.75 + x_1^2 − cos(u − x_2)
dx_2/dt = −x_0^2 + x_0 + log(e^{x_2}) − 0.75

System 4 (dimension 0, controllable):
dx_0/dt = x_0 (cos(u/(x_0 + 2x_2)) + asin(u)/x_1) − 0.5 cos(1/3) − π/6
dx_1/dt = πx_1 / (4(x_2 + 4)) − π/36
dx_2/dt = 2.5 − 108e^{0.5} − 12x_0 x_2 + x_1 + 108e^u

System 5 (dimension 1):
dx_0/dt = −10 sin(3x_0/log(8) − 22) − 6.54
dx_1/dt = sin(9 + (−x_1 − 4)/(8x_2)) − 1
dx_2/dt = 4 tan(4x_0/u) − 4 tan(4)" }, { "heading": "A.3.3 Controllability: non-autonomous systems", "text": "Each example below lists a non-autonomous system and its local controllability at x_e = [0.5], u_e = [0.5].

System 1 (False):
dx_0/dt = (x_2 − 0.5) e^{−asin(8)}
dx_1/dt = e^{t+0.5} − e^{t+x_1} + (−x_1 + e^{x_0} u)/x_2 + 1 − 2e
dx_2/dt = t(x_2 − 0.5)(asin(6) + √tan(8))

System 2 (False):
dx_0/dt = atan(√x_2) x_0^{−1} − 2 atan(√2/2)
dx_1/dt = −u/(−√(x_0) x_1 + 3) + x_2 + log(x_0) + log(2) − 0.5 + 1/(6 − √2)
dx_2/dt = −70t(x_0 − 0.5)

System 3 (False):
dx_0/dt = (x_0 + 7)/(sin(x_0 e^u) + 3)
dx_1/dt = −9x_2 e^{−sin(√log(x_1))} / x_0
dx_2/dt = t + asin(t x_2 + 4)

System 4 (True):
dx_0/dt = 0.5 − x_2 + tan(x_0) − tan(0.5)
dx_1/dt = t/(x_1(t + cos(x_1(t + u)))) − t/(0.5(t + cos(0.5t + 0.25)))
dx_2/dt = 2.75 − x_0(u + 4) − x_0

System 5 (True):
dx_0/dt = u(u − x_0 − tan(8)) + 0.5 tan(8)
dx_1/dt = −6t(−2 + π/2)/(x_0 x_1) − 12t(4 − π)
dx_2/dt = −7(u − 0.5) − 7 tan(log(x_2)) + 7 tan(log(0.5))" }, { "heading": "A.3.4 Stability of partial differential equations using Fourier transform", "text": "Each example below lists a PDE ∂_t u + D_x u = 0 with its initial condition, followed by two Booleans: existence of a solution, and u → 0 at t → +∞.

Example 1 (False, False):
D_x = 2∂_{x_0}(2∂_{x_0}^4 ∂_{x_2}^4 + 3∂_{x_1}^3 + 3∂_{x_1}^2)
u_0 = δ_0(−18x_0) δ_0(−62x_2) e^{89ix_0 − 8649x_1^2 + 89ix_1 − 59ix_2}

Example 2 (True, False):
D_x = −4∂_{x_0}^4 − 5∂_{x_0}^3 − 6∂_{x_0}^2 ∂_{x_1}^2 ∂_{x_2}^2 + 3∂_{x_0}^2 ∂_{x_1} − 4∂_{x_1}^6
u_0 = (162x_0 x_2)^{−1} e^{i(−25x_0 + 96x_2)} sin(54x_0) sin(3x_2)

Example 3 (True, False):
D_x = ∂_{x_1}(4∂_{x_0}^5 ∂_{x_1} + 4∂_{x_0}^2 − 9∂_{x_0}∂_{x_2}^6 + 2∂_{x_1}^3 ∂_{x_2}^5 − 4∂_{x_1}^3 ∂_{x_2}^4 − 2∂_{x_2})
u_0 = (33x_0)^{−1} e^{86ix_0 − 56ix_1 − 16x_2^2 + 87ix_2} sin(33x_0)

Example 4 (True, True):
D_x = −6∂_{x_0}^7 ∂_{x_2}^2 + ∂_{x_0}^5 ∂_{x_2}^6 − 9∂_{x_0}^4 ∂_{x_1}^2 − 9∂_{x_0}^4 ∂_{x_2}^4 + 7∂_{x_0}^2 ∂_{x_2}^6 + 4∂_{x_0}^2 ∂_{x_2}^5 − 6∂_{x_1}^6
u_0 = δ_0(88x_1) e^{−2x_0(2312x_0 + 15i)}" },
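Step 1 of the A.2 procedure, computing the Fourier polynomial of operators such as those above, is easy to automate symbolically. The sketch below is our own illustration using sympy; it follows the convention f(ξ) = Σ_{|α|≤k} a_α(iξ)^α from the proof of Proposition 3.1 (Appendix B.4.2), so its output differs from the values printed in A.2 by convention-dependent powers of 2π.

```python
import sympy as sp

XI = sp.symbols("xi0 xi1 xi2")

def fourier_symbol(terms):
    # terms: list of (coefficient, multi-index alpha) pairs describing
    # D_x = sum of a_alpha * d^alpha / dx^alpha over the terms.
    f = sp.Integer(0)
    for a, alpha in terms:
        mono = sp.Integer(1)
        for j, k in enumerate(alpha):
            mono *= (sp.I * XI[j]) ** k   # each d/dx_j contributes a factor i*xi_j
        f += a * mono
    return sp.expand(f)

# D_x = 2 d2/dx0^2 + 0.5 d2/dx1^2 + d4/dx2^4 - 7 d2/(dx0 dx1) - 1.5 d/dx1 d2/dx2^2
Dx = [(2, (2, 0, 0)), (sp.Rational(1, 2), (0, 2, 0)), (1, (0, 0, 4)),
      (-7, (1, 1, 0)), (sp.Rational(-3, 2), (0, 1, 2))]
# Terms of the result: -2*xi0**2 - xi1**2/2 + xi2**4 + 7*xi0*xi1 + 1.5*I*xi1*xi2**2
print(fourier_symbol(Dx))
```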
{ "heading": "B Mathematical definitions and theorems", "text": "" }, { "heading": "B.1 Notions of stability", "text": "Let us consider a system

dx(t)/dt = f(x(t)).   (7)

x_e is an attractor if there exists ρ > 0 such that

|x(0) − x_e| < ρ ⟹ lim_{t→+∞} x(t) = x_e.   (8)

But, counterintuitive as it may seem, this is not enough for asymptotic stability to take place.

Definition B.1. We say that x_e is a locally (asymptotically) stable equilibrium if the two following conditions are satisfied:

(i) x_e is a stable point, i.e. for every ε > 0, there exists η > 0 such that

|x(0) − x_e| < η ⟹ |x(t) − x_e| < ε, ∀ t ≥ 0.   (9)

(ii) x_e is an attractor, i.e. there exists ρ > 0 such that

|x(0) − x_e| < ρ ⟹ lim_{t→+∞} x(t) = x_e.   (10)

In fact, the SMT of Subsection 3.1 deals with an even stronger notion of stability, namely the exponential stability defined as follows:

Definition B.2. We say that x_e is an exponentially stable equilibrium if x_e is a locally stable equilibrium and, in addition, there exist ρ > 0, λ > 0, and M > 0 such that

|x(0) − x_e| < ρ ⟹ |x(t)| ≤ M e^{−λt} |x(0)|.

In this definition, λ is called the exponential convergence rate, which is the quantity predicted in our first task. Of course, if x_e is locally exponentially stable, it is in addition locally asymptotically stable." }, { "heading": "B.2 Controllability", "text": "We give here a proper mathematical definition of controllability. Let us consider a non-autonomous system

dx(t)/dt = f(x(t), u(t), t),   (11)

such that f(x_e, u_e) = 0.

Definition B.3. Let τ > 0. We say that the nonlinear system (11) is locally controllable at the equilibrium x_e in time τ with asymptotic control u_e if, for every ε > 0, there exists η > 0 such that, for every (x_0, x_1) ∈ R^n × R^n with |x_0 − x_e| ≤ η and |x_1 − x_e| ≤ η, there exists a trajectory (x, u) such that

x(0) = x_0, x(τ) = x_1,
|u(t) − u_e| ≤ ε, ∀ t ∈ [0, τ].   (12)

An interesting remark is that if the system is autonomous, local controllability does not depend on the time τ considered, which explains why τ is not specified in Theorem 3.2." }, { "heading": "B.3 Tempered distribution", "text": "We start by recalling the multi-index notation: let α = (α_1, ..., α_n) ∈ N^n, x ∈ R^n, and f ∈ C^∞(R^n); we denote

x^α = x_1^{α_1} × · · · × x_n^{α_n},
∂_x^α f = ∂_{x_1}^{α_1} . . . ∂_{x_n}^{α_n} f.   (13)

α is said to be a multi-index and |α| = Σ_{i=1}^{n} |α_i|. Then we give the definition of the Schwartz functions:

Definition B.4. A function φ ∈ C^∞ belongs to the Schwartz space S(R^n) if, for any multi-indices α and β,

sup_{x∈R^n} |x^α ∂_x^β φ| < +∞.   (14)

Finally, we define the space of tempered distributions:

Definition B.5. A tempered distribution u ∈ S′(R^n) is a linear form on S(R^n) such that there exist p > 0 and C > 0 such that

|⟨u, φ⟩| ≤ C Σ_{|α|,|β|<p} sup_{x∈R^n} |x^α ∂_x^β φ|, ∀ φ ∈ S(R^n).   (15)" }, { "heading": "B.4 Proofs of theorems", "text": "" }, { "heading": "B.4.1 Analysis of Problem 2", "text": "The proofs of Theorem 3.2, of the validity of the feedback matrix given by the expression (3), and of the extension of Theorem 3.2 to the non-autonomous system given by condition (4) can be found in Coron (2007).
We give here the key steps of the proof showing that the matrix K given by (3) is a valid feedback matrix, to illustrate the underlying mechanisms:

• Setting V(x(t)) = x(t)^{tr} C_T^{−1} x(t), where x is a solution to x′(t) = f(x, u_e + K·(x − x_e)), and

C_T = e^{−AT} [ ∫_0^T e^{−At} B B^{tr} e^{−A^{tr} t} dt ] e^{−A^{tr} T}.   (16)

• Showing, using the form of C_T, that

d/dt (V(x(t))) = −|B^{tr} C_T^{−1} x(t)|^2 − |B^{tr} e^{−T A^{tr}} C_T^{−1} x(t)|^2.

• Showing that, if for any t ∈ [0, T], |B^{tr} C_T^{−1} x(t)|^2 = 0, then for any i ∈ {0, ..., n − 1},

x^{tr} C_T^{−1} A^i B = 0, ∀ t ∈ [0, T].

• Deducing from the controllability condition (2) that

x(t)^{tr} C_T^{−1} = 0, ∀ t ∈ [0, T],

and therefore, from the invertibility of C_T^{−1},

x(t) = 0, ∀ t ∈ [0, T].

• Concluding from the above and LaSalle's invariance principle that the system is locally exponentially stable." }, { "heading": "B.4.2 Analysis of Problem 3", "text": "In this section we prove Proposition 3.1. We study the problem

∂_t u + Σ_{|α|≤k} a_α ∂_x^α u = 0 on R_+ × R^n,   (17)

with initial condition

u(0, ·) = u_0 ∈ S′(R^n),   (18)

and we want to find a solution u ∈ C^0([0, T], S′(R^n)). Denoting ũ the Fourier transform of u with respect to x, the problem is equivalent to

∂_t ũ(t, ξ) + Σ_{|α|≤k} a_α (iξ)^α ũ(t, ξ) = 0,   (19)

with initial condition ũ_0 ∈ S′(R^n). As the only derivative now is with respect to time, we can check that

ũ(t, ξ) = ũ_0(ξ) e^{−f(ξ)t},   (20)

where f(ξ) = Σ_{|α|≤k} a_α (iξ)^α, is a weak solution to (19) belonging to the space C^0([0, +∞), D′(R^n)). Indeed, first of all we can check that for any t ∈ [0, +∞), ξ → exp(−f(ξ)t) is a continuous function and ũ_0 belongs to S′(R^n) ⊂ D′(R^n), thus ũ(t, ·) belongs to D′(R^n). Besides, t → e^{−f(ξ)t} is a C^∞ function whose derivatives in time are of the form P(ξ) e^{−f(ξ)t}, where P(ξ) is a polynomial function. Hence ũ is continuous in time and ũ ∈ C^0([0, +∞), D′(R^n)). Now we check that it is a weak solution to (19) with initial condition ũ_0. Let φ ∈ C^∞_c([0, +∞) × R^n), the space of smooth functions with compact support; we have

−⟨ũ, ∂_t φ⟩ + Σ_{|α|≤k} a_α (iξ)^α ⟨ũ, φ⟩ + ⟨ũ_0, φ⟩
= −⟨ũ_0, ∂_t(e^{−f(ξ)t} φ)⟩ − ⟨ũ_0, f(ξ) e^{−f(ξ)t} φ⟩ + ⟨ũ_0, e^{−f(ξ)t} f(ξ) φ⟩ + ⟨ũ_0, φ⟩
= 0.   (21)

Hence, ũ defined by (20) is indeed a weak solution of (19) in C^0([0, +∞), D′(R^n)). Now, this does not answer our question, as it only tells us that at time t > 0, ũ(t, ·) ∈ D′(R^n), which is a less regular space than the space of tempered distributions S′(R^n). In other words, at t = 0, ũ = ũ_0 has a higher regularity by being in S′(R^n), and we would like to know if equation (19) preserves this regularity. This is more than a regularity issue as, if not, one cannot define a solution u as the inverse Fourier transform of ũ, because such a function might not exist. Assume now that there exists a constant C such that

∀ξ ∈ R^n, ũ_0(ξ) = 0 or Re(f(ξ)) > C.   (22)

Then, for any t ≥ 0,

∀ ξ ∈ R^n, |1_{supp(ũ_0)}(ξ) e^{−f(ξ)t}| ≤ e^{−Ct}.   (23)

This implies that, for any t > 0, ũ ∈ S′(R^n). Besides, defining for any p ∈ N,

N_p(φ) = Σ_{|α|,|β|<p} sup_{ξ∈R^n} |ξ^α ∂_ξ^β φ(ξ)|,   (24)

then for t_1, t_2 ∈ [0, T],

N_p((e^{−f(ξ)t_1} − e^{−f(ξ)t_2}) φ) = Σ_{|α|,|β|<p} sup_{ξ∈R^n} |ξ^α P_β(ξ, φ)|,   (25)

where P_β(ξ, φ) is a polynomial in f(ξ), φ(ξ), and their derivatives of order strictly smaller than p. Besides, each term of this polynomial tends to 0 when t_1 tends to t_2 on supp(ũ_0), the set of frequencies of u_0. Indeed, let β_1 be a multi-index, k ∈ N, and Q_i(ξ) be polynomials in ξ, where i ∈ {0, ..., k}:

| 1_{supp(ũ_0)} ∂_ξ^{β_1} φ(ξ) ( Σ_{i=0}^{k} Q_i(ξ) t_1^i e^{−f(ξ)t_1} − Q_i(ξ) t_2^i e^{−f(ξ)t_2} ) |
≤ Σ_{i=0}^{k} max_{supp(ũ_0)} | t_1^i e^{−f(ξ)t_1} − t_2^i e^{−f(ξ)t_2} | max_{ξ∈R^n} | ∂_ξ^{β_1} φ(ξ) Q_i(ξ) |.   (26)
From (22), the time-dependent terms in the right-hand side converge to 0 when t_1 tends to t_2. This implies that ũ ∈ C^0([0, T], S′(R^n)). Finally, let us show the property of the behavior at infinity. Assume that C > 0; one has, for any φ ∈ S(R^n),

⟨ũ(t, ·), φ⟩ = ⟨ũ_0, 1_{supp(ũ_0)} e^{−f(ξ)t} φ⟩.   (27)

Let us set g(ξ) = e^{−f(ξ)t} φ(ξ); one has, for two multi-indices α and β,

|ξ^α ∂_ξ^β g(ξ)| ≤ |ξ^α Q(ξ) e^{−f(ξ)t}|,   (28)

where Q is a sum of polynomials, each multiplied by φ(ξ) or one of its derivatives. Thus ξ^α Q(ξ) belongs to S(R^n) and therefore, from assumption (22),

|ξ^α ∂_ξ^β g(ξ)| 1_{supp(ũ_0)} ≤ max_{ξ∈R^n} |ξ^α Q(ξ)| e^{−Ct},   (29)

which goes to 0 when t → +∞. This implies that ũ(t, ·) → 0 in S′(R^n) when t → +∞, and hence u(t, ·) → 0. This ends the proof of Proposition 3.1.

Let us note that one could try to find solutions with lower regularity, where u is a distribution of D′(R_+ × R^n) and satisfies the equation

∂_t u + Σ_{|α|≤k} a_α ∂_x^α u = δ_{t=0} u_0 on R_+ × R^n.   (30)

This could be done using, for instance, the Malgrange–Ehrenpreis theorem; however, studying the behavior at t → +∞ may be harder mathematically, hence this approach was not considered in this paper." }, { "heading": "C Additional experiments", "text": "" }, { "heading": "C.1 Prediction of speed of convergence with higher precision", "text": "In Section 5.2, λ is predicted with a 10% margin of error. Prediction of λ to better accuracy can be achieved by training models on data rounded to 2, 3 or 4 significant digits, and measuring the number of exact predictions on the test sample. Overall, we predict λ with two significant digits in 59.2% of test cases. Table 8 summarizes the results for different precisions (for transformers with 6 layers and a dimensionality of 512)." }, { "heading": "C.2 Out-of-distribution generalization", "text": "In all our experiments, trained models are tested on held-out samples generated with the same procedure as the training data, and our results prove that the model can generalize beyond the training examples. However, training and test data come from the same statistical distribution (iid). This would not happen in practical cases: problems would come from some unknown distribution over the problem space. Therefore, it is interesting to investigate how the model performs when the test set follows a different statistical distribution. This provides insight about how learned properties generalize, and may indicate specific cases over which the model struggles.

To this purpose, we modified the data generator to produce new test datasets for end to end stability prediction (section 5.1). Four modifications were considered:

1. Unary operators: varying the distribution of operators in the system. In the training data, unary operators are selected at random from a set of nine: three trigonometric functions, three inverse trigonometric functions, logarithm and exponential, and square root (the four basic operations are always present). In this set of experiments, we generated four test sets: without trigonometric functions, without logs and exponentials, only with square roots, and with a different balance of operators (mostly square roots).

2. Variables and integers: varying the distribution of variables in the system. In the training data, 30% of the leaves are numbers, the rest variables. We changed this probability to 0.1, 0.5 and 0.7. This has no impact on expression length, but higher probabilities make the Jacobians more sparse.

3. Expression lengths: making expressions longer than in the train set.
In the training data, for a system of n equations, we generate functions with 3 to 2n + 3 operators. In these experiments, we tried functions with n + 3 to 3n + 3 and 2n + 3 to 4n + 3 operators. This means that the test sequences are, on average, much longer than those seen at training, a known weakness of sequence to sequence models.

4. Larger degree: our models were trained on systems with 2 to 5 equations; we tested them on systems with 6 equations. Again, this usually proves difficult for transformers.

Note that the first two sets of experiments feature out-of-distribution tests, exploring different distributions over the same problem space as the training data. The last two sets, on the other hand, explore a different problem space, featuring longer sequences.

Table 9 presents the results of these experiments. Changing the distribution of operators, variables and integers has little impact on accuracy, up to two limiting cases. First, over systems of degree five (the largest in our set, and the most difficult for the transformers), a change in operator distribution has a small adverse impact on performance (a change in variable distribution does not). Second, when the proportion of integers becomes very large, and Jacobians therefore become very sparse, the degree of the systems has less impact on performance. But overall results remain over 95%, and the model proves to be very robust to changes in distribution over the same problem space.

Over systems with longer expressions, overall accuracy tends to decrease. Yet, systems of two or three equations are not affected by a doubling of the number of operators (and sequence length), compared to the training data. Most of the loss in performance concentrates on larger degrees, which suggests that it results from the fact that the transformer is presented at test time with much longer sequences than what it saw at training. In any case, all results but one are well above the fastText baseline (60.5%).

When tested on systems with six equations, the trained model predicts stability in 78.7% of cases. This is a very interesting result, where the model is extrapolating out of the problem space (i.e. no systems of six equations were seen during training) with an accuracy well above chance level, and above the fastText baseline." }, { "heading": "D Model and problem space", "text": "" }, { "heading": "D.1 Model architecture", "text": "The training loss is the cross entropy between the model's predicted output and the actual result from the dataset. During training, we use the Adam optimizer, with a learning rate of 0.0001 and scheduling (as in Vaswani et al. (2017)). Mini-batch size varies from one problem to the other, typically between 32 and 128 examples.

During training, we use 8 GPUs. The model is distributed across GPUs, so that all of them have access to the same shared copy of the model. At each iteration, every GPU processes an independently generated batch, and the optimizer updates the model weights using the gradients accumulated by all GPUs. Overall, this is equivalent to training on a single GPU, but with 8 times larger batches." }, { "heading": "D.2 Model behavior and attention heads", "text": "We tried to analyze model behavior by looking at the attention heads and the tokens the model focuses on when it predicts a specific sequence.
As each head attends to many more tokens than in usual natural language tasks, and to improve visualization, we tried to reduce the number of hidden states a head can attend to by using a top-k on the attention weights, but this deteriorated the performance, and we did not investigate this direction further. We also ran a sequence-to-sequence model without attention, so that each input equation is mapped to a fixed-size representation. We then fed a set of input equations into the model, and used a t-SNE visualization to see whether we could observe clusters of equations. What we observed is mainly that equations with nearby representations have similar lengths and tokens. However, even embeddings in similar locations can lead to different decoded sequences. The relevance of the representations built in the encoder depends on how the computation is split between the encoder and the decoder. If the decoder does the majority of the work, encoder representations become less meaningful." }, { "heading": "D.3 Learning curves", "text": "Although all generated datasets included more than 50 million examples, most models were trained on fewer. Figure 1 shows how performance increases with the number of training examples, for the end to end stability problem (i.e. predicting whether systems of degree 2 to 5 are stable). There are twelve curves corresponding to as many experiments over shuffled versions of the dataset (i.e. different experiments used different parts of the dataset).

Overall, less than 10 million examples are needed to achieve close to optimal accuracy. Learning curves from different experiments are close, which demonstrates the stability of the learning process." }, { "heading": "D.4 Size of the problem space", "text": "Lample and Charton (2020) provide the following formula to calculate the number of functions with m operators:

E_0 = L
E_1 = (q_1 + q_2 L) L
(m + 1) E_m = (q_1 + 2q_2 L)(2m − 1) E_{m−1} − q_1 (m − 2) E_{m−2},

where L is the number of possible leaves (integers or variables), and q_1 and q_2 are the numbers of unary and binary operators. In the stability and controllability problems, we have q_1 = 9, q_2 = 4 and L = 20 + q, with q the number of variables.

Replacing, we have, for a function with q variables and m operators:

E_0(q) = 20 + q
E_1(q) = (89 + 4q)(20 + q)
(m + 1) E_m(q) = (169 + 8q)(2m − 1) E_{m−1}(q) − 9(m − 2) E_{m−2}(q)

In the stability problem, we sampled systems of n functions, with n variables, n from 2 to 6. Functions have between 3 and 2n + 2 operators. The number of possible systems is

PS_st = Σ_{n=2}^{6} ( Σ_{m=3}^{2n+2} E_m(n) )^n > E_{14}(6)^6 ≈ 3·10^{212}

(since E_m(n) increases exponentially with m and n, the dominant factor in the sum is the term with largest m and n).

In the autonomous controllability problem, we generated systems with n functions (n between 3 and 6), and n + p variables (p between 1 and n/2). Functions had between n + p and 2n + 2p + 2 operators. The number of systems is

PS_aut = Σ_{n=3}^{6} Σ_{p=1}^{n/2} ( Σ_{m=n+p}^{2(n+p+1)} E_m(n + p) )^n > E_{20}(9)^6 ≈ 4·10^{310}

For the non-autonomous case, the number of variables is n + p + 1, n is between 2 and 3 and p = 1, therefore

PS_naut = Σ_{n=2}^{3} ( Σ_{m=n+1}^{2(n+2)} E_m(n + 2) )^n > E_{10}(5)^3 ≈ 5·10^{74}

Because expressions with undefined or degenerate Jacobians are skipped, the actual problem space size will be smaller by several orders of magnitude. Yet, the problem space remains large enough for overfitting by memorizing problems and solutions to be impossible."
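The recurrence is easy to evaluate exactly; the following sketch (our own illustration) reproduces the dominant term of the stability bound above, using exact fractions to avoid rounding in the recurrence.

```python
from fractions import Fraction
from functools import lru_cache

Q1, Q2 = 9, 4                          # unary and binary operators

@lru_cache(maxsize=None)
def E(m, q):
    # E_m(q): number of functions of q variables with m operators.
    L = 20 + q                         # number of possible leaves
    if m == 0:
        return Fraction(L)
    if m == 1:
        return Fraction((Q1 + Q2 * L) * L)
    return ((Q1 + 2 * Q2 * L) * (2 * m - 1) * E(m - 1, q)
            - Q1 * (m - 2) * E(m - 2, q)) / (m + 1)

# Dominant term of PS_st: E_14(6)^6, of the order of 10^212.
print(float(E(14, 6)) ** 6)
```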
}, { "heading": "E Computation efficiency", "text": "" }, { "heading": "E.1 Algorithmic complexity", "text": "Let n be the system degree, p the number of variables and q the average length (in tokens) of functions in the system. In all problems considered here, we have p = O(n). Differentiating or evaluating an expression with q tokens is O(q), and calculating the Jacobian of our system is O(npq), i.e. O(n2q).\nIn the stability experiment, calculating the eigenvalues of the Jacobian will be O(n3) in most practical situations. In the autonomous controllability experiments, construction of the n×np Kalman matrix is O(n3p), and computing its rank, via singular value decomposition or any equivalent algorithm, will be O(n3p) as well. The same complexity arise for feedback matrix computations (multiplication, exponentiation and inversion are all O(n3) for a square n matrix). As a result, for controllability, complexity is O(n4). Overall, the classical algorithms have a complexity of O(n2q) for Jacobian calculation, and O(n3) (stability) and O(n4) (controllability) for the problem specific computations.\nCurrent transformer architectures are quadratic in the length of the sequence, in our case nq, so a transformer will be O(n2q2) (in speed and memory usage). Therefore, the final comparison will depend on how q, the average length of equations, varies with n, the number of parameters. If q = O(1) or O(log(n)), transformers have a large advantage over classical methods. This means sparse Jacobians, a condition often met in practice. For controllability, the advantage remains if q = O(n1/2), and the two methods are asymptotically equivalent if q = O(n).\nHowever, current research is working on improving transformer complexity to log-linear or linear. If this happened (and there seem to be no theoretical reason preventing it), transformers would have lower asymptotic complexity in all cases.\nE.2 Computation time versus evaluation time\nTable 10 compares the average time needed to solve one problem, for a trained transformer running on a GPU, and a Python implementation of the algorithms, running on a MacBook Pro." } ]
2021
Learning advanced mathematical computations from examples
SP:504c896a68c2e0154232d2f2214e8a499d941b60
[ "The paper studies the usage of the representations developed in the last layer of a neural network as a way to measure the similarity between input patterns. The fundamental idea revolves around the concept of orthogonal weight matrices, to decorrelate the activations of the neurons, and which would definitely enrich the internal representations developed by the neurons in the last hidden layer. In the paper, it is suggested to regularize during training to maintain the orthogonality of the weight matrices. Moreover, a variant of Batch Normalization is proposed. The method proposed in the paper is then experimentally applied and evaluated on two benchmark datasets (MNIST and Henan Renmin)." ]
The data representation plays an important role in evaluating similarity between objects. In this paper, we propose a novel approach for implicit data representation to evaluate the similarity of input data using a trained neural network. In contrast to the previous approach, which uses gradients for representation, we utilize only the outputs from the last hidden layer of a neural network and do not use a backward step. The proposed technique explicitly takes into account the initial task and significantly reduces the size of the vector representation, as well as the computation time. Generally, a neural network obtains representations related only to the problem being solved, which makes the last hidden layer representation useless for the input similarity task. In this paper, we consider two reasons for the decline in the quality of representations: correlation between neurons and insufficient size of the last hidden layer. To reduce the correlation between neurons, we use orthogonal weight initialization for each layer and modify the loss function to ensure orthogonality of the weights during training. Moreover, we show that activation functions can potentially increase correlation. To solve this problem, we apply modified Batch-Normalization with Dropout. Using orthogonal weight matrices allows us to consider such neural networks as an application of the Random Projection method and to obtain a lower bound estimate for the size of the last hidden layer. We perform experiments on MNIST and physical examination datasets. In both experiments, we first split a set of labels into two disjoint subsets to train a neural network for a binary classification problem, and then use this model to measure similarity between input data and define hidden classes. We also cluster the inputs to evaluate how well objects from the same hidden class are grouped together. Our experimental results show that the proposed approach achieves competitive results on the input similarity task while reducing both computation time and the size of the input representation.
[]
[ { "authors": [ "Babajide O Ayinde", "Tamer Inanc", "Jacek M Zurada" ], "title": "On correlation of features extracted by deep neural networks", "venue": "In 2019 International Joint Conference on Neural Networks (IJCNN),", "year": 2019 }, { "authors": [ "Jonathan Blanchette", "Robert Laganière" ], "title": "On batch orthogonalization layers", "venue": null, "year": 2018 }, { "authors": [ "Guillaume Charpiat", "Nicolas Girard", "Loris Felardos", "Yuliya Tarabalka" ], "title": "Input similarity from the neural network perspective", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Minhyung Cho", "Jaehyung Lee" ], "title": "Riemannian approach to batch normalization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Michael Cogswell", "Faruk Ahmed", "Ross Girshick", "Larry Zitnick", "Dhruv Batra" ], "title": "Reducing overfitting in deep networks by decorrelating representations", "venue": "arXiv preprint arXiv:1511.06068,", "year": 2015 }, { "authors": [ "Guillaume Desjardins", "Karen Simonyan", "Razvan Pascanu" ], "title": "Natural neural networks. In Advances in neural information processing", "venue": null, "year": 2015 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Carl Doersch", "Abhinav Gupta", "Alexei A Efros" ], "title": "Unsupervised visual representation learning by context prediction", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Alexey Dosovitskiy", "Jost Tobias Springenberg", "Martin Riedmiller", "Thomas Brox" ], "title": "Discriminative unsupervised feature learning with convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2014 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "arXiv preprint arXiv:1803.07728,", "year": 2018 }, { "authors": [ "Kazuaki Hanawa", "Sho Yokoi", "Satoshi Hara", "Kentaro Inui" ], "title": "Evaluation criteria for instance-based explanation", "venue": "arXiv preprint arXiv:2006.04528,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Junlin Hu", "Jiwen Lu", "Yap-Peng Tan" ], "title": "Deep transfer metric learning", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Lei Huang", "Xianglong Liu", "Bo Lang", "Adams Wei Yu", "Yongliang Wang", "Bo Li" ], "title": "Orthogonal weight normalization: Solution to optimization over multiple dependent stiefel manifolds in deep neural networks", "venue": "arXiv preprint arXiv:1709.06079,", "year": 2017 }, { "authors": [ "Lei Huang", "Dawei Yang", "Bo Lang", "Jia Deng" ], "title": "Decorrelated batch normalization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Kui Jia", "Shuai Li", "Yuxin Wen", "Tongliang Liu", "Dacheng Tao" ], "title": "Orthogonal deep neural networks. 
IEEE transactions on pattern analysis and machine intelligence, 2019", "venue": null, "year": 2019 }, { "authors": [ "Yann A LeCun", "Léon Bottou", "Genevieve B Orr", "Klaus-Robert Müller" ], "title": "Efficient backprop", "venue": "In Neural networks: Tricks of the trade,", "year": 2012 }, { "authors": [ "Yanghao Li", "Naiyan Wang", "Jianping Shi", "Jiaying Liu", "Xiaodi Hou" ], "title": "Revisiting batch normalization for practical domain adaptation", "venue": "arXiv preprint arXiv:1603.04779,", "year": 2016 }, { "authors": [ "Jirı Matoušek" ], "title": "Lecture notes on metric embeddings", "venue": "Technical report, Technical report, ETH Zürich,", "year": 2013 }, { "authors": [ "Andrew Maxwell", "Runzhi Li", "Bei Yang", "Heng Weng", "Aihua Ou", "Huixiao Hong", "Zhaoxian Zhou", "Ping Gong", "Chaoyang Zhang" ], "title": "Deep learning architectures for multi-label classification of intelligent health risk prediction", "venue": "BMC bioinformatics,", "year": 2017 }, { "authors": [ "Mehdi Noroozi", "Paolo Favaro" ], "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Mehdi Noroozi", "Hamed Pirsiavash", "Paolo Favaro" ], "title": "Representation learning by learning to count", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Matthew E Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "arXiv preprint arXiv:1802.05365,", "year": 2018 }, { "authors": [ "Andrew Rosenberg", "Julia Hirschberg" ], "title": "V-measure: A conditional entropy-based external cluster evaluation measure. In Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL)", "venue": null, "year": 2007 }, { "authors": [ "Sebastian Ruder" ], "title": "An overview of multi-task learning in deep neural networks", "venue": "arXiv preprint arXiv:1706.05098,", "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. 
The Journal of Machine Learning Research", "venue": null, "year": 2014 }, { "authors": [ "Mei Wang", "Weihong Deng" ], "title": "Deep visual domain adaptation: A survey", "venue": null, "year": 2018 }, { "authors": [ "Di Xie", "Jiang Xiong", "Shiliang Pu" ], "title": "All you need is beyond a good init: Exploring better solution for training extremely deep convolutional neural networks with orthonormality and modulation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Hongliang Yan", "Yukang Ding", "Peihua Li", "Qilong Wang", "Yong Xu", "Wangmeng Zuo" ], "title": "Mind the class weight bias: Weighted maximum mean discrepancy for unsupervised domain adaptation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Werner Zellinger", "Thomas Grubinger", "Edwin Lughofer", "Thomas Natschläger", "Susanne Saminger-Platz" ], "title": "Central moment discrepancy (cmd) for domain-invariant representation learning", "venue": "arXiv preprint arXiv:1702.08811,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Evaluating object similarity is an important area in machine learning literature. It is used in various applications such as search query matching, image similarity search, recommender systems, clustering, classification. In practice, the quality of similarity evaluation methods depends on the data representation.\nFor a long time neural networks show successful results in many tasks and one such task is obtaining good representations. Many of these methods can be considered in terms of domain and task. The first case is when we have only unlabeled dataset. Then we can use autoencoders (Bank et al., 2020) or self-supervised approaches (Chen et al., 2020; Devlin et al., 2018; Doersch et al., 2015; Dosovitskiy et al., 2014; Gidaris et al., 2018; Noroozi & Favaro, 2016; Noroozi et al., 2017; Oord et al., 2018; Peters et al., 2018), which require formulation of a pretext task, which in most cases depends on the data type. These methods can be called explicit because they directly solve the problem of representation learning. Moreover, these models can be used for transfer knowledge when we have labeled data only in the target domain. The second case is where we have labeled data in the source and target domains. Then we can apply a multi-task learning approach (Ruder, 2017) or fine-tune the models (Simonyan & Zisserman, 2014; He et al., 2016) trained on a large dataset like ImageNet. Finally, there is the domain adaptation approach (Wang & Deng, 2018) where we have a single task but different source and target domains with labeled data in the target\ndomain (Hu et al., 2015) or with unlabeled data (Li et al., 2016; Yan et al., 2017; Zellinger et al., 2017).\nIn our study the target task is to measure similarity between objects and to define hidden classes based on it. We are interested in studying the issue of implicit learning of representations. Can the neural networks store information about subcategories if we don’t explicitly train them to do this? More formally, we have the same source and target domains but different tasks and we don’t have labeled data in the target domain. That makes our case different from the cases of transfer learning.\nA solution to this problem could be useful in many practical cases. For example, we could train a model to classify whether messages are spam or not and then group spam campaigns or kind of attacks (phishing, spoofing, etc.) based on similarity measuring by trained neural network. Similar cases could be in the medicine (classifying patients into healthy/sick and grouping them by the disease) or in financial (credit scoring) area. The benefits are that we do not depend on the data type and, more importantly, we use only one model for different tasks without fine-tuning, which significantly reduces time for developing and supporting of several models.\nSimilar study was done in (Hanawa et al., 2020), where authors proposed evaluation criteria for instance-based explanation of decisions made by neural network and tested several metrics for measuring input similarity. In particular, they proposed the Identical subclass test which checks whether two objects considered similar are from the same subclass. According to the results of their experiments, the most qualitative approach is the approach presented in (Charpiat et al., 2019), which proposed to measure similarity between objects using the gradients of a neural network. 
In experiments, the authors applied their approach to the analysis of the self-denoising phenomenon. Although this method has theoretical guarantees and does not require modifying the model, in practice, especially in real-time tasks, using gradients tends to increase the computation time and the size of the vector representation. This approach will be described in more detail in Section 2. To avoid these problems, we propose a method that only uses outputs from the last hidden layer of a neural network and does not use a backward step to vectorize the input. In our research, we found that correlation of neurons and insufficient width of the last hidden layer influence the quality of representations obtained in an implicit way. To solve these issues, we propose several modifications. First, we show that the weight matrix should be orthogonal. Second, we modify Batch-Normalization (Ioffe & Szegedy, 2015) to obtain the necessary mathematical properties, and use it with dropout (Srivastava et al., 2014) to reduce the correlation caused by nonlinear activation functions. Using orthogonal weight matrices allows us to consider the neural network in terms of the Random Projection method and evaluate a lower bound on the width of the last hidden layer. Finally, in Section 4 we perform experiments on the MNIST dataset and a physical examination dataset (Maxwell et al., 2017). We use these datasets to show that our approach can be applied to any type of data and combined with different architectures of neural networks. In both experiments, we split a set of labels into two disjoint subsets to train a neural network for a binary classification problem, and then use this model to measure the similarity between input data and define hidden classes. We also cluster the inputs to evaluate how well objects from the same class are grouped together. Our experimental results show that the proposed approach achieves competitive results on the input similarity task while reducing both computation time and the size of the input representation." }, { "heading": "2 RELATED WORKS", "text": "Using a trained neural network to measure similarity of inputs is a new research topic. In (Charpiat et al., 2019) the authors introduce the notion of object similarity from the neural network perspective. The main idea is as follows: how much would a parameter variation that changed the output for x impact the output for x′? In principle, if the objects x and x′ are similar, then changing the parameters should affect the outputs in a similar way. The following is a formal description for the one- and multi-dimensional cases of the output value of a neural network.

One-dimensional case. Let f_θ(x) ∈ R be a parametric function, in particular a neural network, x, x′ input objects, θ ∈ R^{n_θ} the model parameters, and n_θ the number of parameters. The authors proposed the following metric:

ρ_θ(x, x′) = ⟨∇_θ f_θ(x′), ∇_θ f_θ(x)⟩ / (‖∇_θ f_θ(x′)‖ ‖∇_θ f_θ(x)‖)   (1)

In this way, the object similarity is defined as the cosine similarity between the gradients computed at these points.
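For the one-dimensional case, equation (1) is straightforward to compute with automatic differentiation. The sketch below is our own illustration for a PyTorch model that maps a single input (with batch dimension) to a scalar output; all names are ours.

```python
import torch

def gradient_similarity(model, x, x_prime):
    # Cosine similarity between parameter gradients at x and x', equation (1).
    def flat_grad(inp):
        out = model(inp).squeeze()           # scalar output f_theta(inp)
        grads = torch.autograd.grad(out, list(model.parameters()))
        return torch.cat([g.reshape(-1) for g in grads])
    g, g_prime = flat_grad(x), flat_grad(x_prime)
    return torch.dot(g, g_prime) / (g.norm() * g_prime.norm())
```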
Multi-dimensional case. Let f_θ(x) ∈ R^d, d > 1. In this case, the authors obtained the following metric:

ρ_{θ,d}(x, x′) = (1/d) Tr(K_θ(x, x′)),   (2)

where K_θ(x, x′) = K_{x′,x′}^{−1/2} K_{x,x′} K_{x,x}^{−1/2}, and K_{x′,x} = ∂f_θ/∂θ |_{x′} (∂f_θ/∂θ |_x)^T is calculated using the Jacobian matrix ∂f_θ/∂θ |_x.

Summary. Unlike using third-party models for vectorization, such as VGG, this approach allows us to use any pre-trained neural network to calculate the similarity of input data. To achieve this, the authors use the gradients of the neural network, as illustrated in equations 1 and 2. This gradient-based solution takes into account all activations and does not require selecting a hidden layer for vectorization. However, it has a number of drawbacks. First of all, fast computation of gradients requires additional computational resources. Second, the size of the object representation is n_θ for a one-dimensional output and n_θ · d in the multi-dimensional case. This means that increasing model complexity increases representation size and, as a result, can lead to large memory consumption. The motivation of this research is to develop an approach that reduces the size of the representation and requires only the forward pass of a trained neural network. In the next section, we will discuss the proposed approach in detail." }, { "heading": "3 PROPOSED METHOD", "text": "In this paper, we suggest using outputs from the last hidden layer of a trained neural network to encode inputs. This representation has several advantages. First, the output of the last layer is usually low-dimensional, which allows us to get a smaller vector dimension. Second, and more importantly, the last hidden layer takes the semantics of the original problem into account to the greatest extent. For example, for a classification problem, data is theoretically linearly separable on the last hidden layer. Informally, this allows us to measure similarities within each class, which reduces false cases in terms of the original problem. This is extremely important for clustering spam campaigns, because this representation reduces the probability of grouping spam and legitimate messages together. However, as already mentioned in Section 1, a neural network that is trained to solve a simple problem does not retain the quality of representations needed to solve a more complex one. Due to this fact, it is impossible to apply this representation as is. We have identified the main causes of this: a strong correlation of neurons and an insufficient size of the last hidden layer. Our goal is to avoid these in order to make the proposed vectorization reasonable. In practice, last hidden layers are usually fully connected. For this reason, we consider only this type of last layer. In 3.1, we show how to reduce correlation between neurons, and in 3.2 we offer an estimate of the lower bound of the size of the last hidden layer and prove that the proposed representation can be used for the input similarity problem. We introduce the following notation:

1. l ∈ {0, 1, . . . , L} - layer number, where l = 0 is the input layer (source space), l = L the output layer, and the others hidden layers.

2. N_l - number of units in layer l.

3. h_l ∈ R^{N_l} - pre-activation vector.

4. Φ(h_l) = (φ(h_{l,1}), . . . , φ(h_{l,N_l})) - activation vector.

5. W_l ∈ R^{N_{l−1} × N_l}, b_l ∈ R^{N_l} - weight matrix and bias vector." }, { "heading": "3.1 NEURON CORRELATION", "text": "The benefits of decorrelated representations have been studied in (LeCun et al., 2012) from an optimization viewpoint and in (Cogswell et al., 2015) for reducing overfitting.
" }, { "heading": "3.1 NEURON CORRELATION", "text": "The benefits of decorrelated representations have been studied in (LeCun et al., 2012) from an optimization viewpoint and in (Cogswell et al., 2015) for reducing overfitting. We consider decorrelated representations from an information perspective. Basically, correlation of neurons means that the neurons provide similar information. Therefore, we gain less information from observing two neurons at once. This phenomenon may occur because the neural network does not retain more information than is necessary to solve a particular task. Thus, only important features are highlighted. The example in Fig. 1A illustrates that the output values of two neurons are linearly dependent, which entails that many objects in this space are indistinguishable. On the contrary (Fig. 1B), decorrelation of neurons provides more information and, as a result, the ability to distinguish most objects.

In the following paragraphs, we identify the main causes of the correlation of neurons and suggest ways to prevent it. We consider the correlations before and after activation of neurons separately.

Reducing correlation of neurons before activation. Statement 1 explains the main reason for the correlation that occurs before activation.

Statement 1. Suppose that for some layer $l$ the following conditions are satisfied:

1. $E[\Phi(h_l)] = 0$

2. $E[\Phi(h_l)^T \Phi(h_l)] = \sigma_l^2 I$ — the covariance matrix, since we required $E[\Phi(h_l)] = 0$.

Then, in order for correlation not to occur at layer $l + 1$, it is necessary that the weight matrix $W_{l+1}$ be orthogonal, that is, satisfy the condition $W_{l+1}^T W_{l+1} = I$.

Statement 1 shows that the first cause of correlation on a fully connected layer is a non-orthogonal weight matrix, provided the input neurons are uncorrelated. See Appendix A.1 for the proof of Statement 1. During training, the network does not try to maintain this property while solving the problem at hand. Later in this paper, we will add regularization to penalize the loss function if the weights are not orthogonal.

A corollary follows from Statement 1, which gives the second condition for preventing correlation. This corollary states that the dimension of layer $l + 1$ should be no greater than that of layer $l$. Otherwise, this leads to correlation and does not increase information. See Appendix A.1 for the proof of this corollary.

Corollary 1. Suppose that the conditions of Statement 1 are satisfied. If the dimension $N_{l+1} > N_l$, then there is certainly a pair of neurons $h_{l+1,i}, h_{l+1,j}$, $i \neq j$, such that $\mathrm{Cov}(h_{l+1,i}, h_{l+1,j}) \neq 0$.

It should be noted that there are also studies addressing orthogonal weight matrices (Huang et al., 2017; Jia et al., 2019; Xie et al., 2017). However, all of these works consider this topic from an optimization perspective. In particular, (Cho & Lee, 2017) proposed an approach for optimizing the loss on the Stiefel manifold $V_p(\mathbb{R}^n) = \{W \in \mathbb{R}^{n\times p} \mid W^T W = I\}$ to ensure orthonormality of the weight matrices throughout training. To achieve this, they applied the orthogonality regularization (3), which requires the Gram matrix of the weight matrix to be close to the identity matrix. In our study we also use regularization (3) to ensure orthonormality of the weight matrices:

$$\frac{1}{2} \sum_{l=1}^{L} \lambda_l \left\| W_l^T W_l - I \right\|_F^2 \qquad (3)$$

Providing the necessary moments of neurons after activation. In Statement 1, we relied on the zero expected value and equal variance of the units in the layer. But the nonlinear activation function does not guarantee the preservation of these properties. Due to this fact, we cannot reason in the same way for the following layers.
Therefore, we propose using an activation normalization approach similar to Batch-Normalization:

$$\hat\varphi(h_{l,i}) = \gamma_l\, \frac{\varphi(h_{l,i}) - \mu_{\varphi(h_{l,i})}}{\sqrt{\sigma^2_{\varphi(h_{l,i})} + \epsilon}}, \qquad (4)$$

where $\gamma_l$ is a trainable scale parameter and $\mu_{\varphi(h_{l,i})}, \sigma^2_{\varphi(h_{l,i})}$ are parameters that are estimated as in Ioffe & Szegedy (2015). The difference compared to standard Batch-Normalization is that $\gamma_l$ is the same for all neurons and we removed the $\beta_{l,i}$ parameters. This leads to an expected value of zero and the same variance $\gamma_l^2$ for each unit in the layer.

Reducing correlation of neurons after activation. It should be noted that an activation function can also impact the formation of redundant features (Ayinde et al., 2019). In particular, in this work we use $\tanh(x) \in (-1, 1)$ as the activation function. There are several methods that prevent the formation of redundant features. (Cogswell et al., 2015) proposed the DeCov loss, which penalizes the off-diagonal elements of the estimated covariance matrix of the hidden representation. (Desjardins et al., 2015; Blanchette & Laganière, 2018; Huang et al., 2018) proposed approaches for learning decorrelation layers that perform the following transformation: $\tilde\Phi(h_l) = (\Phi(h_l) - \mu_{\Phi(h_l)})\, \Sigma_{\Phi(h_l)}^{-1/2}$. All of these methods have a common drawback: they require estimating covariance matrices. Often in practice the mini-batch size is much smaller than what is needed for estimating covariance matrices. Therefore, the covariance matrix is often singular. Moreover, methods that use a decorrelation layer are computationally expensive when it comes to high-dimensional embeddings, since it is necessary to calculate the square root of the inverse covariance matrix. This is especially evident in wide neural networks. Besides, these techniques add a significant number of parameters, $\sum_{l=1}^{L-1} N_l^2$.

As an alternative, we suggest using Dropout (Srivastava et al., 2014), which prevents units from co-adapting too much and, during the training stage, reduces the correlation between neurons in the layer in proportion to $p$, the probability of retaining a unit in the network. See Appendix A.2 for the proof of this statement.

It is important to note that we apply normalization to the input data (input layer $l = 0$) as well as Dropout, since it is not always possible to decorrelate the data in a way that does not affect the quality of training. Moreover, fully-connected layers are often used in more complex architectures, such as convolutional neural networks. Obviously, after convolution operations, the transformed data will be correlated. In this case, we must apply dropout after the convolutional block in order to reduce the correlation of neurons and use the proposed approach for vectorization." }, { "heading": "3.2 NEURAL NETWORK FROM THE RANDOM PROJECTION PERSPECTIVE", "text": "In the previous section, we proposed techniques for minimizing the correlation between neurons. However, it is not guaranteed that the obtained representations are useful or sufficient for measuring the similarity of objects. In order to prove this, we consider a neural network as an application of the Random Projection method, which allows us to estimate a lower bound on the representation size and define a metric to measure similarity.

The Random Projection method (Matoušek, 2013) is a dimensionality reduction method based on the Johnson-Lindenstrauss lemma (Matoušek, 2013):

Lemma 1. Let $\varepsilon \in (0, 1)$, let $X = \{x_1, \dots, x_n\}$ be a set of $n$ points in $\mathbb{R}^d$, and let $k \ge C \frac{\log n}{\varepsilon^2}$, where $C > 0$ is a large enough constant.
Then there exists a linear map $f : \mathbb{R}^d \to \mathbb{R}^k$ such that for all $x, x' \in X$:

$$(1-\varepsilon)\,\|x - x'\| \le \|f(x) - f(x')\| \le (1+\varepsilon)\,\|x - x'\| \qquad (5)$$

As can be seen, Lemma 1 states only the existence of a linear map. However, there is also a probabilistic formulation of Lemma 1, which states that if we take a sufficiently large $k$ and consider a random orthogonal projection onto the space $\mathbb{R}^k$, then the inequality in equation 5 holds with high probability. This is the cornerstone of this work, allowing us to obtain Neural Random Projection, which is explained below.

In this work, we use the tanh activation function and assume that we are working in the linear region of the activation function. Due to this, we can make the following approximation:

$$h_{L-1}(x) \approx x \tilde W_1 \tilde W_2 \cdots W_{L-1} + \tilde b = x\, \hat\gamma\, \hat W + \hat b \qquad (6)$$

where $\tilde W_l$ and $\tilde b$ mean that the consistent use of scales and shifts was taken into account, respectively. Writing just $W_{L-1}$ means that we consider the output before activation and hence before applying equation 4, and $\hat\gamma$ is the common multiplier after all modified batch-normalizations. It is obvious that the final matrix $\hat W$ is still orthonormal.

According to the approximation in equation 6, the pre-activation outputs of the last layer are an orthogonal projection of the input data. Moreover, the process of random orthogonal initialization and optimization on the Stiefel manifold can be seen as random sampling of an orthogonal matrix. For these reasons, we consider the neural network from the point of view of the Random Projection method. Due to this fact, we use the L2 similarity metric and get a lower-bound estimate for the size of the last hidden layer ($k \ge C \frac{\log n}{\varepsilon^2}$), although in practice this estimate is often too conservative, since Lemma 1 does not take the data structure into account. Therefore a small $C$ and an $\varepsilon$ closer to one can be considered, as will be shown in the experiments section." }, { "heading": "4 EXPERIMENTS", "text": "In this section we present experiments on two datasets with different data types. Both experiments have the same structure. Initially, we group a set of labels into two disjoint subsets to train a neural network for a binary classification problem, and then use this model to measure the similarity between input data and define hidden classes. To evaluate the quality of defining hidden classes, and hence the quality of the similarity measure, we use a kNN classifier. To evaluate how well objects from the same hidden class are grouped together, we use the KMeans approach and the v-measure (Rosenberg & Hirschberg, 2007). We use the source data representation as a baseline. To compare our approach, we consider 3 models (A, B, C) that have the same architectures but different regularizations in the fully-connected layers. We also compare the representations from the last hidden layer with the previous gradient-based approach (Charpiat et al., 2019) using Model A, since the authors did not impose additional requirements on the neural network." }, { "heading": "4.1 MNIST", "text": "We performed experiments on the MNIST dataset, which consists of 70,000 images of size 28x28 with a 6-1 training-testing split. This dataset was chosen to show that our approach works with images and can be plugged in after convolution blocks. See Appendix B.1 for a more detailed description of the experiments.
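Before presenting the results, the following is a minimal numpy sketch of the three Section 3.1 ingredients — the orthogonality penalty (3) for one layer, the modified Batch-Normalization (4), and Dropout in the form $z_l = r_l \cdot \hat\Phi(h_l)$ used in Appendix A.2. Shapes and constants are illustrative only.

```python
import numpy as np

def orthogonality_penalty(W, lam):
    # Regularization (3) for a single layer: 0.5 * lam * ||W^T W - I||_F^2.
    gram = W.T @ W
    return 0.5 * lam * np.sum((gram - np.eye(W.shape[1])) ** 2)

def modified_batchnorm(phi, gamma, eps=1e-5):
    # Equation (4): per-unit zero mean, one shared trainable scale gamma,
    # and no per-unit beta shift. phi has shape (batch, units).
    mu = phi.mean(axis=0)
    var = phi.var(axis=0)
    return gamma * (phi - mu) / np.sqrt(var + eps)

def dropout(z, p, rng):
    # z_l = r_l * Phi_hat(h_l) with r ~ Bernoulli(p), p = keep probability,
    # exactly as in Appendix A.2 (no 1/p rescaling, unlike common practice).
    return z * (rng.random(z.shape) < p)

rng = np.random.default_rng(0)
W = np.linalg.qr(rng.normal(size=(64, 30)))[0]   # an orthonormal 64 x 30 matrix
print(orthogonality_penalty(W, lam=0.1))          # ~0 when W^T W = I
h = rng.normal(size=(128, 64)) @ W                # pre-activations of the layer
z = modified_batchnorm(np.tanh(h), gamma=1.0)
print(z.mean(axis=0)[:3], z.var(axis=0)[:3])      # each unit: mean ~0, var ~gamma^2
print(dropout(z, p=0.8, rng=rng)[:2, :4])
```

In training, the penalty is summed over the fully-connected layers and added to the task loss, while the normalization and dropout are applied after each tanh activation, matching the Model C description in Appendix B.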
Results. As illustrated in Table 1, we achieved results comparable (kNN digit accuracy) to the previous approach and much better results from the v-measure perspective. We also obtained a much smaller dimension of the representation (30 vs 14.9k); in addition, it is smaller than the original dimension (30 vs 784). This allowed us to drastically reduce the time for vectorization, using only the forward pass, as well as the time to measure the similarity of objects (kNN prediction time) and to search for the closest object. It should be noted that only our model (Model C) achieves quality comparable with the previous approach. Moreover, we could improve the evaluation of similarity in comparison to the source representation.

Although classic Batch-Normalization (Model B) gives a good increase in the quality of digit recognition compared to Model A, it still fails to achieve sufficient results on the similarity task. This is well illustrated by the visualization of the representation of the last hidden layer (Fig. 2). As can be seen, Model A can only take into account the semantics of the original problem. In Model B, for the most part, the digits are locally distinguishable, but some classes are strongly mixed, which is confirmed by the presence of high correlation (Fig. 3). Our model (Model C) can significantly reduce the correlation and gives an explicit separation with a large margin between the classes. In addition, the proposed modifications do not impair the quality of the model, as seen in Table 1 (Model accuracy).

As can be seen from Table 2, generally, there is a negative statistical relationship between the target metrics and neuron correlation." }, { "heading": "4.2 HENAN RENMIN HOSPITAL DATA", "text": "Next, we used the physical examination dataset (Maxwell et al., 2017), which contains 110,300 anonymous medical examination records. We retained only the four most frequent classes (Normal, Fatty Liver, Hypertension, and Hypertension & Fatty Liver), because this dataset is highly imbalanced. As a result, we got 100,140 records. After that we split the four classes into two corresponding groups, "Sick" (Fatty Liver, Hypertension, and Hypertension & Fatty Liver) and "Healthy" (Normal). We then divided the dataset into train and test subsets in the proportion 4-1.

It should be noted that this task is more difficult than the previous experiment because the hidden classes are semantically related, which makes it easier for the neural network to mix these classes into one.

This dataset was chosen to show that our approach works well with categorical features. We also show that we can make the first layer wider than the source dimension and then plug in our approach. As mentioned above, the original classes are imbalanced, so instead of the accuracy metric we used the F1-micro score to evaluate the quality of the similarity measurement (kNN F1-micro). See Appendix B.2 for a more detailed description of the experiments.

Results. As illustrated in Table 3, we achieved results comparable (kNN disease F1-micro score) with the previous approach. As in the previous experiments, we obtained a much smaller dimension of the representation (31 vs 29k) and drastically reduced the vectorization time and the time to measure the similarity of objects (kNN prediction time). It should be noted that we noticeably improved the quality of the similarity measurement in comparison to the source space (61.1 vs 79.6), as can also be seen in Fig. 4. We also reduced the correlation in comparison to Model A and Model B; however, it did not help us obtain higher-quality representations.
This result can be explained by the fact that the hidden classes are strongly semantically related, in particular within the "Sick" class. Therefore, it is easier for the neural network to mix these hidden classes into one. Probably, the data have a more complicated structure in the source space (see Table 3 for h0) and it is not enough to only reduce the correlation. See Table 4 for detailed results.

As can be seen from Table 4, in this case the statistical relationship between neuron correlation and the target metrics is not as strong, especially for small dimensions of the last hidden layer. This may be due to the fact that if the layer size is insufficient, there are not enough dimensions to describe the variability of the data, even if the data is not highly correlated." }, { "heading": "5 CONCLUSION", "text": "In this work, we studied an approach for obtaining implicit data representations. To this end, we introduced the Neural Random Projection method, which includes regularization of the neural network in the form of optimization over orthogonal matrices, a modification of Batch-Normalization, and its combination with Dropout. This allowed us to obtain representations applicable to the input similarity measure using a trained neural network. We experimentally compared our approach to the previous one and showed that it significantly reduces both computation time and the size of the input representation. Finally, our approach allowed us to introduce the L2 metric on the representations and improve the quality of the similarity measurement in comparison with the source space. As can be seen from the experiments, the correlation and the size of the last hidden layer are not yet all the factors affecting the final implicit representation. This remains an open question, and in the future we are planning to research it further. We expect that our study will be useful for many applied problems where it is initially very difficult to determine a method for measuring data similarity in its original form." }, { "heading": "A THEORETICAL PART", "text": "" }, { "heading": "A.1 REDUCING CORRELATION OF NEURONS BEFORE ACTIVATION", "text": "Statement 1. Suppose that for some layer $l$ the following conditions are satisfied:

1. $E[\Phi(h_l)] = 0$

2. $E[\Phi(h_l)^T \Phi(h_l)] = \sigma_l^2 I$ — the covariance matrix, since we required $E[\Phi(h_l)] = 0$.

Then, in order for correlation not to occur at layer $l + 1$, it is necessary that the weight matrix $W_{l+1}$ be orthogonal, that is, satisfy the condition $W_{l+1}^T W_{l+1} = I$.

Proof. To prove this, we consider the expected value and the covariance matrix of the vector $h_{l+1}$, using the relation $h_{l+1} = \Phi(h_l) W_{l+1} + b_{l+1}$, and let the orthogonality condition be satisfied:

1. Expected value:

$$E[h_{l+1}] = E[\Phi(h_l) W_{l+1} + b_{l+1}] = E[\Phi(h_l)] W_{l+1} + E[b_{l+1}] = b_{l+1}$$

2. Covariance matrix:

$$E[(h_{l+1} - E[h_{l+1}])^T (h_{l+1} - E[h_{l+1}])] = E[W_{l+1}^T \Phi(h_l)^T \Phi(h_l) W_{l+1}] = W_{l+1}^T E[\Phi(h_l)^T \Phi(h_l)] W_{l+1} = \sigma_l^2\, W_{l+1}^T W_{l+1} = \sigma_l^2 I$$

Corollary 1. Suppose that the conditions of Statement 1 are satisfied. If the dimension $N_{l+1} > N_l$, then there is certainly a pair of neurons $h_{l+1,i}, h_{l+1,j}$, $i \neq j$, such that $\mathrm{Cov}(h_{l+1,i}, h_{l+1,j}) \neq 0$.

Proof. Suppose, towards a contradiction, that $N_{l+1} > N_l$ and for all $i, j \in \{1, \dots, N_{l+1}\}$, $i \neq j$ implies $\mathrm{Cov}(h_{l+1,i}, h_{l+1,j}) = 0$. By Statement 1, this is possible only if $W_{l+1}$ is an orthogonal matrix, which means that its columns form a linearly independent system of $N_{l+1}$ vectors of dimension $N_l$.
It would follow that $\mathrm{rank}\, W_{l+1} = N_{l+1}$; but $\mathrm{rank}\, W_{l+1} \le \min(N_l, N_{l+1}) = N_l < N_{l+1}$, which is impossible. Therefore the matrix $W_{l+1}$ cannot be orthogonal, and we have a contradiction." }, { "heading": "A.2 REDUCING CORRELATION OF NEURONS AFTER ACTIVATION", "text": "For further discussion, we describe the Dropout method in our notation: $z_l = r_l \cdot \hat\Phi(h_l)$, where $r_l = (r_{l,1}, \dots, r_{l,N_l})$, $r_{l,i} \sim \mathrm{Bernoulli}(p)$, the $\cdot$ sign means element-wise multiplication, $p$ is the probability of retaining a unit in the network, and $\hat\Phi$ denotes the activation after modified Batch-Normalization. The following statement explains how Dropout reduces correlation.

Statement 2. Dropout reduces the correlation in proportion to $p$.

Proof. Let $C_{ij}$, $i \neq j$, be the correlation value between neurons $i$ and $j$. Given the properties of $\hat\varphi(h_{l,i})$, we obtain the following expression for the correlation: $C_{ij} = E[\hat\varphi(h_{l,i})\, \hat\varphi(h_{l,j})] / \gamma^2$. Now consider the correlation after applying Dropout:

$$\hat C_{ij} = \frac{E[z_{l,i}\, z_{l,j}]}{\sqrt{E[z_{l,i}^2]\, E[z_{l,j}^2]}} = \frac{E[r_{l,i}\hat\varphi(h_{l,i})\, r_{l,j}\hat\varphi(h_{l,j})]}{\sqrt{E[r_{l,i}^2\hat\varphi^2(h_{l,i})]\, E[r_{l,j}^2\hat\varphi^2(h_{l,j})]}} = \frac{E[r_{l,i}]\, E[r_{l,j}]\, E[\hat\varphi(h_{l,i})\,\hat\varphi(h_{l,j})]}{\sqrt{E[r_{l,i}^2]\, E[r_{l,j}^2]\, E[\hat\varphi^2(h_{l,i})]\, E[\hat\varphi^2(h_{l,j})]}} = \frac{p^2\, E[\hat\varphi(h_{l,i})\,\hat\varphi(h_{l,j})]}{p\,\gamma^2} = p\, C_{ij}$$

Here we used the fact that $r_{l,i}, r_{l,j}$ are independent Bernoulli random variables, and they are also independent of $\hat\varphi(h_{l,i}), \hat\varphi(h_{l,j})$." }, { "heading": "B DETAILS OF EXPERIMENTS", "text": "Computing infrastructure. All experiments were performed on an Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz, 189GB RAM, and the CentOS Linux 7 (Core) x86-64 operating system. The models were implemented using Python 3.6 and the TensorFlow 2.2.0 library." }, { "heading": "B.1 MNIST", "text": "" }, { "heading": "Experiments description", "text": "1. Models (A, B, C) are trained to solve the binary classification problem (even/odd digit). Number of epochs: 100; optimizer: Adam with learning rate 0.001; batch size: 256; validation split: 0.1; early stopping: min delta 1e-3 and patience 5 epochs.

2. A kNN classifier (k = 9) is trained to solve the digit classification problem, using pre-trained Model A and the metric from equation 1 of the previous approach. We measured the vectorization time on the complete dataset and evaluated digit accuracy, even/odd accuracy, and the prediction time of this kNN model on the test set.

3. Everything is similar to the previous item, except that the representations of the last hidden layer of each model and the L2 metric were used.

4. Like the previous item, except that the source space representation and the L2 metric were used.

We used models with the following base architecture:

Conv2D(16×3×3) → ReLU → MaxPooling2D(2×2) → Conv2D(8×3×3) → ReLU → MaxPooling2D(2×2) → Dense(64) → tanh → Dense(dim $h_{L-1}$) → tanh → Dense(1)

The size of the last hidden layer was chosen corresponding to the lower bound from Section 3.2:

$$\dim h_{L-1} \approx \frac{\log 6\cdot 10^4}{\varepsilon^2}$$

In all models, only l2 regularization with λ = 0.01 was used in the convolution block. The differences between the models in the fully-connected layers are given below.

Model A: simple network; only l2 regularization with λ = 0.01 is used. The variable ε ranges from 0.2 to 1.0 with a step of 0.2.

Model B: orthogonal matrix initialization; the regularization from equation 3 with $\lambda_l \in \{0.0, 0.01, 0.1, 1\}$ and l2 regularization on the bias vector; standard Batch-Normalization is used before activation and Dropout with $p \in \{0.0, 0.1, 0.2, 0.3\}$ after activation.

Model C (proposed model): modified Batch-Normalization is used after activation and Dropout after Batch-Normalization.

Each model with each combination of hyperparameters is trained 20 times; this is needed to obtain 95% confidence intervals for the target metrics (kNN digit accuracy and v-measure). A sketch of the Model C architecture is given below.
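The following is a runnable sketch of the Model C architecture described above, assuming TensorFlow/Keras (the stack named in Appendix B). The OrthReg class implements penalty (3); BatchNormalization(center=False) is only an approximation of the modified normalization (4), which additionally shares a single γ per layer; hyperparameter values are illustrative.

```python
import tensorflow as tf

class OrthReg(tf.keras.regularizers.Regularizer):
    """Penalty (3) for one layer: 0.5 * lam * ||W^T W - I||_F^2."""
    def __init__(self, lam=0.1):
        self.lam = lam
    def __call__(self, w):
        gram = tf.matmul(w, w, transpose_a=True)
        eye = tf.eye(w.shape[-1], dtype=w.dtype)
        return 0.5 * self.lam * tf.reduce_sum(tf.square(gram - eye))

def model_c(last_dim=30, p_drop=0.2):
    # Mirrors the Appendix B.1 base architecture with Model C regularization.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dropout(p_drop),   # decorrelate conv features (Sec. 3.1)
        tf.keras.layers.Dense(64, activation="tanh", kernel_regularizer=OrthReg()),
        # Stand-in for modified BN (4): no beta shift; the paper additionally
        # ties the scale to a single gamma per layer.
        tf.keras.layers.BatchNormalization(center=False),
        tf.keras.layers.Dropout(p_drop),
        tf.keras.layers.Dense(last_dim, activation="tanh",
                              kernel_regularizer=OrthReg()),
        tf.keras.layers.BatchNormalization(center=False),
        tf.keras.layers.Dropout(p_drop),
        tf.keras.layers.Dense(1),          # even/odd logit
    ])

m = model_c()
m.compile(optimizer="adam",
          loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))
m.summary()
```

At inference time, the vectorization of an input is simply the activation of the Dense(last_dim) layer, read off with a single forward pass.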
" }, { "heading": "B.2 HENAN RENMIN HOSPITAL DATA", "text": "" }, { "heading": "Experiments description", "text": "1. Models (A, B, C) are trained to solve the binary classification problem (sick/healthy). Number of epochs: 200; optimizer: Adam with initial learning rate 0.001 and exponential decay factor 0.99 every 500 iterations; batch size: 128; validation split: 0.1; early stopping: min delta 1e-3 and patience 5 epochs.

2. A kNN classifier (k = 9) is trained to classify the type of disease, using pre-trained Model A and the metric from equation 1. We measured the vectorization time on the complete dataset. We evaluated the F1-micro score on the hidden classes (because they are imbalanced), the sick/healthy accuracy, and the prediction time of this kNN model on the test set.

3. Everything is similar to the previous item, except that the representations of the last hidden layer of each model and the L2 metric were used.

4. Like the previous item, except that the source space representation and the L2 metric were used.

We used models with the following base architecture: Dense(128) → tanh → Dense(96) → tanh → Dense(64) → tanh → Dense(32) → tanh → Dense(dim $h_{L-1}$) → Dense(1)

The size of the last hidden layer was chosen corresponding to the lower bound from the previous section:

$$\dim h_{L-1} \approx \frac{\log 8\cdot 10^4}{\varepsilon^2}$$

Each model with each combination of hyperparameters is trained 20 times; this is needed to obtain 95% confidence intervals for the target metrics (kNN F1-micro disease and v-measure).

Model A: simple network; only l2 regularization with λ = 0.001 is used.

Model B: orthogonal matrix initialization; the regularization from equation 3 with $\lambda_l \in \{0.0, 0.01, 0.1, 1\}$, except on the first layer, and l2 regularization on the bias vector; standard Batch-Normalization is used before activation and Dropout with $p \in \{0.0, 0.1, 0.2, 0.3\}$ after activation.

Model C (proposed model): modified Batch-Normalization is used after activation and Dropout after Batch-Normalization." } ]
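As a closing numeric check of the layer sizes used in Appendices B.1 and B.2, the Section 3.2 bound can be evaluated directly; assuming C = 1 and the natural logarithm, ε ≈ 0.6 reproduces the dimensions 30 and 31 used above.

```python
import numpy as np

# Lower bound k >= C * log(n) / eps^2 from Section 3.2, evaluated with C = 1.
for n, name in [(6e4, "MNIST"), (8e4, "physical examination")]:
    for eps in (0.2, 0.4, 0.6, 0.8, 1.0):
        print(f"{name:22s} eps={eps:.1f}  k >= {np.log(n) / eps ** 2:6.1f}")
```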
2020
null
SP:b5366ec9f9cf6d872098a4f610801990b35b29d7
[ "The paper \"Learning to Actively Learn\" proposes a differentiable procedure to design algorithms for adaptive data collection tasks. The framework is based on the idea of making use of a measure of problem convexity (each problem is parametrized by a parameter theta) to solve solving a min-max objective over policies. The rationale behind this objective is that the resulting policy of solving this min-max objective should be robust to the problem complexity. The algorithm then proceeds to sample problem instances and making use of a differentiable objective to find a policy parametrized by a parameter psi, which could be parametrizing a neural network. " ]
This work proposes a procedure for designing algorithms for specific adaptive data collection tasks like active learning and pure-exploration multi-armed bandits. Unlike the design of traditional adaptive algorithms that rely on concentration of measure and careful analysis to justify the correctness and sample complexity of the procedure, our adaptive algorithm is learned via adversarial training over equivalence classes of problems derived from information theoretic lower bounds. In particular, a single adaptive learning algorithm is learned that competes with the best adaptive algorithm learned for each equivalence class. Our procedure takes as input just the available queries, set of hypotheses, loss function, and total query budget. This is in contrast to existing meta-learning work that learns an adaptive algorithm relative to an explicit, user-defined subset or prior distribution over problems which can be challenging to define and be mismatched to the instance encountered at test time. This work is particularly focused on the regime when the total query budget is very small, such as a few dozen, which is much smaller than those budgets typically considered by theoretically derived algorithms. We perform synthetic experiments to justify the stability and effectiveness of the training procedure, and then evaluate the method on tasks derived from real data including a noisy 20 Questions game and a joke recommendation task.
[]
[ { "authors": [ "Alekh Agarwal", "Haipeng Luo", "Behnam Neyshabur", "Robert E Schapire" ], "title": "Corralling a band of bandit algorithms", "venue": "arXiv preprint arXiv:1612.06246,", "year": 2016 }, { "authors": [ "VI VM Aleksandrov" ], "title": "Sysoyev, and SHEMENEV. VV", "venue": "Stochastic optimization. Engineering Cybernetics,", "year": 1968 }, { "authors": [ "Philip Bachman", "Alessandro Sordoni", "Adam Trischler" ], "title": "Learning algorithms for active learning", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Yoram Baram", "Ran El Yaniv", "Kobi Luz" ], "title": "Online choice of active learning algorithms", "venue": "Journal of Machine Learning Research,", "year": 2004 }, { "authors": [ "Craig Boutilier", "Chih-Wei Hsu", "Branislav Kveton", "Martin Mladenov", "Csaba Szepesvari", "Manzil Zaheer" ], "title": "Differentiable bandit exploration", "venue": "arXiv preprint arXiv:2002.06772,", "year": 2020 }, { "authors": [ "Tongyi Cao", "Akshay Krishnamurthy" ], "title": "Disagreement-based combinatorial pure exploration: Efficient algorithms and an analysis with localization", "venue": null, "year": 2017 }, { "authors": [ "Alexandra Carpentier", "Andrea Locatelli" ], "title": "Tight (lower) bounds for the fixed budget best arm identification bandit problem", "venue": "In Conference on Learning Theory,", "year": 2016 }, { "authors": [ "Leonardo Cella", "Alessandro Lazaric", "Massimiliano Pontil" ], "title": "Meta-learning with stochastic linear bandits", "venue": "arXiv preprint arXiv:2005.08531,", "year": 2020 }, { "authors": [ "Lijie Chen", "Anupam Gupta", "Jian Li", "Mingda Qiao", "Ruosong Wang" ], "title": "Nearly optimal sampling algorithms for combinatorial pure exploration", "venue": "In Conference on Learning Theory,", "year": 2017 }, { "authors": [ "Shouyuan Chen", "Tian Lin", "Irwin King", "Michael R Lyu", "Wei Chen" ], "title": "Combinatorial pure exploration of multi-armed bandits", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "David Cohn", "Les Atlas", "Richard Ladner" ], "title": "Improving generalization with active learning", "venue": "Machine learning,", "year": 1994 }, { "authors": [ "Sanjoy Dasgupta" ], "title": "Analysis of a greedy active learning strategy", "venue": "In Advances in neural information processing systems,", "year": 2005 }, { "authors": [ "Sanjoy Dasgupta" ], "title": "Coarse sample complexity bounds for active learning", "venue": "In Advances in neural information processing systems,", "year": 2006 }, { "authors": [ "Sanjoy Dasgupta", "Daniel J Hsu", "Claire Monteleoni" ], "title": "A general agnostic active learning algorithm. 
In Advances in neural information processing", "venue": null, "year": 2008 }, { "authors": [ "Meng Fang", "Yuan Li", "Trevor Cohn" ], "title": "Learning how to active learn: A deep reinforcement learning approach", "venue": "arXiv preprint arXiv:1708.02383,", "year": 2017 }, { "authors": [ "Tanner Fiez", "Lalit Jain", "Kevin G Jamieson", "Lillian Ratliff" ], "title": "Sequential experimental design for transductive linear bandits", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Victor Gabillon", "Alessandro Lazaric", "Mohammad Ghavamzadeh", "Ronald Ortner", "Peter Bartlett" ], "title": "Improved learning complexity in combinatorial pure exploration bandits", "venue": "In Artificial Intelligence and Statistics,", "year": 2016 }, { "authors": [ "Javier Garcıa", "Fernando Fernández" ], "title": "A comprehensive survey on safe reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2015 }, { "authors": [ "Aurélien Garivier", "Emilie Kaufmann" ], "title": "Optimal best arm identification with fixed confidence", "venue": "In Conference on Learning Theory,", "year": 2016 }, { "authors": [ "Ken Goldberg", "Theresa Roeder", "Dhruv Gupta", "Chris Perkins" ], "title": "Eigentaste: A constant time collaborative filtering", "venue": "algorithm. information retrieval,", "year": 2001 }, { "authors": [ "Daniel Golovin", "Andreas Krause" ], "title": "Adaptive submodularity: Theory and applications in active learning and stochastic optimization", "venue": "Journal of Artificial Intelligence Research,", "year": 2011 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Steve Hanneke" ], "title": "A bound on the label complexity of agnostic active learning", "venue": "In Proceedings of the 24th international conference on Machine learning,", "year": 2007 }, { "authors": [ "Steve Hanneke" ], "title": "Teaching dimension and the complexity of active learning", "venue": "In International Conference on Computational Learning Theory,", "year": 2007 }, { "authors": [ "Botao Hao", "Tor Lattimore", "Csaba Szepesvari" ], "title": "Adaptive exploration in linear contextual bandit", "venue": "arXiv preprint arXiv:1910.06996,", "year": 2019 }, { "authors": [ "Wei-Ning Hsu", "Hsuan-Tien Lin" ], "title": "Active learning by learning", "venue": "In Twenty-Ninth AAAI conference on artificial intelligence,", "year": 2015 }, { "authors": [ "Huang Hu", "Xianchao Wu", "Bingfeng Luo", "Chongyang Tao", "Can Xu", "Wei Wu", "Zhan Chen" ], "title": "Playing 20 question game with policy-based reinforcement learning", "venue": "arXiv preprint arXiv:1808.07645,", "year": 2018 }, { "authors": [ "Tzu-Kuo Huang", "Alekh Agarwal", "Daniel J Hsu", "John Langford", "Robert E Schapire" ], "title": "Efficient and parsimonious agnostic active learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Lalit Jain", "Kevin G Jamieson" ], "title": "A new perspective on pool-based active classification and false-discovery control", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Matti Kääriäinen" ], "title": "Active learning in the non-realizable case", "venue": "In International Conference on Algorithmic Learning Theory,", "year": 
2006 }, { "authors": [ "Zohar Karnin", "Tomer Koren", "Oren Somekh" ], "title": "Almost optimal exploration in multi-armed bandits", "venue": "In International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Zohar S Karnin" ], "title": "Verification based solution for structured mab problems", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Ksenia Konyushkova", "Raphael Sznitman", "Pascal Fua" ], "title": "Learning active learning from data", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Branislav Kveton", "Martin Mladenov", "Chih-Wei Hsu", "Manzil Zaheer", "Csaba Szepesvari", "Craig Boutilier" ], "title": "Differentiable meta-learning in contextual bandits", "venue": "arXiv preprint arXiv:2006.05094,", "year": 2020 }, { "authors": [ "Tor Lattimore", "Csaba Szepesvari" ], "title": "The end of optimism? an asymptotic analysis of finite-armed linear bandits", "venue": "arXiv preprint arXiv:1610.04491,", "year": 2016 }, { "authors": [ "Liam Li", "Kevin Jamieson", "Afshin Rostamizadeh", "Ekaterina Gonina", "Moritz Hardt", "Benjamin Recht", "Ameet Talwalkar" ], "title": "Massively parallel hyperparameter tuning", "venue": "arXiv preprint arXiv:1810.05934,", "year": 2018 }, { "authors": [ "Lisha Li", "Kevin Jamieson", "Giulia DeSalvo", "Afshin Rostamizadeh", "Ameet Talwalkar" ], "title": "Hyperband: A novel bandit-based approach to hyperparameter optimization", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Alex Luedtke", "Marco Carone", "Noah Simon", "Oleg Sofrygin" ], "title": "Learning to learn from data: Using deep adversarial learning to construct optimal statistical procedures", "venue": "Science Advances,", "year": 2020 }, { "authors": [ "Shie Mannor", "John N Tsitsiklis" ], "title": "The sample complexity of exploration in the multi-armed bandit problem", "venue": "Journal of Machine Learning Research,", "year": 2004 }, { "authors": [ "Igor Mordatch", "Kendall Lowrey", "Emanuel Todorov" ], "title": "Ensemble-cio: Full-body dynamic motion planning that transfers to physical humanoids", "venue": "In 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2015 }, { "authors": [ "Robert D Nowak" ], "title": "The geometry of generalized binary search", "venue": "IEEE Transactions on Information Theory,", "year": 2011 }, { "authors": [ "Jungseul Ok", "Alexandre Proutiere", "Damianos Tranos" ], "title": "Exploration in structured reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Aravind Rajeswaran", "Sarvjeet Ghotra", "Balaraman Ravindran", "Sergey Levine" ], "title": "Epopt: Learning robust neural network policies using model ensembles", "venue": "arXiv preprint arXiv:1610.01283,", "year": 2016 }, { "authors": [ "Burr Settles" ], "title": "Active learning literature survey", "venue": "Technical report, University of Wisconsin-Madison Department of Computer Sciences,", "year": 2009 }, { "authors": [ "Amr Sharaf", "Hal Daumé III" ], "title": "Meta-learning for contextual bandit exploration", "venue": "arXiv preprint arXiv:1901.08159,", "year": 2019 }, { "authors": [ "Max Simchowitz", "Kevin G Jamieson" ], "title": "Non-asymptotic gap-dependent regret bounds for tabular 
mdps", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Max Simchowitz", "Kevin Jamieson", "Benjamin Recht" ], "title": "The simulator: Understanding adaptive sampling in the moderate-confidence regime", "venue": "arXiv preprint arXiv:1702.05186,", "year": 2017 }, { "authors": [ "Marta Soare", "Alessandro Lazaric", "Rémi Munos" ], "title": "Best-arm identification in linear bandits", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Chao Tao", "Saúl Blanco", "Yuan Zhou" ], "title": "Best arm identification in linear bandits with linear dimension dependency", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Alexandre B Tsybakov" ], "title": "Introduction to nonparametric estimation", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "Bart Van Parys", "Negin Golrezaei" ], "title": "Optimal learning for structured bandits", "venue": null, "year": 2020 }, { "authors": [ "Larry Wasserman" ], "title": "All of statistics: a concise course in statistical inference", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Mark Woodward", "Chelsea Finn" ], "title": "Active one-shot learning", "venue": "arXiv preprint arXiv:1702.06559,", "year": 2017 }, { "authors": [ "Liyuan Xu", "Junya Honda", "Masashi Sugiyama" ], "title": "Fully adaptive algorithm for pure exploration in linear bandits", "venue": "arXiv preprint arXiv:1710.05552,", "year": 2017 }, { "authors": [ "Kaufmann" ], "title": "One can directly extract values of C(θ) from the literature for regret minimization of linear or other", "venue": "Simchowitz et al., 2017; Chen et al.,", "year": 2016 }, { "authors": [ "Hao" ], "title": "2019), and tabular as well as structured MDPs (Simchowitz & Jamieson, 2019", "venue": "Ok et al.,", "year": 2018 }, { "authors": [ "Kveton" ], "title": "θ)] where our policy and states are parameterized the same way as Appendix C for a fair comparison. To optimize for the parameter, we take gradient steps like (8) but with the new sampling and rollout where θ̃i ∼ P̂ . This gradient step follows from both the classical policy gradient algorithm in reinforcement learning", "venue": null, "year": 2020 }, { "authors": [ "Hu" ], "title": "2018) collected a dataset of 1000 celebrities and 500 possible questions to ask about each celebrity. We chose 100 questions out of the 500 by first constructing p̄′, X ′ and Z ′ for the 500 dimensions data, and sampling without replacement 100 of the 500 dimensions from a distribution derived from a static allocation. We down-sampled the number of questions so our training can run with sufficient M and L to de-noise the gradients while being prototyped with a single GPU", "venue": null, "year": 2018 }, { "authors": [ "Hu" ], "title": "Unknown to each celebrity-question pair collected from some population. To better fit the linear bandit scenario, we re-normalize the probability of getting Yes / No, conditioning on the event that these people did not answer Unknown. 
The probability of answering Yes to all 500 questions for each celebrity then constitutes vectors", "venue": null, "year": 2018 }, { "authors": [ "Fiez" ], "title": "1}1000 are binary vectors taking the majority votes. To sub-sample 100 questions from the 500, we could have uniformly at random selected the questions, but many of these questions are not very discriminative. Thus, we chose a “good” set of queries based on the design recommended by ρ", "venue": null, "year": 2019 } ]
[ { "heading": null, "text": "This work proposes a procedure for designing algorithms for specific adaptive data collection tasks like active learning and pure-exploration multi-armed bandits. Unlike the design of traditional adaptive algorithms that rely on concentration of measure and careful analysis to justify the correctness and sample complexity of the procedure, our adaptive algorithm is learned via adversarial training over equivalence classes of problems derived from information theoretic lower bounds. In particular, a single adaptive learning algorithm is learned that competes with the best adaptive algorithm learned for each equivalence class. Our procedure takes as input just the available queries, set of hypotheses, loss function, and total query budget. This is in contrast to existing meta-learning work that learns an adaptive algorithm relative to an explicit, user-defined subset or prior distribution over problems which can be challenging to define and be mismatched to the instance encountered at test time. This work is particularly focused on the regime when the total query budget is very small, such as a few dozen, which is much smaller than those budgets typically considered by theoretically derived algorithms. We perform synthetic experiments to justify the stability and effectiveness of the training procedure, and then evaluate the method on tasks derived from real data including a noisy 20 Questions game and a joke recommendation task." }, { "heading": "1 INTRODUCTION", "text": "Closed-loop learning algorithms use previous observations to inform what measurements to take next in a closed-loop in order to accomplish inference tasks far faster than any fixed measurement plan set in advance. For example, active learning algorithms for binary classification have been proposed that under favorable conditions require exponentially fewer labels than passive, random sampling to identify the optimal classifier (Hanneke et al., 2014). And in the multi-armed bandits literature, adaptive sampling techniques have demonstrated the ability to identify the “best arm” that optimizes some metric with far fewer experiments than a fixed design (Garivier & Kaufmann, 2016; Fiez et al., 2019). Unfortunately, such guarantees often either require simplifying assumptions that limit robustness and applicability, or appeal to concentration inequalities that are very loose unless the number of samples is very large (e.g., web-scale).\nThe aim of this work is a framework that achieves the best of both worlds: algorithms that learn through simulated experience to be as effective as possible with a tiny measurement budget (e.g., 20 queries), while remaining robust due to adversarial training. Our work fits into a recent trend sometimes referred to as learning to actively learn (Konyushkova et al., 2017; Bachman et al., 2017; Fang et al., 2017; Boutilier et al., 2020; Kveton et al., 2020) which tunes existing algorithms or learns entirely new active learning algorithms by policy optimization. Previous works in this area learn a policy by optimizing with respect to data observed through prior experience (e.g., metalearning or transfer learning) or an assumed explicit prior distribution of problem parameters (e.g. the true weight vector for linear regression). In contrast, our approach makes no assumptions about what parameters are likely to be encountered at test time, and therefore produces algorithms that do not suffer from a potential mismatch of priors. 
Instead, our method learns a policy that attempts to mirror the guarantees of frequentist algorithms with instance-dependent sample complexities: if the problem is hard you will suffer a large loss; if it is easy you will suffer little.

The learning framework is general enough to be applied to many active learning settings of interest and is intended to be used to produce novel and robust high-performing algorithms. The difference is that instead of hand-crafting hard instances that witness the difficulty of the problem, we use adversarial training inspired by the robust reinforcement learning literature to automatically train minimax policies. Embracing the use of a simulator allows our learned policies to be very aggressive while maintaining robustness. Indeed, this work is particularly useful in the setting where relatively few rounds of querying can be made, where the concentration inequalities of existing algorithms are vacuous. To demonstrate the efficacy of our approach we implement the framework for the (transductive) linear bandit problem. This paradigm includes pure-exploration combinatorial bandits (e.g., shortest path, matchings) as a special case, which itself reduces to active binary classification. We empirically validate our framework on a simple synthetic experiment before turning our attention to datasets derived from real data, including a noisy 20 Questions game and a joke recommendation task." }, { "heading": "2 PROPOSED FRAMEWORK FOR ROBUST LEARNING TO ACTIVELY LEARN", "text": "Whether learned or defined by an expert, any algorithm for active learning can be thought of as a policy from the perspective of reinforcement learning. At time $t$, based on an internal state $s_t$, the policy takes action $x_t$ and receives observation $y_t$, which then updates the state, and the process repeats. In our work, at time $t$ the state $s_t \in S$ is a function of the history $\{(x_i, y_i)\}_{i=1}^{t-1}$, such as its sufficient statistics. Without loss of generality, a policy $\pi$ takes a state as input and defines a probability distribution over $X$ so that at time $t$ we have $x_t \sim \pi(s_t)$. Fix a horizon $T$. For $t = 1, 2, \dots, T$:

• state $s_t \in S$ is a function of the history $\{(x_i, y_i)\}_{i=1}^{t-1}$,

• action $x_t \in X$ is drawn at random from the distribution $\pi(s_t)$ defined over $X$, and

• the next state $s_{t+1} \in S$ is constructed by taking action $x_t$ in state $s_t$ and observing $y_t \sim f(\cdot|\theta_*, s_t, x_t)$,

until the game terminates at time $t = T$ and the policy receives loss $L_T$. Note that $L_T$ is a random variable that depends on the tuple $(\pi, \{(x_i, y_i)\}_{i=1}^T, \theta_*)$. We assume that $f$ is a distribution of known parametric form to the policy (e.g., $f(\cdot|\theta, s, x) \equiv \mathcal{N}(\langle x, \theta\rangle, 1)$), but the parameter $\theta$ is unknown to the policy. Let $\mathbb{P}_{\pi,\theta}, \mathbb{E}_{\pi,\theta}$ denote the probability and expectation under the probability law induced by executing policy $\pi$ in the game with $\theta_* = \theta$ to completion. Note that $\mathbb{P}_{\pi,\theta}$ includes any internal randomness of the policy $\pi$ and the random observations $y_t \sim f(\cdot|\theta, s_t, x_t)$. Thus, $\mathbb{P}_{\pi,\theta}$ assigns a probability to any trajectory $\{(x_i, y_i)\}_{i=1}^T$. For a given policy $\pi$ and $\theta_* = \theta$, the metric of interest we wish to minimize is the expected loss $\ell(\pi, \theta) := \mathbb{E}_{\pi,\theta}[L_T]$, where $L_T$ as defined above is the loss observed at the end of the episode. For a fixed policy $\pi$, $\ell(\pi, \theta)$ defines a loss surface over all possible values of $\theta$.
This loss surface captures the fact that some values of $\theta$ are just intrinsically harder than others, but also that a policy may be better suited for some values of $\theta$ than others.

Example: In active binary classification, $T$ is a label budget, $X$ could be a set of images such that we can query the label of example image $x_t \in X$, $y_t \in \{-1, 1\}$ is the requested binary label, and the loss $L_T$ is the classification error of a classifier trained on these collected labels. Finally, $\theta_x = p(y = 1|x)$ for all $x \in X$. More examples can be found in Appendix A." }, { "heading": "2.1 INSTANCE DEPENDENT PERFORMANCE METRIC", "text": "We now define the sense in which we wish to evaluate a particular policy. For any fixed value of $\theta$ one could clearly design an algorithm that would maximize performance on $\theta$, but then it might have very poor performance on some other value $\theta' \neq \theta$. Thus, we would ideally like $\pi$ to perform uniformly well over a set of $\theta$'s that are all equivalent in a certain sense. Define a positive function $C : \Theta \to (0, \infty)$ that assigns a score to each $\theta \in \Theta$ that intuitively captures the "difficulty" of a particular $\theta$ and can be used as a partial ordering of $\Theta$. Ideally, $C(\theta)$ is a monotonic transformation of $\ell(\tilde\pi, \theta)$ for some "best" policy $\tilde\pi$ that we will define shortly. We give the explicit $C(\theta)$ for the active binary classification example in Section 3, further description of $C$ in Section 2.2, and more examples in Appendix A. For any set of problem instances $\Theta$ define

$$\ell(\pi, \Theta) := \sup_{\theta\in\Theta} \ell(\pi, \theta).$$

And for any $r \ge 0$, define $\Theta^{(r)} = \{\theta : C(\theta) \le r\}$. The quantity $\ell(\pi, \Theta^{(r)}) - \inf_{\pi'} \ell(\pi', \Theta^{(r)})$ is then a function of $r$ that describes the sub-optimality gap of a given policy $\pi$ relative to an $r$-dependent baseline policy trained specifically for each $r$. For a fixed $r > 0$, a policy $\pi$ that aims to minimize just $\ell(\pi, \Theta^{(r)})$ might focus only on the hard instances (i.e., those with $C(\theta)$ close to $r$), and there may exist a different policy $\pi'$ that performs far better than $\pi$ on easier instances (i.e., those with $C(\theta) \ll r$). To avoid this, assuming $\sup_r \big( \ell(\pi, \Theta^{(r)}) - \inf_{\pi'} \ell(\pi', \Theta^{(r)}) \big) < \infty$, we define

$$\pi^* := \operatorname*{arg\,inf}_\pi\ \sup_{r>0}\ \Big( \ell(\pi, \Theta^{(r)}) - \inf_{\pi'} \ell(\pi', \Theta^{(r)}) \Big) \qquad (1)$$

as the policy that minimizes the worst-case sub-optimality gap over all $r > 0$. Figure 1 illustrates these definitions. Instead of computing $\inf_{\pi'} \ell(\pi', \Theta^{(r)})$ for all $r$, in practice we define a grid with an increasing sequence $\{r_k\}_{k=1}^K$ to find an approximation to $\pi^*$. We are now ready to state the goal of this work:

Objective: Given an increasing sequence $r_1 < \cdots < r_K$ that indexes nested sets of problem instances of increasing difficulty, $\Theta^{(r_1)} \subset \Theta^{(r_2)} \subset \cdots \subset \Theta^{(r_K)}$, we wish to identify a policy $\hat\pi$ that minimizes the maximum sub-optimality gap with respect to this sequence. Explicitly, we seek to learn

$$\hat\pi := \operatorname*{arg\,inf}_\pi\ \max_{k\le K}\ \Big( \ell(\pi, \Theta^{(r_k)}) - \inf_{\pi'} \ell(\pi', \Theta^{(r_k)}) \Big) \qquad (2)$$

where $\ell(\pi, \Theta) := \sup_{\theta\in\Theta} \ell(\pi, \theta)$ and $\ell(\pi, \theta)$ is the expected loss incurred by policy $\pi$ on instance $\theta$.

Note that as $K \to \infty$ and $\sup_k r_{k+1}/r_k \to 1$, (1) and (2) are essentially equivalent under benign smoothness conditions on $C(\theta)$, in which case $\hat\pi \to \pi^*$. In practice, we choose a finite $K$ such that $\Theta^{(r_K)}$ contains all problems that can be solved relatively accurately within the budget $T$, and a small $\epsilon > 0$ such that $\max_k r_{k+1}/r_k = 1 + \epsilon$.
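A minimal sketch of constructing such a geometric grid follows; the endpoints are illustrative (e.g., the experiments in Section 4 use $r_k = 2^{3+i/2}$, whose consecutive ratio is $\sqrt{2}$).

```python
import numpy as np

def geometric_grid(r1, rK, eps):
    # Increasing sequence r_1 < ... < r_K with max_k r_{k+1} / r_k = 1 + eps.
    K = int(np.ceil(np.log(rK / r1) / np.log(1.0 + eps))) + 1
    return r1 * (1.0 + eps) ** np.arange(K)

print(geometric_grid(8.0, 128.0, eps=np.sqrt(2) - 1))  # 8, 11.3, 16, ..., 128
```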
Furthermore, the objective in (2) is equivalent to

$$\hat\pi = \operatorname*{arg\,inf}_\pi\ \max_{k\le K}\ \Big( \ell(\pi, \Theta^{(r_k)}) - \ell(\pi_k, \Theta^{(r_k)}) \Big) \quad \text{where} \quad \pi_k \in \operatorname*{arg\,inf}_\pi\ \sup_{\theta: C(\theta)\le r_k} \ell(\pi, \theta).$$

We can efficiently solve this objective by first computing $\pi_k$ for all $k \in [K]$ to obtain $\ell(\pi_k, \Theta^{(r_k)})$ as benchmarks, and then using these benchmarks to train $\hat\pi$." }, { "heading": "2.2 PICKING THE COMPLEXITY FUNCTION C(θ)", "text": "We have defined an optimal policy in terms of a function $C(\theta)$ that determines a partial ordering over instances $\theta$. This function can come from a heuristic that intuitively captures the difficulty of an instance, or it can be defined and motivated from information-theoretic lower bounds that often describe a general ordering but are typically very loose relative to empirical performance. For example, consider the standard multi-armed bandit game where an agent has access to $K$ distributions and in each round $t \in [T]$ she chooses a distribution $I_t \in [K]$ and observes a random variable in $[0, 1]$ with mean $\theta_{I_t}$. If her strategy is described by a policy $\pi$, once $t$ reaches $T$ she receives loss $L_T = \max_{i\in[K]} \sum_{t=1}^T (\theta_i - \theta_{I_t})$ with expectation $\ell(\pi, \theta) = \mathbb{E}[L_T]$, where the expectation is taken with respect to the randomness in the observations and, potentially, any randomness of the policy. Under benign conditions, it is known that any policy must suffer $\ell(\pi, \theta) \gtrsim \min\{\sqrt{KT},\ \sum_{i\neq *} (\theta_* - \theta_i)^{-1}\}$ where $\theta_* = \max_{i\in[K]} \theta_i$ (Lattimore & Szepesvári, 2018). Such a lower bound is an ideal candidate for $C(\theta)$. We define a different $C(\theta)$ for our particular experiments of interest, and others are described in Appendix A. The bottom line is that any function $C(\theta)$ works, but if it happens to correspond to an information-theoretic lower bound, the resulting policy will match the lower bound if it is achievable." }, { "heading": "2.3 DIFFERENTIABLE POLICY OPTIMIZATION", "text": "The first step in learning the policy $\hat\pi$ defined in Equation 2 is to learn each $\pi_k := \operatorname*{arg\,inf}_\pi \sup_{\theta: C(\theta)\le r_k} \ell(\pi, \theta)$ for all $k = 1, \dots, K$. Once all $\pi_k$ are defined, $\hat\pi$ of (2) is an optimization of the same form after shifting the loss by the scalar $\ell(\pi_k, \Theta^{(r_k)})$. Consequently, to learn $\hat\pi$ it suffices to develop a training procedure to solve $\inf_\pi \sup_{\theta\in\Omega} \ell'(\pi, \theta)$ for an arbitrary set $\Omega$ and a generic loss function $\ell'(\pi, \theta)$.

To make the optimization problem $\inf_\pi \sup_{\theta\in\Omega} \ell'(\pi, \theta)$ tractable, we parameterize it as follows. First, to compute the suprema over $\Theta$, we consider a finite set $\tilde\Theta := \{\tilde\theta_i\}_{i=1}^N \subset \Omega$, weighted by $\mathrm{SOFTMAX}(w)$ where $w \in \mathbb{R}^N$. In addition, instead of optimizing over all possible policies, we restrict the policy to the class of neural networks that take a state representation as input and output a probability distribution over actions, parameterized by weights $\psi$. Mathematically, it can be stated as follows:

$$\inf_\pi \sup_{\theta\in\Omega} \ell(\pi, \theta) = \inf_\pi \sup_{\tilde\theta_{1:N}\subset\Omega}\ \max_{i\in[N]} \ell(\pi, \tilde\theta_i) \qquad (3)$$

$$= \inf_\pi \sup_{w\in\mathbb{R}^N,\ \tilde\theta_{1:N}\subset\Omega}\ \mathbb{E}_{i\sim\mathrm{SOFTMAX}(w)}\big[\ell(\pi, \tilde\theta_i)\big] \qquad (4)$$

$$\approx \inf_\psi \sup_{w\in\mathbb{R}^N,\ \tilde\theta_{1:N}\subset\Omega}\ \mathbb{E}_{i\sim\mathrm{SOFTMAX}(w)}\big[\ell(\pi^\psi, \tilde\theta_i)\big]. \qquad (5)$$
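A minimal sketch of this parameterization follows — the particle set $\tilde\Theta$, the logits $w$, and sampling $i \sim \mathrm{SOFTMAX}(w)$; the dimensions are illustrative and the eventual loss evaluation is a placeholder for an episode rollout of $\pi^\psi$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 25, 512                      # instance dimension, number of particles
thetas = rng.normal(size=(N, d))    # the finite set {theta_tilde_i} in Omega
w = np.zeros(N)                     # logits of the multinomial over particles

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def sample_particles(M):
    # Draw M problem instances i ~ SOFTMAX(w); hard instances get upweighted
    # over training as w is ascended via update (6) below.
    return rng.choice(N, size=M, p=softmax(w))

idx = sample_particles(M=8)
print(idx, thetas[idx].shape)       # instances to roll the policy out on
```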
Algorithm 1: Gradient-Based Optimization of (5)

1 Input: partition $\Omega$, number of iterations $N_{it}$, number of problem samples $M$, number of rollouts per problem $L$, and loss variable $L_T$ at horizon $T$ (see the beginning of Section 2).
2 Goal: Compute the optimal policy $\operatorname*{arg\,inf}_\pi \sup_{\theta\in\Omega} \ell(\pi, \theta) = \operatorname*{arg\,inf}_\pi \sup_{\theta\in\Omega} \mathbb{E}_{\pi,\theta}[L_T]$.
3 Initialization: $w$, finite set $\tilde\Theta$, and $\psi$
4 for $t = 1, \dots, N_{it}$ do
5   Collect rollouts of play:
6   Sample $M$ problem indices $I_1, \dots, I_M \overset{\text{i.i.d.}}{\sim} \mathrm{SOFTMAX}(w)$
7   for $m = 1, \dots, M$ do
8     Collect $L$ independent rollout trajectories, denoted $\tau_{m,1:L}$, by the policy $\pi^\psi$ for problem instance $\tilde\theta_{I_m}$ and observe the losses $L_T(\pi^\psi, \tau_{m,l}, \tilde\theta_{I_m})$ for all $1 \le l \le L$.
9   end
10  Optimize worst cases in $\Omega$:
11  Update the generating distribution by taking ascending steps on the gradient estimates:

$$w \leftarrow w + \frac{1}{ML} \sum_{m=1}^{M} \nabla_w \log(\mathrm{SOFTMAX}(w)_{I_m}) \cdot \Big( \sum_{l=1}^{L} L_T(\pi^\psi, \tau_{m,l}, \tilde\theta_{I_m}) \Big) \qquad (6)$$

$$\tilde\Theta \leftarrow \tilde\Theta + \frac{1}{ML} \sum_{m=1}^{M} \sum_{l=1}^{L} \Big( \nabla_{\tilde\Theta} L_{\mathrm{barrier}}(\tilde\theta_{I_m}, \Omega) + \nabla_{\tilde\Theta} L_T(\pi^\psi, \tau_{m,l}, \tilde\theta_{I_m}) + L_T(\pi^\psi, \tau_{m,l}, \tilde\theta_{I_m}) \cdot \nabla_{\tilde\Theta} \log\big(\mathbb{P}_{\pi^\psi, \tilde\theta_{I_m}}(\tau_{m,l})\big) \Big) \qquad (7)$$

where $L_{\mathrm{barrier}}$ is a differentiable barrier loss that heavily penalizes the $\tilde\theta_{I_m}$'s outside $\Omega$.
12  Optimize policy:
13  Update the policy by taking a descending step on the gradient estimate:

$$\psi \leftarrow \psi - \frac{1}{ML} \sum_{m=1}^{M} \sum_{l=1}^{L} L_T(\pi^\psi, \tau_{m,l}, \tilde\theta_{I_m}) \cdot \nabla_\psi \log\big(\mathbb{P}_{\pi^\psi, \tilde\theta_{I_m}}(\tau_{m,l})\big) \qquad (8)$$

14 end" }, { "heading": null, "text": "Note that the objectives in (3) and (4) are indeed equivalent, as $\tilde\theta_{1:N}$ are free parameters we optimize over rather than fixed values.
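A simplified numpy sketch of one iteration of Algorithm 1 with score-function (REINFORCE) gradients follows. Here `rollout` is an assumed callable returning the episode loss and the two log-likelihood gradients; the direct $\nabla_{\tilde\Theta} L_T$ and barrier terms of (7) are omitted for brevity, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def gda_step(w, thetas, psi, rollout, M=8, L=4, lr=1e-2):
    # One iteration of Algorithm 1. `rollout(psi, theta)` must return
    # (L_T, grad_psi_logp, grad_theta_logp): the episode loss and gradients
    # of log P_{pi^psi, theta}(trajectory) w.r.t. psi and theta.
    p = softmax(w)
    g_w, g_th, g_psi = np.zeros_like(w), np.zeros_like(thetas), np.zeros_like(psi)
    for i in rng.choice(len(w), size=M, p=p):
        for _ in range(L):
            loss, glp_psi, glp_th = rollout(psi, thetas[i])
            onehot = np.zeros_like(w)
            onehot[i] = 1.0
            g_w += (onehot - p) * loss   # (6): grad of log softmax(w)_i
            g_th[i] += glp_th * loss     # score term of (7); barrier and direct
                                         # grad-of-loss terms omitted here
            g_psi += glp_psi * loss      # (8)
    s = lr / (M * L)
    # ascent on the generator side, descent on the policy side
    return w + s * g_w, thetas + s * g_th, psi - s * g_psi

def dummy_rollout(psi, theta):
    # Placeholder standing in for an actual episode of the game.
    return (float(theta @ theta) * 1e-3,
            rng.normal(size=psi.shape), rng.normal(size=theta.shape))

w, thetas, psi = np.zeros(16), rng.normal(size=(16, 4)), np.zeros(4)
w, thetas, psi = gda_step(w, thetas, psi, dummy_rollout)
print(w[:4], psi)
```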
}, { "heading": "3 IMPLEMENTATION FOR LINEAR BANDITS AND CLASSIFICATION", "text": "We now apply the general framework of the previous section to a specific problem: transductive linear bandits. As described in Sections 5 and Appendix A this setting generalizes standard multiarmed bandits, linear bandits, and all of binary classification through a simple reduction to combinatorial bandits. We are particularly motivated to look at classification because the existing agnostic active learning algorithms are very inefficient (see Section 5). Indeed when applied to our setting of T = 20 they never get past their first stage of uniform random sampling. Consider the game:\nInput: Policy π, X ⊂ Rd, Z ⊂ Rd, time horizon T ∈ T Initialization: Nature chooses θ∗ ∈ Rd (hidden from policy) for t = 1, 2, . . . , T • Policy π selects xt ∈ X using history {(xs, ys)}t−1s=1 • Nature reveals yt ∼ f(·|θ∗, xt) with E[yt|θ∗, xt] = 〈xt, θ∗〉 Output: Policy π recommends ẑ ∈ Z as an estimate for z?(θ∗) := argmaxz∈Z〈z, θ∗〉 and\nsuffers loss LT = { 〈z?(θ∗)− ẑ, θ∗〉 if SIMPLE REGRET 1{z?(θ∗) 6= ẑ} if BEST IDENTIFICATION\nThe observation distribution f(·|θ, x) is domain specific but typically taken to be either a Bernoulli distribution for binary data, or Gaussian for real-valued data. We are generally interested in two objectives: BEST IDENTIFICATION which attempts to exactly identify the vector z? ∈ Z that is most aligned with θ∗, and SIMPLE REGRET which settles for an approximate maximizer.\nDefining C(θ) Recalling the discussion of Section 2.1, C(θ) should ideally be monotonically increasing in the intrinsic difficulty of minimizing the loss with respect to a particular θ. For arbitrary X ⊂ Rd and Z ⊂ Rd, it is shown in Fiez et al. (2019) that the sample complexity of identifying z?(θ) = arg maxz∈Z〈z, θ〉 with high probability is proportional to a quantity ρ?(θ), the value\nobtained by an optimization program. Another complexity term that appears in the combinatorial bandits literature (Cao & Krishnamurthy, 2017) where X = {ei : i ∈ [d]} and Z ⊂ {0, 1}d is\nρ̃(θ) = d∑ i=1 max z:zi 6=z?i (θ) ‖z − z?(θ)‖22 〈z − z?(θ), θ〉2 . (9)\nOne can show ρ?(θ) ≤ ρ̃(θ) and in many cases track each other. Because ρ̃(θ) can be computed much more efficiently compared to ρ?(θ), we use C(θ) = ρ̃(θ) in our experiments.\nAlgorithm 2: Training Workflow 1 Input: sequence {rk}Kk=1, complexity function C, and obj ∈ {SIMPLE REGRET, BEST\nIDENTIFICATION}. 2 Define k(θ) ∈ [K] such that rk(θ)−1 < C(θ) ≤ rk(θ) for all θ with C(θ) ≤ rK 3 For each k ∈ [K], obtain policy π̃k by Algorithm 1 with Ω = Θ(rk) and SIMPLE REGRET\nloss 4 if obj is SIMPLE REGRET then 5 For each k ∈ [K], compute `(π̃k, rk) // In this case, πk = π̃k. 6 Warm start π̂ = π̃bK/2c; optimize π̂ by Algorithm 1 with Ω = Θ(rK) and objective in (2), i.e., LT = 〈z?(θ)− ẑ, θ〉 − `(π̃k(θ),Θ(rk(θ))) 7 else if obj is BEST IDENTIFICATION then 8 For each k ∈ [K], warm start πk = π̃k; optimize πk by Algorithm 1 with Ω = Θ(rk) and BEST IDENTIFICATION loss; compute `(πk,Θ(rk)) 9 Warm start π̃ = πbK/2c; optimize π̃ by Algorithm 1 with Ω = Θ(rK) and objective in (2),\ni.e., LT = 1{z?(θ) 6= ẑ} − `(πk(θ),Θ(rk(θ))) 10 end 11 Output: π̂ (an approximate solution to (2))\nTraining. When training our policies, we follow the following procedure in Algorithm 2. Note that even when we are training for BEST IDENTIFICATION, we still warm start the training with optimizing SIMPLE REGRET. 
This is because a randomly initialized policy performs so poorly that the BEST IDENTIFICATION loss is nearly always 1, making it difficult to improve the policy. In addition, our generating distribution parameterization exactly follows Section 2.3, while the detailed state representations, policy parametrization, and hyperparameters can be found in Appendices C and D.

Loss functions. Instead of optimizing the approximated quantity from (5) directly, we add regularizers to the losses for both the policy and the generator. First, we choose the L_barrier in (7) to be λ_barrier · max{0, log(C(X, Z, θ)) − log(r_k)} for some large constant λ_barrier. To discourage the policy from over-committing to a certain action and/or the generating distribution from covering only a small subset of particles (i.e., mode collapse), we also add negative entropy penalties to both the policy's output distributions and SOFTMAX(w), with scaling factors λ_Pol-reg and λ_Gen-reg." }, { "heading": "4 EXPERIMENTS", "text": "We now evaluate the approach described in the previous section for combinatorial bandits with X = {e_i : i ∈ [d]} and Z ⊂ {0, 1}^d. We stress that the framework implemented here can be applied to any X, Z ⊂ R^d and any appropriate f: just plug and play to learn a new policy. In our experiments we take particular instances of combinatorial bandits with Bernoulli observations. We evaluated based on two criteria: instance-dependent worst-case and average-case. For the instance-dependent worst-case criterion, we measure, for each r_k and policy π, ℓ(π, Θ(r_k)) := max_{θ∈Θ(r_k)} ℓ(π, θ) and plot this value as a function of r_k. We note that our algorithm is designed to optimize this metric. For the secondary average-case metric, we instead measure, for a policy π and some collected set Θ, (1/|Θ|) Σ_{θ∈Θ} ℓ(π, θ). Performances on the instance-dependent worst-case metric are reported in Figures 2, 3, 4, 6, and 7 below, while the average-case performances are reported in the tables and Figure 5. Full-scale versions of the figures can be found in Appendix F.

Algorithms. We compare against a number of baseline active learning algorithms (see Section 5 for a review). UNCERTAINTY SAMPLING at time t computes the empirical maximizer of ⟨z, θ̂⟩ and the runner-up, and samples an index uniformly from their symmetric difference; if either is not unique, an index is sampled from the region of disagreement of the winners (see Appendix G for details). The greedy methods are represented by soft generalized binary search (SGBS) (Nowak, 2011), which maintains a posterior distribution over Z and samples to maximize information gain. A hyperparameter β ∈ (0, 1/2) of SGBS determines the strength of the likelihood update. We plot or report a range of performance over β ∈ {.01, .03, .1, .2, .3, .4}. The agnostic algorithms for classification (Dasgupta, 2006; Hanneke, 2007b;a; Dasgupta et al., 2008; Huang et al., 2015; Jain & Jamieson, 2019) or combinatorial bandits (Chen et al., 2014; Gabillon et al., 2016; Chen et al., 2017; Cao & Krishnamurthy, 2017; Fiez et al., 2019; Jain & Jamieson, 2019) are so conservative that, given just T = 20 samples, they are all exactly equivalent to uniform sampling and hence are represented by UNIFORM. To represent a policy based on learning to actively learn (LAL), we employ the method of Kveton et al. (2020) with a fixed prior P̃ constructed by drawing a z uniformly at random from Z and defining θ = 2z − 1 ∈ [−1, 1]^d (details in Appendix H).
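Since ρ̃(θ) from (9) serves as C(θ) throughout these experiments, below is a minimal numpy sketch of how it can be computed when Z is given as a binary matrix. The function name and the handling of coordinates with no disagreeing arm are our assumptions, not the authors' released code.

```python
import numpy as np

def rho_tilde(Z, theta):
    """Complexity C(theta) = rho-tilde(theta) from Eq. (9), for the
    combinatorial setting X = {e_i} and Z a subset of {0, 1}^d.

    Z:     (n, d) array of binary arms.
    theta: (d,) parameter vector; assumes argmax_z <z, theta> is unique."""
    zstar = Z[np.argmax(Z @ theta)]          # best arm z*(theta)
    total = 0.0
    for i in range(Z.shape[1]):
        rivals = Z[Z[:, i] != zstar[i]]      # arms disagreeing with z* in coordinate i
        if len(rivals) == 0:
            continue  # assumption: skip coordinates where no arm disagrees
        diffs = rivals - zstar               # rows are z - z*
        gaps = diffs @ theta                 # <z - z*, theta>, squared below
        total += np.max(np.sum(diffs ** 2, axis=1) / gaps ** 2)
    return total
```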
When evaluating each policy, we use the successive halving algorithm (Li et al., 2017; 2018) for optimizing our non-convex objective with randomly initialized gradient descent and restarts (details in Appendix E).

Figure 4: Max over {θ : ρ̃(θ) ≤ r}, lower is better

Figure 5: Average E_{θ∼P_h}[·], lower is better

Thresholds. We begin with a very simple instance to demonstrate the instance-dependent performance achieved by our learned policy. For d = 25, let X = {e_i : i ∈ [d]}, Z = {0 + Σ_{i=1}^{k} e_i : k = 0, 1, . . . , d}, and let f(·|θ, x) be a Bernoulli distribution over {−1, 1} with mean ⟨x, θ⟩ ∈ [−1, 1]. Note this is a binary classification task in one dimension, where the set of classifiers are thresholds on a line. We trained baseline policies {π_k}_{k=1}^{9} for the BEST IDENTIFICATION metric with C(θ) = ρ̃(X, Z, θ) and r_k = 2^{3+i/2} for i ∈ {0, . . . , 8}. First we compare the base policies π_k to π̂. Figure 2 presents ℓ(π, Θ(r)) = sup_{θ : ρ̃(θ) ≤ r} ℓ(π, θ) = sup_{θ : ρ̃(θ) ≤ r} P_{π,θ}(ẑ ≠ z⋆(θ)) as a function of r for our base policies {π_k}_k and the global policy π∗, each as an individual curve. Figure 3 plots the same information in terms of the gap ℓ(π, Θ(r)) − min_{k : r_{k−1} < r ≤ r_k} ℓ(π_k, Θ(r_k)). We observe that each π_k performs best in a particular region, and π∗ performs almost as well as the r-dependent baseline policies over the full range of r. This plot confirms that our optimization objective of (2) was successful. Under the same conditions as Figure 2, Figure 4 compares the performance of π∗ to the algorithm benchmarks. Since SGBS and LAL are deterministic, the adversarial training finds a θ that tricks them into catastrophic failure. Figure 5 trades adversarial evaluation for evaluation with respect to a parameterized prior: for each h ∈ {0.5, 0.6, . . . , 1}, θ ∼ P_h is defined by drawing a z uniformly at random from Z and then setting θ_i = (2z_i − 1)(2α_i − 1), where α_i ∼ Bernoulli(h). Thus, each sign of 2z − 1 is flipped with probability 1 − h. We then compute E_{θ∼P_h}[P_{π,θ}(ẑ ≠ z⋆(θ))] = E_{θ∼P_h}[ℓ(π, θ)]. While SGBS now performs much better than uniform and uncertainty sampling, our policy π∗ is still superior to these policies. However, LAL is best overall, which is expected since the support of P_h is basically a rescaled version of the prior used in LAL.

20 Questions. We now address an instance constructed from the real data of Hu et al. (2018). Summarizing how we used the data (see Appendix I for details), 100 yes/no questions were considered for 1000 celebrities. Each question i ∈ [100] for each person j ∈ [1000] was answered by several annotators to construct an empirical probability p̄^{(j)}_i ∈ [0, 1] denoting the proportion of annotators that answered "yes." To construct our instance, we take X = {e_i : i ∈ [100]} and Z = {z^{(j)} : z^{(j)}_i = 1{p̄^{(j)}_i > 1/2}} ⊂ {0, 1}^{100}, with |Z| = 1000. Just as before, we trained {π_k}_{k=1}^{4} for the BEST IDENTIFICATION metric with C(θ) = ρ̃(X, Z, θ) and r_i = 2^{3+i/2} for i ∈ {1, . . . , 4}.

Figure 6: Max over {θ : ρ̃(θ) ≤ r}

Table 1: Average E_{θ∼P̂}[·] (accuracy, %)
    π∗ (Ours): 17.9
    SGBS: {26.5, 26.2, 27.2, 26.5, 21.4, 12.8}
    Uncertainty: 14.3
    LAL: 4.1
    Uniform: 6.9

Figure 6 is analogous to Figure 4 but for this 20 Questions instance. Uncertainty sampling performs remarkably well on this instance. A potential explanation is that on a noiseless instance (e.g., θ = 2z − 1 for some z ∈ Z), our implementation of uncertainty sampling is equivalent to CAL (Cohn et al., 1994) and is known to have near-optimal sample complexity (Hanneke et al., 2014).
Uncertainty sampling even outperforms our r-dependent baseline by a small margin, which in theory should not occur; we conjecture this is due to insufficient convergence of our policies or local minima. Our second experiment constructs a distribution P̂ based on the dataset: to draw θ ∼ P̂, we uniformly at random select a j ∈ [1000] and set θ_i = 2p̄^{(j)}_i − 1 for all i ∈ [d]. As shown in Table 1, SGBS and π∗ are the winners. LAL performs much worse in this case, potentially because of the distribution shift from P̃ (the prior we train on) to P̂ (the prior at test time). The strong performance of SGBS may be due to the fact that sign(θ_i) = 2z⋆(θ)_i − 1 for all i and θ ∼ P̂, a realizability condition under which SGBS has strong guarantees (Nowak, 2011).

Figure 7: Max over {θ : ρ̃(θ) ≤ r}

Table 2: Average E_{θ∼P̂}[·] (average regret)
    π∗ (Ours): 3.209
    SGBS: {3.180, 3.224, 3.278, 3.263, 3.153, 3.090}
    Uncertainty: 3.027
    LAL: 3.610
    Uniform: 3.877

Jester Joke Recommendation. We now turn our attention away from the best identification objective of the last two experiments, ℓ(π, θ) = P_{π,θ}(ẑ ≠ z⋆(θ)), to simple regret, ℓ(π, θ) = E_{π,θ}[⟨z⋆(θ) − ẑ, θ⟩]. We consider the Jester jokes dataset of Goldberg et al. (2001), which contains jokes ranging from innocent puns to grossly offensive jokes. We filter the dataset to only contain users that rated all 100 jokes, resulting in 14116 users. A rating of each joke was provided on a [−10, 10] scale, which was rescaled to [−1, 1], and observations were simulated as Bernoullis as above. We then clustered the ratings of these users (see Appendix J for details) into 10 groups to obtain Z = {z^{(k)} : k ∈ [10], z^{(k)} ∈ {0, 1}^{100}}, where z^{(k)}_i = 1 corresponds to recommending the ith joke in user cluster z^{(k)} ∈ Z. Figure 7 shows the same style of plot as Figures 4 and 6 but for this jokes dataset, with our policy alone nearly achieving the r-dependent baseline for all r. Mirroring the construction of the 20Q prior, we construct P̂ by uniformly sampling a user and employing their θ to answer queries. Table 2 shows that despite our policy not being trained for this setting, its performance is still among the top." }, { "heading": "5 RELATED WORK", "text": "Learning to actively learn. Previous works vary in how they parameterize the policy, ranging from parameterized mixtures of existing expertly designed active learning algorithms (Baram et al., 2004; Hsu & Lin, 2015; Agarwal et al., 2016), to parameterizing hyperparameters (e.g., learning rate, rate of forced exploration, etc.) in an existing popular algorithm (e.g., EXP3) (Konyushkova et al., 2017; Bachman et al., 2017; Cella et al., 2020), to the most ambitious, policies parameterized end-to-end as in this work (Boutilier et al., 2020; Kveton et al., 2020; Sharaf & Daumé III, 2019; Fang et al., 2017; Woodward & Finn, 2017). These works take the approach of defining a prior distribution, either through past experience (meta-learning) or expert knowledge (e.g., θ ∼ N(0, Σ)), and then evaluate their policy with respect to this prior distribution. Defining this prior can be difficult, and moreover, if the θ encountered at test time does not follow this prior distribution, performance could suffer significantly. Our approach, on the other hand, takes an adversarial training approach and can be interpreted as learning a parameterized least favorable prior (Wasserman, 2013), thus yielding a much more robust policy as an end result.

Robust and Safe Reinforcement Learning. Our work is also closely related to the field of robust and safe reinforcement learning, where our objective can be considered an instance of the minimax criterion under parameter uncertainty (Garcıa & Fernández, 2015). Widely applied in domains such as robotics (Mordatch et al., 2015; Rajeswaran et al., 2016), these methods train a policy in a simulator like Mujoco (Todorov et al., 2012) to minimize a defined loss objective while remaining robust to uncertainties and perturbations of the environment (Mordatch et al., 2015; Rajeswaran et al., 2016). The ranges of these uncertainty parameters are chosen based on potential values that could be encountered when deploying the robot in the real world. In our setting, however, defining the set of environments is far less straightforward, which we overcome by the adoption of the C(θ) function.

Active Binary Classification Algorithms. The literature on active learning algorithms can be partitioned into model-based heuristics like uncertainty sampling, query by committee, or model-change sampling (Settles, 2009); greedy binary-search-like algorithms that typically rely on a form of bounded noise for correctness (Dasgupta, 2005; Kääriäinen, 2006; Golovin & Krause, 2011; Nowak, 2011); and agnostic algorithms that make no assumptions on the probabilistic model (Dasgupta, 2006; Hanneke, 2007b;a; Dasgupta et al., 2008; Huang et al., 2015; Jain & Jamieson, 2019). Though the heuristics and greedy methods can perform very well for some problems, it is typically easy to construct counter-examples (e.g., outside their assumptions) on which they catastrophically fail (as demonstrated in our experiments). The agnostic algorithms have strong robustness guarantees but rely on concentration inequalities, and consequently require at least hundreds of labels to observe any deviation from random sampling (see Huang et al. (2015) for a comparison). Therefore, they were not included in our experiments explicitly but were represented by uniform sampling.

Pure-exploration Multi-armed Bandit Algorithms. In the linear structure setting, for sets X, Z ⊂ R^d known to the player, pulling an "arm" x ∈ X results in an observation ⟨x, θ∗⟩ plus zero-mean noise, and the objective is to identify argmax_{z∈Z} ⟨z, θ∗⟩ for a vector θ∗ unknown to the player (Soare et al., 2014; Karnin, 2016; Tao et al., 2018; Xu et al., 2017; Fiez et al., 2019). A special case of linear bandits is combinatorial bandits, where X = {e_i : i ∈ [d]} and Z ⊂ {0, 1}^d (Chen et al., 2014; Gabillon et al., 2016; Chen et al., 2017; Cao & Krishnamurthy, 2017; Fiez et al., 2019; Jain & Jamieson, 2019). Active binary classification is a special case of combinatorial pure-exploration multi-armed bandits (Jain & Jamieson, 2019), which we exploit in the threshold experiments. While the above works have made great theoretical advances in deriving algorithms and information-theoretic lower bounds that match up to constants, the constants are so large that these algorithms only behave well when the number of measurements is very large. When applied to the instances of our paper (only 20 queries are made), these algorithms behave no differently than random sampling."
}, { "heading": "6 DISCUSSION AND FUTURE DIRECTIONS", "text": "We see this work as an exciting but preliminary step towards realizing the full potential of this general approach. From a practical perspective, training a π∗ can take many hours of computational resources for even these small instances. Scaling these methods to larger instances is an important next step. While training time scales linearly with the horizon length T , we note that one can take multiple samples per time step with minimal computational overhead enabling problems that require larger sample complexities. In our implementation we hard-coded the decision rule for ẑ given sT , while it could also be learned as in (Luedtke et al., 2020). Likewise, the parameterization of the policy and generator worked well for our purposes but was chosen somewhat arbitrarily–are there more natural choices? Finally, while we focused on stochastic settings, this work naturally extends to constrained fully adaptive adversarial sequences which is an interesting direction of future work.\nFUNDING DISCLOSURE\nRemoved for anonymization purposes." }, { "heading": "ACKNOWLEDGEMENT", "text": "Removed for anonymization purposes." }, { "heading": "B GRADIENT ESTIMATE DERIVATION", "text": "Here we derive the unbiased gradient estimates (6), (7) and (8) in Algorithm 1. Since each the gradient estimates in the above averages overM ·L identically distributed trajectories, it is therefore sufficient to show that our gradient estimate is unbiased for a single problem θ̃i and its rollout trajectory {(xt, yt)}Tt=1.\nFor a feasible w, using the score-function identity (Aleksandrov et al., 1968) ∇wEi∼SOFTMAX(w) [ `(πψ, θ̃i) ] = Ei∼SOFTMAX(w) [ `(πψ, θ̃i) · ∇w log(SOFTMAX(w)i) ] .\nObserve that if i ∼ SOFTMAX(w) and {(xt, yt)}Tt=1 is the result of rolling out a policy πψ on θ̃i then\ngw := LT (π ψ, {(xt, yt)}Tt=1, θ̃i) · ∇w log(SOFTMAX(w)i) is an unbiased estimate of∇wEi∼SOFTMAX(w) [ `(πψ, θ̃i) ] .\nFor a feasible set Θ̃, by definition of `(π, θ), ∇Θ̃Ei∼SOFTMAX(w) [ `(πψ, θ̃i) ] =Ei∼SOFTMAX(w) [ ∇Θ̃Eπ,θ̃i [ LT (π, {(xt, yt)}Tt=1, θ̃i) ]] =Ei∼SOFTMAX(w) [ Eπ,θ̃i [ ∇Θ̃LT (π, {(xt, yt)} T t=1, θ̃i) (10)\n+ LT (π, {(xt, yt)}Tt=1, θ̃i) · ∇Θ̃ log(Pπψ,θ̃i({(xt, yt)} T t=1)) ]] where the last equality follows from chain rule and the score-function identity (Aleksandrov et al., 1968). The quantity inside the expectations, call it gΘ̃, is then an unbiased estimator of ∇Θ̃Ei∼SOFTMAX(w) [ `(πψ, θ̃i) ] given i and {(xt, yt)}Tt=1 are rolled out accord-\ningly. Note that if Lbarrier 6= 0, ∇Θ̃Lbarrier(θ̃i,Ω) is clearly an unbiased gradient estimator of Ei∼SOFTMAX(w)[Eπ,θ̃i [Lbarrier(θ̃i,Ω)]] given i and rollout are sampled accordingly.\nLikewise, for policy,\ngψ := LT (π ψ, {(xt, yt)}Tt=1, θ̃i) · ∇ψ log(Pπψ,θ̃i({(xt, yt)} T t=1)) is an unbiased estimate of∇ψEi∼SOFTMAX(w) [ `(πψ, θ̃i) ] ." }, { "heading": "C LINEAR BANDIT PARAMETERIZATION", "text": "C.1 STATE REPRESENTATION\nWe parameterize our state space S as a flattened |X |×3 matrix where each row represents a distinct x ∈ X . Specifically, at time t the row of st corresponding to some x ∈ X records the number of times that action x has been taken ∑t−1 s=1 1{xs = x}, its inverse ( ∑t−1 s=1 1{xs = x})−1, and the sum\nof the observations ∑t−1 s=1 1{xs = x}ys.\nC.2 POLICY MLP ARCHITECTURE\nOur policy πψ is a multi-layer perceptron with weights ψ. The policy take a 3|X | sized state as input and outputs a vector of size |X | which is then pushed through a soft-max to create a probability distribution over X . 
At the end of the game, regardless of the policy's weights, we set ẑ = argmax_{z∈Z} ⟨z, θ̂⟩, where θ̂ is the minimum ℓ₂-norm solution to argmin_θ Σ_{s=1}^{T} (y_s − ⟨x_s, θ⟩)².

Our policy network is a simple 6-layer MLP, with layer sizes {3|X|, 256, 256, 256, 256, |X|}, where 3|X| corresponds to the input layer and |X| is the size of the output layer, which is then pushed through a softmax function to create a probability distribution over the arms. In addition, all intermediate layers are activated with leaky ReLU units with negative slope .01. The experiments for 1D thresholds and 20 Questions share the same network structure as described above, with |X| = 25 and |X| = 100, respectively." }, { "heading": "D HYPER-PARAMETERS", "text": "In this section, we list our hyperparameters. First we define λ_binary to be a coefficient that gets multiplied into binary losses, so that instead of 1{z⋆(θ∗) ≠ ẑ}, we receive the loss λ_binary · 1{z⋆(θ∗) ≠ ẑ}. We choose λ_binary so that the received rewards are approximately on the same scale as SIMPLE REGRET. During our experiments, all of the optimizers are Adam. All budget sizes are T = 20. For fairness of evaluation, within each experiment (1D thresholds or 20 Questions), all parameters below are shared when evaluating all of the policies. To elaborate on the training strategy proposed in Algorithm 2, we divide our training into four procedures, as indicated in Table 3:

• Init. The initialization procedure takes up a rather small portion of iterations, primarily for the purpose of optimizing L_barrier so that the particles converge into the constrained difficulty sets. In addition, during the initialization process we initialize and freeze w = 0, thus putting a uniform distribution over the particles. This allows us to utilize the entire set of particles without w converging to only a few particles early on. To initialize Θ̃, we sample 2/3 of the N particles uniformly from [−1, 1]^{|X|} and the remaining 1/3 of the particles by sampling, for each i ∈ [|Z|], N/(3|Z|) particles uniformly from {θ : argmax_j ⟨θ, z_j⟩ = i}. We initialize our policy weights by Xavier initialization, with weights sampled from a normal distribution and scaled by .01.
• Regret Training, π̃_i. Training with the SIMPLE REGRET objective usually takes the longest among the procedures. The primary purpose of this process is to let the policy converge to a reasonable warm start that already captures some essence of the task.
• Fine-tune π_i. Training with the BEST IDENTIFICATION objective is run multiple times, once for each π_i with its corresponding complexity set Θ_i. During each run, we start with a warm-started policy and reinitialize the rest of the models by running the initialization procedure, followed by optimizing the BEST IDENTIFICATION objective.
• Fine-tune π̂. This procedure optimizes (2), with the baselines min_k ℓ(π_k, Θ(r_k)) evaluated using each π_i learned from the previous procedure. Similar to fine-tuning each individual π_i, we warm start a policy π_{⌊K/2⌋} and reinitialize w and Θ by running the initialization procedure again.

To provide a general strategy for choosing hyper-parameters, we note that, firstly, L, λ_binary, and λ_Pol-reg are primarily tuned for |X|, as the noisiness and scale of the gradients and the entropy over the arms X grow with |X|. Secondly, λ_Gen-reg is primarily tuned for |Z|, as it penalizes the entropy over the N particles, where N is a multiple of |Z|.
Thirdly, the learning rate for θ is primarily tuned for the convergence of the constraint ρ⋆ into the restricted class; L_barrier becoming 0 after the specified number of initialization iterations is thus a good indicator. Finally, we choose N and M based on the memory constraint of our GPU. The hyper-parameters for each experiment were tuned with fewer than 20 hyper-parameter assignments; metrics to look at while tuning include, but are not limited to, the gradient magnitudes of each component, the convergence of each loss, and the entropy losses for each regularization term (how close each is to the entropy of a uniform distribution)." }, { "heading": "E POLICY EVALUATION", "text": "When evaluating a policy, we are essentially solving the following objective for a fixed policy π:

$$\max_{\theta \in \Omega} \ell(\pi, \theta)$$

where Ω is a set of problems. However, due to the non-concavity of this loss function, randomly initialized gradient descent may converge to a local maximum. To reduce this possibility, we randomly initialize many iterates and take gradient steps round-robin, eliminating poorly performing trajectories. To do this with a fixed amount of computational resources, we apply the successive halving algorithm from Li et al. (2018). Specifically, we choose hyperparameters η = 4, r = 100, R = 1600, and s = 0. This translates to:

• Initialize |Θ̃| = 1600 and optimize for 100 iterations for each θ̃_i ∈ Θ̃
• Take the top 400 of them and optimize for another 400 iterations
• Take the top 100 of the remaining 400 and optimize for an additional 1600 iterations

We take gradient steps with the Adam optimizer (Kingma & Ba, 2014) with learning rate 10^{−3}, β₁ = .9, and β₂ = .999." }, { "heading": "F FIGURES AT FULL SCALE", "text": "" }, { "heading": "G UNCERTAINTY SAMPLING", "text": "We define the symmetric difference of a set of binary vectors, SymDiff({z_1, ..., z_n}) = {i : ∃ j, k ∈ [n] s.t. (z_j)_i = 1 ∧ (z_k)_i = 0}, as the set of dimensions on which the vectors disagree.

Algorithm 3: Uncertainty sampling in the very small budget setting
1  Input: X, Z
2  for t = 1, ..., T do
3      θ̂_{t−1} = argmin_θ Σ_{s=1}^{t−1} (y_s − ⟨x_s, θ⟩)²
4      Ẑ = {z ∈ Z : max_{z′∈Z} ⟨z′, θ̂_{t−1}⟩ = ⟨z, θ̂_{t−1}⟩}
5      if |Ẑ| = 1 then
6          Ẑ_t = Ẑ ∪ {z ∈ Z : max_{z′∈Z\Ẑ} ⟨z′, θ̂_{t−1}⟩ = ⟨z, θ̂_{t−1}⟩}
7      else
8          Ẑ_t = Ẑ
9      end
10     Uniformly sample I_t from SymDiff(Ẑ_t)
11     Pull x_{I_t} and observe y_t
12 end" }, { "heading": "H LEARNING TO ACTIVELY LEARN ALGORITHM", "text": "To train a policy in the learning to actively learn setting, we aim to solve the objective

$$\min_\psi \; \mathbb{E}_{\theta \sim \hat{P}}\big[ \ell(\pi^\psi, \theta) \big]$$

where our policy and states are parameterized in the same way as in Appendix C, for a fair comparison. To optimize the parameters, we take gradient steps like (8) but with the new sampling and rollout, where θ̃_i ∼ P̂. This gradient step follows both from the classical policy gradient algorithm in reinforcement learning and from the recent LAL work by Kveton et al. (2020).

Moreover, note that the optimal policy for this objective must be deterministic, as justified by deterministic policies being optimal for MDPs. Therefore, it is clear that, under our experimental setting, the deterministic LAL policy will perform poorly in the adversarial setting (for the same reason that SGBS performs poorly)." }, { "heading": "I 20 QUESTIONS SETUP", "text": "Hu et al. (2018) collected a dataset of 1000 celebrities and 500 possible questions to ask about each celebrity.
We chose 100 questions out of the 500 by first constructing p̄′, X′, and Z′ for the 500-dimensional data, and then sampling, without replacement, 100 of the 500 dimensions from a distribution derived from a static allocation. We down-sampled the number of questions so that our training could run with sufficiently large M and L to de-noise the gradients while being prototyped on a single GPU.

Specifically, the dataset from Hu et al. (2018) consists of probabilities of people answering Yes / No / Unknown to each celebrity-question pair, collected from some population. To better fit the linear bandit scenario, we re-normalize the probabilities of Yes / No, conditioning on the event that these people did not answer Unknown. The probabilities of answering Yes to all 500 questions for each celebrity then constitute vectors p̄′^{(1)}, ..., p̄′^{(1000)} ∈ R^{500}, where each dimension p̄′^{(j)}_i represents the probability of a yes to the ith question about the jth person. The action set X′ is then constructed as X′ = {e_i : i ∈ [500]}, while Z′ = {z^{(j)} : z^{(j)}_i = 1{p̄′^{(j)}_i > 1/2}} ⊂ {0, 1}^{500} are the binary vectors taking the majority votes.

To sub-sample 100 questions from the 500, we could have selected the questions uniformly at random, but many of these questions are not very discriminative. Thus, we chose a "good" set of queries based on the design recommended by the ρ⋆ of Fiez et al. (2019). If questions were being answered noiselessly in response to a particular z ∈ Z′, then equivalently we have θ = 2z − 1 for this setting. Since ρ⋆ optimizes allocations λ over X′ that reduce the number of required queries as much as possible (according to the information-theoretic bound of Fiez et al. (2019)), if we want to find a single allocation for all z′ ∈ Z′ simultaneously, we can solve the optimization problem

$$\min_{\lambda \in \Delta_{|\mathcal{X}'|-1}} \; \max_{z' \in \mathcal{Z}'} \; \max_{z \neq z'} \; \frac{\| z' - z \|^2_{(\sum_i \lambda_i x_i x_i^\top)^{-1}}}{\big( (z' - z)^\top (2z' - 1) \big)^2}.$$

We then sample elements from X′ according to this optimal λ, without replacement, and add them to X until |X| = 100." }, { "heading": "J JESTER JOKE RECOMMENDATION SETUP", "text": "We consider the Jester jokes dataset of Goldberg et al. (2001), which contains jokes ranging from pun-based jokes to grossly offensive ones. We filter the dataset to only contain users that rated all 100 jokes, resulting in 14116 users. A rating of each joke was provided on a [−10, 10] scale, which was shrunk to [−1, 1]. Denote this set of ratings as Θ̂ = {θ_i : i ∈ [14116], θ_i ∈ [−1, 1]^{100}}, where θ_i encodes the ratings of all 100 jokes by user i. To construct the set of arms Z, we then clustered the ratings of these users into 10 groups to obtain Z = {z_i : i ∈ [10], z_i ∈ {0, 1}^{100}} by minimizing the following metric:

$$\min_{\mathcal{Z} : |\mathcal{Z}| = 10} \; \sum_{i=1}^{14116} \; \max_{z^* \in \{0,1\}^{100}} \langle z^*, \theta_i \rangle - \max_{z \in \mathcal{Z}} \langle z, \theta_i \rangle.$$

To solve for Z, we adapt the k-means algorithm, with the metric above in place of the L2 metric used traditionally (a sketch is given below)." } ]
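As referenced in Appendix J, here is a minimal numpy sketch of one plausible k-means-style alternating scheme for this objective; the function name, the greedy cluster update rule, and the random initialization are our assumptions rather than the authors' exact procedure.

```python
import numpy as np

def fit_recommendation_clusters(Theta, k=10, iters=50, seed=0):
    """Alternating scheme for the Appendix J objective: choose k binary
    vectors Z maximizing sum_i max_{z in Z} <z, theta_i>, which is
    equivalent to minimizing the stated metric."""
    rng = np.random.default_rng(seed)
    n, d = Theta.shape
    Z = (rng.random((k, d)) > 0.5).astype(float)  # random binary init
    for _ in range(iters):
        assign = np.argmax(Theta @ Z.T, axis=1)   # best cluster per user
        for j in range(k):
            members = Theta[assign == j]
            if len(members):
                # recommending joke i helps a cluster iff its mean rating > 0
                Z[j] = (members.mean(axis=0) > 0).astype(float)
    return Z
```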
2020
null
SP:53254b85dc51beed47567630a243e95537bb65ed
[ "The paper builds on the \"bias amplification\" aspect of fairness in machine learning literature i.e. the tendency of models to make predictions that are biased in a way that they amplify societal correlations. The paper claims three major contributions: a metric, discussion about the dependence of bias measurements on randomness, and a normative discussion about the use of bias amplification ideas in different domains. " ]
The conversation around the fairness of machine learning models is growing and evolving. In this work, we focus on the issue of bias amplification: the tendency of models trained from data containing social biases to further amplify these biases. This problem is brought about by the algorithm, on top of the level of bias already present in the data. We make two main contributions regarding its measurement. First, building off of Zhao et al. (2017), we introduce and analyze a new, decoupled metric for measuring bias amplification, BiasAmp→, which possesses a number of attractive properties, including the ability to pinpoint the cause of bias amplification. Second, we thoroughly analyze and discuss the normative implications of this metric. We provide suggestions about its measurement by cautioning against predicting sensitive attributes, encouraging the use of confidence intervals due to fluctuations in the fairness of models across runs, and discussing what bias amplification means in the context of domains where labels either don’t exist at test time or correspond to uncertain future events. Throughout this paper, we work to provide a deeply interrogative look at the technical measurement of bias amplification, guided by our normative ideas of what we want it to encompass.
[]
[ { "authors": [ "Vedika Agarwal", "Rakshith Shetty", "Mario Fritz" ], "title": "Towards causal VQA: Revealing and reducing spurious correlations by invariant and covariant semantic editing", "venue": "Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Solon Barocas", "Moritz Hardt", "Arvind Narayanan" ], "title": "Fairness and Machine Learning", "venue": "fairmlbook.org,", "year": 2019 }, { "authors": [ "Steve Bearman", "Neill Korobov", "Avril Thorne" ], "title": "The fabric of internalized sexism", "venue": "Journal of Integrated Social Sciences", "year": 2009 }, { "authors": [ "Richard Berk", "Hoda Heidari", "Shahin Jabbari", "Michael Kearns", "Aaron Roth" ], "title": "Fairness in criminal justice risk assessments: The state of the art", "venue": "Sociological Methods and Research,", "year": 2017 }, { "authors": [ "Jay Bhattacharya", "William B. Vogt" ], "title": "Do instrumental variables belong in propensity scores? NBER Technical Working Papers 0343", "venue": "National Bureau of Economic Research, Inc.,", "year": 2007 }, { "authors": [ "Tolga Bolukbasi", "Kai-Wei Chang", "James Zou", "Venkatesh Saligrama", "Adam Kalai" ], "title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2016 }, { "authors": [ "Leo Breiman" ], "title": "Statistical modeling: The two cultures", "venue": "Statistical Science,", "year": 2001 }, { "authors": [ "Joy Buolamwini", "Timnit Gebru" ], "title": "Gender shades: Intersectional accuracy disparities in commercial gender classification", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "Aylin Caliskan", "Joanna J. Bryson", "Arvind Narayanan" ], "title": "Semantics derived automatically from language corpora contain human-like biases", "venue": null, "year": 2017 }, { "authors": [ "Mingliang Chen", "Min Wu" ], "title": "Towards threshold invariant fair classification", "venue": "Conference on Uncertainty in Artificial Intelligence (UAI),", "year": 2020 }, { "authors": [ "Kristy Choi", "Aditya Grover", "Trisha Singh", "Rui Shu", "Stefano Ermon" ], "title": "Fair generative modeling via weak supervision", "venue": null, "year": 1910 }, { "authors": [ "Alexandra Chouldechova" ], "title": "Fair prediction with disparate impact: A study of bias in recidivism prediction instrument", "venue": "Big Data,", "year": 2016 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT),", "year": 2019 }, { "authors": [ "Jiayun Dong", "Cynthia Rudin" ], "title": "Variable importance clouds: A way to explore variable importance for the set of good models", "venue": null, "year": 1901 }, { "authors": [ "Cynthia Dwork", "Moritz Hardt", "Toniann Pitassi", "Omer Reingold", "Richard Zemel" ], "title": "Fairness through awareness", "venue": "Proceedings of the 3rd Innovations in Theoretical Computer Science Conference,", "year": 2012 }, { "authors": [ "Aaron Fisher", "Cynthia Rudin", "Francesca Dominici" ], "title": "All models are wrong, but many are useful: Learning a variable’s importance by studying an entire class of prediction models simultaneously", "venue": "Journal of Machine Learning Research,", "year": 2019 }, 
{ "authors": [ "James Foulds", "Rashidul Islam", "Kamrun Naher Keya", "Shimei Pan" ], "title": "An intersectional definition of fairness", "venue": null, "year": 2018 }, { "authors": [ "Ben Green" ], "title": "The false promise of risk assessments: Epistemic reform and the limits of fairness", "venue": "ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT),", "year": 2020 }, { "authors": [ "Nina Grgic-Hlaca", "Muhammad Bilal Zafar", "Krishna P Gummadi", "Adrian Weller" ], "title": "The case for process fairness in learning: Feature selection for fair decision making", "venue": "NeurIPS Symposium on Machine Learning and the Law,", "year": 2016 }, { "authors": [ "Leif Hancox-Li" ], "title": "Robustness in machine learning explanations: Does it matter? Conference on Fairness, Accountability, and Transparency (FAccT), 2020", "venue": null, "year": 2020 }, { "authors": [ "Moritz Hardt", "Eric Price", "Nathan Srebro" ], "title": "Equality of opportunity in supervised learning", "venue": null, "year": 2016 }, { "authors": [ "Sam Havens", "Aneta Stal" ], "title": "Use BERT to fill in the blanks, 2019", "venue": "URL https://github. com/Qordobacode/fitbert", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "European Conference on Computer Vision (ECCV),", "year": 2016 }, { "authors": [ "Lisa Anne Hendricks", "Kaylee Burns", "Kate Saenko", "Trevor Darrell", "Anna Rohrbach" ], "title": "Women also snowboard: Overcoming bias in captioning models", "venue": "European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Niharika Jain", "Alberto Olmo", "Sailik Sengupta", "Lydia Manikonda", "Subbarao Kambhampati" ], "title": "Imperfect imaganation: Implications of gans exacerbating biases on facial data augmentation and snapchat selfie lenses", "venue": null, "year": 2001 }, { "authors": [ "Shengyu Jia", "Tao Meng", "Jieyu Zhao", "Kai-Wei Chang" ], "title": "Mitigating gender bias amplification in distribution by posterior regularization", "venue": "Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2020 }, { "authors": [ "Os Keyes" ], "title": "The misgendering machines: Trans/HCI implications of automatic gender recognition", "venue": "Proceedings of the ACM on Human-Computer Interaction,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2012 }, { "authors": [ "Sicong Kuang", "B.D. Davison" ], "title": "Semantic and context-aware linguistic model for bias detection", "venue": "Proc. of the Natural Language Processing meets Journalism", "year": 2016 }, { "authors": [ "Matt J. Kusner", "Joshua R. Loftus", "Chris Russell", "Ricardo Silva" ], "title": "Counterfactual fairness", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Brian N. 
Larson" ], "title": "Gender as a variable in natural-language processing: Ethical considerations", "venue": "Proceedings of the First ACL Workshop on Ethics in Natural Language Processing,", "year": 2017 }, { "authors": [ "Klas Leino", "Emily Black", "Matt Fredrikson", "Shayak Sen", "Anupam Datta" ], "title": "Feature-wise bias amplification", "venue": "International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Paul Pu Liang", "Irene Mengze Li", "Emily Zheng", "Yao Chong Lim", "Ruslan Salakhutdinov", "LouisPhilippe Morency" ], "title": "Towards debiasing sentence representations", "venue": "Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2020 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "Lubomir Bourdev", "Ross Girshick", "James Hays", "Pietro Perona", "Deva Ramanan", "C. Lawrence Zitnick", "Piotr Dollar" ], "title": "Microsoft COCO: Common objects in context", "venue": "European Conference on Computer Vision (ECCV),", "year": 2014 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of International Conference on Computer Vision (ICCV),", "year": 2015 }, { "authors": [ "Kaiji Lu", "Piotr Mardziel", "Fangjing Wu", "Preetam Amancharla", "Anupam Datta" ], "title": "Gender bias in neural natural language processing", "venue": null, "year": 2019 }, { "authors": [ "Charles T. Marx", "Flavio du Pin Calmon", "Berk Ustun" ], "title": "Predictive multiplicity in classification", "venue": null, "year": 1909 }, { "authors": [ "Chandler May", "Alex Wang", "Shikha Bordia", "Samuel R. Bowman", "Rachel Rudinger" ], "title": "On measuring social biases in sentence encoders", "venue": "Annual Conference of the North American Chapter of the Association for Computational Linguistics (NACCL),", "year": 2019 }, { "authors": [ "Ninareh Mehrabi", "Fred Morstatter", "Nripsuta Saxena", "Kristina Lerman", "Aram Galstyan" ], "title": "A survey on bias and fairness in machine learning", "venue": null, "year": 1908 }, { "authors": [ "Joel A. Middleton", "Marc A. Scott", "Ronli Diakow", "Jennifer L. Hill" ], "title": "Bias amplification and bias unmasking", "venue": "Political Analysis,", "year": 2016 }, { "authors": [ "Alexandra Olteanu", "Carlos Castillo", "Fernando Diaz", "Emre Kiciman" ], "title": "Social data: Biases, methodological pitfalls, and ethical boundaries", "venue": "Frontiers in Big Data,", "year": 2019 }, { "authors": [ "Martin Pawelczyk", "Klaus Broelemann", "Gjergji Kasneci" ], "title": "On counterfactual explanations under predictive multiplicity", "venue": "Conference on Uncertainty in Artificial Intelligence (UAI),", "year": 2020 }, { "authors": [ "Judea Pearl" ], "title": "On a class of bias-amplifying variables that endanger effect estimates", "venue": "Uncertainty in Artificial Intelligence,", "year": 2010 }, { "authors": [ "Yusu Qian", "Urwa Muaz", "Ben Zhang", "Jae Won Hyun" ], "title": "Reducing gender bias in word-level language models with a gender-equalizing loss function", "venue": "ACL-SRW,", "year": 2019 }, { "authors": [ "Morgan Klaus Scheuerman", "Kandrea Wade", "Caitlin Lustig", "Jed R. 
Brubaker" ], "title": "How we’ve taught algorithms to see identity: Constructing race and gender in image databases for facial analysis", "venue": "Proceedings of the ACM on Human-Computer Interaction,", "year": 2020 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": null, "year": 2014 }, { "authors": [ "Krishna Kumar Singh", "Dhruv Mahajan", "Kristen Grauman", "Yong Jae Lee", "Matt Feiszli", "Deepti Ghadiyaram" ], "title": "Don’t judge an object by its context: Learning to overcome contextual bias", "venue": "Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Peter M. Steiner", "Kim Yongnam" ], "title": "The mechanics of omitted variable bias: Bias amplification and cancellation of offsetting biases", "venue": "Journal of causal inference,", "year": 2016 }, { "authors": [ "Pierre Stock", "Moustapha Cisse" ], "title": "ConvNets and ImageNet beyond accuracy: Understanding mistakes and uncovering biases", "venue": "European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Harini Suresh", "John V. Guttag" ], "title": "A framework for understanding unintended consequences of machine learning", "venue": null, "year": 1901 }, { "authors": [ "Ruixiang Tang", "Mengnan Du", "Yuening Li", "Zirui Liu", "Xia Hu" ], "title": "Mitigating gender bias in captioning systems", "venue": null, "year": 2006 }, { "authors": [ "Sahil Verma", "Julia Rubin" ], "title": "Fairness definitions explained", "venue": "ACM/IEEE International Workshop on Software Fairness,", "year": 2018 }, { "authors": [ "Tianlu Wang", "Jieyu Zhao", "Mark Yatskar", "Kai-Wei Chang", "Vicente Ordonez" ], "title": "Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations", "venue": "International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Michael Wick", "Swetasudha Panda", "Jean-Baptiste Tristan" ], "title": "Unlocking fairness: a trade-off revisited", "venue": "Conference on Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Jeffrey M. Wooldridge" ], "title": "Should instrumental variables be used as matching variables", "venue": "Research in Economics,", "year": 2016 }, { "authors": [ "Jieyu Zhao", "Tianlu Wang", "Mark Yatskar", "Vicente Ordonez", "Kai-Wei Chang" ], "title": "Men also like shopping: Reducing gender bias amplification using corpus-level constraints", "venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2017 }, { "authors": [ "Jieyu Zhao", "Tianlu Wang", "Mark Yatskar", "Vicente Ordonez", "Kai-Wei Chang" ], "title": "Gender bias in coreference resolution: Evaluation and debiasing methods", "venue": "North American Chapter of the Association for Computational Linguistics (NAACL),", "year": 2018 }, { "authors": [ "As noted", "done", "by Liang" ], "title": "2020), a large and diverse corpus of sentences is needed", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "The machine learning community is becoming increasingly cognizant of problems surrounding fairness and bias, and correspondingly a plethora of new algorithms and metrics are being proposed (see e.g., Mehrabi et al. (2019) for a review). The gatekeepers checking the systems to be deployed often take the form of fairness evaluation metrics, and it is vital that these be deeply investigated both technically and normatively. In this paper, we endeavor to do this for bias amplification. Bias amplification happens when a model exacerbates biases from the training data at test time. It is the result of the algorithm (Foulds et al., 2018), and unlike other forms of bias, cannot be solely attributed to the dataset.\nTo this end, we propose a new way of measuring bias amplification, BiasAmp→ 1, that builds off a prior metric from Men Also Like Shopping (Zhao et al., 2017), that we will call BiasAmpMALS. Our metric’s technical composition aligns with the real-world qualities we want it to encompass, addressing a number of the previous metric’s shortcomings by being able to: 1) generalize beyond binary attributes, 2) take into account the base rates that people of each attribute appear, and 3) disentangle the directions of amplification. Concretely, consider a visual dataset (Fig. 1) where each image has a label for the task, T , which is painting or not painting, and further is associated with a protected attribute, A, which is woman or man. If the gender of the person biases the prediction of the task, we consider this A→ T bias amplification; if the reverse happens, then T → A. In our normative discussion, we discuss a few topics. We consider whether predicting protected attributes is necessary in the first place; by not doing so, we can trivially remove T → A amplification. We also encourage the use of confidence intervals when using our metric because BiasAmp→, along with other fairness metrics, suffers from the Rashomon Effect (Breiman, 2001), or multiplicity of good models. In deep neural networks, random seeds have relatively little impact on accuracy; however, that is not the case for fairness, which is more brittle to randomness.\n1The arrow in BiasAmp→ is meant to signify the direction that bias amplification is flowing, and not intended to be a claim about causality.\nNotably, a trait of bias amplification is that it is not at odds with accuracy, unlike many other fairness metrics, because the goal of not amplifying biases and matching task-attribute correlations is aligned with that of accurate predictions. For example, imagine a dataset where the positive outcome is associated at a higher rate with group A than with group B. A classifier that achieves 100% accuracy at predicting the positive outcome is not amplifying bias; however, according to metrics like demographic parity, this perfect classifier is still perpetuating bias because it is predicting the positive label at different rates for both groups. While matching training correlations is desired in object detection where systems should perfectly predict the labels, we will explore the nuances of what this means in situations where the validity of the labels, and thus task-attribute correlations themselves, are up for debate. 
For example, in the risk prediction task, which assesses someone's likelihood of recidivism, the label represents whether someone with a given set of input features ended up recidivating, but it is not a steadfast indicator of what another person with the same input features will do. Here, we would not want to replicate the task-attribute correlations at test time, and it is important to keep this in mind when deciding which fairness metrics to apply. The notion of amplification also allows us to encapsulate the idea that systemic harms and biases can be more harmful than errors made without such a history (Bearman et al., 2009); for example, in images, overclassifying women as cooking carries more of a negative connotation than overclassifying men as cooking.2 Distinguishing which errors are more harmful than others is a pattern that can often be lifted from the training data.

To ground our work, we first distinguish what bias amplification captures that standard fairness metrics cannot, and then distinguish BiasAmp→ from BiasAmpMALS. Our key contributions are: 1) proposing a new way to measure bias amplification, addressing multiple shortcomings of prior work and allowing us to better diagnose where a model goes wrong, and 2) providing a technical analysis and normative discussion around the use of this measure in diverse settings, encouraging thoughtfulness with each application.

2We use the terms man and woman to refer to binarized socially-perceived gender expression, recognizing these labels are not inclusive, and in vision datasets are often assigned by annotators rather than self-disclosed." }, { "heading": "2 RELATED WORK", "text": "Fairness Measurements. Fairness is nebulous and context-dependent, and approaches to quantifying it (Verma & Rubin, 2018; Buolamwini & Gebru, 2018) include equalized odds (Hardt et al., 2016), equal opportunity (Hardt et al., 2016), demographic parity (Dwork et al., 2012; Kusner et al., 2017), fairness through awareness (Dwork et al., 2012; Kusner et al., 2017), fairness through unawareness (Grgic-Hlaca et al., 2016; Kusner et al., 2017), and treatment equality (Berk et al., 2017). We examine bias amplification, which is a type of group fairness where correlations are amplified.

Bias Amplification. Bias amplification has been measured by looking at binary classifications (Leino et al., 2019), GANs (Jain et al., 2020; Choi et al., 2020), and correlations (Zhao et al., 2017). Wang et al. (2019) measures this using dataset leakage and model leakage. The difference between these values is the level of bias amplification, but this is not a fair comparison because the attribute classifier gets discrete labels for the former but continuous model outputs for the latter. Jia et al. (2020) looks at output distributions like we do, but with a different formulation.

The Word Embedding Association Test (WEAT) (Caliskan et al., 2017) measures bias amplification in de-contextualized word embeddings, looking at correlations but not causations (Bolukbasi et al., 2016). However, with newer models like BERT and ELMo that have contextualized embeddings, WEAT does not work (May et al., 2019), so new techniques have been proposed that incorporate context (Lu et al., 2019; Kuang & Davison, 2016). We use these models to measure the directional aspect of these amplifications, as well as to situate them in the broader world of bias amplification.

Directionality.
Directionality of amplification has been observed in computer vision (Stock & Cisse, 2018) and language (Qian et al., 2019). It has also been studied with causality (Bhattacharya & Vogt, 2007; Wooldridge, 2016; Pearl, 2010; Middleton et al., 2016; Steiner & Yongnam, 2016). We take a deeper and more empirical approach.\nPredictive Multiplicity. The Rashomon Effect (Breiman, 2001), or multiplicity of good models, has been studied in various contexts. The variables investigated that differ across good models include explanations (Hancox-Li, 2020), individual treatments (Marx et al., 2020; Pawelczyk et al., 2020), and variable importance (Fisher et al., 2019; Dong & Rudin, 2019). We build on these discoveries and investigate how fairness measurements also differ between equally “good” models." }, { "heading": "3 EXISTING FAIRNESS METRICS", "text": "In this section we present existing fairness metrics and show how bias amplification can distinguish errors resulting from under- and overclassification in a way that others cannot, followed by a discussion of the shortcomings of BiasAmpMALS." }, { "heading": "3.1 OVERVIEW OF EXISTING FAIRNESS METRICS", "text": "We begin with a review of existing fairness metrics in a concrete classification setting. We consider again the example from Fig. 1, where on this dataset women (a0) are correlated with painting (t0), and men (a1) with not painting (t1), such that there are N images each of (a0, t0) and (a1, t1) but only N/2 images of (a0, t1) and (a1, t0). A classifier trained to recognize painting on this data is likely to learn this association and over-predict painting on images of women and under-predict painting on images of men; however, algorithmic interventions may counteract this effect and in fact result in the opposite behavior.\nFig. 2 shows the behavior of fairness metrics under varying amounts of learned amplification of the correlation. The four fairness metrics are: False Positive Rate (FPR) and True Positive Rate (TPR) difference: the difference in false positive (true positive) rate of predicting the label t0 on images of a1 versus on images of a0 (Chouldechova, 2016; Hardt et al., 2016), accuracy difference: difference between the overall task prediction accuracy on images of a1 versus on images of a0 (Berk et al., 2017), and mean accuracy across subgroups: mean task prediction accuracy across the four image subgroups ((a0, t0), (a1, t0), (a0, t1), and (a1, t1)) (Buolamwini & Gebru, 2018). However, these metrics are not designed to account for the training correlations, and are unable to distinguish between cases of increased or decreased learned correlations, as seen in Fig. 2.\nZhao et al. (2017) introduced an alternative to these that explicitly captures the notion of bias amplification. Concretely, they consider P (A = a|T = t) of the training data as the fraction of times a protected attribute a ∈ A appears on images corresponding to task t ∈ T . They then compare this with the test-time predictions made by the model, P (Â = a|T̂ = t), or the number of times attribute a is predicted on images where the task is predicted as t, which allows them to measure bias amplification in the absence of any additional annotations on the hold-out set. Note that in this formulation they are assuming that the model is making predictions for both the task and the attribute. 
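As a minimal illustration of the quantities just described, the following sketch computes the empirical conditional rates being compared; the helper name is ours, and the arrays are assumed to be discrete label vectors.

```python
import numpy as np

def cond_rate(A, T, a, t):
    """Empirical P(A = a | T = t) from paired attribute/task label arrays."""
    mask = (T == t)
    return (A[mask] == a).mean() if mask.any() else 0.0

# Bias amplification compares the training correlation cond_rate(A, T, a, t)
# against the test-time prediction correlation cond_rate(A_hat, T_hat, a, t),
# as formalized in Eq. (1) below.
```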
The full bias amplification metric (reformulated in our terms) is computed as

$$\text{BiasAmp}_{\text{MALS}} = \frac{1}{|\mathcal{T}|} \sum_{t=1}^{|\mathcal{T}|} \sum_{a=1}^{|\mathcal{A}|} \underbrace{\mathbf{1}\Big( P(A_a = 1 \mid T_t = 1) > \tfrac{1}{|\mathcal{A}|} \Big)}_{y(t,a)} \, \underbrace{\Big( P(\hat{A}_a = 1 \mid \hat{T}_t = 1) - P(A_a = 1 \mid T_t = 1) \Big)}_{\Delta(t,a)} \quad (1)$$

Fig. 2 empirically demonstrates that this metric is able to capture the level of increasing bias amplification. (For consistency in comparison with prior metrics, we assume the model always correctly predicts the protected attribute A.) However, as we discuss in the next section, there are some properties of bias amplification that this metric is not able to capture: for example, it does not distinguish between errors in predicting the protected attribute versus errors in predicting the task. Thus we introduce a new metric (last graph of Fig. 2), which maintains the desirable properties from Zhao et al. (2017) while including a number of innovations." }, { "heading": "3.2 SHORTCOMINGS OF BIASAMPMALS", "text": "Despite the advantages we just documented for BiasAmpMALS (Eqn. 1) and its ability to distinguish under- and overclassifications of training correlations, this metric also suffers from a number of shortcomings. To ground our discussion, we work directly with the model outputs released by Zhao et al. (2017) from their Conditional Random Field (CRF) model on COCO (Lin et al., 2014), which has predictions for the gender and the objects detected in each image." }, { "heading": "3.2.1 NON-BINARY ATTRIBUTES", "text": "The first shortcoming is that the metric assumes that the protected attributes are binary, limiting its use: the indicator variable y(t, a) implicitly chooses only one of the attributes a ∈ A to be associated with every task t. Consider a task t_0 ∈ T such that a_0 ∈ A is associated with it, but none of the other a_i ∈ A are, where i ≠ 0. In this scenario, ∆(t_0, a_i) is only taken into account when there is exactly one other attribute a_i such that a_i = ¬a, since ∆(t, a) = −∆(t, ¬a). A simple addition of −∆ for all a_i's when y is 0 ensures that when there are more than two groups, their bias amplification is also counted." }, { "heading": "3.2.2 BASE RATES", "text": "The second shortcoming of BiasAmpMALS is the fact that the metric does not take into account the base rates of each attribute. Concretely, when determining in y(t, a) of Eqn. 1 whether the attribute a is correlated with the task t, P(A = a | T = t) is compared to 1/|A|. However, this assumes that all a's within A are evenly distributed, which may not be the case. For example, in COCO there are about 2.5x as many men as women, so it would appear that most objects positively correlate with men simply by nature of there being an overrepresentation of men. Consider the object oven; BiasAmpMALS calculates P(A = man | T = oven) = 0.56 > 1/2 and thus considers this object to be correlated with men rather than women. However, computing P(A = man, T = oven) = 0.0103 < 0.0129 = P(A = man) P(T = oven) reveals that men are in fact not correlated with oven, and the seeming overrepresentation comes from the fact that men are overrepresented in the dataset more generally. Not surprisingly, the model trained on this data learns to associate women with ovens and underpredicts men with ovens at test time, i.e., P(Â = man | T̂ = oven) − P(A = man | T = oven) = −0.10. BiasAmpMALS erroneously counts this as inverse bias amplification." }, { "heading": "3.2.3 ENTANGLING DIRECTIONS", "text": "Another shortcoming we observe is the inability to distinguish between different types of bias amplification. Zhao et al.
(2017) discovers that "Technology oriented categories initially biased toward men such as keyboard ... have each increased their bias toward males by over 0.100." Concretely, from Eqn. 1, P(A = man | T = keyboard) = .70 and P(Â = man | T̂ = keyboard) = .83, demonstrating an amplification of bias. However, the direction or cause of the bias amplification remains unclear: is the presence of a man in the image increasing the probability of predicting a keyboard, or vice versa? Looking more closely at the model's disentangled predictions, we see that:

$$P(\hat{T} = \text{keyboard} \mid A = \text{man}) = 0.0020 < 0.0032 = P(T = \text{keyboard} \mid A = \text{man}) \quad (2)$$
$$P(\hat{A} = \text{man} \mid T = \text{keyboard}) = 0.78 > 0.70 = P(A = \text{man} \mid T = \text{keyboard}) \quad (3)$$

indicating that keyboards are under-predicted on images with men, yet men are over-predicted on images with keyboards. Thus the root cause of this amplification appears to lie in the gender predictor rather than the object detector. Such disentanglement allows us to properly focus algorithmic intervention efforts. This also highlights the need to consider the ground-truth labels on the hold-out set when measuring bias amplification, in addition to the predictions (since when considering only the predictions, it is impossible to decouple the different sources of bias)." }, { "heading": "3.3 THRESHOLD", "text": "We also fully replicate the original experiment from Zhao et al. (2017) using BiasAmpMALS on their model predictions and measure .040. However, we observe that "man" is being predicted at a higher rate (75.6%) than is actually present (71.2%). With this insight, we tune the decision threshold on the validation set such that the gender predictor is well-calibrated, predicting the same percentage of images to have men as the dataset actually has. When we calculate BiasAmpMALS on these newly thresholded predictions for the test set, we see bias amplification drop from 0.040 to 0.001 as a result of this threshold change alone, outperforming even the solution proposed in Zhao et al. (2017) of corpus-level constraints, which achieved a drop to only 0.021. Fairness can be quite sensitive to the threshold chosen (Chen & Wu, 2020), so careful selection should be done in picking the threshold, rather than using the default of .5. In Fig. 3 we show how the amount of bias amplification, as measured by BiasAmpMALS and BiasAmpT→A, changes as we vary the threshold, i.e., the proportion of people classified as men. We can see that when the threshold is chosen to be the one well-calibrated on the validation set rather than the default threshold, bias amplification is measured to be closer to zero for both metrics. From here on out, when a threshold is needed, we pick it to be well-calibrated on the validation set. Although we do not take this approach, one could also imagine integrating bias amplification across the proportion in order to obtain a threshold-agnostic measure of bias amplification, similar to what is proposed by Chen & Wu (2020). We do not do this in our experiments because, at deployment time, it is often the case that discrete predictions are required." }, { "heading": "4 BIASAMP→", "text": "Now we present our metric, BiasAmp→, which retains the desirable properties of BiasAmpMALS while addressing the shortcomings noted in the previous section. To account for the need to disentangle the two possible directions of bias amplification (Sec.
Although in many of the examples we have looked at and will look at, A = {woman, man}, this formulation allows any attribute set to be defined, including intersectional identities. This is achieved by having A encompass the cross-product of possible attributes, for example A = {Black woman, Black man, white woman, white man}. We introduce a scenario for validating the decoupling aspect of our metric, which simultaneously serves as inspiration for an intervention approach to mitigating bias amplification. We use a baseline amplification-removal idea of applying segmentation masks (noisy or full) over the people in an image to mitigate bias stemming from human attributes (Wang et al., 2019). We train a VGG16 (Simonyan & Zisserman, 2014) model pretrained on ImageNet (Russakovsky et al., 2015) on the COCO classification task to predict objects and gender, with a binary cross-entropy loss over all outputs, and measure BiasAmpT→A and BiasAmpA→T; we report 95% confidence intervals over 5 runs of each scenario. In Fig. 4 we see, as expected, that as less of the person is visible, A→T decreases because there are fewer visual cues from human attributes to bias the task prediction. On the other hand, T→A increases because the model must lean into task biases to predict the person's attribute. However, we can also see from the overlapping confidence intervals that this technique of bias amplification mitigation does not appear to be particularly robust; we continue a discussion of this phenomenon in Sec. 5.2. Further mitigation techniques are outside of our scope, but we point to works like Singh et al. (2020); Wang et al. (2019); Agarwal et al. (2020)." }, { "heading": "5 ANALYSIS AND DISCUSSION", "text": "We now discuss some of the normative issues surrounding bias amplification, starting in Sec. 5.1 with the existence of T→A bias amplification, which implies the prediction of sensitive attributes; continuing in Sec. 5.2 with the need for confidence intervals to make robust conclusions; and concluding in Sec. 5.3 with scenarios in which the original formulation of bias amplification as a desire to match base correlations may not be what is actually wanted."
}, { "heading": "5.1 T→ A BIAS AMPLIFICATION", "text": "If we think more deeply about these bias amplifications, we might come to a normative conclusion that T → A, which measures sensitive attribute predictions conditioned on the tasks, should not exist in the first place. There are very few situations in which predicting sensitive attributes makes sense (Scheuerman et al., 2020; Larson, 2017), so we should carefully consider if this is strictly necessary for target applications. For the image domains discussed, by simply removing the notion of predicting gender, we trivially remove all T → A bias amplification. In a similar vein, there has been great work done on reducing gender bias in image captions (Hendricks et al., 2018; Tang et al., 2020), but it is often focused on targeting T → A amplification rather than A → T . When\ndisentangling the directions of bias, we find that the Equalizer model (Hendricks et al., 2018), which was trained with the intention of increasing the quality of gender-specific words in captions, inadvertently increases A → T bias amplification for certain tasks. In Fig. 5 we see examples where the content of the Equalizer’s caption exhibits bias coming from the person’s attribute. Even though the Equalizer model reduces T → A bias amplification in these images, it inadvertently increases A→ T . It is important to disentangle the two directions of bias and notice that while one direction is becoming more fair, another is actually becoming more biased. Although this may not always be the case, depending on the downstream application, perhaps this is a setting in which we could consider simply replacing all instances of gendered words like “man” and “woman” in the captions with “person” to trivially eliminate T → A, and focus on A → T bias amplification. Specifically when gender is the sensitive attribute, Keyes (2018) thoroughly explains how we should carefully think about why we might implement Automatic Gender Recognition (AGR), and avoid doing so.\nOn the other hand, sensitive attribute labels, ideally from self-disclosure, can be very useful. For example, these labels are necessary to measureA→ T amplification, which is important to discover, as we do not want our prediction task to be biased for or against people with certain attributes." }, { "heading": "5.2 LACK OF CONSISTENCY IN BIAS MEASUREMENT", "text": "Evaluation metrics, ours’ included, are specific to each model on each dataset. Under common loss functions such as Cross Entropy Loss, some evaluation metrics, like average precision, are not very sensitive to random seed. However, we noticed that bias amplification, along with other fairness metrics like FPR difference, often fluctuates greatly across runs. Because the loss functions that machine learning practitioners tend to default to using are proxies for accuracy, it makes sense that the various local minima, while equal in accuracy, are not necessarily equal in terms of other measurements. The phenomena of differences between equally predictive models has been termed the Rashomon Effect (Breiman, 2001), or predictive multiplicity (Marx et al., 2020).\nThus, like previous work (Fisher et al., 2019), we urge transparency, and advocate for the inclusion of confidence intervals. To illustrate the need for this, we look at the facial image domain of CelebA (Liu et al., 2015), defining the two tasks of classifying “big nose” and “young” as our T , and treating the gender labels as our attribute, A. 
To illustrate the need for this, we look at the facial image domain of CelebA (Liu et al., 2015), defining the two tasks of classifying “big nose” and “young” as our T, and treating the gender labels as our attribute, A. Note that we are not going to classify gender, for reasons raised in Sec. 5.1, so we only measure A → T amplification. For these tasks, women are more correlated with not having a big nose and with being young, and men with having a big nose and not being young. We examine two different scenarios: one where our independent variable is the model architecture, and another where it is the ratio between the number of images of the majority groups (e.g., young women and not-young men) and minority groups (e.g., not-young women and young men). By looking at the confidence intervals, we can determine which condition allows us to draw reliable conclusions about the impact of the variable on bias amplification.\nFor model architecture, we train 3 models pretrained on ImageNet (Russakovsky et al., 2015) across 5 runs: ResNet18 (He et al., 2016), AlexNet (Krizhevsky et al., 2012), and VGG16 (Simonyan & Zisserman, 2014). Training details are in Appendix A.3. In Fig. 6 we see from the confidence intervals that while model architecture does not produce differences in bias amplification large enough to conclude anything about the relative fairness of these models, the across-ratio differences are large enough to draw conclusions about the impact of this ratio on bias amplification. We encourage researchers to include confidence intervals so that findings are more robust to random fluctuations." }, { "heading": "5.3 CONSIDERATIONS FOR UNCERTAIN PREDICTION PROBLEMS", "text": "A property of bias amplification is that it does not conflict with having perfect accuracy. However, in turn, a model with perfect accuracy would exactly replicate the correlations present in the training data. In this section we discuss two cases that challenge this desire to match training correlations: in the first, there are no ground-truth labels from which to lift these correlations, and in the second, our goal is actually not to match the training correlations.\nNo Ground Truth. When we don't have labels from which to derive training correlation rates, bias amplification becomes harder to measure. Consider the fill-in-the-blank NLP task, where there is no ground truth for how to fill in a sentence. Given “The [blank] went on a walk”, a variety of words could be equally suitable. Therefore, to measure bias amplification in this setting, we can manually set the base correlations, e.g., P(T = t|A = a) and P(A = a|T = t). To see the effect that adjusting base correlations has, we test the bias amplification between occupations and gender pronouns, conditioning on the pronoun and filling in the occupation, and vice versa. In Table 1, we report our measured bias amplification results on the FitBERT (Fill in the blanks BERT) (Havens & Stal, 2019; Devlin et al., 2019) model using various sources as the base correlation of bias from which amplification is measured. The same outputs from the model are used for each set of pronouns, and the independent variable we manipulate is the source of the base correlations: 1) equality amongst the pronouns (using 2 and 3 pronouns), 2) co-occurrence counts from English Wikipedia (one of the datasets BERT was trained on), and 3) WinoBias (Zhao et al., 2018) with additional information supplemented from the 2016 U.S. Labor Force Statistics data. Details are in Appendix A.4. It is interesting to note that relative to the U.S. Labor Force data on these particular occupations, FitBERT actually exhibits no bias amplification.
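To make the role of the base correlation concrete, here is a minimal encoding of the hairdresser example discussed next; the two probabilities are the ones reported below, and the mapping onto Δ(t, a) of Eqn. 6 is purely illustrative:

```python
# Chosen base correlation P(A = pronoun | T = hairdresser), here taken from
# the 2016 U.S. Labor Force statistics, vs. FitBERT's fill-in probability.
base  = {"she": 0.92, "he": 0.08}
model = {"she": 0.80, "he": 0.20}

# "she" is positively correlated with this occupation under the base source,
# so y(t, a) = 1 and the T->A summand is Delta(t, a) itself.
delta = model["she"] - base["she"]
print(delta)  # -0.12 < 0: relative to this source, bias is reduced, not amplified
```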
For the occupation of hairdresser, the Labor statistics are biased at 92% women while FitBERT is at 80%, reflecting in fact a reduction in bias. This demonstrates the importance of setting appropriate base correlations, because picking one that already exhibits strong amounts of bias will only flag models that amplify it even further. In the next section we discuss another manifestation of this phenomenon, where the training correlation itself would be misleading to compare to, because of the strong amount of bias it contains.\nFuture Outcome Prediction. Next, we examine the risk prediction setting, where matching the base correlations may not be our desired goal. The labels here do not represent an objective ground truth because they: 1) suffer from problems like historical and selection bias (Suresh & Guttag, 2019; Olteanu et al., 2019; Green, 2020), and 2) will be used for the prediction of a future event to which no one knows the answer. We put aside the significant problems stemming from the first point for the current discussion, and focus on the latter. In this section, when using the word “predict” we revert from the machine learning meaning and adopt the colloquial sense of the word, as in the forecast of a future event. What we had called object prediction, we will now call object labeling.\nConsider bias amplification in a domain like risk prediction relative to the previous domains looked at, such as object detection. The difference is not tabular versus vision, but rather prediction versus labeling. The notion of “ground truth” doesn't quite exist the way we might think about it, because given the input features that define a particular person, one could imagine an individual with these features who does recidivate, and one who does not (Barocas et al., 2019). The training label is just how someone with these input features once acted, but is not necessarily a rigid indicator of how someone else with these features will act. On the other hand, in object labeling the labels are very much the ground truth, and thus bias amplification is a reasonable metric by which to gauge fairness. We do not know this for risk prediction, and thus matching the training correlations should not be our intended goal (Wick et al., 2019). In Fig. 7 we show these metrics as the decision threshold varies. From the continued existence of a difference in FPRs between the two groups, we can see that there is no threshold that could be picked at which this disparity does not exist (except in the trivial cases of a threshold of 0 or 10), even though there do exist thresholds at which bias is not being amplified.\nFor each application, different metrics of fairness are more or less applicable, and BiasAmp→ is no different. It is crucial that we think carefully when deciding how to evaluate a model. In previous applications we imagined the direction of systemic bias to be captured by the training data, and thus we lifted base correlations of bias from there. However, one could imagine a manual intervention similar to what was done for NLP on other tasks like risk prediction, where a domain expert verifies the direction of bias determined by the training set, and even sets the base correlations.
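The threshold sweep behind Fig. 7 can be reproduced schematically as follows; the decile scores and outcomes are synthetic placeholders, so only the shape of the computation is meaningful:

```python
import numpy as np

def fpr(scores, labels, thresh):
    """False positive rate when score >= thresh is classified as high risk."""
    pred, neg = scores >= thresh, labels == 0
    return (pred & neg).sum() / max(neg.sum(), 1)

rng = np.random.default_rng(0)
# synthetic decile risk scores (1-10) and binary outcomes for two groups
s1, y1 = rng.integers(1, 11, 1000), rng.integers(0, 2, 1000)
s2, y2 = rng.integers(1, 11, 1000), rng.integers(0, 2, 1000)

for t in range(1, 11):
    print(f"threshold {t}: FPR gap = {abs(fpr(s1, y1, t) - fpr(s2, y2, t)):.3f}")
```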
" }, { "heading": "6 CONCLUSION", "text": "In this paper, we take a deep dive into the measurement of bias amplification. We introduce a new metric, BiasAmp→, whose directional components provide more actionable insights when diagnosing models. Additionally, we discuss normative considerations, such as thinking carefully about why we might be performing sensitive attribute prediction, incorporating confidence intervals as the norm when reporting fairness metrics, and exercising care when determining which fairness metrics are applicable and what assumptions they encode." }, { "heading": "A APPENDIX", "text": "A.1 ADDITIONAL METRIC DETAILS\nWe provide additional details here about BiasAmp→, as defined in Sec. 4.\nIn practice the indicator variable y(t, a) is computed over the statistics of the training set, whereas everything else is computed over the test set. The reason is that the direction of bias is determined by the existing biases in the training set. Additionally, when integrating across all thresholds, we sparsely sample from the output probabilities to compute an approximation.\nComparisons of the values outputted by BiasAmp→ should only be done relatively: within one direction at a time (either A → T or T → A) and on one dataset. In particular, comparing A → T to T → A directly is not a signal of which direction of amplification is stronger.\nA.2 WALKING THROUGH THE EQUATIONS FOR THE FIGURE 1 SCENARIO\nIn this section we concretely write out the equations for BiasAmpMALS and BiasAmp→ for the scenario shown in Fig. 1, to better clarify what each metric captures. As a reminder, in this scenario A = {a_0, a_1} and T = {t_0, t_1}, where a_0 is correlated with t_0, and a_1 with t_1.\nBiasAmp_{MALS} = \frac{1}{2} \sum_{t=0}^{1} \sum_{a=0}^{1} \mathbb{1}\left( P(A_a = 1 \mid T_t = 1) > \frac{1}{2} \right) \left( P(\hat{A}_a = 1 \mid \hat{T}_t = 1) − P(A_a = 1 \mid T_t = 1) \right) \quad (7)\nThe indicator is 1 for the correlated pairs (t_0, a_0) and (t_1, a_1) and 0 for the cross pairs, so this reduces to\nBiasAmp_{MALS} = \frac{1}{2} \left[ \left( P(\hat{A}_0 = 1 \mid \hat{T}_0 = 1) − P(A_0 = 1 \mid T_0 = 1) \right) + \left( P(\hat{A}_1 = 1 \mid \hat{T}_1 = 1) − P(A_1 = 1 \mid T_1 = 1) \right) \right] \quad (8)\nFor BiasAmp→, the equation simplifies in the case of discrete predictions as follows:\nBiasAmp_{A→T} = \frac{1}{4} \left[ \left( P(\hat{T}_0 = 1 \mid A_0 = 1) − P(T_0 = 1 \mid A_0 = 1) \right) − \left( P(\hat{T}_0 = 1 \mid A_1 = 1) − P(T_0 = 1 \mid A_1 = 1) \right) − \left( P(\hat{T}_1 = 1 \mid A_0 = 1) − P(T_1 = 1 \mid A_0 = 1) \right) + \left( P(\hat{T}_1 = 1 \mid A_1 = 1) − P(T_1 = 1 \mid A_1 = 1) \right) \right] \quad (9)\nBiasAmp_{T→A} = \frac{1}{4} \left[ \left( P(\hat{A}_0 = 1 \mid T_0 = 1) − P(A_0 = 1 \mid T_0 = 1) \right) − \left( P(\hat{A}_0 = 1 \mid T_1 = 1) − P(A_0 = 1 \mid T_1 = 1) \right) − \left( P(\hat{A}_1 = 1 \mid T_0 = 1) − P(A_1 = 1 \mid T_0 = 1) \right) + \left( P(\hat{A}_1 = 1 \mid T_1 = 1) − P(A_1 = 1 \mid T_1 = 1) \right) \right] \quad (10)
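As a quick numeric check of these expressions, the following sketch evaluates Eqns. 7-8 on invented conditionals for the Fig. 1 scenario:

```python
import numpy as np

# Invented conditionals P(A_a = 1 | T_t = 1): rows = tasks t_0, t_1,
# columns = attributes a_0, a_1.
p_true = np.array([[0.8, 0.2],
                   [0.2, 0.8]])
p_pred = np.array([[0.9, 0.1],   # P(A_hat_a = 1 | T_hat_t = 1)
                   [0.1, 0.9]])

# Only the correlated pairs (t_0, a_0) and (t_1, a_1) survive the indicator.
bias_amp_mals = 0.5 * ((p_pred[0, 0] - p_true[0, 0]) +
                       (p_pred[1, 1] - p_true[1, 1]))
print(bias_amp_mals)  # 0.1: both correlations are overclassified
```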
A.3 DETAILS AND EXPERIMENT FROM LACK OF CONSISTENCY IN BIAS\nFor the models we trained in Sec. 5.2, we performed hyperparameter tuning on the validation set, and ended up using the following learning rates: .0001 for ResNet18, .0003 for AlexNet, and .00014 for VGG16. All models were trained with stochastic gradient descent, a batch size of 64, and 10 epochs.\nA.4 DETAILS ON MEASURING BIAS AMPLIFICATION IN FITBERT\nHere we provide additional details behind the numbers presented in Tbl. 1 in Sec. 5.3.\nAs noted, and done, by Liang et al. (2020), a large and diverse corpus of sentences is needed to sample from the large variety of contexts. However, that is out of scope for this work, where we simply use 2 sentences to test: “[he/she/(they)] is a(n) [occupation]” or “[he/she/(they)] was a(n) [occupation]”.\nWhen calculating the amount of bias amplification when the base rates are equal, we picked the direction of bias based on that provided by the WinoBias dataset. In practice, this can be thought of as setting the base correlation P(A = a|T = t) for a men-biased job like “cook” to be .5 + ε for “he” and .5 − ε for “she” when there are two pronouns, and .33 + ε for “he” and .33 − ε for “she” and “they” when there are three, where in practice we used ε = 1e−7. This ensures that the indicator variable y(t, a) from Eqn. 5 is set in the direction of the gender bias, while the magnitudes of ∆(t, a) from Eqn. 6 are not affected to any significant degree.\nTo generate a rough approximation of what training correlation rates could look like in this domain, we look to one of the datasets that BERT was trained on, the Wikipedia dataset. We do so by simply counting the co-occurrences of all the occupations with gendered words such as “man”, “he”, “him”, etc. There are flaws with this approach: in a sentence like “She went to see the doctor.”, the pronoun does not refer to the gender of the person with the occupation. However, we leave a more accurate measurement to future work, as our aim in showing these results is demonstrative, illustrating the manipulation of the base correlation rate, rather than rigorously measuring the training correlation rate.\nWe use 32 rather than 40 occupations from WinoBias (Zhao et al., 2018): when we went to the 2016 U.S. Labor Force Statistics data (U.S. Bureau of Labor Statistics, 2016) to collect the actual counts for each gender and occupation, in order to calculate P(T = t|A = a) (WinoBias only provides P(A = a|T = t)), we found 8 occupations too ambiguous to resolve. For example, for “attendant”, many different attendant jobs are listed, such as “flight attendant” and “parking lot attendant”, so we opted to drop these jobs from the list of 40. The 8 occupations from the original WinoBias dataset that we ignored are: supervisor, manager, mechanician, CEO, teacher, assistant, clerk, and attendant. The first four are biased towards men and the latter four towards women, so dropping them did not skew the distribution of jobs biased towards each gender." } ]
2020
null
SP:016ad5eea3a81bb32ae907535e694d8171f996a5
[ "This paper proposes an analysis for an approximate dynamic of SGD which captures the heavy-tailed noise distributions seen practically at local minima. The authors derive this new dynamic (which they call Power-law dynamic) using basic principles and the assumption that the noise variance depends on the state. The dynamics becomes a modified Langevin equation. They prove also that the expected time to escape a barrier is polynomial in the parameters, as well as a generalization error bound." ]
Stochastic gradient descent (SGD) and its variants are mainstream methods to train deep neural networks. Since neural networks are non-convex, more and more works study the dynamic behavior of SGD and its impact on generalization, especially the escaping efficiency from local minima. However, these works make the over-simplified assumption that the distribution of gradient noise is state-independent, although it is in fact state-dependent. In this work, we propose a novel power-law dynamic with state-dependent diffusion to approximate the dynamic of SGD. Then, we prove that the stationary distribution of power-law dynamic is heavy-tailed, which matches the existing empirical observations. Next, we study the escaping efficiency from local minima of power-law dynamic and prove that the mean escaping time is polynomial in the barrier height of the basin, much faster than the exponential order of previous dynamics. This indicates that SGD can escape deep sharp minima efficiently and tends to stop at flat minima that have lower generalization error. Finally, we conduct experiments to compare SGD and power-law dynamic, and the results verify our theoretical findings. The code for the proposed PCVAE is available at: https://github.com/xguo7/PCVAE.
[]
[ { "authors": [ "Balduzzi", "David", "Frean", "Marcus", "Leary", "Lennox", "JP Lewis", "Ma", "Kurt Wan-Duo", "McWilliams", "Brian" ], "title": "The shattered gradients problem: If resnets are the answer, then what is the question? arXiv preprint arXiv:1702.08591", "venue": null, "year": 2017 }, { "authors": [ "Bottou", "Léon", "Bousquet", "Olivier" ], "title": "The tradeoffs of large scale learning. Pages 161–168 of: Advances in neural information processing systems", "venue": null, "year": 2008 }, { "authors": [ "Chaudhari", "Pratik", "Soatto", "Stefano" ], "title": "Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks", "venue": "Pages 1–10", "year": 2018 }, { "authors": [ "Choromanska", "Anna", "Henaff", "Mikael", "Mathieu", "Michael", "Arous", "Gérard Ben", "LeCun", "Yann" ], "title": "The loss surfaces of multilayer networks. Pages 192–204 of: Artificial intelligence and statistics", "venue": null, "year": 2015 }, { "authors": [ "Draxler", "Felix", "Veschgini", "Kambis", "Salmhofer", "Manfred", "Hamprecht", "Fred" ], "title": "Essentially No Barriers in Neural Network Energy Landscape", "venue": "Pages 1309–1318 of: International Conference on Machine Learning", "year": 2018 }, { "authors": [ "Guo", "Ran", "Du", "Jiulin" ], "title": "Are power-law distributions an equilibrium distribution or a stationary nonequilibrium distribution", "venue": "Physica A: Statistical Mechanics and its Applications,", "year": 2014 }, { "authors": [ "Gurbuzbalaban", "Mert", "Simsekli", "Umut", "Zhu", "Lingjiong" ], "title": "The Heavy-Tail Phenomenon in SGD. arXiv preprint arXiv:2006.04740", "venue": null, "year": 2020 }, { "authors": [ "HaoChen", "Jeff Z", "Wei", "Colin", "Lee", "Jason D", "Ma", "Tengyu" ], "title": "Shape Matters: Understanding the Implicit Bias of the Noise Covariance", "venue": "arXiv preprint arXiv:2006.08680", "year": 2020 }, { "authors": [ "He", "Di", "Xia", "Yingce", "Qin", "Tao", "Wang", "Liwei", "Yu", "Nenghai", "Liu", "Tie-Yan", "Ma", "Wei-Ying" ], "title": "2016a. Dual learning for machine", "venue": null, "year": 2016 }, { "authors": [ "He", "Fengxiang", "Liu", "Tongliang", "Tao", "Dacheng" ], "title": "2019a. Control Batch Size and Learning Rate to Generalize Well: Theoretical and Empirical Evidence", "venue": "Pages 1141–1150", "year": 2019 }, { "authors": [ "He", "Haowei", "Huang", "Gao", "Yuan", "Yang" ], "title": "Asymmetric Valleys: Beyond Sharp and Flat Local Minima", "venue": "Pages 2549–2560", "year": 2019 }, { "authors": [ "He", "Kaiming", "Zhang", "Xiangyu", "Ren", "Shaoqing", "Sun", "Jian" ], "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "venue": "Pages 1026–1034 of: Proceedings of the IEEE international conference on computer vision", "year": 2015 }, { "authors": [ "He", "Kaiming", "Zhang", "Xiangyu", "Ren", "Shaoqing", "Sun", "Jian" ], "title": "Deep residual learning for image recognition. Pages 770–778", "venue": "of: Proceedings of the IEEE conference on computer vision and pattern recognition", "year": 2016 }, { "authors": [ "He", "Li", "Meng", "Qi", "Chen", "Wei", "Ma", "Zhi-Ming", "Liu", "Tie-Yan" ], "title": "Differential equations for modeling asynchronous algorithms. Pages 2220–2226", "venue": "of: Proceedings of the 27th International Joint Conference on Artificial Intelligence", "year": 2018 }, { "authors": [ "Hodgkinson", "Liam", "Mahoney", "Michael W" ], "title": "Multiplicative noise and heavy tails in stochastic optimization. 
arXiv preprint arXiv:2006.06293", "venue": null, "year": 2020 }, { "authors": [ "Hu", "Wenqing", "Li", "Chris Junchi", "Lei", "Liu", "Jian-Guo" ], "title": "On the diffusion approximation of nonconvex stochastic gradient descent", "venue": "Annals of Mathematical Sciences and Applications,", "year": 2019 }, { "authors": [ "Keskar", "Nitish Shirish", "Mudigere", "Dheevatsa", "Nocedal", "Jorge", "Smelyanskiy", "Mikhail", "Tang", "Ping Tak Peter" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836", "venue": null, "year": 2016 }, { "authors": [ "LeCun", "Yann" ], "title": "LeNet-5, convolutional neural networks. URL: http://yann", "venue": "lecun. com/exdb/lenet,", "year": 2015 }, { "authors": [ "Li", "Dawei", "Ding", "Tian", "Sun", "Ruoyu" ], "title": "2018a. Over-parameterized deep neural networks have no strict local minima for any continuous activations", "venue": "arXiv preprint arXiv:1812.11039", "year": 2018 }, { "authors": [ "Li", "Hao", "Xu", "Zheng", "Taylor", "Gavin", "Studer", "Christoph", "Goldstein", "Tom" ], "title": "Visualizing the loss landscape of neural nets. Pages 6389–6399", "venue": null, "year": 2018 }, { "authors": [ "Li", "Qianxiao", "Tai", "Cheng" ], "title": "Stochastic modified equations and adaptive stochastic gradient algorithms. Pages 2101–2110", "venue": "of: Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org", "year": 2017 }, { "authors": [ "Liu", "Tianyi", "Chen", "Zhehui", "Zhou", "Enlu", "Zhao", "Tuo" ], "title": "Toward deeper understanding of nonconvex stochastic optimization with momentum using diffusion approximations. arXiv preprint arXiv:1802.05155", "venue": null, "year": 2018 }, { "authors": [ "Mahoney", "Michael", "Martin", "Charles" ], "title": "Traditional and heavy tailed self regularization in neural network models. Pages 4284–4293", "venue": "of: International Conference on Machine Learning", "year": 2019 }, { "authors": [ "Mandt", "Stephan", "Hoffman", "Matthew D", "Blei", "David M" ], "title": "Stochastic gradient descent as approximate bayesian inference", "venue": "The Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "McAllester", "David A." ], "title": "PAC-Bayesian model averaging", "venue": "Pages 164–170 of: Proceedings of the twelfth annual conference on Computational learning theory.", "year": 1999 }, { "authors": [ "Rakhlin", "Alexander", "Shamir", "Ohad", "Sridharan", "Karthik" ], "title": "Making gradient descent optimal for strongly convex stochastic optimization. Pages 1571–1578", "venue": "of: Proceedings of the 29th International Coference on International Conference on Machine Learning", "year": 2012 }, { "authors": [ "Sagun", "Levent", "Bottou", "Leon", "LeCun", "Yann" ], "title": "Eigenvalues of the hessian in deep learning: Singularity and beyond", "venue": "arXiv preprint arXiv:1611.07476", "year": 2016 }, { "authors": [ "Şimşekli", "Umut", "Gürbüzbalaban", "Mert", "Nguyen", "Thanh Huy", "Richard", "Gaël", "Sagun", "Levent" ], "title": "On the Heavy-Tailed Theory of Stochastic Gradient Descent for Deep Neural Networks. arXiv preprint arXiv:1912.00018", "venue": null, "year": 2019 }, { "authors": [ "Simsekli", "Umut", "Sagun", "Levent", "Gurbuzbalaban", "Mert" ], "title": "A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks. 
Pages 5827–5837", "venue": "of: International Conference on Machine Learning", "year": 2019 }, { "authors": [ "Smith", "Samuel L", "Le", "Quoc V" ], "title": "A bayesian perspective on generalization and stochastic gradient descent", "venue": "arXiv preprint arXiv:1710.06451", "year": 2017 }, { "authors": [ "Tsallis", "Constantino", "Bukman", "Dirk Jan" ], "title": "Anomalous diffusion in the presence of external forces: exact time-dependent solutions and entropy. arXiv preprint cond-mat/9511007", "venue": null, "year": 1995 }, { "authors": [ "Tsallis", "Constantino", "Bukman", "Dirk Jan" ], "title": "Anomalous diffusion in the presence of external forces: Exact time-dependent solutions and their thermostatistical basis", "venue": "Physical Review E,", "year": 1996 }, { "authors": [ "Van Kampen", "Nicolaas Godfried." ], "title": "Stochastic processes in physics and chemistry", "venue": "Vol. 1. Elsevier.", "year": 1992 }, { "authors": [ "Vaswani", "Ashish", "Shazeer", "Noam", "Parmar", "Niki", "Uszkoreit", "Jakob", "Jones", "Llion", "Gomez", "Aidan N", "Kaiser", "Łukasz", "Polosukhin", "Illia" ], "title": "Attention is all you need. Pages 5998–6008", "venue": null, "year": 2017 }, { "authors": [ "Wu", "Jingfeng", "Hu", "Wenqing", "Xiong", "Haoyi", "Huan", "Jun", "Zhu", "Zhanxing" ], "title": "2019a. The Multiplicative Noise in Stochastic Gradient Descent: Data-Dependent Regularization, Continuous and Discrete Approximation", "venue": null, "year": 2019 }, { "authors": [ "Wu", "Jingfeng", "Hu", "Wenqing", "Xiong", "Haoyi", "Huan", "Jun", "Braverman", "Vladimir", "Zhu", "Zhanxing" ], "title": "On the Noisy Gradient Descent that Generalizes", "venue": null, "year": 2019 }, { "authors": [ "Wu", "Lei", "Ma", "Chao" ], "title": "How sgd selects the global minima in over-parameterized learning: A dynamical stability perspective", "venue": "Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xiao", "Han", "Rasul", "Kashif", "Vollgraf", "Roland" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747", "venue": null, "year": 2017 }, { "authors": [ "Xie", "Zeke", "Sato", "Issei", "Sugiyama", "Masashi" ], "title": "A Diffusion Theory for Deep Learning Dynamics: Stochastic Gradient Descent Escapes From Sharp Minima Exponentially Fast. arXiv preprint arXiv:2002.03495", "venue": null, "year": 2020 }, { "authors": [ "Zhang", "Yao", "Saxe", "Andrew M", "Advani", "Madhu S", "Lee", "Alpha A" ], "title": "Energy–entropy competition and the effectiveness of stochastic gradient descent in machine learning", "venue": "Molecular Physics,", "year": 2018 }, { "authors": [ "Zhou", "Mo", "Liu", "Tianyi", "Li", "Yan", "Lin", "Dachao", "Enlu", "Zhao", "Tuo" ], "title": "Toward Understanding the Importance of Noise in Training", "venue": "Neural Networks. In: International Conference on Machine Learning", "year": 2019 }, { "authors": [ "Zhou", "Yanjun", "Du", "Jiulin" ], "title": "Kramers escape rate in overdamped systems with the power-law distribution", "venue": "Physica A: Statistical Mechanics and its Applications,", "year": 2014 }, { "authors": [ "Zhu", "Zhanxing", "Wu", "Jingfeng", "Yu", "Bing", "Lei", "Ma", "Jinwen" ], "title": "The anisotropic noise in stochastic gradient descent: Its behavior of escaping from sharp minima and regularization effects. Pages 7654–7663", "venue": "of: Proceedings of International Conference on Machine Learning", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning has achieved great success in various AI applications, such as computer vision, natural language processing, and speech recognition (He et al., 2016b; Vaswani et al., 2017; He et al., 2016a). Stochastic gradient descent (SGD) and its variants are the mainstream methods to train deep neural networks, since they can deal with the computational bottleneck of the training over large-scale datasets (Bottou & Bousquet, 2008).\nAlthough SGD can converge to the minimum in convex optimization (Rakhlin et al., 2012), neural networks are highly non-convex. To understand the behavior of SGD on non-convex optimization landscape, on one hand, researchers are investigating the loss surface of the neural networks with variant architectures (Choromanska et al., 2015; Li et al., 2018b; He et al., 2019b; Draxler et al., 2018; Li et al., 2018a); on the other hand, researchers illustrate that the noise in stochastic algorithm may make it escape from local minima (Keskar et al., 2016; He et al., 2019a; Zhu et al., 2019; Wu et al., 2019a; HaoChen et al., 2020). It is clear that whether stochastic algorithms can escape from poor local minima and finally stop at a minimum with low generalization error is crucial to its test performance. In this work, we focus on the dynamic of SGD and its impact to generalization, especially the escaping efficiency from local minima.\nTo study the dynamic behavior of SGD, most of the works consider SGD as the discretization of a continuous-time dynamic system and investigate its dynamic properties. There are two typical types of models to approximate dynamic of SGD. (Li et al., 2017; Zhou et al., 2019; Liu et al., 2018; Chaudhari & Soatto, 2018; He et al., 2019a; Zhu et al., 2019; Hu et al., 2019; Xie et al., 2020) approximate the dynamic of SGD by Langevin dynamic with constant diffusion coefficient and proved its escaping efficiency from local minima.These works make over-simplified assumption that the covariance matrix of gradient noise is constant, although it is state-dependent in general. The simplified assumption makes the proposed dynamic unable to explain the empirical observation that the distribution of parameters trained by SGD is heavy-tailed (Mahoney & Martin, 2019). To model the heavy-tailed phenomenon, Simsekli et al. (2019); Şimşekli et al. (2019) point that the variance of stochastic gradient may be infinite, and they propose to approximate SGD by dynamic driven by α-stable process with the strong infinite variance condition. However, as shown in the work (Xie\net al., 2020; Mandt et al., 2017), the gradient noise follows Gaussian distribution and the infinite variance condition does not satisfied. Therefore it is still lack of suitable theoretical explanation on the implicit regularization of dynamic of SGD.\nIn this work, we conduct a formal study on the (state-dependent) noise structure of SGD and its dynamic behavior. First, we show that the covariance of the noise of SGD in the quadratic basin surrounding the local minima is a quadratic function of the state (i.e., the model parameter). Thus, we propose approximating the dynamic of SGD near the local minimum using a stochastic differential equation whose diffusion coefficient is a quadratic function of state. We call the new dynamic power-law dynamic. We prove that its stationary distribution is power-law κ distribution, where κ is the signal to noise ratio of the second order derivatives at local minimum. 
Compared with the Gaussian distribution, the power-law κ distribution is heavy-tailed with tail-index κ. This matches the empirical observation that the distribution of parameters becomes heavy-tailed after SGD training, without assuming infinite variance of the stochastic gradient as in (Simsekli et al., 2019).\nSecond, we analyze the escaping efficiency of power-law dynamic from local minima and its relation to generalization. Using random perturbation theory for diffused dynamic systems, we analyze the mean escaping time of power-law dynamic. Our results show that: (1) power-law dynamic can escape from sharp minima faster than from flat minima; (2) the mean escaping time of power-law dynamic is only polynomial in the barrier height, much faster than the exponential dependence for dynamics with a constant diffusion coefficient. Furthermore, we provide a PAC-Bayes generalization bound and show that power-law dynamic can generalize better than dynamics with a constant diffusion coefficient. Therefore, our results indicate that state-dependent noise helps SGD escape from sharp minima quickly and implicitly learn well-generalized models.\nFinally, we corroborate our theory with experiments. We investigate the distributions of parameters trained by SGD on various types of deep neural networks and show that they are well fitted by the power-law κ distribution. Then, we compare the escaping efficiency of dynamics with constant or state-dependent diffusion to that of SGD. The results show that the behavior of power-law dynamic is more consistent with SGD.\nOur contributions are summarized as follows: (1) We propose a novel power-law dynamic with state-dependent diffusion to approximate the dynamic of SGD, based on both theoretical derivation and empirical evidence. The power-law dynamic can explain the heavy-tailed phenomenon of parameters trained by SGD without assuming infinite variance of the gradient noise. (2) We analyze the mean escaping time and the PAC-Bayes generalization bound for power-law dynamic; the results show that power-law dynamic can escape sharp local minima faster and generalize better compared with dynamics with constant diffusion. Our experimental results support these theoretical findings." }, { "heading": "2 BACKGROUND", "text": "In the empirical risk minimization problem, the objective is L(w) = \frac{1}{n}\sum_{i=1}^{n} \ell(x_i, w), where x_i, i = 1, ..., n are n i.i.d. training samples, w ∈ R^d is the model parameter, and \ell is the loss function. Stochastic gradient descent (SGD) is a popular optimization algorithm for minimizing L(w). The update rule is w_{t+1} = w_t − η · g̃(w_t), where g̃(w_t) = \frac{1}{b}\sum_{x \in S_b} \nabla_w \ell(x, w_t) is the minibatch gradient calculated on a randomly sampled minibatch S_b of size b, and η is the learning rate. The minibatch gradient g̃(w_t) is an unbiased estimator of the full gradient g(w_t) = \nabla L(w_t), and the term g(w_t) − g̃(w_t) is called the gradient noise of SGD.\nLangevin Dynamic: In (He et al., 2019a; Zhu et al., 2019), the gradient noise is assumed to be drawn from a Gaussian distribution by the central limit theorem (CLT), i.e., g(w) − g̃(w) ∼ N(0, C), where the covariance matrix C is constant for all w. Then SGD can be regarded as the numerical discretization of the following Langevin dynamic,\ndw_t = −g(w_t)\,dt + \sqrt{η}\,C^{1/2}\,dB_t, \quad (1)\nwhere B_t is a standard Brownian motion in R^d and \sqrt{η}\,C^{1/2}\,dB_t is called the diffusion term.
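As a small illustration of how Eq. 1 relates back to the optimizer, the following sketch discretizes the Langevin dynamic with time step dt = η, which recovers an SGD-like update; the 1-dimensional setting and all constants are our own choices:

```python
import numpy as np

def langevin_1d(grad, w0, eta=0.01, c_sqrt=1.0, steps=10_000, seed=0):
    """Discretize Eq. (1) with time step dt = eta, recovering the SGD-like
    update w <- w - eta*g(w) + eta*xi with xi ~ N(0, C). A 1-d sketch where
    the constant scalar c_sqrt stands in for C^{1/2}."""
    rng = np.random.default_rng(seed)
    w, path = float(w0), np.empty(steps)
    for t in range(steps):
        w = w - eta * grad(w) + eta * c_sqrt * rng.standard_normal()
        path[t] = w
    return path

# quadratic basin L(w) = w^2, so g(w) = 2w; samples settle around the minimum
samples = langevin_1d(lambda w: 2.0 * w, w0=1.0)
print(samples[2000:].std())  # fluctuation scale around the minimum
```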
α-stable Process: Simsekli et al. (2019) assume the variance of the gradient noise is unbounded. By the generalized CLT, the gradient noise then follows an α-stable distribution S(α, σ), where σ is the α-th moment of the gradient noise for a given α ∈ (0, 2]. Under this assumption, SGD is approximated by a stochastic differential equation (SDE) driven by an α-stable process." }, { "heading": "2.1 RELATED WORK", "text": "Many works approximate SGD by Langevin dynamic, and most of the theoretical results are obtained for Langevin dynamic with a constant diffusion coefficient. From the aspect of optimization, the convergence rate of SGD and its optimal hyperparameters have been studied in (Li et al., 2017; He et al., 2018; Liu et al., 2018) via optimal control theory. From the aspect of generalization, Chaudhari & Soatto (2018); Zhang et al. (2018); Smith & Le (2017) show that SGD implicitly regularizes the negative entropy of the learned distribution. Recently, the escaping efficiency of Langevin dynamic from local minima has been studied (Zhu et al., 2019; Hu et al., 2019; Xie et al., 2020). He et al. (2019a) analyze the PAC-Bayes generalization error of Langevin dynamic to explain the generalization of SGD.\nThe solution of Langevin dynamic with a constant diffusion coefficient is a Gaussian process, which does not match the empirical observations that the distribution of parameters trained by SGD is heavy-tailed (Mahoney & Martin, 2019; Hodgkinson & Mahoney, 2020; Gurbuzbalaban et al., 2020). Simsekli et al. (2019); Şimşekli et al. (2019) assume the variance of the stochastic gradient is infinite and regard SGD as the discretization of an SDE driven by an α-stable process. The escaping efficiency of this SDE is also shown in (Simsekli et al., 2019).\nHowever, these theoretical results are derived for dynamics with a constant diffusion term, although the gradient noise in SGD is state-dependent. Some related works analyze the state-dependent noise structure of SGD, such as label noise in (HaoChen et al., 2020) and multiplicative noise in (Wu et al., 2019b). These works propose new algorithms motivated by the noise structure, but they do not analyze the escaping behavior of the dynamic of SGD or its impact on generalization. Wu et al. (2018) analyze the escaping behavior of SGD taking into account the fluctuations of the second-order derivatives and propose the concept of linear stability. In our work, we propose power-law dynamic to approximate SGD and analyze its stationary distribution and mean escaping time." }, { "heading": "3 APPROXIMATING SGD BY POWER-LAW DYNAMIC", "text": "In this section, we study the (state-dependent) noise structure of SGD (in Section 3.1) and propose power-law dynamic to approximate the dynamic of SGD. We first study the 1-dimensional power-law dynamic in Section 3.2 and extend it to the high-dimensional case in Section 3.3." }, { "heading": "3.1 NOISE STRUCTURE OF STOCHASTIC GRADIENT DESCENT", "text": "For non-convex optimization, we investigate the noise structure of SGD around local minima so that we can analyze the escaping efficiency. We first describe the quadratic basin where the local minimum is located. Suppose w∗ is a local minimum of the training loss L(w) with g(w∗) = 0. We call the ε-ball B(w∗, ε) with center w∗ and radius ε a quadratic basin if the loss function for w ∈ B(w∗, ε) equals its second-order Taylor expansion, L(w) = L(w∗) + \frac{1}{2}(w − w∗)^T H(w∗)(w − w∗).
Here, H(w∗) is the Hessian matrix of the loss at w∗, which is (semi-)positive definite.\nWe now analyze the gradient noise of SGD. The full gradient of the training loss is g(w) = H(w∗)(w − w∗). By Taylor expansion, the stochastic gradient is g̃(w) = g̃(w∗) + H̃(w∗)(w − w∗), where g̃(·) and H̃(·) are the stochastic versions of the gradient and Hessian calculated on the minibatch. The randomness of the gradient noise comes from two parts, g̃(w∗) and H̃(w∗), which reflect the fluctuations of the first-order and second-order derivatives of the model at w∗ over different minibatches, respectively. The following proposition gives the variance of the gradient noise.\nProposition 1: For w ∈ B(w∗, ε) ⊂ R, the variance of the gradient noise is σ(g(w) − g̃(w)) = σ(g̃(w∗)) + 2ρ(g̃(w∗), H̃(w∗))(w − w∗) + σ(H̃(w∗))(w − w∗)², where σ(·) and ρ(·, ·) denote the variance and covariance with respect to the minibatch.\nFrom Proposition 1, we can conclude that: (1) The variance of the noise is finite if g̃(w∗) and H̃(w∗) have finite variance, because ρ(g̃(w∗), H̃(w∗)) ≤ \sqrt{σ(g̃(w∗)) · σ(H̃(w∗))} by the Cauchy–Schwarz inequality. For fixed w∗, a sufficient condition for g̃(w∗) and H̃(w∗) to have finite variance is that the training data x are sampled from a bounded domain. This condition is easy to satisfy because the domain of the training data is usually normalized to be bounded before training. In this case, the infinite-variance assumption on the stochastic gradient made for the α-stable process does not hold. (2) The variance of the noise is state-dependent, which contradicts the assumption in Langevin dynamic.\nNotations: For ease of presentation, we use C(w), σ_g, σ_H, ρ_{g,H} to denote σ(g(w) − g̃(w)), σ(g̃(w∗)), σ(H̃(w∗)), ρ(g̃(w∗), H̃(w∗)) in the following context, respectively. (In the following context, we assume σ_g is a positive number.)" }, { "heading": "3.2 POWER-LAW DYNAMIC", "text": "According to the CLT, the gradient noise follows a Gaussian distribution if it has finite variance, i.e.,\ng(w) − g̃(w) →_d N(0, C(w)) as b → ∞, \quad (2)\nwhere →_d means “converges in distribution”. Using a Gaussian distribution to model the gradient noise of SGD, the update rule of SGD can be written as:\nw_{t+1} = w_t − ηg(w_t) + ηξ_t, \quad ξ_t ∼ N(0, C(w_t)). \quad (3)\nEq. 3 can be treated as the discretization of the following SDE, which we call power-law dynamic:\ndw_t = −g(w_t)\,dt + \sqrt{ηC(w_t)}\,dB_t. \quad (4)\nPower-law dynamic characterizes how the distribution of w changes over time. The distribution density of the parameter w at time t, p(w, t), is determined by the Fokker–Planck equation (of Zwanzig's type (Guo & Du, 2014)):\n\frac{\partial}{\partial t} p(w,t) = \nabla \cdot \left( g(w)\,p(w,t) \right) + \frac{η}{2} \nabla \cdot \left( C(w) \nabla p(w,t) \right). \quad (5)\nThe stationary distribution of power-law dynamic is obtained by setting the left side of the Fokker–Planck equation to zero. The following theorem gives the analytic form of the stationary distribution, which is heavy-tailed: the tail of the distribution density decays polynomially in w − w∗. This is why we call the stochastic differential equation in Eq. 4 power-law dynamic.\nTheorem 2: The stationary distribution density of the 1-dimensional power-law dynamic (Eq. 4) is\np(w) = \frac{1}{Z} \left( C(w) \right)^{-\frac{H}{ησ_H}} \exp\left( \frac{4Hρ_{g,H}\,\mathrm{ArcTan}\!\left( C'(w) / \sqrt{4σ_Hσ_g − 4ρ_{g,H}^2} \right)}{ησ_H \sqrt{4σ_Hσ_g − 4ρ_{g,H}^2}} \right), \quad (6)\nwhere C(w) = σ_g + 2ρ_{g,H}(w − w∗) + σ_H(w − w∗)², Z is the normalization constant, and ArcTan(·) is the arctangent function.\nWe now discuss the properties of p(w). The decay rate of p(w) as w moves away from the center w∗ is mainly determined by the term C(w)^{−H/(ησ_H)} (because ArcTan(·) is bounded), which is a polynomial function of w − w∗. Compared with the Gaussian distribution, whose probability density decays exponentially, the power-law distribution is less concentrated in the quadratic basin B(w∗, ε) and is heavy-tailed.
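The heavy tail is easy to reproduce numerically. The sketch below iterates the SGD-like update (3) in a 1-dimensional quadratic basin with C(w) = σ_g + σ_H(w − w∗)², against a constant-diffusion baseline; all constants are arbitrary, chosen so that the tail-index H/(ησ_H) equals 2 for the state-dependent run:

```python
import numpy as np

def simulate(state_dependent, h=1.0, sigma_g=0.2, sigma_h=5.0,
             eta=0.1, steps=200_000, seed=0):
    """Iterate w <- w - eta*H*w + eta*sqrt(C(w))*xi around w* = 0, with
    C(w) = sigma_g + sigma_h * w^2 (or constant sigma_g for the baseline)."""
    rng = np.random.default_rng(seed)
    w, out = 0.0, np.empty(steps)
    for t in range(steps):
        c = sigma_g + sigma_h * w * w if state_dependent else sigma_g
        w = w - eta * h * w + eta * np.sqrt(c) * rng.standard_normal()
        out[t] = w
    return out

power_law, gaussian = simulate(True), simulate(False)
scale = gaussian.std()  # common yardstick from the Gaussian baseline
for name, s in [("power-law", power_law), ("gaussian", gaussian)]:
    print(name, "mass beyond 3 sigma:", (np.abs(s) > 3 * scale).mean())
```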
We call H/(ησ_H) the tail-index of p(w) and denote it by κ in the following context.\nWe can conclude that state-dependent noise results in a heavy-tailed distribution of parameters, which matches the observations in (Mahoney & Martin, 2019). Langevin dynamic with constant diffusion can be regarded as a special case of power-law dynamic with ρ_{g,H} = 0 and σ_H = 0, in which case p(w) degenerates to a Gaussian distribution. Compared with the α-stable process, we do not assume infinite variance of the gradient noise, and we demonstrate another mechanism that produces heavy-tailed parameter distributions.\nWe empirically observe the covariance matrix around the local minimum of the training loss on deep neural networks. The results are shown in Figure 1; readers can find more details in Appendix 7.1. We make the following observations: (1) The traces of the covariance matrices for the deep neural networks can be well approximated by quadratic curves, which supports Proposition 1. (2) The minimum of the quadratic curve is located nearly at the local minimum w∗, indicating that the coefficient of the first-order term, ρ_{g,H}, is approximately 0. Based on the fact that ρ_{g,H} does not determine the tail of the distribution in Eq. 6, and on the observations in Figure 1, we consider the simplified form C(w) = σ_g + σ_H(w − w∗)².\nCorollary 3: If C(w) = σ_g + σ_H(w − w∗)², the stationary distribution of the 1-dimensional power-law dynamic (Eq. 4) is\np(w) = \frac{1}{Z}\left(1 + σ_Hσ_g^{−1}(w − w∗)²\right)^{−κ}, \quad (7)\nwhere Z is the normalization constant and κ = H/(ησ_H) is the tail-index.\nThe distribution density in Eq. 7 is known as the power-law κ distribution (Zhou & Du, 2014) (it is also called the q-Gaussian distribution in (Tsallis & Bukman, 1996)). As κ → ∞, the density tends to a Gaussian, i.e., p(w) ∝ exp(−H(w − w∗)²/(ησ_g)). The power-law κ distribution becomes more heavy-tailed as κ becomes smaller, placing more probability mass far from the center w∗. Intuitively, a smaller κ helps the dynamic escape from local minima faster.\nIn the approximation of the dynamic of SGD, κ equals the signal (i.e., H(w∗)) to noise (i.e., ησ_H) ratio of the second-order derivative at w∗, and κ is linked to three factors: (1) the curvature H(w∗); (2) the fluctuation of the curvature over the training data; (3) the hyperparameters, including η and the minibatch size b. Note that σ_H decreases linearly as the batch size b increases." }, { "heading": "3.3 MULTIVARIATE POWER-LAW DYNAMIC", "text": "In this section, we extend the power-law dynamic to the d-dimensional case. We first describe the covariance matrix C(w) of the gradient noise of SGD. We use subscripts to denote the elements of a vector or a matrix. We use Σ_g to denote the covariance matrix of g̃(w∗) and assume that Σ_g is isotropic (i.e., Σ_g = σ_g · I). We also assume that Cov(H̃_i(w∗), H̃_j(w∗)) are equal for all i, j. It can be shown that C(w) = Σ_g(1 + (w − w∗)^T Σ_H Σ_g^{−1} (w − w∗)). As in the 1-dimensional case, we omit the first-order term in C(w); readers can refer to Proposition 11 in Appendix 7.2 for the detailed derivation.\nWe suppose that the signal-to-noise ratio of H̃(w∗) can be characterized by a scalar κ, i.e., ηΣ_H = \frac{1}{κ} H(w∗). Then C(w) can be written as\nC(w) = Σ_g\left(1 + \frac{1}{ηκ}(w − w∗)^T H(w∗) Σ_g^{−1} (w − w∗)\right). \quad (8)
Theorem 4: Let w ∈ R^d and suppose C(w) has the form in Eq. 8 for w ∈ B(w∗, ε). The stationary distribution density of power-law dynamic is\np(w) = \frac{1}{Z}\left[1 + \frac{1}{ηκ}(w − w∗)^T H(w∗) Σ_g^{−1} (w − w∗)\right]^{−κ} \quad (9)\nfor w ∈ B(w∗, ε), where Z is the normalization constant and κ satisfies ηΣ_H = \frac{1}{κ} H(w∗).\nRemark: The multivariate power-law κ distribution (Eq. 9) is a natural extension of the 1-dimensional case. In fact, the assumptions on Σ_g and κ can be replaced by simply assuming that Σ_g, H(w∗), Σ_H are codiagonalizable; readers can refer to Proposition 12 in Appendix 7.2 for the derivation.\n4 ESCAPING EFFICIENCY OF POWER-LAW DYNAMIC\nIn this section, we analyze the escaping efficiency of power-law dynamic from local minima and its relation to generalization. Specifically, we analyze the mean escaping time for w_t to escape from a basin. As shown in Figure 2, we suppose there are two basins whose bottoms are denoted a and c, respectively, and the saddle point b is the barrier between the two basins. The barrier height is denoted ∆L = L(b) − L(a).\nDefinition 5: Suppose w_t starts at the local minimum a. We denote the time for w_t to first reach the saddle point b as inf{t > 0 | w_0 = a, w_t = b}. The mean escaping time τ is defined as τ = E_{w_t}[inf{t > 0 | w_0 = a, w_t = b}].\nWe first give the mean escaping time for the 1-dimensional case in Lemma 6 and then for the high-dimensional case in Theorem 7. To analyze the mean escaping time, we make the following assumptions.\nAssumption 1: The loss function around a critical point w∗ can be written as L(w) = L(w∗) + \frac{1}{2}(w − w∗)^T H(w∗)(w − w∗).\nAssumption 2: The system is in equilibrium near minima, i.e., \frac{\partial p(w,t)}{\partial t} = 0.\nAssumption 3: (Low-temperature assumption) The gradient noise is small, i.e., ησ_g ≪ ∆L.\nThese three assumptions are commonly used in analyzing the escaping time of a dynamic (Xie et al., 2020; Zhou & Du, 2014). Because both a and b are critical points, we can apply Assumption 1 to obtain the loss surface around them. More discussion of the assumptions can be found in Appendix 7.3.2.\nWe suppose basin a is quadratic and the variance of the noise has the form C(w) = σ_{g_a} + σ_{H_a}(w − a)², which can also be written as C(w) = σ_{g_a} + \frac{2σ_{H_a}}{H_a}(L(w) − L(a)). Furthermore, we suppose that C(w) = σ_{g_a} + \frac{2σ_{H_a}}{H_a}(L(w) − L(a)) on the whole escaping path from a to b (not just near the local minimum a); that is, the variance of the gradient noise becomes larger as the loss becomes larger. The following lemma gives the mean escaping time of power-law dynamic in the 1-dimensional case.\nLemma 6: Suppose that Assumptions 1-3 are satisfied and that C(w) = σ_{g_a} + \frac{2σ_{H_a}}{H_a}(L(w) − L(a)) on the whole escaping path from a to b. The mean escaping time of 1-dimensional power-law dynamic is\nτ = \frac{2π}{\left(1 − \frac{1}{2κ}\right)\sqrt{H_a |H_b|}} \left(1 + \frac{2}{κησ_{g_a}} ∆L\right)^{κ − \frac{1}{2}}, \quad (10)\nwhere κ = H_a/(ησ_{H_a}) > \frac{1}{2}, and H_a and H_b are the second-order derivatives of the training loss at the local minimum a and at the saddle point b, respectively.\nThe proof of Lemma 6 is based on the results in (Zhou & Du, 2014); we provide a full proof in Appendix 7.3.1. For the dynamic near the saddle point, we simply assume it is the same as that near the local minimum. This assumption is not necessary, and we discuss the extension to more complex dynamics in Appendix 7.3.3.\nWe summarize the mean escaping time of power-law dynamic and of the dynamics from previous works in Table 1.
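To see the polynomial-versus-exponential gap numerically, the sketch below evaluates Eq. 10 against a Kramers-type escaping time for constant-diffusion Langevin dynamic; the Langevin expression and all constants are assumptions chosen for illustration, not the exact entries of Table 1:

```python
import numpy as np

eta, sigma_g, h_a, h_b, kappa = 0.1, 1.0, 2.0, 2.0, 5.0

def tau_power_law(dL):
    """Eq. (10): polynomial growth in the barrier height dL."""
    pref = 2 * np.pi / ((1 - 1 / (2 * kappa)) * np.sqrt(h_a * abs(h_b)))
    return pref * (1 + 2 * dL / (kappa * eta * sigma_g)) ** (kappa - 0.5)

def tau_langevin(dL):
    """Assumed Kramers-type form for constant diffusion: exponential in dL."""
    return 2 * np.pi / np.sqrt(h_a * abs(h_b)) * np.exp(2 * dL / (eta * sigma_g))

for dL in [0.5, 1.0, 2.0, 4.0]:
    print(f"dL={dL}: power-law {tau_power_law(dL):.2e}, langevin {tau_langevin(dL):.2e}")
```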
Based on these results, we have the following discussion.\nComparison with other dynamics: (1) Both power-law dynamic and Langevin dynamic can escape sharp minima faster than flat minima, where sharpness is measured by H_a (a larger H_a corresponds to a sharper minimum). Power-law dynamic improves the dependence on the barrier height ∆L from exponential to polynomial compared with Langevin dynamic, which implies a faster escaping efficiency of SGD from deep basins. (2) The mean escaping time for the α-stable process is independent of the barrier height, but it is polynomial in the width of the basin (i.e., width = |b − a|). Compared with the α-stable process, the result for power-law dynamic is superior in the sense that it is also polynomial in the width (if ∆L ≈ O(|b − a|²)), and power-law dynamic does not rely on the infinite-variance assumption.\nBased on Lemma 6, we analyze the mean escaping time in the d-dimensional case. Under the low-temperature condition, the probability density concentrates only along the most possible escaping paths in the high-dimensional landscape. For a rigorous definition of most possible escaping paths, readers can refer to Section 3 of (Xie et al., 2020). For simplicity, we consider the case in which there is only one most possible escaping path between basin a and basin c; specifically, the Hessian at the saddle point b has only one negative eigenvalue, and the most possible escaping direction is the direction corresponding to this negative eigenvalue.\nTheorem 7: Suppose that Assumptions 1-3 are satisfied. For w ∈ R^d, suppose C(w) = Σ_{g_a} + \frac{2}{ηκ}(L(w) − L(a)) on the whole escaping path from a to b and that there is only one most possible escaping path between basin a and basin c. The mean escaping time for power-law dynamic escaping from basin a to basin c is\nτ = \frac{2π\sqrt{−\det(H_b)}}{\left(1 − \frac{d}{2κ}\right)\sqrt{\det(H_a)}\,|H_{be}|} \left(1 + \frac{1}{ηκσ_e}∆L\right)^{κ − \frac{1}{2}}, \quad (11)\nwhere e indicates the most possible escaping direction, H_{be} is the only negative eigenvalue of H_b, σ_e is the eigenvalue of Σ_{g_a} corresponding to the escaping direction, ∆L = L(b) − L(a), and det(·) is the determinant of a matrix.\nRemark: In the d-dimensional case, flatness is measured by det(H_a). If H_a has zero eigenvalues, we can replace H_a by H_a^+ in the theorem above, where H_a^+ is obtained by projecting H_a onto the subspace spanned by the eigenvectors corresponding to the positive eigenvalues of H_a. This is because, by Taylor expansion, the loss L(w) only depends on the positive eigenvalues and the corresponding eigenvectors of H_a, i.e., L(w) = L(a) + \frac{1}{2}(w − a)^T H_a (w − a) = L(a) + \frac{1}{2}(P(w − a))^T Λ_{H_a^+} P(w − a), where Λ_{H_a^+} is a diagonal matrix composed of the non-zero eigenvalues of H_a and the operator P(·) projects a vector onto the subspace corresponding to the non-zero eigenvalues of H_a. Therefore, the dimension d in Theorem 7 can be regarded as the dimension of the subspace spanned by the directions with large eigenvalues. It has been observed that most of the eigenvalues of H are very small (Sagun et al., 2016); therefore d will not be large, and power-law dynamic in the multi-dimensional case inherits the benefits of the 1-dimensional case compared with Langevin dynamic and the α-stable process.\nThe next theorem gives an upper bound on the generalization error of the stationary distribution of power-law dynamic, showing that flatter minima have smaller generalization error.\nTheorem 8: Suppose that w ∈ R^d and κ > d/2.
For δ > 0, with probability at least 1 − δ, the stationary distribution of power-law dynamic has the following generalization error bound,\nE_{w∼p(w), x∼P(x)} \ell(w, x) ≤ E_{w∼p(w)} L(w) + \sqrt{\frac{KL(p\|p') + \log\frac{1}{δ} + \log n + 2}{n − 1}},\nwhere KL(p\|p') ≤ \frac{1}{2}\log\frac{\det(H)}{\det(Σ_g)} + \frac{\mathrm{Tr}(ηΣ_g H^{−1}) − 2d}{4\left(1 − \frac{1}{κ}\left(\frac{d}{2} − 1\right)\right)} + \frac{d}{2}\log\frac{2}{η}, p(w) is the stationary distribution of the d-dimensional power-law dynamic, p'(w) is a prior distribution chosen to be the standard Gaussian, P(x) is the underlying distribution of the data x, and det(·) and Tr(·) are the determinant and trace of a matrix, respectively.\nWe make the following observations about Theorem 8. For the 1-dimensional case, if H > η/(2(1 + \frac{1}{2κ})), the KL divergence decreases as H decreases. For d > 1 and fixed Tr(Σ_gH^{−1}) and det(Σ_g), the generalization error (i.e., E_{w∼p(w), x∼P(x)}\ell(w,x) − E_{w∼p(w)}L(w)) decreases as det(H) decreases, which indicates that flatter minima have smaller generalization error. Moreover, if 2d > Tr(ηΣ_gH^{−1}), the generalization error decreases as κ increases. When κ → ∞, the generalization error tends to that of Langevin dynamic. Combining the mean escaping time and the generalization error bound, we conclude that state-dependent noise makes SGD escape from sharp minima faster and implicitly tend to learn flatter models that generalize better." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we conduct experiments to verify the theoretical results. We first study how well the power-law κ distribution fits the distribution of parameters trained by SGD. Then we compare the escaping behavior of power-law dynamic, Langevin dynamic and SGD." }, { "heading": "5.1 FITTING PARAMETER DISTRIBUTION USING POWER-LAW DISTRIBUTION", "text": "We investigate the distribution of parameters trained by SGD on deep neural networks and use the power-law κ distribution to fit it. We first use SGD to train various types of deep neural networks until convergence. For each network, we run SGD with minibatch sizes in the range {64, 256, 1024}; for the settings of the other hyperparameters, readers can refer to Appendix 7.5.2. We plot the distribution of the model parameters of a given layer as a histogram. Next, we fit the distribution of the parameters with the power-law κ distribution and estimate the value of κ via the built-in function "TsallisQGaussianDistribution[]" of the Mathematica software.\nIn this section we show results for LeNet-5 on the MNIST dataset and ResNet-18 on the CIFAR10 dataset (LeCun et al., 2015; He et al., 2016b), and put results for other network architectures in Appendix 7.5.2. In Figure 3, we report the generalization error (i.e., test error minus training error) and the values of κ that best fit the histograms (the training errors under the six settings are almost zero). We make the following observations: (1) The distribution of the parameters trained by SGD is well fitted by the power-law κ distribution (blue curve). (2) As the minibatch size becomes larger, κ becomes larger; this is because the noise σ_H decreases linearly as the minibatch size grows and κ = H/(ησ_H). (3) As κ becomes smaller, the generalization error becomes lower, indicating that κ also serves as an indicator of generalization. These results are consistent with the theory in Section 4.
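For completeness, here is one way such a fit could be implemented outside Mathematica; a maximum-likelihood sketch of the Eq. 7 density using scipy, with a parameterization of our own choosing:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln

def fit_power_law_kappa(samples):
    """MLE for p(w) = (1/Z) * (1 + b*(w - mu)^2)^(-kappa), kappa > 1/2,
    with Z = Beta(1/2, kappa - 1/2) / sqrt(b)."""
    w = np.asarray(samples, dtype=float)

    def nll(theta):
        mu, log_b, log_k = theta
        b, k = np.exp(log_b), np.exp(log_k)
        if k <= 0.5:                      # density not normalizable
            return np.inf
        log_z = betaln(0.5, k - 0.5) - 0.5 * np.log(b)
        return (k * np.log1p(b * (w - mu) ** 2)).sum() + w.size * log_z

    res = minimize(nll, x0=[w.mean(), 0.0, 1.0], method="Nelder-Mead")
    mu, log_b, log_k = res.x
    return mu, np.exp(log_b), np.exp(log_k)   # center, scale, tail-index kappa
```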
We design a non-convex 2-dimensional function written as L(w) = 1n ∑n i=1 `(w − xi), where `(w) = 15 ∑2 j=1 |wj − 1|2.5 · |wj + 1|3 and training data xi ∼ N (0, 0.01I2). We regard the following optimization iterates as the numerical discretization of the power-law dynamic, wt+1 = wt − ηg(wt) + ηλ2 √ 1 + λ1(wt − w∗)2 ξ, where ξ ∼ N (0, I2), λ1, λ2 are two hyper-parameters and stands for Hadamard product. Note that if we set λ1 = 0, it can be regarded as discretization of Langevin dynamic. We set learning rate η = 0.025, and we take 500 iterations in each training. In order to match the trace of covariance matrix of stochastic gradient at minimum point w∗ with the methods above, λ2 is chosen to satisfy Tr(Cov(λ2ξ)) = Tr(Cov(g(w∗))).\nWe compare the success rate of escaping for power-law dynamic, Langevin dynamic and SGD by repeating the experiments 100 times. To analyze the noise term λ1, we choose different λ1 and evaluate corresponding success rate of escaping, as shown in Figure.4(c). The results show that: (1) there is a positive correlation between λ1 and the success rate of escaping; (2) power-law dynamic can mimic the escaping efficiency of SGD, while Langevin dynamic can not. We then scale the loss\n2The training errors under the six settings are almost zero.\nfunction by 0.9 to make the minima flatter and repeat all the algorithms under the same setting. The success rate for the scaled loss function is shown in Figure.4(d). We can observe that all dynamics escape flatter minima slower." }, { "heading": "6 CONCLUSION", "text": "In this work, we study the dynamic of SGD via investigating state-dependent variance of the stochastic gradient. We propose power-law dynamic with state-dependent diffusion to approximate the dynamic of SGD. We analyze the escaping efficiency from local minima and the PAC-Bayes generalization error bound for power-law dynamic. Results indicate that state-dependent noise helps SGD escape from poor local minima faster and generalize better. We present direct empirical evidence to support our theoretical findings.This work may motivate many interesting research topics, for example, nonGaussian state-dependent noise, new types of state-dependent regularization tricks in deep learning algorithms and more accurate characterization about the loss surface of deep neural networks. We will investigate these topics in future work." }, { "heading": "7 APPENDIX", "text": "" }, { "heading": "7.1 POWER-LAW DYNAMIC AND STATIONARY DISTRIBUTION", "text": "Theorem 9 (Theorem 2 in main paper) The stationary distribution density for 1-dimensional powerlaw dynamic (Eq.4) is\np(w) = 1\nZ (C(w))\n− H ησH exp H ( 4ρg,H ·ArcTan ( C′(w)/ √ 4σHσg − 4ρ2g,H )) ησH √ 4σHσg − 4ρ2g,H , whereC(w) = σg+2ρg,H(w−w∗)+σH(w−w∗)2, Z is the normalization constant andArcTan(·) is the arctangent function.\nProof: We denote the function H(4ρg,H ·ArcTan(C′(w)/\n√ 4σHσg−4ρg,H))\nησH √ 4σHσg−4ρ2g,H as h(w). According to the\nFokker-Planck equation, p(w) satisfies\n0 = ∇p(w)g(w) + η 2 · ∇ · (C(w)∇p(w))\n= ∇ · [ (p(w) · ∇L(w)) + η\n2 C(w)∇p(w) ] = ∇ · [η 2 C(w) − HησH +1eh(w)∇(C(w) H ησH · e−h(w) · p(w))\n] Readers can check the third equality by calculating∇(C(w) H ησH · e−h(w) · p(w)) with C(w) = σg + 2ρg,H(w−w∗)+σH(w−w∗)2. Because the left side equals zero, we have C(w) H ησH ·e−h(w) ·p(w) equals to constant. So p(w) ∝ C(w)− H ησH ·eh(w) ·p(w). 
So we can get the conclusion in the theorem.\nTheorem 10 (Corollary 3 in main paper) If C(w) = σg + σH(w−w∗)2, the stationary distribution density of power-law dynamic is\np(w) = 1\nZ (1 + σHσ\n−1 g (w − w∗)2)−κ, (12)\nwhere Z = ∫ w (1 + σHσ −1 g (w − w∗)2)−κdw is the normalization constant and κ = HησH is the tail-index.\nProof: According to the Fokker-Planck equation, p(w) satisfies\n0 = ∇p(w)g(w) + η 2 · ∇ · (C(w)∇p(w))\n= ∇(p(w) · ∇L(w)) + η 2 ∇ · (σg + 2σH H (L(w)− L(w∗)))∇p(w) = ∇ · η 2 C(w)(1 + 2σH Hσg (L(w)− L(w∗))) H −ησH ∇(1 + 2σH Hσg (L(w)− L(w∗))) H ησH p(w)\nBecause the left side equals zero, we have (1 + 2σHHσg (L(w)− L(w ∗))) H ησH p(w) equals to constant. So p(w) ∝ (1 + 2σHHσg (L(w)− L(w ∗))) H −ησH . So we can get the conclusion in the theorem.\nWe plot the un-normalized distribution density for 1-dimensional power-law dynamics with different κ in Figure 5. For the four curves, we set β = 10. We set κ = 1, 0.5, 0.1, 0 and use green, red,\npurple and blue line to illustrate their corresponding density function, respectively. When κ = 0, it is Gaussian distribution. From the figure, we can see that the tail for power-law κ-distribution is heavier than Gaussian distribution.\nActually, for any given time t, the distribution p(w, t) for wt that satisfies power-law dynamic has analytic form, i.e., p(w, t) ∝ (1 + Hηκσ(t) (w −w(t))\n2)−κ, where w(t) = w∗ + (w0 −w∗)e−Ht and σ(t) is a function of σg and t. Readers can refer Eq.18 - Eq.23 in (Tsallis & Bukman, 1995) for the detailed expression." }, { "heading": "7.2 SGD AND MULTIVARIATE POWER-LAW DYNAMIC", "text": "The following proposition shows the covariance of stochastic gradient in SGD in d-dimensional case. We use the subscripts to denote the elements in a vector or a matrix.\nProposition 11 For w ∈ Rd, we use C(w) to denote the covariance matrix of stochastic gradient g̃(w) = g̃(w∗)+H̃(w−w∗) and Σ to denote the covariance matrix of g̃(w∗). IfCov(g̃i(w∗), H̃jk) = 0,∀i, j, k, we have\nCij(w) = Σij + (w − w∗)TA(ij)(w − w∗), (13)\nwhere Σij = Cov(g̃i(w∗), g̃j(w∗)), A(ij) is a d × d matrix with elements A(ij)ab = Cov(H̃ia, H̃jb) with a ∈ [d], b ∈ [d].\nEq.13 can be obtained by directly calculating the covariance of g̃i(w) and g̃j(w) where g̃i(w) = g̃i(w ∗) + ∑d a=1 H̃ia(wa − w∗a), g̃j(w) = g̃j(w∗) + ∑d b=1 H̃jb(wb − w∗b ).\nIn order to get a analytic tractable form of C(w), we make the following assumptions: (1) If Σij = 0, A(ij) is a zero matrix; (2) For Σij 6= 0, A (ij)\nΣij are equal for all i ∈ [d], j ∈ [d]. The first assumption is\nreasonable because both Σij andA(ij) reflect the dependence of the derivatives along the i-th direction and j-th direction. Let ΣH = A (ij)\nΣij ,C(w) can be written asC(w) = Σg(1+(w−w∗)TΣH(w−w∗)).\nThe d-dimensional power-law dynamic is written as dwt = −H(w − w∗)dt+ √ ηC(w)dBt, (14)\nwhere C(w) = Σg(1 + (w − w∗)TΣH(w − w∗)) which is a symmetric positive definite matrix that C(w)1/2 exists. The following proposition shows the stationary distribution of the d-dimensional power-law dynamic.\nProposition 12 Suppose Σg,ΣH , H are codiagonalizable, i.e., there exist orthogonal matrix Q and diagonal matrices Λ,Γ,Π to satisfy Σg = QTΛQ,ΣH = QTΓQ,H = QTΠQ. 
Then, the stationary distribution of power-law dynamic is\np(w) = 1\nZ (1 + (w − w∗)TΣH(w − w∗))−κ, (15)\nwhere Z is the normalization constant and κ = Tr(H)ηTr(ΣHΣg) .\nProof: Under the codiagonalization assumption on Σg,ΣH , H , Eq.15 can be rewritten as dvt = −Πvtdt+ √ ηΛ(1 + vTt Γvt)dBt if we let vt = Q(wt − w∗).\nWe use φ(v) = ηC(v)2 = η 2 Λ(1 + v TΓv), the stationary probability density p(v) satisfies the Smoluchowski equation:\n0 = d∑ i=1 ∂ ∂vi (Πivi · p(v)) + d∑ i=1 ∂ ∂vi · ( φi(w) ∂ ∂vi p(v) ) (16)\n= d∑ i=1 ∂ ∂vi (Πi·vi · p(v)) + d∑ i=1 ∂ ∂vi · ( ηΛi 2 (1 + vTΓv) ∂ ∂vi p(v) ) . (17)\nAccording to the result for 1-dimensional case, we have the expression of p(v) is p(v) ∝ (1 + vTΓv)−κ. To determine the value of κ, we put p(v) in the Smoluchowski equation to obtain\nd∑ i=1 Πip(v)− 2κ d∑ i=1 Πivi · Γivi · (1 + vTΓv)−κ−1\n= d∑ i=1 ∂ ∂vi ( ηΛiκ(1 + v TΓv)−κ · Γivi )\n= d∑ i=1 ( ηΛiκ(1 + v TΓv)−κ · Γi ) − 2 d∑ i=1 ( ηΛiκ 2(1 + vTΓv)−κ−1 · (Γivi)2 ) .\nThe we have ∑d i=1 Πi = ηκ ∑d i=1 ΛiΓi. So we have κ = Tr(H) ηTr(ΣHΣg) .\nAccording to Proposition 11, we can also consider another assumption on Σg,ΣH , H without assuming their codiagonalization. Instead, we assume (1) If Σij = 0, A(ij) is a zero matrix; (2) For Σij 6= 0,A(ij) are equal for all i ∈ [d], j ∈ [d] and we denoteA(ij) = ΣH . We suppose η ·ΣH = κH . (3) Σg = σg · Id which is isotropic. Under these assumptions, we can get the following theorem.\nTheorem 13 (Theorem 4 in main paper) If w is d-dimensional and C(w) has the form in Eq.(8). The stationary distribution density of multivariate power-law dynamic is\np(w) = 1\nZ [1 +\n1\nηκ (w − w∗)THΣ−1g (w − w∗)]−κ (18)\nwhere Z = ∫∞ −∞[1 + 1 ηκ (w − w ∗)THΣ−1g (w − w∗)]−κdw is the normalization constant.\nThe proof for Theorem 12 is similar to that for Proposition 11. Readers can check that p(w) satisfies the Smoluchowski equation.\nAn example to illustrate why C(w) is diagonally dominant. In Theorem 13, C(w) is assumed to be diagonally dominant. Diagonally dominant indicates that the variance of each dimension of g̃(w) is significantly larger than the covariance of two different dimensions of g̃(w). Consider a two layer fully-connected linear neural network fw,v(x) = wvx where w ∈ R1×m, v ∈ Rm×d, x ∈ Rd and h(·) is the ReLU activation. We consider the regression loss `(w, v) = 12 (y − fw,v(x))\n2. The gradient of wi and vjk can be written as\n∂`(w, v)\n∂wi = (fw,v(x)− y) · vix (19)\n∂`(w, v)\n∂vjk = (fw,v(x)− y) · wjxk, (20)\nwhere vi denotes the i-th row of matrix v. Suppose that the initialization of w and v is: wi i.i.d∼ N(0, δ1) and vij i.i.d∼ N(0, δ2) . We also assume that Exi = Exj = 0 and xi, xj are independent with each other for i 6= j where xi is the i-th dimension. We have\nEw,v ∂`(w, v)\n∂wi\n∂`(w, v)\n∂wj = Ew,v(fw,v(x)− y)2 · vix · vjx (21)\n= Ew,vy2 · vix · vjx+ Ew,v m∑ i=1 (wivix) 2 · vix · vjx− 2Ew,v( m∑ i=1 ywivix) · vix · vjx (22)\nBecause the independence of vi, vj and their expectations are zero, we can obtain Ew,v ∂`(w,v)∂wi ∂`(w,v) ∂wj = 0 for i 6= j. Similarly, we can get Ew,v ∂`(w,v)∂wi ∂`(w,v) ∂vjk = 0 and Ew,v ∂`(w,v)∂vj′k′ ∂`(w,v) ∂vjk = 0 for (j, k) 6= (j′, k′).\nThe above analyses show that the gradients for different dimensions are independent at initialization. It has been observed that many weights are kept random during training because of the over-parameterization Balduzzi et al. (2017). So, diagonalization dominant property of C(w) is reasonable." 
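The 1-dimensional stationary density can also be checked numerically. The following is a minimal sketch (ours, with illustrative constants) that discretizes the power-law dynamic $dw_t = -H(w_t - w^*)dt + \sqrt{\eta C(w_t)}\,dB_t$ with $C(w) = \sigma_g + \sigma_H (w-w^*)^2$ by Euler-Maruyama and compares the resulting histogram against the density of Corollary 3 / Theorem 10.

```python
import numpy as np

H, sigma_g, sigma_H, eta = 1.0, 1.0, 0.5, 0.1   # illustrative constants (ours)
kappa = H / (eta * sigma_H)                      # tail-index kappa = H / (eta * sigma_H)
dt, n_steps, w_star = 0.01, 500_000, 0.0
rng = np.random.default_rng(0)

w, samples = w_star, np.empty(n_steps)
for t in range(n_steps):
    C = sigma_g + sigma_H * (w - w_star) ** 2    # state-dependent diffusion C(w)
    w += -H * (w - w_star) * dt + np.sqrt(eta * C * dt) * rng.standard_normal()
    samples[t] = w

# Analytic stationary density from Corollary 3 / Theorem 10, normalized on a grid.
xs = np.linspace(-1.5, 1.5, 401)
p = (1.0 + sigma_H / sigma_g * (xs - w_star) ** 2) ** (-kappa)
p /= p.sum() * (xs[1] - xs[0])
empirical, _ = np.histogram(samples[n_steps // 2:], bins=xs, density=True)  # drop burn-in
```

Up to discretization and Monte Carlo error, the two curves should coincide; decreasing $\kappa$ (e.g., by increasing $\sigma_H$) makes both visibly heavier-tailed, as in Figure 5.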
}, { "heading": "7.3 SUPPLEMENTARY MATERIALS FOR RESULTS IN SECTION 4", "text": "" }, { "heading": "7.3.1 PROOF FOR MEAN ESCAPING TIME", "text": "Lemma 14 (Lemma 6 in main paper) We suppose C(w) = σga + 2σHa Ha (L(w)− L(a)) on the whole escaping path from a to b. The mean escaping time of the 1-dimensional power-law dynamic is,\nτ = 2π\n(1− 1 2κ\n) √ Ha|Hb|\n( 1 + 2\nκησga ∆L\n)κ− 1 2\n, (23)\nwhere κ = HaησHa , Ha, Hb are the second-order derivatives of training loss at local minimum a and saddle point b.\nProof: According to (Van Kampen, 1992), the mean escaping time τ is expressed as τ = P (w∈Va)∫ Ω JdΩ\n, where Va is the volume of basin a, J is the probability current that satisfies\n−∇J(w, t) = ∂ ∂w (g(w) · p(w, t)) + ∂ ∂w\n( φ(w) ∂p(w, t)\n∂w\n)\n= ∂\n∂w φ(w) · (1 + µ σg ∆L(w) )−κ ∂ ((1 + µ σg ∆L(w) )κ p(w, t) ) ∂w ,\nwhere φ(w) = η2C(w) and µ = 2σHa Ha , σg = σga and ∆L(w) = L(w) − L(a). Integrating\nboth sides, we obtain J(w) = −φ(w) · (\n1 + µ σg\n∆L(w) )−κ ∂((1+ µσg ∆L(w))κp(w,t))\n∂w . Because there\nis no field source on the escape path, J(w) is fixed constant on the escape path. Multiplying φ(w)−1 · ( 1 + µσg ∆L(w) )κ on both sizes, we have\nJ · ∫ c a φ(w)−1 · ( 1 + µ σg ∆L(w) )κ dw = − ∫ c a\n∂ (( 1 + µσg ∆L(w) )κ p(w, t) ) ∂w dw\n= −0 + p(a).\nThen we get J = p(a)∫ c a φ(w)−1· ( 1+ µσg ∆L(w) )κ dw\n. As for the term ∫ c a φ(w)−1 · ( 1 + µσg ∆L(w) ) 1 κ dw,\nwe have ∫ c a φ(w)−1 · ( 1 + µ σg ∆L(w) )κ dw (24)\n= 2\nησg ∫ c a ( 1 + µ σg ∆L(w) )−1+κ dw\n= 2\nησg ∫ b c ( 1 + µ σg (∆L(b)− 1 2 |Hb|(w − b)2) )−1+κ dw\n= 2\nησg ∫ b c ( 1 + µ σg (∆L(b)− 1 2 |Hb|(w − b)2) )−1+κ dw\n= 2\nησg (1 +\nµ\nσg ∆L(b))−1+κ ∫ b c ( 1− µ σg · 1 2 |Hb|(w − b)2 1 + µ σg ∆L(b) )−1+κ dw\n= 2\nησg (1 +\nµ\nσg ∆L(b))−1+κ ·\n( 1 2 µ σg |Hb|\n1 + µ σg ∆L(b) )−1/2 ∫ 1 0 y−1/2(1− y)−1+κdy\n= 2\nησg (1 +\nµ\nσg ∆L(b))− 1 2 +κ √ 2σg µ|Hb| B( 1 2 , κ),\nwhere the third formula is based on the second order Taylor expansion. Under the low temperature assumption, we can use the second-order Taylor expansion around the saddle point b.\nAs for the term P (w ∈ Va), we have P (w ∈ Va) = ∫ Va p(w)dV = ∫ w∈Va p(a)(1 + µ σg ∆L(w))−κ =\np(a) √\n2σg µHa B( 1 2 , κ − 1 2 ), where we use Taylor expansion of L(w) near local minimum a. Then we\nhave τ = P (w∈Va)∫ Ω JdΩ\n= P (w∈Va)J because J is a constant. Combining all the results, we can get the result in the lemma.\nTheorem 15 (Theorem 7 in main paper) Suppose w ∈ Rd and there is only one most possible path path between basin a and the outside of basin a. The mean escaping time for power-law dynamic escaping from basin a to the outside of basin a is\nτ = 2π √ −det(Hb)\n(1− d 2κ\n) √ det(Ha) 1 |Hbe|\n( 1 + 1\nηκσe ∆L\n)κ− 1 2\n, (25)\nwhere e indicates the most possible escape direction, Hbe is the only negative eigenvalue of Hb, σe is the eigenvalue of Σga corresponding to the escape direction and ∆L = L(b)− L(a).\nProof: According to (Van Kampen, 1992), the mean escaping time τ is expressed as τ = P (w∈Va)∫ Ω JdΩ , where Va is the volume of basin a, J is the probability current that satisfies −∇ · J(w, t) = ∂p(w,t)∂t . Under the low temperature assumption, the probability current J concentrates along the direction corresponding the negative eigenvalue of Hbe, and the probability flux of other directions can be ignored. 
Then we have∫\nΩ\nJdΩ = Je · ∫\nΩ\n( 1 + 1\nηκ (w − b)T (HbΣ−1g )⊥e(w − b)\n)−κ+ 12 dΩ, (26)\nwhere Je = p(a) · η(1+µσe∆L(b))\n−κ+ 1 2 √ µσe|Hbe|\n2 √ 2B( 12 ,κ) which is obtained by the calculation of Je for\n1-dimensional case in the proof of Lemma 13, and (·)⊥e denotes the directions perpendicular to the escape direction e.\nSuppose HbΣ−1g are symmetric matrix. Then there exist orthogonal matrix Q and diagonal matrix Λ = diag(λ1, · · · , λd) that satisfy HbΣ−1g = QTΛQ. We also denote v = Q(w − b).\nWe define a sequence as Tk = 1 + 1ηκ · ∑d j=k λjv\n2 j for k = 1, · · · , d. As for the term∫\nΩ ( 1 + 1ηκ (w − b) T (HbΣ −1 g ) ⊥e(w − b) )−κ+ 12 dΩ, we have∫\nΩ\n( 1 + 1\nηκ (w − b)T (HbΣ−1g )⊥e(w − b)\n)−κ+ 12 dΩ\n= ∫ (1 + 1\nηκ · vTΛv)−κ+ 12 dw\n= ∫ (1 + 1 ηκ · d∑ j 6=e λjv 2 j ) −κ+ 12 dv\n=((ηκ)−1λ1) − 12 ∫ T −κ+ 12 2 B( 1\n2 , κ)dv\n= d−2∏ j=0 ((ηκ)−1λj) − 12B( 1 2 , κ− j 2 )\n= d−2∏ j=0 ((ηκ)−1λj) − 12 · √ πdΓ(κ− d2 ) Γ(κ)\n=\n√ (ηκπ)d−1 · Γ(κ− d−22 )\nΓ(κ+ 12 ) √ det((HbΣ −1 g )⊥e) .\nAs for the term P (w ∈ Va), we have P (w ∈ Va) = ∫ Va p(w)dV = p(a) ∫ w∈Va ( 1 + (w − w∗)THaΣ−1g (w − w∗) ) dw (27)\n=p(a) · √ (ηκπ)d · Γ(κ− d2 )\nΓ(κ) √ det((HaΣ −1 g ))\n(28)\nwhere we use Taylor expansion of L(w) near local minimum a.\nCombined the results for P (w ∈ Va) and J , we can get the result." }, { "heading": "7.3.2 FURTHER EXPLANATION ABOUT ASSUMPTION 1-3", "text": "We adopt the commonly used assumptions to analyze mean escaping time for dynamic system (Xie et al., 2020; Smith & Le, 2017; Zhou & Du, 2014). Assumption 2 can be replaced by weaker assumption that the system is quasi-equilibrium which is adopted in (Xie et al., 2020). For the differences between quasi-equilibrium and equilibrium, readers can refer to (Xie et al., 2020) for detailed discussions. Assumption 3 is commonly used (Xie et al., 2020; Zhou & Du, 2014). Under Assumption 3, the probability densities will concentrate around minima and the most possible paths. Assumption 3 will make the second order Taylor approximation more reasonable." }, { "heading": "7.3.3 EXTENSION TO MORE COMPLEX DYNAMIC ON THE ESCAPING PATH", "text": "In Lemma 6, we assume that C(w) = σga + 2σHa Ha (L(w) − L(a)) on the whole escaping path from a to b for ease of comparison and presentation. This assumption is not necessary and we can assume a different dynamic near saddle point b. Specially, we can assume the point z is the midpoint on the most possible path beween a and b, where L(z) = (1 − z)L(a) + zL(b). The dynamic with C(w) = σga + 2σHa Ha\n(L(w) − L(a)) dominates the path a → z and the dynamic with C(w) = σgb + 2σHb Hb\n(L(b)−L(w)) dominates the path z → b. Then only two things will be changed in proof of Lemma 6. First, we need to change the stationary distribution near saddle points according to its own dynamic in Eq.20. Second, we need to change the integral about probability density on\nthe whole path to sum of integrals on these two sub-paths. Similar proof techniques are adopted for analyzing escaping time of Langevin dynamic in proof of Theorem 4.1 in the work Xie et al. (2020). Since the proof is analogous, we omit the details here." }, { "heading": "7.4 PAC-BAYES GENERALIZATION BOUND", "text": "We briefly introduce the basic settings for PAC-Bayes generalization error. The expected risk is defined as Ex∼P(x)`(w, x). Suppose the parameter follows a distribution with density p(w), the expected risk in terms of p(w) is defined as Ew∼p(w),x∼P(x)`(w, x). 
The empirical risk in terms of p(w) is defined as Ew∼p(w)L(w) = Ew∼p(w) 1n ∑n i=1 `(w, xi). Suppose the prior distribution over the parameter space is p′(w) and p(w) is the distribution on the parameter space expressing the learned hypothesis function. For power-law dynamic, p(w) is its stationary distribution and we choose p′(w) to be Gaussian distribution with center w∗ and covariance matrix I . Then we can get the following theorem.\nTheorem 16 (Theorem 8 in main paper) For w ∈ Rd, we select the prior distribution p′(w) to be standard Gaussian distribution. For δ > 0, with probability at least 1− δ, the stationary distribution of power-law dynamic has the following generalization error bound,\nEw∼p(w),x∼P(x)`(w, x) ≤ Ew∼p(w)L(w) +\n√ KL(p||p′) + log 1δ + log n+ 2\nn− 1 , (29)\nwhereKL(p||p′) ≤ 12 log det(H) det(Σg) + Tr(ηΣgH −1)−2d 4(1− 1κ ( d 2−1)) + d2 log 2 η andP(x) is the underlying distribution of data x.\nProof: Eq.(29) directly follows the results in (McAllester, 1999). Here we calculate the Kullback–Leibler (KL) divergence between prior distribution and the stationary distribution of power-law dynamic. The prior distribution is selected to be standard Gaussion distribution with distribution density p′(w) = 1√\n(2π)d det (I) exp{− 12 (w−w ∗)T I(w−w∗)}. The posterior distribution density is the\nstationary distribution for power-law dynamic, i.e., p(w) = 1Z ·(1+ 1 ηκ ·(w−w ∗)THΣ−1g (w−w∗))−κ.\nSuppose HΣ−1g are symmetric matrix. Then there exist orthogonal matrix Q and diagonal matrix Λ = diag(λ1, · · · , λd) that satisfy HΣ−1g = QTΛQ. We also denote v = Q(w − w∗). We have\nlog\n( p(w)\np′(w) ) = −κ log(1 + 1\nηκ · (w − w∗)THΣ−1g (w − w∗))− logZ +\n1 2 (w − w∗)T I(w − w∗) + d 2 log 2π\nThe KL-divergence is defined as KL(p(w)||p′(w)) = ∫ w p(w) log ( p(w) p′(w) ) dw. Putting v = Q(w − w∗) in the integral, we have\nKL(p(w)||p′(w))\n= d 2 log 2π − logZ + 1 2Z ∫ v vT v ( 1 + 1 ηκ · vTΛv )−κ dv − 1 Zη ∫ v vTΛv · (1 + 1 ηκ · vTΛv)−κdv,\n(30)\nwhere we use the approximation that log(1 + x) ≈ x. We define a sequence as Tk = 1 + 1ηκ ·∑d j=k λjv 2 j for k = 1, · · · , d. We first calculate the normalization constant Z.\nZ = ∫ (1 + 1\nηκ · vTΛv)−κdw =\n∫ (1 + 1 ηκ · d∑ j=1 λjv 2 j ) −κdv\n=((ηκ)−1λ1) − 12 ∫ T −κ+ 12 2 B( 1\n2 , κ− 1 2 )dv = d∏ j=1 ((ηκ)−1λj) − 12B( 1 2 , κ− j 2 )\n= d∏ j=1 ((ηκ)−1λj) − 12 · √ πdΓ(κ− d2 ) Γ(κ)\nWe define Zj = ((ηκ)−1λj)− 1 2B ( 1 2 , κ− j 2 ) . 
For the third term in Eq.(30), we have\n2Z · III\n= ∫ v vT v(1 + 1 ηκ vTΛv)−κdv\n= ∫ v2,···vd ∫ v1 v21 ( 1 + 1 ηκ · vTΛv )−κ dv1 + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd\n= ∫ v2,···vd T−κ2 ∫ v1 v21 ( 1 + (ηκ)−1λ1v 2 1 T2 )−κ dv1 + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd\n= ∫ v2,··· ,vd T−κ2 ∫ ( T2 (ηκ)−1λ1 ) 3 2 y 1 2 (1 + y)−κ dy + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd\n= ∫ v2,··· ,vd ((ηκ)−1λ1) − 3 2 T −κ+ 3 2 2 B ( 3 2 , κ− 3 2 ) + Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd\n=( λ1 ηκ )− 3 2B\n( 3\n2 , κ− 3 2 )∫ v2,··· ,vd T −κ+ 3 2 2 dv2··· ,vd + ∫ v2,··· ,vd Z1 ( d∑ j=2 v2j )( 1 + 1 ηκ · d∑ j=2 λjv 2 j )−κ+ 1 2 dv2··· ,vd\nFor term ∫ v2,··· ,vd T − 1κ+ 3 2 2 dv2··· ,vd in above equation, we have\n∫ v2,··· ,vd T −κ+ 32 2 dv2··· ,vd\n= ∫ v3,··· ,vd T−κ+23 ((ηκ) −1λ2) − 12B ( 1 2 , κ− 2 ) dv3,··· ,vd\n= ∫ v4,··· ,vd T −κ+ 52 4 ((ηκ) −1λ2) − 12 ((ηκ)−1λ3) − 12B ( 1 2 , κ− 5 2 ) B ( 1 2 , κ− 2 ) dv4,··· ,vd\n= ∫ vd T −κ+ 12 + 1 2×d d d−1∏ j=2 ((ηκ)−1λj) − 12 d−1∏ j=2 B ( 1 2 , κ− ( j 2 + 1) ) dvd\n= d∏ j=2 ((ηκ)−1λj) − 12 d∏ j=2 B ( 1 2 , κ− ( j 2 + 1) )\nLet Aj = ((ηκ)−1λj)− 3 2B ( 3 2 , κ− ( j 2 + 1) ) . According to the above two equations, we can get the recursion\n2Z ∫ vT vT−κ1 dv\n=A1 · ∫ T −κ+ 32 2 + Z1 ∫ v2,··· ,vd d∑ j=2 v2j T−κ+ 122 dv2··· ,vd =A1 · ∫ T −κ+ 3−12 2 dv2···vd + Z1 ·A2 ∫ T −κ+ 42 3 dv3··· ,vd + Z1Z2 ∫ d∑ j=3 v2j\nT−κ+ 123 dv3··· ,vd =\nd−1∑ j=1 Aj j−1∏ k=1 Zk ∫ T −κ+ j+1+12 j+1 dvj+1,··· ,vd + d−1∏ k=1 Zk ∫ v2dT −κ+ d−12 d dvd\n= d−1∑ j=1 ( λj ηκ )− 3 2B ( 3 2 , κ− ( j 2 + 1) ) j−1∏ k=1 ( λk ηκ )− 1 2B ( 1 2 , κ− k 2 ) d∏ s=j+1 (( λs ηκ )− 1 2 d∏ s=j+1 B ( 1 2 , κ− (s 2 + 1) )\n+ d−1∏ j=1 ( λj ηκ )− 1 2B( 1 2 , κ− j 2 − 1) · (λd ηκ )− 3 2B( 3 2 , κ− (d 2 + 1))\n=\n√ πdΓ(κ− d2 − 1)Tr(H −1Σg)\n2Γ(κ) √ (ηκ)−(d+2) det(H−1Σg)\nWe have\nIII =\n√ πdΓ(κ− d2 − 1)Tr(H −1Σg)\n4Γ(κ) √ (ηκ)−(d+2) det(H−1Σg) · d∏ j=1 ((ηκ)−1λj) 1 2 · Γ(κ)√ πdΓ(κ− d2 )\n= ηκTr(H−1Σg)\n4(κ− d2 − 1)\nSimilarly, for the fourth term in Eq.(30), we have IV = κd 2(κ− d2−1) . Combining all the results together, we can get KL(p||p′) = 12 log det(H) (ηκ)d det(Σg) + log Γ(κ) Γ(κ− d2 ) + Tr(ηΣgH −1)−2d 4(1− 1κ ( d 2−1)) + d2 log 2. Using the fact that log Γ(κ) Γ(κ− d2 ) ≤ d2 log κ, we have KL(p||p ′) ≤ 12 log det(H) det(Σg) + Tr(ηΣgH −1)−2d 4(1− 1κ ( d 2−1)) + d2 log 2 η ." }, { "heading": "7.5 IMPLEMENTATION DETAILS OF THE EXPERIMENTS", "text": "" }, { "heading": "7.5.1 OBSERVATIONS ON THE COVARIANCE MATRIX", "text": "In this section, we introduce the settings on experiments of the quadratic approximation of covariance of the stochastic gradient on plain convolutional neural network (CNN) and ResNet. For each model, we use gradient descent with small constant learning rate to train the network till it converges. The converged point can be regarded as a local minimum, denoted as w∗.\nAs for the detailed settings of the CNN model, the structure for plain CNN model is input → Conv1→ maxpool → Conv2→ maxpool → fc1→ Relu→ fc2→ output. Both Conv1 and Conv2 use 5 × 5 kernels with 10 channels and no padding. Dimensions of full connected layer fc1 and fc2 are 1600 × 50 and 50 × 10 respectively. We randomly sample 1000 images from FashionMNIST (Xiao et al., 2017) dataset as training set. The initialization method is the Kaiming initialization (He et al., 2015) in PyTorch. The learning rate of gradient descent is set to be 0.1. 
After 3000 iterations, GD converges with almost 100% training accuracy and the training loss being 1e−3.\nAs for ResNet, we use the ResNet-18 model (He et al., 2016b) and randomly sample 1000 images from Kaggle’s dogs-vs-cats dataset as training set. The initialization method is the Kaiming initialization (He et al., 2015) in PyTorch. The learning rate of gradient descent is set to be 0.001. After 10000 iterations, GD converges with 100% training accuracy and the training loss being 1e−3.\nWe then calculate the covariance matrix of the stochastic gradient at some points belonging to the local region around w∗. The points are selected according to the formula: w∗layerL ± (i× Scale), where w∗layerL denotes the parameters at layer L, and i × Scale, i ∈ [N ] determines the distance away from w∗layerL. When we select points according to this formula by changing the parameters at layer L, we fixed the parameters at other layers. For both CNN model and ResNet18 model, we select 20 points by setting i = 1, · · · , 10. For example, for CNN model, we choose the 20 points by changing the parameters at the Conv1 layer with Scale = 0.001 and Conv2 layer with Scale = 0.0001, respectively. For ResNet18, we choose the 20 points by changing the parameters for a convolutional layer at the first residual block with Scale = 0.0001 and second residual block with Scale = 0.0001, respectively.\nThe results are shown in Figure.1. The x-axis denotes the distance of the point away from the local minimum and the y-axis shows the value of the trace of covariance matrix at each point. The results show that the covariance of noise in SGD is indeed not constant and it can be well approximated by quadratic function of state (the blue line in the figures), which is consistent with our theoretical results in Section 3.1." }, { "heading": "7.5.2 SUPPLEMENTARY EXPERIMENTS ON PARAMETER DISTRIBUTIONS OF DEEP NEURAL NETWORKS", "text": "For Figure. 3(a), we train LeNet-5 on MNIST dataset using SGD with constant learning rate η = 0.03 for each batchsize till it converges. Parameters are conv2.weight in LeNet-5. For Figure 3(b), we train ResNet-18 on CIFAR10 using SGD with momentum. We do a RandomCrop on training set scaling to 32× 32 with padding = 4 and then a RandomHorizontalF lip. In training, momentum is set to be 0.9 and weight decay is set to be 5e− 4. Initial learning rate in SGD is set to be 0.1 and we using a learning rate decay of 0.1 on {150, 250}-th epoch respectively. We train it until converges after 250 epoch. Parameters are layer1.1.conv2.weight in ResNet-18.\nWe also observe the parameter distribution on many pretrained models. Details for pre-trained models can be found on https://pytorch.org/docs/stable/torchvision/models.html. Figure.7 shows the distribution of parameters trained by SGD can be well fitted by powerlaw distribution. Parameters in this figure are all randomly selected to be features.10.weight, features.14.weight, features.5.expand3 × 3.weight, Mixed_6d.branch7 × 7_3.conv.weight, layer4.2.conv3.weight and features.denseblock2.denselayer1.conv2.weight for VGG-16, AlexNet, SqueezeNet 1.0, Inception v3, Wide ResNet-50-2 and DenseNet-121 respectively.\nA Q-Q plot is created by plotting quantiles of two probability distributions against one another, which can provide an assessment of \"goodness of fit\" by how much the solid line close to the dashed line. 
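Such a Q-Q comparison can be generated, for example, with SciPy's probplot; the snippet below is our sketch using a synthetic heavy-tailed sample in place of the actual network parameters, with a Gaussian reference on the left and a fitted Student-t (equivalent to the power-law $\kappa$-distribution up to rescaling) on the right.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
weights = stats.t.rvs(df=5.0, scale=0.02, size=20_000, random_state=rng)  # placeholder sample

fig, (ax_gauss, ax_t) = plt.subplots(1, 2, figsize=(8, 3))
stats.probplot(weights, dist="norm", plot=ax_gauss)              # Gaussian reference (upper row)
nu, loc, scale = stats.t.fit(weights)
stats.probplot(weights, sparams=(nu,), dist=stats.t, plot=ax_t)  # power-law reference (bottom row)
ax_gauss.set_title("Q-Q vs Gaussian")
ax_t.set_title("Q-Q vs power-law (Student-t)")
plt.tight_layout()
plt.show()
```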
From Figure.8, it is clear that the solid lines in bottom pictures are closer to dashed lines on most cases, which indicates network parameters can be better fitted by power-law distribution. Moreover, solid lines in the upper plots severely deviate from dashed lines on the tail of distribution but those in the bottom plot do not, which means the distribution of parameters is indeed heavy-tailed." }, { "heading": "7.5.3 FURTHER EXPLANATION ON EXPERIMENTS IN SECTION 5.2", "text": "As for the experiments for 2-D model, we also calculate coefficient of the second-order term for the quadratic curve shown in Figure.4(b), and its value is roughly 30, which matches the result in Figure.4(c) in the sense that the result for SGD is similar with the result for power-law dynamic with λ1 ≈ 32." }, { "heading": "7.5.4 ESCAPING EFFICIENCY ON NEURAL NETWORK", "text": "We follow the settings in (Zhu et al., 2019). For convenience of the readers, here we give the details of this setting again. We use corrupted FashionMNIST dataset which contains 1000 images with correct labels and another 200 images with random labels to be training data. A small LeNet-like network with 11,330 parameters is used. Firstly we run the full gradient decent to reach the parameters w∗ near the global minima. Then we continue training using both Langevin dynamic(GLD) and power-law dynamic(PLD). Following Zhu’s setting, the learning rates for GD, GLD and PLD are ηGD = 0.1, ηGLD = 0.07 and ηPLD = 0.07, respectively. For GLD, noise std σ = 10−4 as Zhu already tuned. For our PLD, wt+1 = wt − η∇L(wt) + η · α∇L(wt) √ 1 + β(wt − w∗)2 ξ, where α, β are hyperparameters, ξ ∼ N (0, I), and stands for Hadamard product. Here we select α = 2.4, β = 2 after grid search. Expected sharpness is measured as Eν∼N(0,δ2I)[L(w+ ν)]−L(w) where δ = 0.01, and the expectation is computed by average on 1000 times sampling.\nThe numbers at the first column of the legend show the test accuracy and the numbers in the bracket show the sharpness of the model trained by the three algorithms. From Figure 9, we can conclude that PLD generalizes better than GLD and GD. Moreover, PLD can find flatter critical points than GLD and GD." }, { "heading": "7.5.5 COMPARISON OF MEAN ESCAPING TIME WITH DIFFERENT BARRIER HEIGHTS", "text": "We design this 1-dimensional model to help to validate the theoretical results of escaping time in Table 1. Loss function L(w) = 1n ∑n i=1 `(w − xi), where `(w) = { w2 + bw , w < 0 −w2 + bw , w ≥ 0 and\nxi ∼ N (0, 0.05). L(w) is plotted in Figure.10(a), and we can adjust barrier height through parameter b in `(w) without changing the Hessian on minima w∗ and saddle point.\nFor power-law dynamic (PLD),wt+1 = wt−η∇L(wt)+ηλ2 √\n1 + λ1(wt − w∗)2 ξ, where λ1, λ2 are hyperparameters, ξ ∼ N (0, I), and stands for Hadamard product. Here we let λ1 = 1, λ2 = 4. For Langevin dynamic (GLD), we set noise std σ = 4 in consistence with PLD. Learning rate η = 0.1 for both methods. We initialize w0 = w∗ and apply both methods on L(w) with different barrier heights. Then we record the number of iterations t when wt firstly escaping from the barrier. We repeat this procedure 100 rounds for each method and each barrier height and utilize the average to estimate the mean escaping time, of which the results are shown in Figure.10(b).\nFrom Figure.10(b), the mean escaping time of GLD grows much faster than PLD along barrier height, which validates that power-law dynamic improves the order of barrier height compared with Langevin dynamic." } ]
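As a companion to the escaping-time experiment of Section 7.5.5, the following self-contained sketch (ours) estimates mean escaping time for power-law dynamic ($\lambda_1 = 1$) versus Langevin dynamic ($\lambda_1 = 0$) on the 1-dimensional double-branch loss; the barrier levels b, the repeat count, and the exit criterion w > b/2 are illustrative choices, and the data-averaging in L(w) is dropped for brevity.

```python
import numpy as np

def grad(w, b):
    # l(w) = w^2 + b*w for w < 0 and -w^2 + b*w for w >= 0 (data-averaging dropped)
    return 2.0 * w + b if w < 0 else -2.0 * w + b

def escape_time(lam1, lam2, b, eta=0.1, max_iter=50_000, seed=0):
    rng = np.random.default_rng(seed)
    w_star, saddle = -b / 2.0, b / 2.0        # left-branch minimum and barrier top
    w = w_star
    for t in range(max_iter):
        noise = eta * lam2 * np.sqrt(1.0 + lam1 * (w - w_star) ** 2) * rng.standard_normal()
        w = w - eta * grad(w, b) + noise      # PLD update; lam1 = 0 recovers GLD
        if w > saddle:                        # illustrative exit criterion
            return t
    return max_iter

for b in [1.0, 2.0, 3.0]:                     # barrier height grows with b (Delta L = b^2 / 2)
    pld = np.mean([escape_time(1.0, 4.0, b, seed=s) for s in range(30)])
    gld = np.mean([escape_time(0.0, 4.0, b, seed=s) for s in range(30)])
    print(f"b={b}: mean escape time  PLD={pld:.0f}  GLD={gld:.0f}")
```

Consistent with Figure 10(b), the Langevin column should grow much faster with the barrier height than the power-law column.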
2020
null
SP:45ebf4f0cb747eadb32b5254b75d142755f67af6
[ "In this paper, the authors address the model identifiability in a general setting that can be adapted to several recent deep learning models (DNN supervised learning, CPC, BERT and GPT). Since model parameters (NN weights) are not identifiable, the authors hypothesize that vector f and g can be identifiably up to a linear transformation. Although the purpose of the work is appealing, there are some issues related to the current structure of the paper, proposed theory and its relationship to the provided experiments. See my detailed comments and questions below:" ]
Identifiability is a desirable property of a statistical model: it implies that the true model parameters may be estimated to any desired precision, given sufficient computational resources and data. We study identifiability in the context of representation learning: discovering nonlinear data representations that are optimal with respect to some downstream task. When parameterized as deep neural networks, such representation functions lack identifiability in parameter space, because they are overparameterized by design. In this paper, building on recent advances in nonlinear Independent Components Analysis, we aim to rehabilitate identifiability by showing that a large family of discriminative models are in fact identifiable in function space, up to a linear indeterminacy. Many models for representation learning in a wide variety of domains, including text, images and audio, have been identifiable in this sense, several of them state-of-the-art at time of publication. We derive sufficient conditions for linear identifiability and provide empirical support for the result on both simulated and real-world data.
[]
[ { "authors": [ "J. Bradbury", "R. Frostig", "P. Hawkins", "M.J. Johnson", "C. Leary", "D. Maclaurin", "S. Wandermanmilne. Jax" ], "title": "Composable transformations of Python+NumPy programs, 2018", "venue": "URL Http: //Github.Com/Google/Jax", "year": 2018 }, { "authors": [ "T.B. Brown", "B. Mann", "N. Ryder", "M. Subbiah", "J. Kaplan", "P. Dhariwal", "A. Neelakantan", "P. Shyam", "A.G. Sastry" ], "title": "Askell, and Others. Language Models are Few-Shot Learners", "venue": "Arxiv Preprint Arxiv:2005.14165,", "year": 2020 }, { "authors": [ "C. Chelba", "T. Mikolov", "M. Schuster", "Q. Ge", "T. Brants", "P. Koehn", "t. Robinson" ], "title": "One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling", "venue": "Arxiv Preprint Arxiv:1312.3005,", "year": 2013 }, { "authors": [ "A.M. Dai", "Q.V. Le" ], "title": "Semi-Supervised Sequence Learning", "venue": "In Advances in Neural information Processing Systems,", "year": 2015 }, { "authors": [ "J. Devlin", "M.-W. Chang", "K. Lee", "K. Toutanova" ], "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "venue": "Arxiv Preprint", "year": 2018 }, { "authors": [ "D. Erhan", "Y. Bengio", "A. Courville", "P.-A. Manzagol", "P. Vincent", "S. Bengio" ], "title": "Why Does Unsupervised Pre-training Help Deep Learning", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "O.J. Hénaff", "A. Razavi", "C. Doersch", "S. Eslami", "A.V.D. Oord" ], "title": "Data-Efficient Image Recognition with Contrastive Predictive Coding", "venue": "Arxiv Preprint Arxiv:1905.09272,", "year": 2019 }, { "authors": [ "S. Hochreiter", "J. Schmidhuber" ], "title": "Long Short-Term Memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "E. Hoffer", "N. Ailon" ], "title": "Deep Metric Learning Using Triplet Network", "venue": "In International Workshop On Similarity-Based Pattern Recognition,", "year": 2015 }, { "authors": [ "K. Hornik", "M. Stinchcombe", "H. White" ], "title": "Multilayer Feedforward Networks are Universal Approximators", "venue": "Neural Networks,", "year": 1989 }, { "authors": [ "H. Hotelling" ], "title": "Relations Between Two Sets of Variates", "venue": "Biometrika, 28(3/4):321–377,", "year": 1936 }, { "authors": [ "A. Hyvärinen", "H. Morioka" ], "title": "Unsupervised Feature Extraction by Time-Contrastive Learning and Nonlinear ICA", "venue": "In Advances in Neural information Processing Systems,", "year": 2016 }, { "authors": [ "A. Hyvärinen", "H. Sasaki", "R.E. Turner" ], "title": "Nonlinear ICA Using Auxiliary Variables and Generalized Contrastive Learning", "venue": "Arxiv Preprint", "year": 2018 }, { "authors": [ "F. Johansson", "U. Shalit", "D. Sontag" ], "title": "Learning representations for counterfactual inference", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "I. Khemakhem", "D.P. Kingma", "A. Hyvärinen" ], "title": "Variational Autoencoders and Nonlinear ICA: A Unifying Framework", "venue": "Arxiv Preprint Arxiv:1907.04809,", "year": 2019 }, { "authors": [ "I. Khemakhem", "R.P. Monti", "D.P. Kingma", "A. Hyvärinen" ], "title": "ICE-BeeM: Identifiable Conditional Energy-based Deep Models", "venue": "Arxiv Preprint Arxiv:2002.11537,", "year": 2020 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A Method for Stochastic Optimization", "venue": "Arxiv Preprint Arxiv:1412.6980,", "year": 2014 }, { "authors": [ "P.J. Liu", "M. Saleh", "E. Pot", "B. Goodrich", "R. Sepassi", "L. 
Kaiser", "N. Shazeer" ], "title": "Generating Wikipedia by Summarizing Long Sequences", "venue": "Arxiv Preprint", "year": 2018 }, { "authors": [ "C. Louizos", "U. Shalit", "J.M. Mooij", "D. Sontag", "R. Zemel", "M. Welling" ], "title": "Causal effect inference with deep latent-variable models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "T. Mikolov", "M. Karafiát", "L. Burget", "J. Černockỳ", "S. Khudanpur" ], "title": "Recurrent Neural Network Based Language Model", "venue": "In Eleventh Annual Conference of The international Speech Communication Association,", "year": 2010 }, { "authors": [ "T. Mikolov", "I. Sutskever", "K. Chen", "G.S. Corrado", "J. Dean" ], "title": "Distributed Representations of Words and Phrases and their Compositionality", "venue": "In Advances in Neural information Processing Systems,", "year": 2013 }, { "authors": [ "A. Mnih", "G.E. Hinton" ], "title": "A Scalable Hierarchical Distributed Language Model", "venue": "In Advances in Neural information Processing Systems,", "year": 2009 }, { "authors": [ "A. Mnih", "Y.W. Teh" ], "title": "A Fast and Simple Algorithm for Training Neural Probabilistic Language Models", "venue": "Arxiv Preprint Arxiv:1206.6426,", "year": 2012 }, { "authors": [ "A.S. Morcos", "M. Raghu", "S. Bengio" ], "title": "Insights on Representational Similarity in Neural Networks with Canonical Correlation, 2018", "venue": null, "year": 2018 }, { "authors": [ "A.V.D. Oord", "Y. Li", "O. Vinyals" ], "title": "Representation Learning with Contrastive Predictive Coding", "venue": "Arxiv Preprint", "year": 2018 }, { "authors": [ "K. Pearson" ], "title": "LIII. On Lines and Planes of Closest Fit to Systems of Points in Space", "venue": "The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science,", "year": 1901 }, { "authors": [ "A. Radford", "K. Narasimhan", "T. Salimans", "I. Sutskever" ], "title": "Improving Language Understanding by Generative Pre-training", "venue": null, "year": 2018 }, { "authors": [ "A. Radford", "J. Wu", "R. Child", "D. Luan", "D. Amodei", "I. Sutskever" ], "title": "Language Models are Unsupervised Multitask Learners", "venue": "Openai Blog,", "year": 2019 }, { "authors": [ "M. Raghu", "J. Gilmer", "J. Yosinski", "J. Sohl-Dickstein" ], "title": "SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and interpretability", "venue": "In Advances in Neural information Processing Systems,", "year": 2017 }, { "authors": [ "A. Sharif Razavian", "H. Azizpour", "J. Sullivan", "S. Carlsson" ], "title": "CNN Features Off-the-Shelf: An Astounding Baseline for Recognition", "venue": "In Proceedings of The Ieee Conference On Computer Vision and Pattern Recognition Workshops,", "year": 2014 }, { "authors": [ "K. Sohn" ], "title": "Improved Deep Metric Learning with Multi-class N-Pair Loss Objective", "venue": "In Advances in Neural information Processing Systems,", "year": 2016 }, { "authors": [ "P. Sorrenson", "C. Rother", "U. Köthe" ], "title": "Disentanglement by Nonlinear ICA with General Incompressible-flow Networks (Gin)", "venue": null, "year": 2020 }, { "authors": [ "A. Vaswani", "N. Shazeer", "N. Parmar", "J. Uszkoreit", "L. Jones", "A.N. Gomez", "Ł. Kaiser", "I. Polosukhin" ], "title": "Attention is All You Need", "venue": "In Advances in Neural information Processing Systems,", "year": 2017 }, { "authors": [ "P. Vincent", "H. Larochelle", "Y. Bengio", "P.-A. 
Manzagol" ], "title": "Extracting and Composing Robust Features with Denoising Autoencoders", "venue": "In Proceedings of The 25th international Conference On Machine Learning,", "year": 2008 }, { "authors": [ "T. Wolf", "L. Debut", "V. Sanh", "J. Chaumond", "C. Delangue", "A. Moi", "P. Cistac", "T. Rault", "R. Louf", "M. Funtowicz", "J. Brew" ], "title": "Huggingface’s Transformers: State-of-the-art", "venue": "Natural Language Processing. Arxiv,", "year": 2019 }, { "authors": [ "Z. Yang", "Z. Dai", "Y. Yang", "J. Carbonell", "R. Salakhutdinov", "Q.V. Le" ], "title": "XLNET: Generalized Autoregressive Pretraining for Language Understanding", "venue": "Arxiv Preprint Arxiv:1906.08237,", "year": 2019 }, { "authors": [ "HuggingFace Wolf" ], "title": "REMARK ON EFFECT OF INITIALIZATION AND HYPERPARAMETERS OF MODELS One question that may be of interest is whether initialization affects whether learned representations will be within a linear transformation of each other. This depends on whether the optimization routines (like Adam, AdaGrad", "venue": null, "year": 2019 }, { "authors": [ "pk(xt+k|ct", "Oord" ], "title": "2018) model a density ratio in order to preserve the mutual information between xt+k and ct", "venue": null, "year": 2018 }, { "authors": [ "gθ. D" ], "title": "NEURAL PROBABILISTIC LANGUAGE MODELS (NPLMS) Figure 1 shows results from a neural probabilistic language model as proposed in Mnih and Teh (2012). Mnih and Teh (2012) propose using a log-bilinear model (Mnih and Hinton, 2009) which, given some context h, learns a context word vectors rw and target word vectors qw", "venue": null, "year": 2009 } ]
[ { "heading": "1 INTRODUCTION", "text": "An increasingly common methodology in machine learning is to improve performance on a primary down-stream task by first learning a high-dimensional representation of the data on a related, proxy task. In this paradigm, training a model reduces to fine-tuning the learned representations for optimal performance on a particular sub-task (Erhan et al., 2010). Deep neural networks (DNNs), as flexible function approximators, have been surprisingly successful in discovering effective high-dimensional representations for use in downstream tasks such as image classification (Sharif Razavian et al., 2014), text generation (Radford et al., 2018; Devlin et al., 2018), and sequential decision making (Oord et al., 2018).\nWhen learning representations for downstream tasks, it would be useful if the representations were reproducible, in the sense that every time a network relearns the representation function on the same data distribution, they were approximately the same, regardless of small deviations in the initialization of the parameters or the optimization procedure. In some applications, such as learning real-world causal relationships from data, such reproducible learned representations are crucial for accurate and robust inference (Johansson et al., 2016; Louizos et al., 2017). A rigorous way to achieve reproducibility is to choose a model whose representation function is identifiable in function space. Informally speaking, identifiability in function space is achieved when, in the limit of infinite data, there exists a single, global optimum in function space. Interestingly, Figure 1 exhibits learned representation functions that appear to be the same up to a linear transformation, even on finite data and optimized without convergence guarantees (see Appendix A.1 for training details).\nIn this paper, we account for Figure 1 by making precise the relationship it exemplifies. We prove that a large class of discriminative and autoregressive models are identifiable in function space, up to a linear transformation. Our results extend recent advances in the theory of nonlinear Independent Components Analysis (ICA), which have recently provided strong identifiability results for generative models of data (Hyvärinen et al., 2018; Khemakhem et al., 2019; 2020; Sorrenson et al., 2020). Our key contribution is to bridge the gap between these results and discriminative models, commonly used for representation learning (e.g., (Hénaff et al., 2019; Brown et al., 2020)).\nThe rest of the paper is organized as follows. In Section 2, we describe a general discriminative model family, defined by its canonical mathematical form, which generalizes many supervised, self-\nsupervised, and contrastive learning frameworks. In Section 3, we prove that learned representations in this family have an asymptotic property desirable for representation learning: equality up to a linear transformation. In Section 4, we show that this family includes a number of highly performant models, state-of-the-art at publication for their problem domains, including CPC (Oord et al., 2018), BERT (Devlin et al., 2018), and GPT-2 and GPT-3 (Radford et al., 2018; 2019; Brown et al., 2020). Section 5 investigates the actually realizable regime of finite data and partial optimization, showing that representations learned by members of the identifiable model family approach equality up to a linear transformation as a function of dataset size, neural network capacity, and optimization progress." 
}, { "heading": "2 MODEL FAMILY AND DATA DISTRIBUTION", "text": "The learned embeddings of a DNN are a function not only of the parameters, but also the network architecture and size of dataset (viewed as a sample from the underlying data distribution). This renders any analysis in full generality challenging. To make such an analysis tractable, in this section, we begin by specifying a set of assumptions about the underlying data distribution and model family that must hold for the learned representations to be similar up to a linear transformation. These assumptions are, in fact, satisfied by a number of already published, highly performant models. We establish definitions in this section, and discuss these existing approaches in depth in Section 4.\nData Distribution We assume the existence of a generalized dataset in the form of an empirical distribution pD(x,y,S) over random variables x, y and S with the following properties:\n• The random variable x is an input variable, typically high-dimensional, such as text or an image. • The random variable y is a target variable whose value the model predicts. In case of object\nclassification, this would be some semantically meaningful class label. However, in our model family, y may also be a high-dimensional context variable, such a text, image, or sentence fragment. • S is a set containing the possible values of y given x, so pD(y|x,S) > 0 ⇐⇒ y ∈ S.\nNote that the set of labels S is not fixed, but a random variable. This allows supervised, contrastive, and self-supervised learning frameworks to be analyzed together: the meaning of S encodes the task. For supervised classification, S is deterministic and contains class labels. For self-supervised pretraining, S contains randomly-sampled high-dimensional variables such as image embeddings. For deep metric learning (Hoffer and Ailon, 2015; Sohn, 2016), the set S contains one positive and k negative samples of the class to which x belongs.\nCanonical Discriminative Form Given a data distribution as above, a generalized discriminative model family may be defined by its parameterization of the probability of a target variable y conditioned on an observed variable x and a set S that contains not only the true target label y, but\nalso a collection of distractors y′:\npθ(y|x,S) = exp(fθ(x) >gθ(y))∑ y′∈S exp(fθ(x) >gθ(y′)) , (1)\nThe codomain of the functions fθ(x) and gθ(y) is RM , and the domains vary according to modelling task. For notational convenience both are parameterized by θ ∈ Θ, but f and g may use disjoint parts of θ, meaning that they do not necessarily share parameters.\nWith F and G we denote the function spaces of fθ and gθ respectively. Our primary domain of interest is when fθ and gθ are highly flexible function approximators, such as DNNs. This brings certain analytical challenges. In neural networks, different choices of parameters θ can result in the same functions fθ and gθ, hence the map Θ → F × G is many-to-one. In the context of representation learning, the function fθ is typically viewed as a nonlinear feature extractor, e.g., the learned representation of the input data. While other choices meet the membership conditions for the family defined by the canonical form of Equation (1), in the remainder, we will focus on DNNs in the remainder. We next present a definition of identifiability suitable for DNNs, and prove that members of the above family satisfy it under additional assumptions." 
}, { "heading": "3 MODEL IDENTIFIABILITY", "text": "In this section, we derive identifiability conditions for models in the family defined in Section 2." }, { "heading": "3.1 IDENTIFIABILITY IN PARAMETER SPACE", "text": "Identifiability analysis answers the question of whether it is theoretically possible to learn the parameters of a statistical model exactly. Specifically, given some estimator θ′ for model parameters θ∗, identifiability is the property that, for any {θ′,θ∗} ⊂ Θ, pθ′ = pθ∗ =⇒ θ′ = θ∗. (2) Models that do not have this property are said to be non-identifiable. This happens when different values {θ′,θ∗} ⊂ Θ can give rise to the same model distribution pθ′(y|x,S) = pθ∗(y|x,S). In such a case, observing an empirical distribution pθ∗(y|x,S), and fitting a model pθ′(y|x,S) to it perfectly does not guarantee that θ′ = θ∗.\nNeural networks exhibit various symmetries in parameter space such that there is almost always a many-to-one correspondence between a choice of θ and resulting probability function pθ. A simple example in neural networks is that one can swap the (incoming and outgoing) connections of two neurons in a hidden layer. This changes the value of the parameters, but does not change the network’s function. Thus, when representation functions fθ or gθ are parameterized as DNNs, equation 2 is not satisfiable." }, { "heading": "3.2 IDENTIFIABILITY IN FUNCTION SPACE", "text": "For reliable and efficient representation learning, we want learned representations fθ from two identifiable models to be sufficiently similar for interchangeable use in downstream tasks. The most general property we wish to preserve among learned representations is their ability to discriminate among statistical patterns corresponding to categorical groupings. In the model family defined in Section 2, the data and context functions fθ and gθ parameterize pθ(y|x,S), the probability of label assignment, through a normalized inner product. This induces a hyperplane boundary, for discrimination, in a joint space of learned representations for data x and context y. Therefore, in the following, we will derive identifiability conditions up to a linear transformation, using a notion of similarity in parameter space inspired by Hyvärinen et al. (2018).\nDefinition 1. Let L∼ be a pairwise relation on Θ defined as:\nθ′ L∼ θ∗ ⇐⇒ fθ ′(x) = Afθ∗(x)\ngθ′(y) = Bgθ∗(y) (3)\nwhereA andB are invertible M ×M matrices. See Appendix B for proof that L∼ is an equivalence relation. In the remainder, we refer to identifiability up to the equivalence relation L∼ as L∼-identifiable or linearly identifiable." }, { "heading": "3.3 LINEAR IDENTIFIABILITY OF LEARNED REPRESENTATIONS", "text": "We next present a simple derivation of the L∼-identifiability of members of the generalized discriminative family defined in Section 2. This result reveals sufficient conditions under which a discriminative probabilistic model pθ(y|x,S) has a useful property: the learned representations of the input x and target random variables y for any two pairs of parameters (θ′,θ∗) are related as θ′ L∼ θ∗, that is, fθ′(x) = Afθ∗(x) and gθ′(y) = Bgθ∗(y).\nWe first review the notation for the proof, which is introduced in detail in Section 2. We then highlight an important requirement on the diversity of the data distribution, which must be satisfied for the proof statement to hold. We prove the result immediately after.\nNotation. 
The target random variables y, associated with input random variables x, may be class labels (as in supervised classification), or they could be stochastically generated from datapoints x as, e.g., perturbed image patches (as in self-supervised learning). We account for this additional stochasticity as a set-valued random variable S, containing all possible values of y conditioned on some x. For brevity, we will use shorthands that drop the parameters θ: p′ := pθ′ , p∗ := pθ∗ , f∗ := fθ∗ , f ′ := fθ′ ,g ′ := gθ′ .\nDiversity condition. We assume that for any (θ′,θ∗) for which it holds that p′ = p∗, and for any given x, by repeated sampling S ∼ pD(S|x) and picking yA,yB ∈ S, we can construct a set of M distinct tuples {(y(i)A ,y (i) B )}Mi=1 such that the matrices L′ and L∗ are invertible, where L′ consists of columns (g′(y(i)A )− g′(y (i) B )), and L ∗ consists of columns g∗(y(i)A )− g∗(y (i) B ), i ∈ {1, . . . ,M}. See Section 3.4 for detailed discussion.\nTheorem 1. Under the diversity condition, models in the family defined by Equation (1) are linearly identifiable. That is, for any θ′,θ∗ ∈ Θ, and f∗, f ′,g∗,g′, p∗, p′ defined as in Section 2,\np′ = p∗ =⇒ θ′ L∼ θ∗. (4)\nTo establish the result, we proceed by directly constructing an invertible linear transformation that satisfies Definition 1. Consider yA,yB ∈ S. The likelihood ratios for these points\np′(yA|x,S) p′(yB |x,S) = p∗(yA|x,S) p∗(yB |x,S)\n(5)\nare equal. Substituting our model definition from equation (1), we find:\nexp(f ′(x)>g′(yA)) exp(f ′(x)>g′(yB)) = exp(f∗(x)>g∗(yA)) exp(f∗(x)>g∗(yB)) , (6)\nwhere the normalizing constants cancelled out on the left- and right-hand sides. Taking the logarithm, this simplifies to:\n(g′(yA)− g′(yB))>f ′(x) = (g∗(yA)− g∗(yB))>f∗(x). (7)\nNote that this equation is true for any triple (x,yA,yB) for which pD(x,yB ,yB) > 0.\nWe next collect M distinct tuples (y(i)A ,y (i) B ) so that by repeating Equation (7) M times and by the diversity condition noted above, the resulting difference vectors are linearly independent. We collect these vectors together as the columns of (M ×M)-dimensional matrices L′ and L∗, forming the following system of M linear equations:\nL′>f ′(x) = L∗>f∗(x).\nSince L′ and L∗ are invertible, we rearrange:\nf ′(x) = (L∗L′−1)> f∗(x). (8)\nHence, f ′(x) = Af∗(x) where A = (L∗L′−1). This completes the first half of the proof. See Appendix C for the second half of the proof, which is similar, and handles the function g." }, { "heading": "3.4 DISCUSSION: WHEN DOES THE DIVERSITY CONDITION HOLD?", "text": "Theorem 1 is a constructive proof of existence that exhibits invertible (M ×M) matrices L′ and L∗. We require the diversity condition to hold in order to guarantee invertibility. Such a requirement is similar to the conditions in earlier work on nonlinear ICA such as (Hyvärinen et al., 2018), as discussed in Section 6. Informally, this means that there needs to be a sufficient number of possible values y ∈ S. In the case of supervised classification with K classes, S is fixed and of size K. Then, we need K ≥M + 1 in order to generate M difference vectors gθ(y(1))− gθ(y(j)), j = 2, . . . ,M + 1. In case of self-supervised or deep metric learning, where S and y may be algorithmically generated from x, this requirement is easy to satisfy, as there will typically be a diversity of values of y. The same holds for language models with large vocabularies. 
However, for supervised classification with a small number of classes, this requirement on the size of S may be restrictive, as we discuss further in Section 4.\nNote that by placing the diversity requirement on the number of classes K, we implicitly assumed that the context representation function gθ has the following property: the M difference vectors span the range of gθ. This is a mild assumption in the context of DNNs: for random initialization and iterative weight updates, this property follows from the stochasticity of the distribution used to initialize the network. Briefly, a set of M + 1 unique points y(j) such that the M vectors gθ(y\n(1)) − gθ(y(j)), j = 2, . . . ,M + 1 are not linearly independent has measure zero. For other choices of gθ, care must be taken to ensure this condition is satisfied.\nWhat can be said when L′ and L∗ are ill-conditioned, that is, the ratio between maximum and minimum singular value σmax(L)σmin(L) (dropping superscripts when a statement apply to both) is large? In the context of a data representation matrix such as L, this implies that there exists at least one column `j of L and constants λk for k 6= j such that ‖`j − ∑ k 6=j λk`k‖2 < ε for small ε. In other words, some column is nearly a linear combination of the others. This implies, in turn, that there exists some tuple (y(k),y(i)) such that the resulting difference vector `j = gθ(y (k) A )− gθ(y (i) B ) can nearly (in the sense above) be written as a linear combination of the other columns. Such near singularity is in this case a function of the choice of samples y that yield the difference vectors. The issue could be handled by resampling different data points until the condition number of the matrices is satisfactory. This amounts to strengthening the diversity condition. We leave more detailed analysis to future work, as the result will depend on the choice of architectures for f and g." }, { "heading": "4 EXAMPLES OF LINEARLY IDENTIFIABLE MODELS", "text": "The form of Equation (1) is already used as a general approach for a variety of machine learning problems. We present a non-exhaustive sample of such publications, chosen to exhibit the range of applications. Many of these approaches were state-of-the-art at the time of their release: Contrastive Predictive Coding (Hénaff et al., 2019), BERT (Devlin et al., 2018), GPT-2 and GPT-3 (Radford et al., 2018; 2019; Brown et al., 2020), XLNET (Yang et al., 2019), and the triplet loss for deep metric learning (Sohn, 2016). In this section, we discuss how to interpret the functional components of these frameworks with respect to the generalized data distribution of Section 2 and canonical parameterization of Equation (1). See Appendix D for reductions to the canonical form of Equation (1).\nSupervised Classification. Although the scope of this paper is identifiable representation learning, under certain conditions, standard supervised classifiers can learn identifiable representations as well. In this case, the number of classes must be strictly greater than the feature dimension, as noted in Section 3.4. We simulate such a model in Section 5.1 to show evidence of its linear identifiability. We stress that representation learning as pretraining for classification is a way to ensure that the conditions on label diversity are met, rather than relying on the supervised classifier itself to generate identifiable representations. 
Self-Supervised Pretraining for Image Classification. Self-supervised learning is a framework that first pretrains a DNN before deploying it on some other, related task. The pretraining task often takes the form of Equation (1) and meets the sufficient conditions to be linearly identifiable. A paradigmatic example is Contrastive Predictive Coding (CPC) (Oord et al., 2018). CPC is a general pretraining framework, but for the sake of clarity we focus here on its use in image models. CPC as applied to images involves: (1) preprocessing an image into augmented patches, (2) assigning labels according to which image the patch came from, and then (3) predicting the representations of the patches lying below, to the right of, to the left of, or above a certain level (Oord et al., 2018).\nThe context function of CPC, $g_\theta(y)$, encodes a particular position in the sequence of patches, and the representation function, $f_\theta(x)$, is an autoregressive function of the previous k patches, according to some predefined patch ordering. Given some x, the collection of all patches from the sequence, from a given minibatch of images, is the set $S \sim p_D(S|x)$, where the randomness enters via the patch preprocessing algorithm. Since the preprocessing phase is part of the algorithm design, it is straightforward to make it sufficiently diverse (enough transformations of enough patches) so as to meet the requirements for the model to be linearly identifiable.
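A minimal sketch of this scoring rule (ours; it assumes a dot-product critic in the form of Equation (1), with illustrative shapes) is:

```python
import numpy as np

def infonce_logprob(f_x, g_candidates, true_idx):
    """log p(y|x, S) from Equation (1): softmax of f(x)^T g(y') over y' in S.

    f_x: (M,) context representation; g_candidates: (|S|, M) rows g(y').
    """
    scores = g_candidates @ f_x
    return scores[true_idx] - np.log(np.sum(np.exp(scores)))

rng = np.random.default_rng(2)
f_x = rng.normal(size=64)              # f_theta(x): autoregressive patch context
S = rng.normal(size=(32, 64))          # g_theta(y') for the true + distractor patches
loss = -infonce_logprob(f_x, S, true_idx=0)   # contrastive loss for one example
```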
Multi-task Pretraining for Natural Language Generation. Autoregressive language models, such as (Mikolov et al., 2010; Dai and Le, 2015) and more recently GPT-2 and GPT-3 (Radford et al., 2018; 2019; Brown et al., 2020), are typically also instances of the model family of Equation (1). Data points x are the past tokens, $f_\theta(x)$ is a nonlinear representation of the past estimated by either an LSTM (Hochreiter and Schmidhuber, 1997) or an autoregressive Transformer model (Vaswani et al., 2017), y is the next token, and $w_i = g_\theta(y = i)$ is a learned representation of the next token, often implemented as a simple look-up table, as in supervised classification.\nBERT (Devlin et al., 2018) is also a member of the linearly identifiable family. This model pretrains word embeddings through a denoising autoencoder-like (Vincent et al., 2008) architecture. For a given sequence of tokenized text, some fixed percentage of the symbols are extracted and set aside, and their original values set to a special null symbol, \"corrupting\" the original sequence. The pretraining task in BERT is to learn a continuous representation of the extracted symbols conditioned on the remainder of the text. A Transformer (Vaswani et al., 2017) function approximator is used to map from the corrupted sequence into a continuous space. The Transformer network is the $f_\theta(x)$ function of Equation (1). The context map $g_\theta(y)$ is a lookup map into the learned basis vector for each token." }, { "heading": "5 EXPERIMENTS", "text": "The derivation in Section 3 shows that, for models in the general discriminative family defined in Section 2, the functions $f_\theta$ and $g_\theta$ are identifiable up to a linear transformation given unbounded data and assuming model convergence. The question remains as to how closely a model trained on finite data and without convergence guarantees will approach this limit. One subtle issue is that poor architecture choices (such as too few hidden units, or inadequate inductive priors) or insufficient data samples during training can interfere with model estimation, and thereby with the linear identifiability of the learned representations, due to underfitting. In this section, we study this issue over a range of models, from low-dimensional language embedding and supervised classification (Figures 1 and 2, respectively) to GPT-2 (Radford et al., 2019), an approximately $1.5 \times 10^9$-parameter generative model of natural language (Figure 4). See Appendix A and the code release for the details needed to reproduce.\nThrough these experiments, we show that (1) in the small dimensional, large data regime, linearly identifiable models yield learned representations that lie approximately within a linear transformation of each other (Figures 1 and 2), as predicted by Theorem 1; and (2) in the high dimensional, large data regime, linearly identifiable models yield learned representations that exhibit a strong trend towards linear identifiability. The learned representations approach a linear transformation of each other monotonically, as a function of dataset sample size, neural network capacity (number of hidden units), and optimization progress. In the case of GPT-2, which has benefited from substantial tuning by engineers to improve model estimation, we find strong evidence of linear identifiability.\nMeasuring linear similarity between learned representations. How can we measure whether pairs of learned representations live within a linear transformation of each other in function space? We adapt Canonical Correlation Analysis (CCA) (Hotelling, 1936) for this purpose, which finds the optimal linear transformations to maximize correlation between two random vectors. On a randomly selected held-out subset $B \subset D$ of the training data we compute $f_{\theta_1}(B)$ and $f_{\theta_2}(B)$ for two models with parameters $\theta_1$ and $\theta_2$, respectively. Assume without loss of generality that $f_{\theta_1}(B)$ and $f_{\theta_2}(B)$ are centered. CCA finds the optimal linear transformations C and D such that the pairwise correlations $\rho_i$ between the i-th columns of $C^\top f_{\theta_1}(B)$ and $D^\top f_{\theta_2}(B)$ are maximized. We collect the correlations together in $\rho$. If, after linear transformation, the two matrices are aligned, the mean of $\rho$ will be 1; if they are instead uncorrelated, then the mean of $\rho$ will be 0. We use the mean of $\rho$ as a proxy for the existence of a linear transformation between $f_{\theta_1}(B)$ and $f_{\theta_2}(B)$. For DNNs, it is a well-known phenomenon that most of the variability in a learned representation tends to concentrate in a low-dimensional subspace, leaving many noisy, random dimensions (Morcos et al., 2018). Such random noise can result in spurious high correlations in CCA. A solution to this problem is to apply Principal Components Analysis (PCA) (Pearson, 1901) to each of the two matrices $f_{\theta_2}(B)$ and $f_{\theta_1}(B)$, projecting onto their top-k principal components, before applying CCA. This technique is known as SVCCA (Raghu et al., 2017).
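A compact implementation of this measurement might look as follows (our sketch; the truncation level k is an illustrative choice). It uses the fact that the canonical correlations between two subspaces are the singular values of the product of their orthonormal bases:

```python
import numpy as np

def svcca_mean(X, Y, k=20):
    """Mean canonical correlation after projecting each view onto its top-k PCs.

    X, Y: (n_samples, dim) matrices of representations f_{theta1}(B), f_{theta2}(B).
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Ux, _, _ = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
    Uy, _, _ = np.linalg.svd(Y, full_matrices=False)
    # Canonical correlations between the two k-dim subspaces are the singular
    # values of the product of their orthonormal bases.
    rho = np.linalg.svd(Ux[:, :k].T @ Uy[:, :k], compute_uv=False)
    return rho.mean()

# Sanity check: a representation is perfectly correlated with any invertible
# linear transformation of itself, so the mean correlation should be ~1.0.
rng = np.random.default_rng(3)
Z = rng.normal(size=(1000, 64))
print(svcca_mean(Z, Z @ rng.normal(size=(64, 64)), k=20))
```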
\n5.1 SIMULATION STUDY: CLASSIFICATION BY DNNS\nWe report first on a simulation study of linearly identifiable K-way classification, where all assumptions and sufficient conditions of Theorem 1 are guaranteed to be met. We generated a synthetic data distribution with the properties required by Section 2, and chose DNNs that had sufficient capacity to learn a specified nonlinear relationship between inputs x and targets y. In short, the data distribution $p_D(x, y, S)$ consists of inputs x sampled from a 2-D Gaussian with $\sigma = 3$. The targets y were assigned among $K = 18$ classes according to their radial position (the angle swept out by a ray fixed at the origin). The number of classes K was chosen to ensure $K \geq \dim[f_\theta(x)] + 1$, the diversity condition. See Appendix D.1 for more details.\nTo evaluate linear similarity, we trained two randomly initialized models of $p_D(y|x, S)$. Plots show $f_\theta(x)$, the data representation function, on random x. Figure 2b shows that the mean CCA increases to its maximum value over training, demonstrating that the feature spaces converge to the same solution up to a linear transformation, modulo model estimation noise. Similarly, Figure 2c shows that the learned representations exhibit a strongly linear relationship.
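Under our reading of this setup, the synthetic data could be generated as follows (a sketch; the exact binning of angles into classes is our illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(4)
n, K, sigma = 10_000, 18, 3.0

x = rng.normal(scale=sigma, size=(n, 2))          # 2-D Gaussian inputs
angle = np.arctan2(x[:, 1], x[:, 0])              # radial position in (-pi, pi]
y = ((angle + np.pi) / (2 * np.pi) * K).astype(int) % K   # K angular bins

# K = 18 exceeds dim(f(x)) + 1 = 3 for a 2-D bottleneck, so the diversity
# condition of Section 3.4 is satisfied by construction.
```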
" }, { "heading": "5.2 SELF-SUPERVISED LEARNING FOR IMAGE CLASSIFICATION", "text": "We next investigate high-dimensional, self-supervised representation learning on CIFAR-10 (Krizhevsky et al., 2009) using CPC (Oord et al., 2018; Hénaff et al., 2019). For a given input image, this model predicts the identity of a bottom image patch representation given a top patch representation (Figure 3a). Here, S comprises the true patch together with a set of distractor patches from across the current minibatch. For each model we define both $f_{\theta'}$ and $g_{\theta'}$ as a 3-layer MLP with 256 units per layer (except where noted otherwise) and fix the output dimensionality at 64.\nIn Figure 3b, CCA coefficients are plotted over the course of training. As training progresses, the alignment between the learned representations increases. In Figure 3c, we artificially limited the size of the dataset, and plot the mean correlation after training and convergence. This shows that increasing availability of data correlates with closer alignment. In Figure 3d, we fix the dataset size and artificially limit the model capacity (number of hidden units) to investigate the effect of model size on the learned representations, varying the number of hidden units from 64 to 8192. This shows that increasing model capacity correlates with increased alignment of the learned representations." }, { "heading": "5.3 GPT-2", "text": "Finally, we report on a study of GPT-2 (Radford et al., 2019), a massive-scale language model. The identifiable representation is the set of features just before the last linear layer of the model. We use pretrained models from HuggingFace (Wolf et al., 2019). HuggingFace provides four different versions of GPT-2: gpt2, gpt2-medium, gpt2-large and gpt2-xl, which differ mainly in the hyper-parameters that determine the width and depth of the neural network layers. For approximately 2000 input sentences, per timestep, for each model, we extracted representations at the last layer (which is identifiable) in addition to the representations per timestep given by three earlier layers in the model. Then, we performed SVCCA on each possible pair of models, on each of the four representations. SVCCA was performed with 16, 64, 256 and 768 principal components, computed by applying SVD separately for each representation of each model. We chose 768 as the largest number of principal components, since that is the representation size for the smallest model in the repository (gpt2). We then averaged the CCA correlation coefficients across the pairs of models. Figure 4 shows the results. The results align well with our theory, namely that the representations at the last layer are more linearly related than the representations at other layers of the model.
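A sketch of the extraction step, using the publicly available HuggingFace models (our reconstruction, not the released notebook; the input sentence is illustrative, and we omit the SVCCA comparison shown earlier):

```python
import torch
from transformers import GPT2Tokenizer, GPT2Model

tok = GPT2Tokenizer.from_pretrained("gpt2")
small = GPT2Model.from_pretrained("gpt2")             # hidden size 768
medium = GPT2Model.from_pretrained("gpt2-medium")     # hidden size 1024

inputs = tok("Identifiability is a property of a model class.",
             return_tensors="pt")
with torch.no_grad():
    # Per-timestep features at the last layer, i.e., just before the
    # final linear (token-embedding) projection.
    h_small = small(**inputs).last_hidden_state[0]    # (T, 768)
    h_medium = medium(**inputs).last_hidden_state[0]  # (T, 1024)
```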
" }, { "heading": "5.4 INTERPRETATION AND SUMMARY", "text": "Theorem 1 establishes linear identifiability as an asymptotic property of a model that holds in the limit of infinite data and exact estimation. The experiments of this section have shown that for linearly identifiable models, when the dimensionality is small relative to the dataset size (Figures 1 and 2), the learned embeddings are closely linearly related, up to noise. Problems of model estimation and sufficient dataset size are more pronounced in high dimensions. Nevertheless, in GPT-2, representations among different trained models do in fact approach a mean correlation coefficient of 1.0 after training (Figure 4, blue line), providing strong evidence of linear identifiability." }, { "heading": "6 RELATED WORKS", "text": "Prior to Hyvärinen and Morioka (2016), identifiability analysis was uncommon in deep learning. We build on advances in the theory of nonlinear ICA (Hyvärinen and Morioka, 2016; Hyvärinen et al., 2018; Khemakhem et al., 2019). In this section, we carefully distinguish our results from prior and concurrent work. Our diversity assumption is similar to the diversity assumptions in these earlier works, while differing in certain conditions. The main difference is that their results apply to related but distinct families of models compared to the general discriminative family outlined in this paper. Arguably most related is Theorem 3 of Hyvärinen et al. (2018) and its proof, which shows that a class of contrastive discriminative models will estimate, up to an affine transformation, the true latent variables of a nonlinear ICA model. The main difference with our result is that they additionally assume: (1) that the mapping between observed variables and latent representations is invertible; and (2) that the discriminative model is binary logistic regression exhibiting universal approximation (Hornik et al., 1989), estimated with a contrastive objective. In addition, Hyvärinen et al. (2018) do not present conditions for affine identifiability for their version of the context representation function g. It should be noted that Theorem 1 in Hyvärinen et al. (2018) provides a potential avenue for further generalization of our Theorem 1 to discriminative models with non-linear interaction between f and g.\nConcurrent work (Khemakhem et al., 2020) has expanded the theory of identifiable nonlinear ICA to a class of conditional energy-based models (EBMs) with universal density approximation capability, therefore imposing milder assumptions than previous nonlinear ICA results. Their version of affine identifiability is similar to our result of linear identifiability in Section 3.2. The main difference is that Khemakhem et al. (2020) focus in both theory and experiment on EBMs. This allows for alternative versions of the diversity condition, assuming that the Jacobians of their versions of f or g are full rank. This is only possible if x or y are assumed continuous-valued; note that we do not make such an assumption. Khemakhem et al. (2020) also present an architecture for which the conditions provably hold, in addition to sufficient conditions for identifiability up to element-wise scaling, which we did not explore in this work. While we build on these earlier results, we are, to the best of our knowledge, the first to apply identifiability analysis to state-of-the-art discriminative and autoregressive generative models." }, { "heading": "7 CONCLUSION", "text": "We have shown that representations learned by a large family of discriminative models are identifiable up to a linear transformation, providing a novel perspective on representation learning using DNNs. Since identifiability is a property of a model class, and identification is realized in the asymptotic limit of data and compute, we perform experiments in the more realistic setting with finite datasets and finite compute. Our empirical results show that as the representational capacity of the model and the dataset size increase, learned representations indeed tend towards solutions that are equal up to only a linear transformation." }, { "heading": "A REPRODUCING EXPERIMENTS AND FIGURES", "text": "In this section, we present the training and optimization details needed to reproduce our empirical validation of Theorem 1. We also published notebooks and check-pointed weights for two crucial experiments that investigate the result in the small and massive scale regimes, for Figure 1 and GPT-2 (ANONYMIZED)." }, { "heading": "A.1 FIGURE 1", "text": "We provide a Jupyter notebook and model checkpoints for reproducing Figure 1. Please refer to these for hyperparameter settings. In short, we implemented a model (Mnih and Teh, 2012) in the family of Section 2 and trained it on the Billion Word dataset (Chelba et al., 2013). This is illustrative of the property of Theorem 1 because the relatively modest size of the parameter space (see notebook) and the massive dataset minimize model convergence and data availability restrictions, i.e., the setting approaches the asymptotic regime.\nThe word embedding space is 2-D for ease of visualization. We randomly selected a subset of words, mapped them into their learned embeddings, and visualized them as points in the left and middle panes. We then regress pane one onto pane two in order to learn the best linear transformation between them. Note that if the two are linear transformations of each other, regression will recover that transformation exactly."
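The regression step can be a plain least-squares fit (a sketch of ours, assuming the two embedding sets are stacked row-wise with matching word order):

```python
import numpy as np

def align(E1, E2):
    """Fit the best linear map from embeddings E1 onto E2, as in Figure 1.

    E1, E2: (n_words, d) learned embeddings from two training runs. A small
    relative residual indicates the runs differ by (nearly) a linear map.
    """
    A, *_ = np.linalg.lstsq(E1, E2, rcond=None)
    residual = np.linalg.norm(E1 @ A - E2) / np.linalg.norm(E2)
    return A, residual
```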
}, { "heading": "A.2 SIMULATION STUDY: CLASSIFICATION BY DNNS", "text": "For this experiment, we want to ensure that the chosen model can fit the data distribution exactly. Controlling this removes one possible factor that could prevent linear identifiability of learned representations despite the model formally having that property. We do this by making sure that the process that generates the dataset matches the model chosen to learn the relationships between inputs and labels.\nThis is achieved through the following algorithm. We first randomly assign initialization labels based on angular position, then fit two neural networks fθ? and gθ? to predict the final labels, using the discriminative model of Equation (1) and Appendix D.1. Both fθ? and gθ? 4-hidden-layer MLPs with two 64 unit layers and one 2-D bottle neck layer. After training these representation functions to convergence, generated new batch of points x, and used the trained networks to predict the ground truth labels y.\nFinally, to conduct experiments, we chose fθ′ and gθ′ to be the same architecture as fθ? and gθ? . This ensures that the supervised classifier we attempted to learn would using the function approximators fθ′ and gθ′ would be able to capture the true data generating process, e.g, would not fail due to too few hidden units, or too complex a relationship between targets and inputs.\nRemaining training details are as follows. We optimize weights using Adam with a learning rate of 10−4 for 5 ∗ 104 iterations. To make the classification problem more challenging, we additionally add 20 input dimensions of random noise to the data. The Adam optimizer Kingma and Ba (2014) with a learning rate of 3 · 10−4 is used." }, { "heading": "A.3 SELF-SUPERVISED LEARNING FOR IMAGE CLASSIFICATION", "text": "To compute linear similarity between representations, we train two independent models in parallel. For each model we define both fθ and gθ as a 3-layer fully connected neural network with 28 units per layer and a fixed output dimensionality of 26. We define our model following Equation (1), where S is the set of the other image patches from the current minibatch and optimize the objective of (Hénaff et al., 2019). We augment both sampled patches independently with randomized brightness, saturation, hue, and contrast adjustments, following the recipe of (Hénaff et al., 2019). We train on the CIFAR10 dataset (Krizhevsky et al., 2009) with batchsize 28, using the Adam optimizer with a learning rate of 10−4 and the JAX (Bradbury et al., 2018) software package. For each model, we early stop based on a validation loss failing to improve further.\nAdditional details about the experiments that generated Figure 3:\nFigure 3 a. Patches are sampled randomly from training images.\nFigure 3 b. For each model, we train for at most 3 ∗ 104 iterations, early stopping when necessary based on validation loss.\nFigure 3 c. For each model, we train for at most 3 ∗ 104 iterations, early stopping when necessary based on validation loss.\nFigure 3 d. Error bars show standard error computed over 5 pairs of models after 1.5 ∗ 104 training iterations.\nA.4 GPT-2\nWe include all details through a notebook in the code release. Pretrained GPT-2 weights as specified in the main text are publicly available from HuggingFace Wolf et al. (2019)." 
}, { "heading": "A.5 REMARK ON EFFECT OF INITIALIZATION AND HYPERPARAMETERS OF MODELS", "text": "One question that may be of interest is whether initialization affects whether learned representations will be within a linear transformation of each other. This depends on whether the optimization routines (like Adam, AdaGrad, etc.) are robust to wider initialization within a certain range. If so, model convergence will be unaffected. However, this cannot make up for poor initialization or poor optimization: just as in any deep neural network, a poor initialization and inadequate optimizer will interfere with learning the model parameters. In the case of a linearly identifiable model, means that the learned representations would not live within a linear transformation of each other (up to noise from model fitting), since the models have failed to converge to a reasonable solution for the task at hand.\nWhen the hyperparameters of a DNN are changed, this changes the class of functions that the network can represent (i.e., the size and stride of convolution filters will change which input pixels could be correlated in deeper layers). Typically, hyperparameters are carefully tuned using cross validation based on held-out data. We did so in our experiments also. We expect that such a tuning procedure would yield hyperparameters that are as good as possible for the model to be optimized, allowing sufficient optimization so that the linear identifiability of the learned representations is realized. If the hyperparameters are sufficiently bad and optimization suffers, this will interfere with model fitting, and with linear identifiability of the learned representations also." }, { "heading": "B PROOF THAT LINEAR SIMILARITY IS AN EQUIVALENCE RELATION", "text": "We claim that L∼ is an equivalence relation. It suffices to show that it is reflexive, transitive, and symmetric.\nProof. Consider some function gθ and some θ′,θ?,θ† ⊂ Θ. Suppose θ′ L∼ θ?. Then, there exists an invertible matrix B such that gθ′(x) = Bgθ?(x). Since gθ?(x) = B−1gθ′(x), L∼ is symmetric. Reflexivity follows from setting gθ? to gθ′ and B to the identity matrix. To show transitivity, suppose also that θ? L∼ θ†. Then, there exists an invertible C such that gθ?(x) = Cgθ†(x). Since gθ′\nL∼ gθ? , B−1gθ′(x) = Cgθ†(x). Rearranging terms, gθ′(x) = BCgθ†(x), so that θ′ L∼ θ† as required." }, { "heading": "C SECTION 3.2 CONTINUED: CASE OF CONTEXT REPRESENTATION", "text": "FUNCTION g\nOur derivation of identifiability of gθ is similar to the derivation of fθ . The primary difference is that the normalizing constants in Equation (6) do not cancel out. First, note that we can rewrite Equation 1 as:\npθ(y|x,S) = exp(f̃θ(x,S)>g̃θ(y)) (9)\nwhere:\nf̃θ(x,S) = [−Z(x,S); fθ(x)] (10) g̃θ(y) = [1;gθ(y)] (11)\nZ(x,S) = log ∑ y′∈S exp(fθ(x) >gθ(y ′)). (12)\nBelow, we will show that for the model family defined in Section 2,\npθ′ = pθ∗ =⇒ gθ′(y) = Bgθ?(y), (13) where B is an invertible (M×M)-dimensional matrix, concluding the proof of the linear identifiability of models in the family defined by Equation (1). We adopt the same shorthands as in the main text." 
}, { "heading": "C.1 DIVERSITY CONDITION", "text": "We assume that for any (θ′,θ∗) ⊂ Θ for which it holds that p′ = p∗, and for any given y, there exist M+1 tuples {(x(i),S(i))}Mi=0, such that pD(x(i),y,S(i)) > 0, and such that the ((M+1)×(M+1)) matrices M′ and M∗ are invertible, where M′ consists of columns f̃ ′(x(i),S(i)), and M∗ consists of columns f̃∗(x(i),S(i)).\nThis is similar to the diversity condition of Section 3.2 but milder, since a typical dataset will have multiple x for each y." }, { "heading": "C.2 PROOF", "text": "With the data distribution pD(x,y,S), for a given y, there exists a conditional distribution pD(x,S|y). Let (x,S) be a sample from this distribution. From equation 1 and the statement to prove, it follows that:\np′(y|x,S) = p∗(y|x,S) (14) Substituting in the definition of our model from equation (9), we find:\nexp(f̃ ′(x,S)>g̃′(y)) = exp(f̃∗(x,S)>g̃∗(y)), (15)\nwhich, evaluating logarithms, becomes\nf̃ ′(x,S)>g̃′(y) = f̃∗(x,S)>g̃∗(y), (16)\nwhich is true for any triple (x,y,S) where pD(y|x,S) > 0. From M′ and M∗ (Section C.1) and equation 16 we form a linear system of equations, collecting the M + 1 relationships together:\nM′ > g̃′(y) = M∗>g̃∗(y) (17)\ng̃′(y) = Ag̃∗(y), (18)\nwhere A = (M∗M′−1)>, an invertible (M + 1)× (M + 1) matrix. It remains to show the existence of an invertible M ×M matrix B such that\ng′(y) = Bg∗(y). (19)\nWe proceed by constructing B from A. Since A is invertible, there exist j elementary matrices {E1, . . . ,Ej} such that their action R = EjEj−1 . . .E1 converts A to a (non-unique) row echelon form. Without loss of generality, we build R such that the a1,1 entry of A is the first pivot, leading to the particular row echelon form:\nRA = a1,1 a1,2 a1,3 . . . a1,m×1 0 ã2,2 ã2,3 . . . ã2,m×1 0 0 ã3,3 . . . ã2,m×1 ... ... ... . . . ...\n0 0 . . . 0 ãm×1,m×1\n , (20)\nwhere ãi,j indicates that the corresponding entry in RA may differ from A due to the action of R. Applying R to Equation (17), we have\nRg̃′(y) = RAg̃∗(y). (21)\nWe now show that removing the first row and column of RA and R generates matrices of rank M . Let RA and R denote the (M ×M) submatrices formed by removing the first row and column of RA and R respectively.\nEquation (20) shows that RA has a pivot in each column, and thus has rank M . To show that R is invertible, we must show that removing the first row and column reduces the rank of R = EjEj−1 . . .E1 by exactly 1. Clearly, each Ek is invertible, and their composition is invertible. We must show the same for the composition of Ek.\nThere are three cases to consider, corresponding to the three unique types of elementary matrices. Each elementary matrix acts on A by either (1) swapping rows i and j, (2) replacing row j by a multiple m of itself, or (3) adding a multiple m of row i to row j. We denote elementary matrix types by superscripts.\nIn Case (1), E1k is an identity matrix with row i and row j swapped. For Case (2), E 2 l is an identity matrix with the j, jth entry replaced by some m. For each E1k and E 2 l in R , where 1 ≤ k, l ≤ j, we know that the indices i, j ≥ 2, because we chose the first entry of the first row of A to be the pivot, and hence do not swap the first row, or replace the first row by itself multiplied by a constant. This implies that removing the first row and column of E1k and E 2 l removes a pivot entry 1 in the (1, 1) position, and removes zeros elsewhere. 
Hence, the $(M \times M)$ submatrices $\overline{E}_k^1$ and $\overline{E}_l^2$ are elementary matrices with rank M.\nFor Case (3), $E_k^3$ has some value $m \in \mathbb{R}$ in the $(j,i)$-th entry, and 1s along the diagonal. In this case, we may find a non-zero entry in some $E_k^3$, so that, e.g., the second row has a pivot at position (2, 2). Without loss of generality, suppose $i = 1$, $j = 2$ and let m be some nonzero constant. Removing the first row and column of $E_1^3$ removes this m also. Nevertheless, $\overline{E}_1^3 = I_M$, the rank-M identity matrix. For any other $E_k^3$, $1 < i \leq M+1$ and $j \geq 2$, because we chose $a_{1,1}$ as the first pivot, and hence do not swap the first row, or replace the first row by itself multiplied by a constant. In both cases, removing the first row and first column creates an $\overline{E}_k^3$ that is a rank-M elementary matrix.\nWe have shown by the above that $\overline{R}$ is a composition of rank-M matrices. We now construct the matrix B by removing the first entries of $\tilde{g}'$ and $\tilde{g}^*$, and removing the first row and first column of R and RA in Equation (21). Then, we have\n$$\overline{R}\, g'(y) = \overline{RA}\, g^*(y), \quad (22)$$\n$$g'(y) = \overline{R}^{-1}\, \overline{RA}\, g^*(y). \quad (23)$$\nChoosing $B = \overline{R}^{-1}\, \overline{RA}$ proves the result." }, { "heading": "D REDUCTIONS TO CANONICAL FORM OF EQUATION (1)", "text": "In the following, we show membership in the model family of Equation (1) using the mathematical notation of the papers under discussion in Section 4. Note that each subsection will change notation to match the papers under discussion, which varies quite widely. We employ the following colour-coding scheme to aid in clarity:\n$$\log p_\theta(y|x,S) = f_\theta(x)^\top g_\theta(y) - \log \sum_{y' \in S} \exp(f_\theta(x)^\top g_\theta(y')),$$\nwhere $f_\theta(x)$ is generalized to a data representation function, $g_\theta(y)$ is generalized to a context representation function, and $\sum_{y' \in S} \exp(f_\theta(x)^\top g_\theta(y'))$ is some constant." }, { "heading": "D.1 SUPERVISED CLASSIFICATION", "text": "Supervised classifiers commonly employ a neural network feature extractor followed by a linear projection of the output of this network into a space of unnormalized logits. All the layers prior to the logits are the representation function $f_\theta$, and the final projection layer is the context map $g_\theta(y = i) = w_i$, where $w_i$ is the i-th column of a weight matrix W. The set S in this case contains human-chosen labels and has no stochasticity. The loss function is the negative log-likelihood of the data under a categorical distribution with a softmax parameterization:\n$$\log p_\theta(y = i|x;S) = f_\theta(x)^\top w_i - \log \sum_{j=1}^{|S|} \exp(f_\theta(x)^\top w_j)$$\nSupervised classification is thus a member of the family defined in Section 2. It exhibits the simplest functional form for the g function while allowing f to be arbitrarily complicated." }, { "heading": "D.2 CPC", "text": "Consider a sequence of points $x_t$. We wish to learn the parameters $\phi$ to maximize the k-step-ahead predictive distribution $p(x_{t+k}|x_t, \phi)$. In the image patch example, each patch center $(i, j)$ is indexed by t. Each $x_t$ is mapped to a sequence of feature vectors $z_t = f_\theta(x_t)$. An autoregressive model, already updated with the previous latent representations $z_{\leq t-1}$, transforms $z_t$ into a \"context\" latent representation $c_t = g_{AR}(z_{\leq t})$. Instead of predicting future observations k steps ahead, $x_{t+k}$, directly through a generative model $p_k(x_{t+k}|c_t)$, Oord et al. (2018) model a density ratio in order to preserve the mutual information between $x_{t+k}$ and $c_t$.\nObjective. Let $X = \{x_1, \ldots, x_N\}$ be a set of N random samples containing one positive sample from $p(x_{t+k}|c_t)$ and $N-1$ samples from the proposal distribution $p(x_{t+k})$. Oord et al. (2018) define the following link function: $l_k(x_{t+k}, c_t) \triangleq \exp(z_{t+k}^\top W_k c_t)$.
Then, CPC optimizes\n$$-\mathbb{E}_X\left[\log \frac{l_k(x_{t+k}, c_t)}{\sum_{x_j \in X} l_k(x_j, c_t)}\right] = -\mathbb{E}_X\left[\log \frac{\exp(z_{t+k}^\top W_k c_t)}{\sum_{x_j \in X} \exp(z_j^\top W_k c_t)}\right]. \quad (24)$$\nSubstituting in the definition of $l_k$ makes Equation (24) identical to the model family (Equation (1))." }, { "heading": "D.3 AUTOREGRESSIVE LANGUAGE MODELS (E.G. GPT-2)", "text": "Let $U = \{u_1, \ldots, u_n\}$ be a corpus of tokens. Autoregressive language models maximize a log-likelihood $L(U) = \sum_{i=1}^n \log P(u_i|u_{i-k}, \ldots, u_{i-1}; \Theta)$. Concretely, the conditional density is modelled as\n$$\log P(u_i|u_{i-k:i-1}; \Theta) = W_{i:} h_i - \log \sum_j \exp(W_{j:} h_i),$$\nwhere $h_i$ is the $m \times 1$ output of a function approximator (e.g. a Transformer decoder (Liu et al., 2018)), and $W_{i:}$ is the i-th row of the $|U| \times m$ token embedding matrix." }, { "heading": "D.4 BERT", "text": "Consider a sequence of text $x = [x_1, \ldots, x_T]$. Some proportion of the symbols in x are extracted into a vector $\bar{x}$, and then set in x to a special null symbol, \"corrupting\" the original sequence. This operation generates the corrupted sequence $\underline{x}$. The representational learning task is to predict $\bar{x}$ conditioned on $\underline{x}$, that is, to maximize w.r.t. $\theta$:\n$$\log p_\theta(\bar{x}|\underline{x}) \approx \sum_{t=1}^T m_t \log p_\theta(x_t|\underline{x}) = \sum_{t=1}^T m_t \left( H_\theta(\underline{x})_t^\top e(x_t) - \log \sum_{x'} \exp\left(H_\theta(\underline{x})_t^\top e(x')\right) \right),$$\nwhere H is a transformer, e is a lookup table, and $m_t = 1$ if symbol $x_t$ is masked. That is, corrupted symbols are \"reconstructed\" by the model, meaning that their index is predicted. As noted in Yang et al. (2019), BERT models the joint conditional probability $p(\bar{x}|\underline{x})$ as factorized so that each masked token is separately reconstructed. This means that the log likelihood is approximate instead of exact." }, { "heading": "D.5 QUICKTHOUGHT VECTORS", "text": "Let f and g be functions that take a sentence as input and encode it into a fixed-length vector. Let s be a given sentence, and $S_{ctxt}$ be the set of sentences appearing in the context of s for a fixed context size. Let $S_{cand}$ be the set of candidate sentences considered for a given context sentence $s_{ctxt} \in S_{ctxt}$. Then, $S_{cand}$ contains a valid context sentence $s_{ctxt}$ as well as many other non-context sentences. $S_{cand}$ is used for the classification objective. For any given sentence position in the context of s (for example, the preceding sentence), the probability that a candidate sentence $s_{cand} \in S_{cand}$ is the correct sentence for that position is given by\n$$\log p(s_{cand}|s, S_{cand}) = f_\theta(s)^\top g_\theta(s_{cand}) - \log \sum_{s' \in S_{cand}} \exp\left(f_\theta(s)^\top g_\theta(s'_{cand})\right).$$" }, { "heading": "D.6 DEEP METRIC LEARNING", "text": "The multi-class N-pair loss in Sohn (2016) is proportional to\n$$\log N - \frac{1}{N}\sum_{i=1}^N \log\left(1 + \sum_{j \neq i} \exp\{f_\theta(x_i)^\top f_\theta(y_j) - f_\theta(x_i)^\top f_\theta(y_i)\}\right),$$\nwhich can be simplified as\n$$-\frac{1}{N}\sum_{i=1}^N \log\left(\frac{1}{K}\sum_{j=1}^K \exp\{f_\theta(x_i)^\top f_\theta(y_j) - f_\theta(x_i)^\top f_\theta(y_i)\}\right) = \frac{1}{N}\sum_{i=1}^N \log\left(\frac{1}{\frac{1}{K}\sum_{j=1}^K \exp\{f_\theta(x_i)^\top f_\theta(y_j) - f_\theta(x_i)^\top f_\theta(y_i)\}}\right) = \frac{1}{N}\sum_{i=1}^N \log\left(\frac{\exp\{f_\theta(x_i)^\top f_\theta(y_i)\}}{\frac{1}{K}\sum_{j=1}^K \exp\{f_\theta(x_i)^\top f_\theta(y_j)\}}\right).$$\nSetting N to 1 and evaluating the log gives\n$$f_\theta(x_i)^\top f_\theta(y_i) - \log\frac{1}{K}\sum_{j=1}^K \exp(f_\theta(x_i)^\top f_\theta(y_j)),$$\nwhich is Equation (1) where $f_\theta = g_\theta$.
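The algebra above is easy to confirm numerically; this sketch (ours, with illustrative shapes) checks that the per-example N-pair term coincides with the negative log-softmax of the pairwise scores:

```python
import numpy as np

rng = np.random.default_rng(6)
N, M = 7, 16
fx = rng.normal(size=(N, M))           # f_theta(x_i)
fy = rng.normal(size=(N, M))           # f_theta(y_j); y_i is the positive for x_i

S = fx @ fy.T                          # pairwise scores f(x_i)^T f(y_j)
# log(1 + sum_{j != i} exp(S_ij - S_ii)): the row sum includes j = i (= 1),
# so we subtract it back out before applying log1p.
npair = np.mean([np.log1p(np.sum(np.exp(S[i] - S[i, i])) - 1) for i in range(N)])
softmax_nll = -np.mean(np.diag(S) - np.log(np.sum(np.exp(S), axis=1)))
assert np.isclose(npair, softmax_nll)  # the two forms coincide exactly
```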
" }, { "heading": "D.7 NEURAL PROBABILISTIC LANGUAGE MODELS (NPLMS)", "text": "Figure 1 shows results from a neural probabilistic language model as proposed in Mnih and Teh (2012). Mnih and Teh (2012) propose using a log-bilinear model (Mnih and Hinton, 2009) which, given some context h, learns context word vectors $r_w$ and target word vectors $q_w$. Two different embedding matrices are maintained, in other words: one to capture the embedding of the word and the other the context. The representation for the context vector, $\hat{q}$, is then computed as the linear combination of the context words and a context weight matrix $C_i$, so that $\hat{q} = \sum_{i=1}^{n-1} C_i r_{w_i}$. The score for the match between the context and the next word is computed as a dot product, e.g., $s_\theta(w, h) = \hat{q}^\top \tilde{q}_w$,$^1$ and substituting into the definition of $P_\theta^h(w)$, we see that\n$$\log P_\theta^h(w) = \hat{q}^\top \tilde{q}_w - \log \sum_{w'} \exp\left(\hat{q}^\top \tilde{q}_{w'}\right)$$\nshows that Mnih and Teh (2012) is a member of the model family.\nInterestingly, a touchstone work in the area of NPLMs, Word2Vec (Mikolov et al., 2013), does not fall under the model family due to an additional nonlinearity applied to the score of Mnih and Teh (2012).\n$^1$We have absorbed the per-token baseline offset $b_w$ into the $q_w$ defined in Mnih and Teh (2012), forming the vector $\tilde{q}_w$ whose i-th entry is $(\tilde{q}_w)_i = (q_w)_i + b_w/(\hat{q})_i$." } ]
2020
null
SP:a3b2fa4479b45b1a59e573b452c73cae507485ba
[ "The author extends generative models with multi-generators by restricting the generators to share weights and all bias to be regularized in order to enforce that the inverse maps of the generators can be represented by a single encoder. The regularizer proposed minimizes an upper bound the the sum of the bias variances. This extension is evaluated on a set of visual datasets (MNIST, 3DChair and UT-Zap50k) with respect to density estimation (evaluated with the FID score) and disentanglement (evaluated with their own disentanglement score)." ]
One of the difficulties in modeling real-world data is their complex multi-manifold structure due to discrete features. In this paper, we propose quotient manifold modeling (QMM), a new data-modeling scheme that considers generic manifold structure independent of discrete features, thereby deriving efficiency in modeling and allowing generalization over untrained manifolds. QMM considers a deep encoder inducing an equivalence between manifolds; but we show it is sufficient to consider it only implicitly via a bias-regularizer we derive. This makes QMM easily applicable to existing models such as GANs and VAEs, and experiments show that these models not only present superior FID scores but also make good generalizations across different datasets. In particular, we demonstrate an MNIST model that synthesizes EMNIST alphabets.
[]
[ { "authors": [ "Mathieu Aubry", "Daniel Maturana", "Alexei A. Efros", "Bryan C. Russell", "Josef Sivic" ], "title": "Seeing 3D Chairs: Exemplar Part-Based 2D-3D Alignment Using a Large Dataset of CAD Models", "venue": "In 2014 IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Alon Brutzkus", "Amir Globerson", "Eran Malach", "Shai Shalev-Shwartz" ], "title": "SGD LEARNS OVER-PARAMETERIZED NETWORKS THAT PROVABLY GENERALIZE", "venue": "ON LINEARLY SEPARA- BLE DATA", "year": 2018 }, { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Emilien Dupont" ], "title": "Learning Disentangled Joint Continuous and Discrete Representations", "venue": null, "year": 2018 }, { "authors": [ "Arnab Ghosh", "Viveka Kulharia", "Vinay Namboodiri", "Philip H.S. Torr", "Puneet K. Dokania" ], "title": "MultiAgent Diverse Generative Adversarial Networks", "venue": null, "year": 2017 }, { "authors": [ "Ross Girshick" ], "title": "Fast R-CNN", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2015 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative Adversarial Nets", "venue": "Advances in Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Swaminathan Gurumurthy", "Ravi Kiran Sarvadevabhatla", "R. Venkatesh Babu" ], "title": "DeLiGAN : Generative Adversarial Networks for Diverse and Limited Data", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Irina Higgins", "Loic Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "Beta-Vae: Learning Basic Visual Concepts with a Constrained Variational Framework", "venue": null, "year": 2016 }, { "authors": [ "Quan Hoang", "Tu Dinh Nguyen", "Trung Le", "Dinh Phung" ], "title": "MGAN: Training Generative Adversarial Nets with Multiple Generators", "venue": null, "year": 2018 }, { "authors": [ "Yeonwoo Jeong", "Hyun Oh Song" ], "title": "Learning Discrete and Continuous Factors of Data via Alternating Disentanglement", "venue": null, "year": 2019 }, { "authors": [ "Mahyar Khayatkhoei", "Maneesh K. Singh", "Ahmed Elgammal" ], "title": "Disconnected Manifold Learning for Generative Adversarial Networks", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A Method for Stochastic Optimization", "venue": null, "year": 2014 }, { "authors": [ "Diederik P. 
Kingma", "Max Welling" ], "title": "Auto-Encoding Variational Bayes", "venue": null, "year": 2013 }, { "authors": [ "Yann Lecun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "In Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Fangchang Ma", "Ulas Ayaz", "Sertac Karaman" ], "title": "Invertibility of Convolutional Generative Networks from Partial Measurements", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida" ], "title": "Spectral Normalization for Generative Adversarial Networks", "venue": null, "year": 2018 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks", "venue": null, "year": 2015 }, { "authors": [ "Hang Shao", "Abhishek Kumar", "P. Thomas Fletcher" ], "title": "The Riemannian Geometry of Deep Generative Models", "venue": null, "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very Deep Convolutional Networks for Large-Scale Image Recognition", "venue": "[cs],", "year": 2014 }, { "authors": [ "Jakub M. Tomczak", "Max Welling" ], "title": "VAE with a VampPrior", "venue": "[cs, stat],", "year": 2018 }, { "authors": [ "Chang Xiao", "Peilin Zhong", "Changxi Zheng" ], "title": "BourGAN: Generative Networks with Metric Embeddings", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Aron Yu", "Kristen Grauman" ], "title": "Fine-Grained Visual Comparisons with Local Learning", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "[cs],", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Real-world data are usually considered to involve a multi-manifold structure by having discrete features as well as continuous features; continuous features such as size or location induce a smooth manifold structure in general, whereas discrete features such as digit-class or a new object in the background induce disconnections in the structure, making it a set of disjoint manifolds instead of a single (Khayatkhoei et al., 2018). While this multiplicity makes modeling data a difficult problem, recently proposed deep generative models showed notable progresses by considering each manifold separately. Extending the conventional models by using multiple generators (Khayatkhoei et al., 2018; Ghosh et al., 2017; Hoang et al., 2018), discrete latent variables (Chen et al., 2016; Dupont, 2018; Jeong and Song, 2019), or mixture densities (Gurumurthy et al., 2017; Xiao et al., 2018; Tomczak and Welling, 2018), they exhibit improved performances in image generations and in learning high-level features.\nThere are, however, two additional properties little considered by these models. First, since discrete features are both common and combinatorial, there can be exponentially many manifolds that are not included in the dataset. For example, an image dataset of a cat playing around in a room would exhibit a simple manifold structure according to the locations of the cat, but there are also numerous other manifolds derivable from it via discrete variations—such as placing a new chair, displacing a toy, turning on a light or their combinations—that are not included in the dataset (see Fig. 1). Second, while the manifolds to model are numerous considering such variations, they usually have the same generic structure since the underlying continuous features remain the same; regardless of the chair, toy, or light, the manifold structures are equally due to the location of the cat.\nConsidering these properties, desired is a model that can handle a large number of resembling manifolds, but the aforementioned models show several inefficiencies. They need proportionally many generators or mixture components to model a large number of manifolds; each of them requires much data, only to learn the manifolds having the same generic structure. Moreover, even if they are successfully trained, new discrete changes are very easy to be made, yet they cannot generalize beyond the trained manifolds.\nIn this paper, we propose quotient manifold modeling (QMM)—a new generative modeling scheme that considers generic manifold structure independent of discrete features, thereby deriving efficiency in modeling and allowing generalization over untrained manifolds. QMM outwardly follows the multi-generator scheme (Khayatkhoei et al., 2018; Ghosh et al., 2017; Hoang et al., 2018); but it involves a new regularizer that enforces encoder compatibility—a condition that the inverse maps of the generators to be presented by a single deep encoder. Since deep encoders usually exhibit\ngood generalizability, this condition not only makes a generic structure be shared among the generators but also makes it generalizable to untrained manifolds. 
In particular, it induces a generalizable equivalence relation between data, and the manifold structure of out-of-sample data can be derived by taking the quotient of this relation, hence the name QMM.\nSince the implementation of QMM is essentially adding a regularizer, it can be easily applied to existing deep generative models such as generative adversarial networks (GANs; (Goodfellow et al., 2014)), variational auto-encoders (VAEs; (Kingma and Welling, 2013)), and their extensions. We demonstrate that these QMM-applied models not only show better FID scores but also show good generalizations.\nOur contributions can be summarized as follows:\n• We propose QMM, a new generative modeling scheme that considers generic manifold structure, thereby allowing generalizations over untrained manifolds.\n• We derive a regularizer enforcing encoder compatibility, an essential condition for QMM.\n• We show that GANs and VAEs implementing QMM show superior FID scores and generalize across different datasets." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 MANIFOLD MODELING IN GANS AND VAES", "text": "While generative adversarial networks (GANs) (Goodfellow et al., 2014) and variational autoencoders (VAEs) (Kingma and Welling, 2013) are two different strands of models, they have the same data-modeling scheme that leads to a manifold structure (though VAEs involve stochasticity). They model data x ∈ X as a transformation of a low-dimensional latent code z ∈ Z via a generative (decoding) map fG : Z → X , which makes every datum they consider lie on a subspace M = fG(Z) ⊂ X . Since fG can be assumed to be smooth and injective in practice (Shao et al., 2017), M accords with the mathematical definition of a smooth manifold.\nBut to deal with multi-manifold data, these models need to approximate disconnections in the structure with low densities. This requires them to have a highly nonlinear fG, which is difficult to learn and often leads to either a low-quality model or a mode collapse (Khayatkhoei et al., 2018)." }, { "heading": "2.2 MULTI-MANIFOLD EXTENSIONS", "text": "To better model the multi-manifold structure, several studies proposed extended GAN and VAE models that consider each of the manifolds separately. According to which component is extended, the approaches can be broken down into the below three. While these approaches have advantages in dealing with multi-manifold data, they still show limiting performance in learning generic structure and do not allow generalization over untrained manifolds.\n1) Multi-generators—x(i) = f (i)G (z) (Khayatkhoei et al., 2018; Ghosh et al., 2017; Hoang et al., 2018). In this approach, each manifold is modeled by a separate generator. The generators are usually independent, but some models design them to share the weight parameters in a subset of the layers. This in part contributes to the learning of a generic structure, but lacks theoretical grounds and shows inferior performances (see Appendix F). 2) Mixture density—x(i) = fG(z(i)), where z(i) ∼ p(i) (Gurumurthy et al., 2017; Xiao et al., 2018; Tomczak and Welling, 2018). In this approach, each manifold is modeled by a separate mode of the latent distribution. While the modes outwardly share the generator, the actual mappings are effectively different from each other as they reside in different regions in Z. 3) Discrete latent variables—x(i) = fG([z; d]) (Chen et al., 2016; Dupont, 2018; Jeong and Song, 2019). 
In this approach, discrete random variables are explicitly defined and concatenated to the continuous variable. Since discrete information is slowly blended in layer by layer, it can learn the generic structure to some degree, but not as clear (see Table 1)." }, { "heading": "3 QUOTIENT MANIFOLD MODELING (QMM)", "text": "QMM inherits the multi-generator scheme (Khayatkhoei et al., 2018; Ghosh et al., 2017; Hoang et al., 2018), but involves an additional regularizer enforcing the encoder compatibility. Leaving the regularizer for the next section, we first explain the role of this compatibility as binding the generative maps. Then, we see how a plausible equivalence relation can be defined using a deep encoder. Lastly, we explain how a new manifold can be obtained by taking the quotient of the relation." }, { "heading": "3.1 ENCODER COMPATIBILITY", "text": "Definition 1. LetH be a set of encoding maps (X → Z) that can be represented by a deep encoder. We say that generative maps {f (i)G : Z →M (i) ⊂ X}Ai=1 have encoder compatibility if there exists hE ∈ H satisfying (f (i)G )−1(x) = hE(x) for all x ∈M (i) and i.\nWith this condition satisfied, the generative maps {f (i)G }i are no longer independent to each other but share a single X ↔ Z translation rule represented by the deep encoder hE ∈ H. However, this binding is meaningful only when H has a certain property; otherwise, hE is just an extension of functions {(f (i)G )−1}i giving no useful signal to {f (i) G }i.\nIn practice, H indeed involves an important property that its elements—deep encoders—have good generalizability. Having numerous parameters, deep encoders could overfit data, but in practice they find the smoothest function exhibiting generalizability. For example, if a deep encoder is trained on images illustrated in Fig. 1 to output the position of the cat, we expect it would work fairly well even after we place a vase or turn on a light, generalizing over the discrete changes of the room condition. While this generalizing property is not fully understood to date (there are several compelling theories (Zhang et al., 2016; Brutzkus et al., 2018)), it has been consistently demonstrated by many deep recognition models (e.g., VGG-Net (Simonyan and Zisserman, 2014), Fast R-CNN (Girshick, 2015)). In this regard, we assume its continuity and generalizability in developing our model, and verify them later from the experiments." }, { "heading": "3.2 EQUIVALENCE RELATION AND QUOTIENT MANIFOLDS", "text": "Putting the generalizability of hE and the compatibility (f (i) G ) −1 = hE together, we expect hE to output the continuous features z given data x. Then, there is a naturally induced equivalence relation\n(called the kernel of hE) x(1) ∼hE x(2) ⇐⇒ hE(x(1)) = hE(x(2)), which effectively groups data having the same continuous features together regardless of the manifolds they are lying on.\nThis can be seen more concretely in Fig. 1. Assuming the model is well trained, it should have captured the crucial continuous feature—the location of the cat—in Z. Due to the encoder compatibility, such a feature should be encoded in Z smoothly, meaning that hE outputs the location information of the cat in a consistent and generalizable way, if not exactly calibrated. Given this, images having the cat on the same location, regardless of the chair, will have the same hE(x), thus being equivalent under ∼hE (ones on a green dotted curve). 
Since $h_E$ can generalize, the equivalence relation is consistent for data lying on an untrained manifold as well (ones with the vase, drawn in purple).\nNow that the data lying on different manifolds are made equivalent under $\sim_{h_E}$, manifold structures are described as quotients of this relation. In implementations, taking the quotient is the same as taking the orthogonal directions to the equivalence-relation contours. In Fig. 1, we can see that $M^{(1)}$ and $M^{(2)}$, the manifolds that are included in the dataset, are already orthogonal to the contours formed from $h_E$. When given an untrained image (shown in purple), we can obtain the new manifold just by following the orthogonal directions. It will be explained later that this manifold can be described by a new generator, whose bias parameters are optimized for the manifold to pass through the given image (see Sec. 5)." }, { "heading": "4 BIAS-ALIGNING REGULARIZER", "text": "To implement the discussed scheme, the main issue is to make the generators have the encoder compatibility. We first examine a simplified case where each generator is single-layered, and derive that a sufficient condition for this is that the biases of the generators are aligned in a certain way (Proposition 1). To achieve the alignment, we introduce a bias-aligning regularizer, the main component of QMM. After that, we explain how the regularizer can be extended to the multi-layer case." }, { "heading": "4.1 ENCODER COMPATIBILITY FOR SINGLE LINEAR LAYER", "text": "Consider a set of linear generative maps $\{f_G^{(i)}: Z \to M^{(i)} \subset X\}_i$; each of the maps is defined as $f_G^{(i)}(z) := U^{(i)} z + a^{(i)}$, where $U^{(i)} \in \mathbb{R}^{d_X \times d_Z}$ ($d_X > d_Z$) and $a^{(i)} \in \mathbb{R}^{d_X}$ are the weight and bias parameters, respectively. We assume $U^{(i)}$ is a full-rank matrix (rank $d_Z$) such that $f_G^{(i)}$ is injective. Then, the inverse $(f_G^{(i)})^{-1}: M^{(i)} \to Z$ can be derived as\n$$(f_G^{(i)})^{-1} = (U^{(i)})^+ (x^{(i)} - a^{(i)}) \quad (1)$$\nwhere $(U^{(i)})^+ := \left((U^{(i)})^\top U^{(i)}\right)^{-1} (U^{(i)})^\top$ denotes the pseudo-inverse of $U^{(i)}$. To achieve the encoder compatibility (Def. 1), our desire is to restrict the inverse maps $\{(f_G^{(i)})^{-1}\}_i$ such that they can be represented by a single encoder $h_E: X \to Z$. One simple way to achieve this is to use the following proposition.\nProposition 1. If the linear generative maps $\{f_G^{(i)}(z)\}_i$ are restricted to have the same weight U and to have the same tangential components of bias, $a_\parallel$, then their inverses $\{(f_G^{(i)})^{-1}\}_i$ can be represented by a single linear encoder $h_E(x) := W^\top x + b$, where $W = U(U^\top U)^{-1}$ and $b = -W^\top a_\parallel$.\nProof. Let $a_\parallel^{(i)}$ and $a_\perp^{(i)}$ denote the tangential and the normal components of $a^{(i)}$ (with respect to the column space of U), respectively. Then, the restrictions can be expressed as $U^{(i)} = U$ and $a^{(i)} = a_\parallel + a_\perp^{(i)}$ for all i. Substituting these in Eq. (1),\n$$(f_G^{(i)})^{-1} = U^+ x - U^+(a_\parallel + a_\perp^{(i)}) = U^+ x - U^+ a_\parallel = W^\top x + b.$$" }, { "heading": "4.2 BIAS-ALIGNING REGULARIZER FOR SINGLE LINEAR LAYER", "text": "When implementing Proposition 1, making $\{U^{(i)}\}_i$ the same is as trivial as setting the same weight U for all $f_G^{(i)}$, but making $\{a_\parallel^{(i)}\}_i$ the same is nontrivial since the tangential direction keeps changing during training. One solution would be to use a regularizer minimizing the sum of the variances: $\mathrm{trace}(\mathrm{cov}(a_\parallel^{(i)}))$. However, computing this term is intractable due to the inversion $(U^\top U)^{-1}$ inside of $a_\parallel^{(i)} = U(U^\top U)^{-1} U^\top a^{(i)}$.\nTheorem 1. The following inequality holds:\n$$\mathrm{trace}\left(\mathrm{cov}(U^\top a^{(i)})\right) \geq \frac{1}{d_Z}\, H(\{\lambda_k\}_{k=1}^{d_Z})\, \mathrm{trace}\left(\mathrm{cov}(a_\parallel^{(i)})\right)$$\nwhere $\{\lambda_k\}_{k=1}^{d_Z}$ denotes the eigenvalues of $U^\top U$ and $H(\cdot)$ denotes the harmonic mean.\nProof. See Appendix A.
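The Proposition 1 construction that this regularizer targets can be verified numerically; the following sketch (ours, with illustrative dimensions) builds several generators with a shared weight and a shared tangential bias component, and checks that the single encoder $(W, b)$ inverts all of them:

```python
import numpy as np

rng = np.random.default_rng(7)
dX, dZ, A = 10, 3, 4                    # ambient dim, latent dim, #generators

U = rng.normal(size=(dX, dZ))           # shared full-rank weight
P = U @ np.linalg.inv(U.T @ U) @ U.T    # projector onto col(U)
a_par = P @ rng.normal(size=dX)         # shared tangential bias component
biases = [a_par + (np.eye(dX) - P) @ rng.normal(size=dX) for _ in range(A)]

W = U @ np.linalg.inv(U.T @ U)          # encoder weight from Proposition 1
b = -W.T @ a_par                        # encoder bias from Proposition 1

z = rng.normal(size=dZ)
for a in biases:                        # every generator is inverted by (W, b)
    x = U @ z + a                       # f_G^{(i)}(z): shared U, bias a^{(i)}
    assert np.allclose(W.T @ x + b, z)
```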
As the harmonic mean in Theorem 1 is constant from the perspective of $a_\parallel^{(i)}$, we can minimize the original term by minimizing the upper bound instead. With an addition of log to match the scale due to the dimensionality of the layer, we propose this upper bound as a regularizer to make $a_\parallel^{(i)}$ the same:\n$$\text{BA-regularizer:}\quad R_{BA} = \log\left(\mathrm{trace}\left(\mathrm{cov}(U^\top a^{(i)})\right)\right). \quad (2)$$" }, { "heading": "4.3 MULTI-LAYER NETWORK", "text": "The encoder compatibility for multi-layer networks can be enforced straightforwardly by applying Proposition 1 to all the linear layers. That means, for the l-th linear layer of all the generators, their weights are shared, $U_l^{(i)} = U_l$, and their biases are regularized to have the same tangential components, $a_l^{(i)} = a_{l,\parallel} + a_{l,\perp}^{(i)}$, via Eq. (2); the other layers—nonlinear activation functions (we use LeakyReLU) and batch-normalization layers—are simply set to be shared.\nThis design guarantees the inverses of the generator networks to be representable by a single deep encoder (though we do not actually compute the inversion), inducing the encoder compatibility. Since all the layers are invertible, the entire networks are invertible; also, due to Eq. (2), the inverses of the l-th linear layers can be represented by a single linear layer ($W_l^\top x + b_l$), and the other layers can be trivially inverted and represented by the same layers." }, { "heading": "5 DEEP QUOTIENT GENERATIVE MODELS", "text": "Now that we have described the QMM scheme, we explain how it can be applied to concrete models. We present its applications on Wasserstein GANs and β-VAEs among others.\nGeneration. As above, let us denote the weight of the l-th linear layer as $U_l$ and the biases as $\{a_l^{(i)}\}_{i=1}^A$. Then, we can express the data generating distribution in the form of ancestral sampling:\n$$x \sim f_G^{(i)}\left(z; \{U_l, a_l^{(i)}\}_{l=1}^L\right) \quad \text{where } z \sim p(z),\; i \sim \pi_i.$$\nHere, $\pi_i$ stands for the probability of selecting the i-th bias. This probability could also be learned using the method proposed in Khayatkhoei et al. (2018), but it is beyond our scope and we fix it as $1/A$. We denote the distribution due to this process as $p_G$.\nEncoding (Deriving Quotient Manifolds). Due to the bias regularizer, we do not need to concretize the encoder $h_E$ by actually inverting $f_G^{(i)}$ during training. But, when encoding is needed, we can obtain the latent codes and biases by minimizing the Euclidean distance (as suggested in Ma et al. (2018)) along with a similar bias regularization as follows:\n$$z, \{a_l\}_{l=1}^L = \arg\min_{\tilde{z}, \{\tilde{a}_l\}_{l=1}^L} \left\|x - f_G\left(\tilde{z}; \{\tilde{a}_l\}_{l=1}^L\right)\right\|^2 + \mu \sum_{l=1}^L \log\left\|U_l^\top \tilde{a}_l - U_l^\top \bar{a}_{l,\parallel}\right\|^2, \quad (3)$$\nQ-WGAN. Applying QMM to the Wasserstein GAN (WGAN; (Arjovsky et al., 2017)), we can define the QMM-applied WGAN losses by simply adding the regularizer:\n$$L_G = -\mathbb{E}_{x \sim p_G}[D(x)] + \lambda \sum_{l=1}^L \log\left(\mathrm{trace}(\mathrm{cov}(U_l^\top a_l^{(i)}))\right),$$\nwhere $L_G$ and $L_D = \mathbb{E}_{x \sim p_G}[D(x)] - \mathbb{E}_{x \sim p_R}[D(x)]$ ($p_R$ being the real data distribution) are the generator and the discriminator losses, respectively, $\lambda$ is a regularization weight, and $D(x)$ is a k-Lipschitz discriminator function.\nQ-β-VAE. We use a multi-generator version of β-VAE, which uses an EM algorithm for training (detailed in Appendix C). Applying QMM can be done similarly by adding the BA-regularizer:\n$$L = -\mathbb{E}_{z \sim q(z|x)}\left[\sum_{i=1}^A \gamma^{(i)} \log p_G^{(i)}(x|z)\right] + \beta D_{KL}\left(q(z|x)\,\|\,p(z)\right) + R_{BA}(\{a_l^{(i)}\}_{l,i})$$
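For reference, a sketch of the per-layer regularizer of Equation (2) as it appears in the losses above (ours, in numpy; in an actual model the covariance would be computed on the framework's tensors so gradients flow to the biases):

```python
import numpy as np

def ba_regularizer(U, biases):
    """R_BA = log trace cov(U^T a^(i)), the covariance taken over the A biases.

    U: (dX, dZ) shared layer weight; biases: (A, dX) stacked bias vectors.
    """
    proj = biases @ U                     # rows are U^T a^(i), shape (A, dZ)
    C = np.atleast_2d(np.cov(proj, rowvar=False))   # (dZ, dZ) covariance
    return np.log(np.trace(C))
```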
" }, { "heading": "6 EXPERIMENTS", "text": "Datasets We experiment on the MNIST (Lecun et al., 1998), 3D-Chair (Aubry et al., 2014) and UT-Zap50k (Yu and Grauman, 2014) image datasets. 3D-Chair contains 1,393 distinct chairs rendered from 62 different viewing angles (86,366 images in total); in our experiments, only the 44,576 front-looking images are used, rescaled to 64x64 grayscale. UT-Zap50k contains images of 4 different types of shoes (50,025 images in total); the images are rescaled to 32x32.\nModel Architectures For the GAN models, we use DCGAN-like (Radford et al., 2015) architectures for all the datasets (see Appendix B for the complete information). In the discriminator, we use spectral normalization (Miyato et al., 2018) to achieve the $k$-Lipschitz condition. For the VAE models, the architectures are generally the same as for the GANs, except that batch normalization is not used. We use $\beta = 4$ and ten EM steps. In both models, Adam (Kingma and Ba, 2014) is used for training and encoding, with the default values except for the learning rate, 0.0002.\nIn the QMM-applied models, the number of biases, $A$, is set to 10 (MNIST), 20 (3D-Chair), and 4 (UT-Zap50k), respectively. Although our multi-biased linear layer can be applied to both fully-connected and convolutional layers, we apply it only to the former. This was sufficient for our purpose, since disconnections rarely exist for such small-sized (thus low-dimensional) kernel patches." }, { "heading": "6.1 BASIC MULTI-MANIFOLD LEARNING", "text": "QMM-applied models show strong performance in multi-manifold learning with good alignment, both qualitatively and quantitatively. Looking at Fig. 3 row-wise, we can see that Q-WGAN and Q-β-VAE learn distinct manifolds well, where each manifold accounts for a different discrete class: e.g., different digits; rolling chairs vs. armchairs; boots vs. flat shoes. Column-wise, we can see that the continuous features are well aligned among the manifolds: e.g., stroke weight and slant of digits; viewing angle of chairs; colors of shoes. For quantitative evaluation, we compare FID scores (Heusel et al., 2017), a widely used metric examining the diversity and quality of generated image samples, which reflects the overall manifold learning performance. As seen in Table 1, our models achieve better or comparable scores relative to the others." }, { "heading": "6.2 DISENTANGLEMENT OF LATENT CODES", "text": "To further investigate the manifold alignment performance, we examine how well the learned latent features are disentangled. We take a few images from the dataset and manually change one of the smooth features that corresponds to a known transformation (e.g., a shear transform). Then, we encode these images to latent codes using our model, analyze the principal direction of the change, and compute the linearity of the change as the disentanglement score (see Appendix D). Fig. 4 shows that the learned continuous features are well disentangled along the principal changing directions. Table 1 shows that our models obtain better scores than the other models." }, { "heading": "6.3 GENERALIZATION OVER UNTRAINED MANIFOLDS", "text": "Using the encoding method that derives quotient manifolds (Eq. 3), we examine whether QMM-applied models generalize well over untrained manifolds. In both cases, added noise (Fig. 5) and different datasets (Fig. 6), they indeed show fair generalization performance, sharing generic structure with the trained manifolds (aligning continuous features column-wise)." }, { "heading": "7 CONCLUSION", "text": "We proposed QMM, which performs multi-manifold learning in consideration of the generic structure. 
Unique to QMM is that it utilizes the generalization ability of a deep encoder, from which it showed its potential to infer untrained manifolds, even across different datasets. If it is trained with larger datasets such as ImageNet in the future, we expect QMM-applied models to become a more versatile tool that can derive the manifold structure of images in the wild." }, { "heading": "A THE PROOF OF THEOREM 1", "text": "Theorem 1. The following inequality holds: $\mathrm{trace}(\mathrm{cov}(U^{\top}a^{(i)})) \geq \frac{1}{d_Z}H(\{\lambda_k\}_{k=1}^{d_Z})\,\mathrm{trace}(\mathrm{cov}(a_{\parallel}^{(i)}))$, where $\{\lambda_k\}_{k=1}^{d_Z}$ are the eigenvalues of $U^{\top}U$ and $H(\cdot)$ denotes the harmonic mean.\nProof. Note that\n$\mathrm{trace}(\mathrm{cov}(a_{\parallel}^{(i)})) = \mathrm{trace}(\mathrm{cov}(U(U^{\top}U)^{-1}U^{\top}a^{(i)}))$\n$= \mathrm{trace}\big(\frac{1}{A-1}\sum_{i=1}^{A} U(U^{\top}U)^{-1}U^{\top}(a^{(i)} - \bar{a})(a^{(i)} - \bar{a})^{\top}U(U^{\top}U)^{-1}U^{\top}\big)$\n$= \mathrm{trace}\big(\frac{1}{A-1}\sum_{i=1}^{A} U^{\top}(a^{(i)} - \bar{a})(a^{(i)} - \bar{a})^{\top}U(U^{\top}U)^{-1}\big)$\n$= \mathrm{trace}(\mathrm{cov}(U^{\top}a^{(i)})(U^{\top}U)^{-1})$\n$\leq \mathrm{trace}(\mathrm{cov}(U^{\top}a^{(i)}))\,\mathrm{trace}((U^{\top}U)^{-1})$,\nwhere the second and fourth lines use the definition of the covariance, the third line is obtained from the cyclic property of the trace, and the last line is obtained from the Cauchy-Schwarz inequality for positive semi-definite matrices. Thus,\n$\mathrm{trace}(\mathrm{cov}(U^{\top}a^{(i)})) \geq \mathrm{trace}(\mathrm{cov}(a_{\parallel}^{(i)}))/\mathrm{trace}((U^{\top}U)^{-1}) = \frac{1}{d_Z}H(\{\lambda_k\}_{k=1}^{d_Z})\,\mathrm{trace}(\mathrm{cov}(a_{\parallel}^{(i)}))$,\nsince $\mathrm{trace}((U^{\top}U)^{-1}) = \sum_k 1/\lambda_k = d_Z/H(\{\lambda_k\}_{k=1}^{d_Z})$." }, { "heading": "B MODEL ARCHITECTURE AND EXPERIMENTING ENVIRONMENTS", "text": "We used machines with one NVIDIA Titan Xp for the training and the inference of all the models.\nB.1 MNIST\nWe use $A = 10$ distinct decoding biases in the model. In the training, we set the regularization weight $\lambda = 0.05$ and use the Adam optimizer with learning rate 0.0002. In the encoding, we use the Adam optimizer with learning rate 0.1, and set the regularization weight $\mu = 0.1$.\nB.1.1 NOTES ON THE OTHER COMPARED MODELS\nOverall, we match the architecture of the other models with our model for a fair comparison. Some differences to note are:\n• DMWGAN: We used 10 generators. Each generator has the same architecture as ours except that the number of features or channels is divided by 4, to match the number of trainable parameters. Note that 4 is the suggested number from the original paper.\n• InfoGAN: Latent dimensions consist of 1 discrete variable (10 categories), 2 continuous variables and 8 noise variables.\n• β-VAE & Q-β-VAE: We used the same architecture as the generators of Q-WGAN, except that the BatchNorm layers are removed. We used the Bernoulli likelihood.\nB.2 3D-CHAIR\nWe use $A = 20$ distinct decoding biases in the model. In the training, we set the regularization weight $\lambda = 0.05$ and use the Adam optimizer with learning rate 0.0002. In the encoding, we use the Adam optimizer with learning rate 0.1, and set the regularization weight $\mu = 0.1$.\nB.2.1 NOTES ON THE OTHER COMPARED MODELS\n• DMWGAN: We used 20 generators. Each generator has the same architecture as ours except that the BatchNorms are removed and the number of features or channels is divided by 4, to match the number of trainable parameters. Note that 4 is the suggested number from the original paper, and that this was the best setting among those we tried (division number 2; ones with the BatchNorms).\n• InfoGAN: Latent dimensions consist of 3 discrete variables (20 categories), 1 continuous variable and 10 noise variables.\n• β-VAE & Q-β-VAE: We used the same architecture as the generators of Q-WGAN, except that the BatchNorm layers are removed. We used the Bernoulli likelihood.
B.3 UT-ZAP50K\nWe use $A = 4$ distinct decoding biases in the model. For the regularization weight in the training, we start with $\lambda = 5\times10^{-6}$ and then raise it to $\lambda = 5\times10^{-4}$ after 300 epochs.\nC Q-β-VAE MODEL\nThe Q-β-VAE model adopts a multi-generator (multi-decoder) version of β-VAE along with the BA-regularizer. To maximize the marginal likelihood given the multiple generators, it uses an EM-like algorithm:\nE-Step: $Q(G) = \sum_{i=1}^{A}\gamma^{(i)}\log p_G^{(i)}(x|z)$, where $\gamma^{(i)} = p_G^{(i)}(x|z)/\sum_i p_G^{(i)}(x|z)$.\nM-Step: $\mathcal{L} = -\mathbb{E}_{z \sim q(z|x)}[Q(G)] + \beta D_{KL}(q(z|x)\,\|\,p(z)) + R_{BA}(\{a_l^{(i)}\}_{l,i})$, where $R_{BA}(\{a_l^{(i)}\}_{l,i}) = \lambda\sum_{l=1}^{L}\log(\mathrm{trace}(\mathrm{cov}(U_l^{\top}a_l^{(i)})))$.\nIn the E-Step, it takes the expectation over the different generators; the responsibility of each generator can be computed as presented above, since $\gamma^{(i)} = p(i|x,z) = \pi_i p_G^{(i)}(x|z)/\sum_i \pi_i p_G^{(i)}(x|z) = p_G^{(i)}(x|z)/\sum_i p_G^{(i)}(x|z)$ with $\pi_i = 1/A$. In the M-Step, we plug the computed expectation $Q(G)$ in as the marginal likelihood term ($\gamma^{(i)}$ is fixed). Repeating the E and M steps multiple times (we take ten repeats), we finish the gradient step for a single mini-batch." }, { "heading": "D DISENTANGLEMENT SCORE", "text": "To compute the disentanglement score, we first take 500 images from the dataset and manually change one of the smooth features that corresponds to a known transformation. For example, we change the slant of the MNIST digits by applying a shear transform. With 11 different degrees of the transformation, we obtain 5,500 transformed images in total. We encode these images to obtain the corresponding latent codes and subtract the mean for each group of images (those originating from the same source image) to align all the latent codes. Then, we conduct Principal Component Analysis (PCA) to obtain the principal direction and the spectrum of variations of the latent codes. If the latent features are well disentangled, the dimensionality of the variation should be close to one. To quantify how close it is to one, we compute the ratio of the first eigenvalue to the second eigenvalue of the PCA covariance, and set it as the disentanglement score.
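This score amounts to the following computation; the snippet is our reconstruction with assumed array shapes, not the authors' script:

```python
import numpy as np

def disentanglement_score(latents, group_ids):
    """latents: (N, d) latent codes of the transformed images;
    group_ids: (N,) index of the source image each code originates from."""
    z = np.asarray(latents, dtype=np.float64).copy()
    for g in np.unique(group_ids):            # align codes: subtract per-group mean
        z[group_ids == g] -= z[group_ids == g].mean(axis=0)
    cov = np.cov(z, rowvar=False)             # PCA via the covariance spectrum
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return eig[0] / eig[1]                    # first-to-second eigenvalue ratio
```

Given the 5,500 aligned codes, a larger ratio means the variation is concentrated along a single principal direction, i.e., better disentanglement.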
" }, { "heading": "E EFFECT OF THE NUMBER OF GENERATORS, A", "text": "To investigate the effect of the number of generators (or biases), $A$, we train our model on MNIST with different $A$ values and then compute the FID and disentanglement scores (Fig. E.1). It can be seen that our model performs consistently better than the baseline, WGAN, regardless of the different $A$ values.\nFigure E.1: The FID scores (right axis, the smaller the better) and the disentanglement scores (left axis, the larger the better) of Q-WGAN with varying $A$ are shown for the MNIST dataset. The dashed lines show the mean scores of the baseline model (WGAN).\nF EFFECT OF THE REGULARIZATION WEIGHT, λ, IN TRAINING\nTo investigate the effect of the regularization weight, $\lambda$, we train our model on MNIST with different $\lambda$ values. It can be seen that our model performs consistently better than the other compared models (Q-WGAN with no regularization, $\lambda = 0$; DMWGAN; and MADGAN-like, see the next paragraph) regardless of the different $\lambda$ values (other models are omitted for better readability; see Table 1 for the omitted ones). It can also be seen that the scores are not very sensitive to the different choices of $\lambda$; this is beneficial in that one may choose any reasonable value for $\lambda$ when training the model on a new dataset.\nHere, MADGAN-like is a DMWGAN model that has a parametrization similar to MADGAN (Ghosh et al., 2018): the parameters of the first three layers (from the top) are shared across all the generators. In contrast, Q-WGAN shares the parameters in all the layers except the biases in the fully-connected layers. Thus, in a sense, one can say that Q-WGAN shares only the last few layers, whereas MADGAN-like shares only the first few layers (of course, there is another difference due to the regularizers). Although both models have shared structures among the generators, Q-WGAN performs much better on both scores, as seen in Figure F.2. This suggests that simple parameter sharing is not enough to obtain good performance, and that the bias regularizer is indeed required.\nFigure F.2: The FID scores (the smaller the better; panel (a)) and the disentanglement scores (the larger the better; panels (b) slant and (c) width) of Q-WGAN with varying $\lambda$ are shown for the MNIST dataset. Note the other models are positioned in the center (shaded in gray) to be visually comparable with the best-performing Q-WGAN model ($\lambda = 0.05$).\nG EFFECT OF THE REGULARIZATION WEIGHT, µ, IN ENCODING\nTo investigate the effect of the regularization weight, $\mu$, in encoding (Eq. 3), we take a trained model and encode an image from the training set using different $\mu$ values. Then, plugging in the encoded (estimated) biases, $\{a_l\}_{l=1}^{L}$, we randomly generate samples from this new manifold and compare their quality for the different $\mu$ values.\nFrom Figure G.3 (b), we can see that the quality of the encoding tends to improve as $\mu$ gets smaller. This might seem opposite to what we expect, as weaker regularization gives better results. However, looking closer, we can see that features like the slant and the stroke are more aligned with the stronger regularization of $\mu = 0.05$, when comparing with the other manifolds in the bottom pane. Thus, a trade-off exists here between the image quality of the samples and the alignment of the manifold to the others. Note that we chose to use $\mu = 0.05$ in all the other experiments.\nFigure G.3: Effect of the regularization weight $\mu$ in encoding, shown with an MNIST-trained Q-WGAN model. (a) Left: an image taken from the training dataset. Middle: the same image with rectangular noise. Right: the regenerated image from the estimated biases, $\{a_l\}_{l=1}^{L}$, and the estimated latent code $z$, from Eq. 3. (b) Top pane: generated samples from the estimated biases, $\{a_l\}_{l=1}^{L}$; the biases are estimated with different $\mu$ values (5.0, 0.5, 0.05, 0.005, 0.0005, 0.0), from top to bottom. Bottom pane: generated samples from the originally-learned biases, $\{\{a_l^{(i)}\}_{i=1}^{A}\}_{l=1}^{L}$ (only 5 out of 10 are shown). Note that the images in the same column have the same latent value $z$.
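For reference, the encoding procedure of Eq. 3 used throughout this appendix can be sketched as plain gradient-based optimization. The generator interface below (`gen.linears` exposing the shared weights $U_l$ and the trained biases, and `gen` accepting explicit biases) is a hypothetical stand-in for illustration, not the paper's actual API:

```python
import torch

def encode(x, gen, mu=0.05, steps=500, lr=0.1):
    # Optimize a latent code z and new per-layer biases {a_l} for the image x.
    z = torch.zeros(1, gen.d_z, requires_grad=True)
    a = [torch.zeros(l.U.shape[0], requires_grad=True) for l in gen.linears]
    # Mean tangential components of the trained biases, \bar{a}_{l,par} in Eq. 3.
    t_bar = [(l.a @ l.U).mean(dim=0).detach() for l in gen.linears]
    opt = torch.optim.Adam([z] + a, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = ((x - gen(z, biases=a)) ** 2).sum()          # Euclidean distance
        reg = sum(torch.log(((l.U.t() @ a_l - tb) ** 2).sum() + 1e-12)
                  for l, a_l, tb in zip(gen.linears, a, t_bar))
        (recon + mu * reg).backward()
        opt.step()
    return z.detach(), [a_l.detach() for a_l in a]
```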
" }, { "heading": "H EFFECT OF THE BIAS REGULARIZER", "text": "To examine the effectiveness of our bias regularizer, we visualize the raw values of the biases $\{a_l^{(i)}\}_{i,l}$ and their (pseudo-)tangential components $\{U_l^{\top}a_l^{(i)}\}_{i,l}$ (see Fig. H.4, H.5). In all figures, we see that the biases are diverse, but their tangential components are well aligned due to the bias regularizer (left). On the contrary, without the regularizer, the tangential components are not aligned (right).\nI SAMPLES GENERATED FROM VARIOUS MODELS\nFigure H.4: Biases $a_l^{(i)}$ and their (pseudo-)tangential components $U_l^{\top}a_l^{(i)}$ of the Q-WGAN models, trained on MNIST. Each curve indicates the $i$-th bias. Left: parameters of Q-WGAN. Right: parameters of Q-WGAN without the regularizer ($\lambda = 0$). It can be seen that the regularizer makes the tangential components of the biases well aligned.\nFigure H.5: Biases $a_l^{(i)}$ and their (pseudo-)tangential components $U_l^{\top}a_l^{(i)}$ of the Q-WGAN models, trained on 3D-Chair. Each curve indicates the $i$-th bias. Left: parameters of Q-WGAN. Right: parameters of Q-WGAN without the regularizer ($\lambda = 0$). It can be seen that the regularizer makes the tangential components of the biases well aligned.\nFigure I.6: MNIST image samples generated from the trained models: (a) Q-WGAN, (b) Q-WGAN ($\lambda = 0$), (c) DMWGAN, (d) WGAN, (e) β-VAE, (f) InfoGAN.\nFigure I.7: 3D-Chair image samples generated from the trained models: (a) WGAN, (b) β-VAE, (c) InfoGAN.\nFigure I.8: 3D-Chair image samples generated from the trained Q-WGAN.\nFigure I.9: 3D-Chair image samples generated from Q-WGAN ($\lambda = 0$).\nFigure I.10: 3D-Chair image samples generated from the trained DMWGAN." } ]
2020
null
SP:021a44760567c1cf821f9388b6812a24aa708755
[ "The authors investigate different tokenization methods for the translation between French and Fon (an African low-resource language). This means that they compare different ways to construct the input and output vocabularies of a neural machine translation (NMT) system. They further propose their own way to create those units, based on phrases, which is called WEB." ]
Building effective neural machine translation (NMT) models for very low-resourced and morphologically rich African indigenous languages is an open challenge. Besides the issue of finding available resources for them, a lot of work is put into preprocessing and tokenization. Recent studies have shown that standard tokenization methods do not always adequately deal with the grammatical, diacritical, and tonal properties of some African languages. That, coupled with the extremely low availability of training samples, hinders the production of reliable NMT models. In this paper, using Fon language as a case study, we revisit standard tokenization methods and introduce Word-Expressions-Based (WEB) tokenization, a human-involved super-words tokenization strategy to create a better representative vocabulary for training. Furthermore, we compare our tokenization strategy to others on the Fon-French and French-Fon translation tasks.
[ { "affiliations": [], "name": "FON LANGUAGE" } ]
[ { "authors": [ "Jade Z. Abbott", "Laura Martinus" ], "title": "Towards neural machine translation for african", "venue": "languages. CoRR,", "year": 2018 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Satanjeev Banerjee", "Alon Lavie" ], "title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", "venue": "In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization,", "year": 2005 }, { "authors": [ "Franck Burlot", "François Yvon" ], "title": "Using monolingual data in neural machine translation: a systematic study", "venue": "In Proceedings of the Third Conference on Machine Translation: Research Papers,", "year": 2018 }, { "authors": [ "Hounkpati B.C. Capo" ], "title": "A Comparative Phonology of Gbe", "venue": null, "year": 2010 }, { "authors": [ "Kyunghyun Cho", "Bart van Merrienboer", "Çaglar Gülçehre", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "venue": "CoRR, abs/1406.1078,", "year": 2014 }, { "authors": [ "Noa P. Cruz Díaz", "Manuel Maña López" ], "title": "An analysis of biomedical tokenization: Problems and strategies", "venue": "In Proceedings of the Sixth International Workshop on Health Text Mining and Information Analysis,", "year": 2015 }, { "authors": [ "Bonaventure F.P. Dossou", "Chris C. Emezue" ], "title": "Ffr v1.0: Fon-french neural machine translation", "venue": "In Proceedings of the AfricanNLP Workshop, International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Philip Gage" ], "title": "A new algorithm for data compression", "venue": "C Users J.,", "year": 1994 }, { "authors": [ "Jiatao Gu", "Hany Hassan", "Jacob Devlin", "Victor O.K. Li" ], "title": "Universal neural machine translation for extremely low resource languages. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)", "venue": null, "year": 2018 }, { "authors": [ "Ying He", "Mehmet Kayaalp" ], "title": "A Comparison of 13", "venue": "Tokenizers on MEDLINE", "year": 2006 }, { "authors": [ "Vu Cong Duy Hoang", "Philipp Koehn", "Gholamreza Haffari", "Trevor Cohn" ], "title": "Iterative back-translation for neural machine translation", "venue": "In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation,", "year": 2018 }, { "authors": [ "Pratik M. Joshi", "Sebastin Santy", "Amar Budhiraja", "Kalika Bali", "Monojit Choudhury" ], "title": "The state and fate of linguistic diversity and inclusion in the nlp world", "venue": null, "year": 2004 }, { "authors": [ "Bryan Jurish", "Kay-Michael Würzner" ], "title": "Word and sentence tokenization with hidden markov models", "venue": "J. Lang. Technol. Comput. 
Linguistics,", "year": 2013 }, { "authors": [ "Alina Karakanta", "Jon Dehdari", "Josef Genabith" ], "title": "Neural machine translation for low-resource languages without parallel corpora", "venue": "Machine Translation,", "year": 2018 }, { "authors": [ "Dominik Machácek", "Jonás Vidra", "Ondrej Bojar" ], "title": "Morphological and language-agnostic word segmentation for NMT", "venue": "CoRR, abs/1806.05482,", "year": 2018 }, { "authors": [ "Andrei Mikheev" ], "title": "Tagging sentence boundaries", "venue": "In Proceedings of the 1st North American Chapter of the Association for Computational Linguistics Conference,", "year": 2000 }, { "authors": [ "Thi-Vinh Ngo", "Thanh-Le Ha", "Phuong-Thai Nguyen", "Le-Minh Nguyen" ], "title": "Overcoming the rare word problem for low-resource language pairs in neural machine translation", "venue": "In Proceedings of the 6th Workshop on Asian Translation,", "year": 2019 }, { "authors": [ "Iroro Orife", "Julia Kreutzer", "Blessing Sibanda", "Daniel Whitenack", "Kathleen Siminyu", "Laura Martinus", "Jamiil Toure Ali", "Jade Abbott", "Vukosi Marivate", "Salomon Kabongo", "Musie Meressa", "Espoir Murhabazi", "Orevaoghene Ahia", "Elan van Biljon", "Arshath Ramkilowan", "Adewale Akinfaderin", "Alp Öktem", "Wole Akin", "Ghollah Kioko", "Kevin Degila", "Herman Kamper", "Bonaventure Dossou", "Chris Emezue", "Kelechi Ogueji", "Abdallah Bashir" ], "title": "Masakhane – machine translation for africa, 2020", "venue": null, "year": 2020 }, { "authors": [ "Kishore Papineni", "Salim Roukos", "Todd Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics,", "year": 2002 }, { "authors": [ "Adithya Renduchintala", "Pamela Shapiro", "Kevin Duh", "Philipp Koehn" ], "title": "Character-aware decoder for neural machine", "venue": "translation. CoRR,", "year": 2018 }, { "authors": [ "Michael D. Riley" ], "title": "Some applications of tree-based modelling to speech and language", "venue": "In Proceedings of the Workshop on Speech and Natural Language,", "year": 1989 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units", "venue": "CoRR, abs/1508.07909,", "year": 2015 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 86–96, Berlin, Germany, August 2016a", "venue": "Association for Computational Linguistics. doi: 10.18653/v1/P16-1009. URL https://www.aclweb.org/anthology/P16-1009", "year": 2016 }, { "authors": [ "Rico Sennrich", "Barry Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1715–1725, Berlin, Germany, August 2016b", "venue": "Association for Computational Linguistics. doi: 10.18653/v1/P16-1162. URL https://www.aclweb.org/anthology/P16-1162", "year": 2016 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V. 
Le" ], "title": "Sequence to sequence learning with neural networks", "venue": "CoRR, abs/1409.3215,", "year": 2014 }, { "authors": [ "Elan van Biljon", "Arnu Pretorius", "Julia Kreutzer" ], "title": "On optimal transformer depth for low-resource language translation", "venue": "In Proceedings of the International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "CoRR, abs/1706.03762,", "year": 2017 }, { "authors": [ "Weiyue Wang", "Jan-Thorsten Peter", "Hendrik Rosendahl", "Hermann Ney" ], "title": "CharacTer: Translation edit rate on character level", "venue": "In Proceedings of the First Conference on Machine Translation:", "year": 2016 }, { "authors": [ "Jiajun Zhang", "Chengqing Zong" ], "title": "Exploiting source-side monolingual data in neural machine translation", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "We would like to start by sharing with you this Fon sentence: « m¢tà m¢tà w¢ zìnwó h¢n wa aligbo m¢ ».\nHow would you tokenize this? What happens if we implement the standard method of splitting the sentence into its word elements (either using the space delimiter or using subword units)?\nm¢tà m¢tà w¢ zìnwó h¢n wa aligbo m¢\nm¢tà m¢tà w¢ zìnwó h¢n wa aligbo m¢\nWell we did that and discovered that a translation (to French) model, trained on sentences split this way, gave a literal translation of «chaque singe est entré dans la vie avec sa tête, son destin\n(English: each monkey entered the stage of life with its head, its destiny)» for the above Fon sentence. But we are not talking about a monkey here . It is a metaphor and so some of the words should be taken collectively as phrases. Using a phrase-based tokenizer, we got the following:\nm¢tà m¢tà w¢ zìnwó h¢n wa aligbo m¢\nm¢tà m¢tà w¢ zìnwó h¢n wa aligbo m¢\nA native speaker looking at some of these grouped phrases will quickly point out that some of the grouped phrases are wrong. Probably the phrase-based model could not effectively learn the phrases due to the low data it was trained on? Also, we got a translation of «singe chaque vient au monde dans vie avec tête et destin\n(English: monkey each comes into world in life with head and fate)» . However, this translation is still not\ncorrect. The expression actually means «Every human being is born with chances» . Another interpretation would be that we must be open to changes, and constantly be learning to take advantages of each situation in life .\nOne illustrative example, which we encourage the reader to try, is to go to Google Translate and try translating «it costs an arm and a leg» to your language (native language or a language you understand). For the 20 languages we tried, all the translation results were wrong: literal and not fully conveying the true (some would say phrasal) expression or meaning. The expression «it costs an arm and a leg», just means «it is expensive». Now imagine a language with a sentence structure largely made up of such expressions – that is Fon .\nTokenization is generally viewed as a solved problem. Yet, in practice, we often encounter difficulties in using standard tokenizers for NMT tasks, as shown above with Fon. This may be because of special tokenization needs for particular domains (like medicine (He & Kayaalp, 2006; Cruz Díaz & Maña López, 2015)), or languages. Fon, one of the five classes of the Gbe language clusters (Aja, Ewe, Fon, Gen, and Phla-Phera according to (Capo, 2010)), is spoken by approximately 1.7 million people located in southwestern Nigeria, Benin, Togo, and southeastern Ghana. There exists approximately 53 different dialects of Fon spoken throughout Benin. Fon has complex grammar and syntax, is very tonal and diacritics are highly influential (Dossou & Emezue, 2020). Despite being spoken by 1.7 million speakers, Joshi et al. (2020) have categorized Fon as «left behind» or «understudied» in NLP. This poses a challenge when using standard tokenization methods.\nGiven that most Fon sentences (and by extension most African languages) are like the sentence example given above (or the combination of such expressions), there is a need to re-visit tokenization of such languages. In this paper, using Fon in our experiment, we examine standard tokenization methods, and introduce the Word-Expressions-Based (WEB) tokenization. 
Furthermore, we test our tokenization strategy on the Fon-French and French-Fon translation tasks. Our main contributions are the dataset, our analysis, and the proposal of WEB for extremely low-resourced African languages (ALRLs). The dataset, models and code will be open-sourced on our GitHub page." }, { "heading": "2 BACKGROUND AND MOTIVATION", "text": "Modern NMT models usually require large amounts of parallel data in order to effectively learn the representations of morphologically rich source and target languages. While proposed solutions, such as transfer learning from a high-resource language (HRL) to the low-resource language (LRL) (Gu et al., 2018; Renduchintala et al., 2018; Karakanta et al., 2018) and using monolingual data (Sennrich et al., 2016a; Zhang & Zong, 2016; Burlot & Yvon, 2018; Hoang et al., 2018), have proved effective, they are still unable to produce good translation results for most ALRLs. Standard tokenization methods, like Subword Units (SU) (Sennrich et al., 2015), inspired by byte-pair encoding (BPE) (Gage, 1994), have greatly improved current NMT systems. However, studies have shown that BPE does not always boost the performance of NMT systems for analytical languages (Abbott & Martinus, 2018), and Ngo et al. (2019) show that when morphological differences exist between source and target languages, SU does not significantly improve results. Therefore, there is a great need to revisit NMT with a focus on low-resourced, morphologically complex languages like Fon. This may involve taking a look at how to adapt standard NMT strategies to these languages." }, { "heading": "3 TOKENIZATION STRATEGIES AND THEIR CHALLENGES FOR FON", "text": "In this section, we briefly discuss the standard tokenization strategies employed in NMT, as well as the challenges faced while applying them to Fon.\nWord-Based tokenization (WB) consists of splitting sentences into words according to a delimiter. We will show the limits of this method using the Fon expression « un ɖo ganji ». « un » on its own is an interjection expressing an emotion of surprise or astonishment. But « un ɖo » already means \"I am\", \"I am at\", or \"I have\", depending on the context in which it is used. The whole expression, « un ɖo ganji », could mean \"I am fine\" or \"I am okay\".\nPhrase-Based tokenization (PhB) encodes phrases (groups of words) as atomic units, instead of words. As a result, models trained on PhB have the ability to learn and interpret language-specific phrases (noun, verbal and prepositional phrases), making it better suited than WB for the Fon language. However, due to the low-resourcedness of the language and the randomness of PhB alignments, some extracted pairs are not always contextually faultless. For example, the computer alignment gave [zɛn, une (a, an, one)] and [azɔn, la (the)], instead of [zɛn, une marmite (a pot)] and [azɔn, la maladie (the disease)].\nEncoding with SU has made great headway in NMT, especially due to its ability to effectively encode rare out-of-vocabulary words (Sennrich et al., 2016b). However, Machácek et al. (2018), analyzing word segmentation for NMT, reported that BPE and SU both rely on the distribution of character sequences while disregarding any morphological properties of the languages in question. 
Apart from rule-based tokenization, there are also machine learning approaches to tokenization, which unfortunately require a substantial amount of training samples (both original and tokenized versions of the same texts) (Riley, 1989; Mikheev, 2000; Jurish & Würzner, 2013). To the best of our knowledge, no language-specific tokenization has been proposed for ALRLs, although there have been a number of works on adapting NMT specifically to them (like Orife et al. (2020); van Biljon et al. (2020); Vaswani et al. (2017), to mention but a few)." }, { "heading": "4 WORD-EXPRESSIONS-BASED TOKENIZATION (WEB)", "text": "WEB involves aligning and extracting meaningful expressions based on the linguistic components of Fon (phonemes, morphemes, lexemes, syntax, and context). This requires the assistance of Fon-French native speakers. Some examples of good alignments are: nɔncé → maman (mum); kuɖo jigbézǎn → joyeux anniversaire (Happy Birthday); nɔncé vivɛ → maman chérie (dear mum); as well as ɖo jiɖiɖe ɖo wutu cé à and nɔnvi cé (whose French sides are given in the original figure).\nIt is important to note that WEB is not a human-in-the-loop process, because it does not require human intervention to run. The human intervention occurs while cleaning and preprocessing the dataset. Although not perfect yet, we describe our algorithm as a recursive search algorithm, which looks for and finds the most optimal combination of words and expressions producing a better translation for a source sentence. The following algorithm was designed to encode and decode sentences using the established vocabularies:\n1. Run through the vocabulary and output a list L of all possible combinations of words and expressions appearing in the sentence S.\n2. An important principle in Fon: higher word order = more precise and meaningful expressions. Using this principle, for each element (word or expression) w ∈ L:\n(a) Check if there exists a higher word order v ∈ L such that w ⊊ v. (b) If 2a is true, discard w; else keep w.\n3. The output is a list L̂ of optimal expressions that, drawn from the initial L, make up the initial sentence S.\n4. Add <start> and <end> tags at the beginning and the end, respectively, of every element ŵ (word or expression) ∈ L̂.\n5. Encode every ŵ (word or expression) ∈ L̂.\nWe argue that WEB scales well because it does not require any linguistic annotations, only knowledge and intuition from bilinguals, meaning we can crowdsource those phrases. We want to state clearly, in order to avoid any confusion, that WEB is another version of PhB, involving human evaluation. For our study, it took a group of 8 people, all Fon-French bilinguals, 30 days to align and extract meaningful sentences manually. No preliminary training was done with the annotators, given that most of them are linguists and native speakers of Fon. This made the step of splitting sentences into expressions more natural, reliable, and faster.
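The core matching step (steps 1-3) can be sketched as follows. This is a simplified, greedy left-to-right reading of the recursive search described above, and `expr_vocab` is a toy stand-in for the human-curated expression vocabulary:

```python
def web_segment(sentence, expr_vocab, max_len=6):
    """Split a sentence into maximal known expressions (higher word order first)."""
    words = sentence.split()
    segments, i = [], 0
    while i < len(words):
        # Fon principle: higher word order = more precise meaning, so try the
        # longest candidate expression first (expressions of up to 6 words).
        for n in range(min(max_len, len(words) - i), 0, -1):
            candidate = " ".join(words[i:i + n])
            if n == 1 or candidate in expr_vocab:
                segments.append(candidate)
                i += n
                break
    return ["<start> " + s + " <end>" for s in segments]   # step 4 tags

expr_vocab = {"un ɖo ganji", "kuɖo jigbézǎn"}              # toy examples only
print(web_segment("un ɖo ganji", expr_vocab))              # ['<start> un ɖo ganji <end>']
```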
" }, { "heading": "5 THE FON-FRENCH DATASET: DATA COLLECTION, CLEANING AND EXPANSION PROCESSES", "text": "As our goal is to create a reliable translation system to be used by the modern Fon-speaking community, we set out to gather more data in the daily-conversation domain for this study. Thanks to many collaborations with Fon-French bilinguals, journalists and linguists, we gathered daily citations, proverbs and sentences together with their French translations. After the collection stage, we obtained a dataset of 8,074 pairs of Fon-French sentences.\nThe cleaning process, which involved the Fon-French bilinguals, mainly consisted of analyzing the contextual meanings of the Fon sentences and checking the quality of the French translations. In many cases where the French translations were really bad, we made significant corrections.\nAnother major observation was the presence of many long and complex sentences. That is where the idea of expanding the dataset came from: we proceeded to split, when possible, Fon sentences into short, independent, and meaningful expressions (expressions of 1-6 words), and accordingly added their respective French translations. At the end of these processes, we obtained our final dataset of 25,383 pairs of Fon-French sentences. The experiments described in this paper were conducted using the final dataset.\nWe strongly believe that involving the Fon-French bilinguals in the cleaning process greatly improved the quality of the dataset. In fact, many initial translation errors were missed by standard rule-based tokenization (like WB, PhB and SU) and cleaning techniques1. However, with the help of the intuitive, natural language knowledge of the Fon-French bilinguals, many of those errors were fixed. This highlights the importance of having native speakers of ALRLs clean and review the dataset during the initial stages of its compilation.\n1Using the Python Regex and String packages (https://docs.python.org/3/library/re.html) together with the NLTK preprocessing library (https://www.nltk.org/)" }, { "heading": "6 METHODOLOGY, RESULTS AND CONCLUSION", "text": "In this section, we describe the implementation of WB, PhB, SU and WEB, and we compare the results of our NMT model trained on each of them for our analysis." }, { "heading": "6.1 CREATION OF VOCABULARIES FOR WB, PHB, SU AND WEB", "text": "For WB, we split the sentences according to the standard 'space' delimiter, using the TensorFlow-Keras text tokenizer2, getting a vocabulary of 7,845 Fon and 8,756 French tokens (words), respectively.\nFor PhB, we use the IBM1 model from the nltk.translate.api module3 to align and extract all possible pairs of sentences. Our main observation was that some generated pairs were either not meaningful or not matching, but we did not try to rearrange them, in order to see how the generated pairs, without human intervention, would affect the translation quality. In so doing, we got a vocabulary of 10,576 Fon and 11,724 French tokens (words and expressions), respectively.\nTo implement SU, we used TensorFlow's SubwordTextEncoder4 and built a vocabulary of 7,952 Fon and 8,116 French tokens (words and subwords), respectively.\nTo implement WEB, we considered unique expressions as atomic units. Using the steps highlighted for WEB in Section 4, we encoded those atomic units and obtained a vocabulary of 18,759 Fon and 19,785 French tokens (words and expressions) used for the model training.
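For concreteness, the three standard vocabularies above could be built roughly as follows with the cited libraries; the toy corpus and the exact settings are our assumptions, not the released code:

```python
import tensorflow as tf
import tensorflow_datasets as tfds
from nltk.translate import AlignedSent, IBMModel1

fon = ["un ɖo ganji"]; fra = ["je vais bien"]    # placeholder parallel pairs

# WB: space-delimited word vocabulary with the Keras text tokenizer.
wb = tf.keras.preprocessing.text.Tokenizer(filters="")
wb.fit_on_texts(fon)

# PhB: IBM Model 1 word alignments, from which phrase pairs are then extracted.
bitext = [AlignedSent(f.split(), g.split()) for f, g in zip(fon, fra)]
ibm1 = IBMModel1(bitext, 5)                      # 5 EM iterations

# SU: byte-pair-style subword vocabulary.
su = tfds.deprecated.text.SubwordTextEncoder.build_from_corpus(
    fon, target_vocab_size=2**13)
```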
" }, { "heading": "6.2 DATASET SPLITTING, MODEL ARCHITECTURE AND TRAINING", "text": "From the dataset, we carefully selected 155 long and complex sentences (i.e., sentences made of 5 or more expressions) as test data; sentences that, we believe, would test the model's ability to correctly translate higher word order expressions in Fon. 10% of the training data was set aside for validation.\nFor training, we used an encoder-decoder architecture (Sutskever et al., 2014), made up of 128-dimensional gated recurrent unit (GRU) layers (Cho et al., 2014), with a word embedding layer of dimension 256 and a 10-dimensional attention model (Bahdanau et al., 2015).\nWe trained with a batch size of 100, a learning rate of 0.001, and 500 epochs, using the validation loss to track model performance. The training took all 500 epochs, with the loss decreasing from one epoch to the next. We would like to emphasize that only at 500 epochs, with the given hyperparameters, did we obtain significant and meaningful translations.\nAll training processes took 14 days on a 16GB Tesla K80 GPU. We evaluated our NMT models' performance using the BLEU (Papineni et al., 2002), METEOR (Banerjee & Lavie, 2005), CharacTER (TER) (Wang et al., 2016), and GLEU (Wu et al., 2016) metrics.\n2https://www.tensorflow.org 3https://www.nltk.org/api/nltk.translate.html 4http://www.tensorflow.org" }, { "heading": "6.3 RESULTS AND CONCLUSION", "text": "Table 1 and Table 2 show that our baseline model performs better with PhB, and best with WEB, in terms of both metric scores and translation quality. It is important to note that while the BLEU scores of PhB and WEB decreased on the Fr→Fon task, the BLEU scores of WB and SU improved on it. We speculate that this might be because WB and SU enhanced the model's understanding of French expressions more than of Fon, confirming the findings of Abbott & Martinus (2018) and Ngo et al. (2019). This corroborates our argument that, in order to help NMT systems translate ALRLs better, it is paramount to create adequate tokenization processes that can better represent and encode their structure and morphology.\nThis is a pilot project, and there is headroom to be explored in improving WEB. We are also working on combining WEB with SU, to get the best of both worlds. To promote research and reproducibility in this direction, the dataset and model will be made publicly available on GitHub after the review. Simultaneously, we are working on releasing platforms for the translation service to be used. We believe this would be a good way to gather more data and keep constantly improving the model's performance." } ]
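As an illustration of the architecture in Section 6.2, a compact Keras sketch is given below. The details are our assumptions where unspecified; in particular, the paper's 10-dimensional Bahdanau attention is approximated here by Keras' additive attention layer:

```python
import tensorflow as tf

V_SRC, V_TGT, EMB, UNITS = 18759, 19785, 256, 128   # WEB vocab sizes, Sec. 6.1/6.2

src = tf.keras.Input(shape=(None,), dtype="int32")
tgt = tf.keras.Input(shape=(None,), dtype="int32")
enc_emb = tf.keras.layers.Embedding(V_SRC, EMB)(src)
enc_out, enc_state = tf.keras.layers.GRU(
    UNITS, return_sequences=True, return_state=True)(enc_emb)
dec_emb = tf.keras.layers.Embedding(V_TGT, EMB)(tgt)
dec_out = tf.keras.layers.GRU(UNITS, return_sequences=True)(
    dec_emb, initial_state=enc_state)
ctx = tf.keras.layers.AdditiveAttention()([dec_out, enc_out])  # Bahdanau-style
logits = tf.keras.layers.Dense(V_TGT)(
    tf.keras.layers.Concatenate()([dec_out, ctx]))
model = tf.keras.Model([src, tgt], logits)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
```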
2020
null
SP:dc569d8f517e19b940c774679a98c14eb8272919
[ "This paper investigates a new problem – federated continual learning by Federated Weighted Inter-client Transfer. The key idea is to decompose the network weights into global federated parameters and sparse task-specific parameters such that each client can selectively receive knowledge from other clients by taking a weighted combination of their task-specific parameters. The experiment results in two contrived datasets demonstrate the effectiveness of the proposed method." ]
There has been a surge of interest in continual learning and federated learning, both of which are important in deep neural networks in real-world scenarios. Yet little research has been done regarding the scenario where each client learns on a sequence of tasks from a private local data stream. This problem of federated continual learning poses new challenges to continual learning, such as utilizing knowledge from other clients, while preventing interference from irrelevant knowledge. To resolve these issues, we propose a novel federated continual learning framework, Federated Weighted Inter-client Transfer (FedWeIT), which decomposes the network weights into global federated parameters and sparse task-specific parameters, and each client receives selective knowledge from other clients by taking a weighted combination of their task-specific parameters. FedWeIT minimizes interference between incompatible tasks, and also allows positive knowledge transfer across clients during learning. We validate our FedWeIT against existing federated learning and continual learning methods under varying degrees of task similarity across clients, and our model significantly outperforms them with a large reduction in the communication cost.
[]
[ { "authors": [ "Arslan Chaudhry", "Marc’Aurelio Ranzato", "Marcus Rohrbach", "Mohamed Elhoseiny" ], "title": "Efficient lifelong learning with a-gem", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Yang Chen", "Xiaoyan Sun", "Yaochu Jin" ], "title": "Communication-efficient federated deep learning with asynchronous model update and temporally weighted aggregation", "venue": null, "year": 1903 }, { "authors": [ "Yuyang Deng", "Mohammad Mahdi Kamani", "Mehrdad Mahdavi" ], "title": "Adaptive personalized federated learning", "venue": "arXiv preprint arXiv:2003.13461,", "year": 2020 }, { "authors": [ "Alireza Fallah", "Aryan Mokhtari", "Asuman Ozdaglar" ], "title": "Personalized federated learning: A metalearning approach", "venue": "arXiv preprint arXiv:2002.07948,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Ching-Yi Hung", "Cheng-Hao Tu", "Cheng-En Wu", "Chien-Hung Chen", "Yi-Ming Chan", "Chu-Song Chen" ], "title": "Compacting, picking and growing for unforgetting continual learning", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2019 }, { "authors": [ "Peter Kairouz", "H Brendan McMahan", "Brendan Avent", "Aurélien Bellet", "Mehdi Bennis", "Arjun Nitin Bhagoji", "Keith Bonawitz", "Zachary Charles", "Graham Cormode", "Rachel Cummings" ], "title": "Advances and open problems in federated learning", "venue": "arXiv preprint arXiv:1912.04977,", "year": 2019 }, { "authors": [ "Sai Praneeth Karimireddy", "Satyen Kale", "Mehryar Mohri", "Sashank J Reddi", "Sebastian U Stich", "Ananda Theertha Suresh" ], "title": "Scaffold: Stochastic controlled averaging for on-device federated learning", "venue": "In Proceedings of the International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "James Kirkpatrick", "Razvan Pascanu", "Neil Rabinowitz", "Joel Veness", "Guillaume Desjardins", "Andrei A Rusu", "Kieran Milan", "John Quan", "Tiago Ramalho", "Agnieszka Grabska-Barwinska" ], "title": "Overcoming catastrophic forgetting in neural networks", "venue": "Proceedings of the National Academy of Sciences,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Geoffrey E. 
Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Abhishek Kumar", "Hal Daume III" ], "title": "Learning task grouping and overlap in multi-task learning", "venue": "In Proceedings of the International Conference on Machine Learning (ICML),", "year": 2012 }, { "authors": [ "Matthias De Lange", "Xu Jia", "Sarah Parisot", "Ales Leonardis", "Gregory Slabaugh", "Tinne Tuytelaars" ], "title": "Unsupervised model personalization while preserving privacy and scalability: An open problem", "venue": "In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Sang-Woo Lee", "Jin-Hwa Kim", "Jaehyun Jun", "Jung-Woo Ha", "Byoung-Tak Zhang" ], "title": "Overcoming catastrophic forgetting by incremental moment matching", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2017 }, { "authors": [ "Tian Li", "Anit Kumar Sahu", "Manzil Zaheer", "Maziar Sanjabi", "Ameet Talwalkar", "Virginia Smith" ], "title": "Federated optimization in heterogeneous networks", "venue": "arXiv preprint arXiv:1812.06127,", "year": 2018 }, { "authors": [ "David Lopez-Paz", "Marc’Aurelio Ranzato" ], "title": "Gradient episodic memory for continual learning", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2017 }, { "authors": [ "H Brendan McMahan", "Eider Moore", "Daniel Ramage", "Seth Hampson" ], "title": "Communication-efficient learning of deep networks from decentralized data", "venue": "arXiv preprint arXiv:1602.05629,", "year": 2016 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": null, "year": 2011 }, { "authors": [ "Hong-Wei Ng", "Stefan Winkler" ], "title": "A data-driven approach to cleaning large face datasets", "venue": "In 2014 IEEE international conference on image processing (ICIP),", "year": 2014 }, { "authors": [ "Cuong V. Nguyen", "Yingzhen Li", "Thang D. Bui", "Richard E. Turner" ], "title": "Variational continual learning", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Matthew Riemer", "Ignacio Cases", "Robert Ajemian", "Miao Liu", "Irina Rish", "Yuhai Tu", "Gerald Tesauro" ], "title": "Learning to learn without forgetting by maximizing transfer and minimizing interference", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Mohammad Rostami", "Soheil Kolouri", "Kyungnam Kim", "Eric Eaton" ], "title": "Multi-agent distributed lifelong learning for collective knowledge acquisition", "venue": "In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. 
International Foundation for Autonomous Agents and Multiagent Systems,", "year": 2018 }, { "authors": [ "Andrei Rusu", "Neil Rabinowitz", "Guillaume Desjardins", "Hubert Soyer", "James Kirkpatrick", "Koray Kavukcuoglu", "Razvan Pascanu", "Raia Hadsell" ], "title": "Progressive neural networks", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2016 }, { "authors": [ "Paul Ruvolo", "Eric Eaton" ], "title": "Ella: An efficient lifelong learning algorithm", "venue": "In Proceedings of the International Conference on Machine Learning (ICML),", "year": 2013 }, { "authors": [ "Jonathan Schwarz", "Jelena Luketina", "Wojciech M Czarnecki", "Agnieszka Grabska-Barwinska", "Yee Whye Teh", "Razvan Pascanu", "Raia Hadsell" ], "title": "Progress & compress: A scalable framework for continual learning", "venue": "arXiv preprint arXiv:1805.06370,", "year": 2018 }, { "authors": [ "Joan Serrà", "Dídac Surís", "Marius Miron", "Alexandros Karatzoglou" ], "title": "Overcoming catastrophic forgetting with hard attention to the task", "venue": "In Proceedings of the International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Hanul Shin", "Jung Kwon Lee", "Jaehon Kim", "Jiwon Kim" ], "title": "Continual learning with deep generative replay", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2017 }, { "authors": [ "Neta Shoham", "Tomer Avidor", "Aviv Keren", "Nadav Israel", "Daniel Benditkis", "Liron Mor-Yosef", "Itai Zeitak" ], "title": "Overcoming forgetting in federated learning on non-iid data", "venue": null, "year": 1910 }, { "authors": [ "Johannes Stallkamp", "Marc Schlipsing", "Jan Salmen", "Christian Igel" ], "title": "The german traffic sign recognition benchmark: a multi-class classification competition", "venue": "In The 2011 international joint conference on neural networks,", "year": 2011 }, { "authors": [ "Sebastian Thrun" ], "title": "A Lifelong Learning Perspective for Mobile Robot", "venue": "Control. 
Elsevier,", "year": 1995 }, { "authors": [ "Michalis K Titsias", "Jonathan Schwarz", "Alexander G de G Matthews", "Razvan Pascanu", "Yee Whye Teh" ], "title": "Functional regularisation for continual learning with gaussian processes", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Hongyi Wang", "Mikhail Yurochkin", "Yuekai Sun", "Dimitris Papailiopoulos", "Yasaman Khazaeni" ], "title": "Federated learning with matched averaging", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Ju Xu", "Zhanxing Zhu" ], "title": "Reinforced continual learning", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2018 }, { "authors": [ "Jaehong Yoon", "Eunho Yang", "Jeongtae Lee", "Sung Ju Hwang" ], "title": "Lifelong learning with dynamically expandable networks", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Jaehong Yoon", "Saehoon Kim", "Eunho Yang", "Sung Ju Hwang" ], "title": "Scalable and order-robust continual learning with additive parameter decomposition", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Mikhail Yurochkin", "Mayank Agarwal", "Soumya Ghosh", "Kristjan Greenewald", "Trong Nghia Hoang", "Yasaman Khazaeni" ], "title": "Bayesian nonparametric federated learning of neural networks", "venue": "Proceedings of the International Conference on Machine Learning (ICML),", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Continual learning (Thrun, 1995; Kumar & Daume III, 2012; Ruvolo & Eaton, 2013; Kirkpatrick et al., 2017; Schwarz et al., 2018) describes a learning scenario where a model continuously trains on a sequence of tasks; it is inspired by the human learning process, as a person learns to perform numerous tasks with large diversity over his/her lifespan, making use of the past knowledge to learn about new tasks without forgetting previously learned ones. Continual learning is a long-studied topic since having such an ability leads to the potential of building a general artificial intelligence. However, there are crucial challenges in implementing it with conventional models such as deep neural networks (DNNs), such as catastrophic forgetting, which describes the problem where parameters or semantic representations learned for the past tasks drift to the direction of new tasks during training. The problem has been tackled by various prior work (Kirkpatrick et al., 2017; Lee et al., 2017; Shin et al., 2017; Riemer et al., 2019). More recent works tackle other issues, such as scalability or order-robustness (Schwarz et al., 2018; Hung et al., 2019; Yoon et al., 2020).\nHowever, all of these models are fundamentally limited in that the models can only learn from its direct experience - they only learn from the sequence of the tasks they have trained on. Contrarily, humans can learn from indirect experience from others, through different means (e.g. verbal communications, books, or various media). Then wouldn’t it be beneficial to implement such an ability to a continual learning framework, such that multiple models learning on different machines can learn from the knowledge of the tasks that have been already experienced by other clients? One problem that arises here, is that due to data privacy on individual clients and exorbitant communication cost, it may not be possible to communicate data directly between the clients or between the server and clients. Federated learning (McMahan et al., 2016; Li et al., 2018; Yurochkin et al., 2019) is a learning paradigm that tackles this issue by communicating the parameters instead of the raw data itself. We may have a server that receives the parameters locally trained on multiple clients, aggregates it into a single model parameter, and sends it back to the clients. Motivated by our intuition on learning from indirect experience, we tackle the problem of Federated Continual Learning (FCL) where we perform continual learning with multiple clients trained on private task sequences, which communicate their task-specific parameters via a global server. Figure 1 (a) depicts an example\nscenario of FCL. Suppose that we are building a network of hospitals, each of which has a disease diagnosis model which continuously learns to perform diagnosis given CT scans, for new types of diseases. Then, under our framework, any diagnosis model which has learned about a new type of disease (e.g. COVID-19) will transmit the task-specific parameters to the global server, which will redistribute them to other hospitals for the local models to utilize. This allows all participants to benefit from the new task knowledge without compromising the data privacy.\nYet, the problem of federated continual learning also brings new challenges. First, there is not only the catastrophic forgetting from continual learning, but also the threat of potential interference from other clients. 
Figure 1 (b) illustrates this challenge with the results of a simple experiment. Here, we train a model for MNIST digit recognition while communicating the parameters from another client trained on a different dataset. When the knowledge transferred from the other client is relevant to the target task (SVHN), the model starts with high accuracy, converges faster, and reaches higher accuracy (green line), whereas the model underperforms the base model if the transferred knowledge is from a task highly different from the target task (CIFAR-10, red line). Thus, we need to selectively utilize knowledge from other clients to minimize inter-client interference and maximize inter-client knowledge transfer. Another problem with federated learning is efficient communication, as the communication cost could become excessively large when utilizing the knowledge of the other clients, and it is often the main bottleneck in practical scenarios with edge devices. Thus, we want the knowledge to be represented as compactly as possible.\nTo tackle these challenges, we propose a novel framework for federated continual learning, Federated Weighted Inter-client Transfer (FedWeIT), which decomposes the local model parameters into a dense base parameter and sparse task-adaptive parameters. FedWeIT reduces the interference between different tasks, since the base parameters encode task-generic knowledge while the task-specific knowledge is encoded into the task-adaptive parameters. Beyond utilizing this generic knowledge, we also want each client to selectively utilize task-specific knowledge obtained at other clients. To this end, we allow each model to take a weighted combination of the task-adaptive parameters broadcast from the server, such that it can select the task-specific knowledge helpful for the task at hand. FedWeIT is communication-efficient, since the task-adaptive parameters are highly sparse and only need to be communicated once, when created. Moreover, when communication efficiency is not a critical issue, as in cross-silo federated learning (Kairouz et al., 2019), we can use our framework to incentivize each client based on the attention weights on its task-adaptive parameters. We validate our method on multiple scenarios with varying degrees of task similarity across clients, against various federated learning and local continual learning models. The results show that our model obtains significantly superior performance over all baselines and adapts faster to new tasks, with a large reduction in communication cost. The main contributions of this paper are as follows:\n• We introduce the new problem of Federated Continual Learning (FCL), where multiple models continuously learn on distributed clients, which poses new challenges such as the prevention of inter-client interference and inter-client knowledge transfer.\n• We propose a novel and communication-efficient framework for federated continual learning, which allows each client to adaptively update the federated parameter and selectively utilize past knowledge from other clients, by communicating sparse parameters.
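As a schematic illustration of the decomposition described above (ours, not the paper's exact formulation; the precise FedWeIT update rules are given later in the paper), a client's effective parameters can be composed from the dense base part, its own sparse task-adaptive part, and an attention-weighted combination of task-adaptive parameters received from other clients:

```python
import numpy as np

def compose_client_params(base, own_adaptive, others_adaptive, alpha):
    # base: dense federated parameter; own_adaptive: this client's sparse part;
    # others_adaptive: sparse task-adaptive parameters from other clients;
    # alpha: attention weights selecting helpful inter-client knowledge.
    theta = base + own_adaptive
    for a_j, w_j in zip(others_adaptive, alpha):
        theta = theta + w_j * a_j
    return theta

base = np.random.randn(4, 4)
own = np.where(np.random.rand(4, 4) < 0.1, np.random.randn(4, 4), 0.0)   # sparse
others = [np.where(np.random.rand(4, 4) < 0.1, np.random.randn(4, 4), 0.0)]
theta = compose_client_params(base, own, others, alpha=[0.3])
```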
Regularization-based: EWC (Kirkpatrick et al., 2017) leverages the Fisher Information Matrix to restrict the change of the model parameters such that the model finds a solution that is good for both the previous and the current task, and IMM (Lee et al., 2017) proposes to learn the posterior distribution for multiple tasks as a mixture of Gaussians. Architecture-based: DEN (Yoon et al., 2018) tackles this issue by expanding the network size as necessary via iterative neuron/filter pruning and splitting, and RCL (Xu & Zhu, 2018) tackles the same problem using reinforcement learning. APD (Yoon et al., 2020) additively decomposes the parameters into shared and task-specific parameters to minimize the increase in network complexity. Coreset-based: GEM variants (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019) minimize the loss on both the actual dataset and a stored episodic memory. FRCL (Titsias et al., 2020) memorizes approximated posteriors of previous tasks with carefully constructed inducing points. To the best of our knowledge, none of the existing approaches considered the communicability for continual learning of deep neural networks, which we tackle. CoLLA (Rostami et al., 2018) aims at solving multi-agent lifelong learning with sparse dictionary learning, but it is not applicable to federated learning or continual deep learning.\nFederated learning Federated learning is a distributed learning framework that aims to learn a global model on a server, under data privacy constraints, while aggregating the parameters learned at the clients on their private data. FedAvg (McMahan et al., 2016) aggregates the models trained across multiple clients by computing a weighted average of them based on the number of data points used for training. FedProx (Li et al., 2018) trains the local models with a proximal term which restricts their updates to be close to the global model. FedCurv (Shoham et al., 2019) aims to minimize the model disparity across clients during federated learning by adopting a modified version of EWC. Recent works (Yurochkin et al., 2019; Wang et al., 2020) introduce well-designed aggregation policies by leveraging Bayesian non-parametric methods. A crucial challenge of federated learning is the reduction of the communication cost. TWAFL (Chen et al., 2019) tackles this problem by performing layer-wise parameter aggregation, where shallow layers are aggregated at every step, but deep layers are aggregated only in the last few steps of a loop. Karimireddy et al. (2020) suggest an algorithm for rapid convergence, which minimizes the interference among discrepant tasks at clients by sacrificing local optimality. This is the opposite direction from personalized federated learning methods (Fallah et al., 2020; Lange et al., 2020; Deng et al., 2020), which put more emphasis on the performance of the local models. FCL is a parallel research direction to both and, to the best of our knowledge, ours is the first work that considers task-incremental learning of clients under the federated learning framework." }, { "heading": "3 FEDERATED CONTINUAL LEARNING WITH FEDWEIT", "text": "Motivated by the human learning process from indirect experiences, we introduce a novel continual learning setting under federated learning, which we refer to as Federated Continual Learning (FCL). FCL assumes that multiple clients are trained on a sequence of tasks from private data streams, while communicating the learned parameters with a global server. 
We first formally define the problem in Section 3.1, and then propose naive solutions that straightforwardly combine existing federated learning and continual learning methods in Section 3.2. Then, in Sections 3.3 and 3.4, we discuss two novel challenges that are introduced by federated continual learning, and propose a novel framework, Federated Weighted Inter-client Transfer (FedWeIT), which can effectively handle the two problems while also reducing the client-to-server communication cost." }, { "heading": "3.1 PROBLEM DEFINITION", "text": "In standard continual learning (on a single machine), the model iteratively learns from a sequence of tasks {T^{(1)}, T^{(2)}, ..., T^{(T)}}, where T^{(t)} = {x_i^{(t)}, y_i^{(t)}}_{i=1}^{N_t} is the labeled dataset of the t-th task, consisting of N_t pairs of instances x_i^{(t)} and their corresponding labels y_i^{(t)}. Assuming the most realistic situation, we consider the case where the task sequence is a task stream with an unknown arriving order, such that the model can access T^{(t)} only during the training period of task t, after which it becomes inaccessible. Given T^{(t)} and the model learned so far, the learning objective at task t is: minimize_{θ^{(t)}} L(θ^{(t)}; θ^{(t−1)}, T^{(t)}), where θ^{(t)} is the set of model parameters at task t.\nWe now extend conventional continual learning to the federated learning setting with multiple clients and a global server. Let us assume that we have C clients, where each client c_c ∈ {c_1, . . . , c_C} trains a model on a privately accessible sequence of tasks {T_c^{(1)}, T_c^{(2)}, ..., T_c^{(t)}} ⊆ T. Please note that there is no relation among the tasks T_{1:C}^{(t)} received at step t across clients. The goal is now to effectively train C continual learning models on their own private task streams, via communicating the model parameters with the global server, which aggregates the parameters sent from each client and redistributes them to the clients." }, { "heading": "3.2 COMMUNICABLE CONTINUAL LEARNING", "text": "In conventional federated learning settings, the learning is done over multiple rounds of local learning and parameter aggregation. At each round of communication r, each client c_c and the server s perform the following two procedures: local parameter transmission, and parameter aggregation & broadcasting. In the local parameter transmission step, for a randomly selected subset of clients at round r, C^{(r)} ⊆ {c_1, c_2, ..., c_C}, each client c_c sends its updated parameters θ_c^{(r)} to the server. The transmission does not involve every client at every round, because some of the clients may be temporarily disconnected. The server then aggregates the parameters θ_c^{(r)} sent from the clients into a single parameter. The most popular frameworks for this aggregation are FedAvg (McMahan et al., 2016) and FedProx (Li et al., 2018). However, naive federated continual learning with these two algorithms on local sequences of tasks may result in catastrophic forgetting. One simple solution is to use a regularization-based method such as Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017), which allows the model to obtain a solution that is optimal for both the previous and the current tasks. There exist other advanced solutions (Rusu et al., 2016; Nguyen et al., 2018; Chaudhry et al., 2019) that successfully prevent catastrophic forgetting. 
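To make the aggregation step described above concrete, the following is a minimal numpy sketch of FedAvg-style weighted averaging; the client count, vector sizes, data counts, and variable names are illustrative assumptions, not taken from the paper:

import numpy as np

rng = np.random.default_rng(0)
# flattened local parameters theta_c^{(r)} from |C| = 5 clients (placeholder values)
client_params = [rng.standard_normal(1000) for _ in range(5)]
num_points = np.array([120, 80, 200, 50, 150])   # local data counts (illustrative)

# FedAvg: weighted average of local parameters, weights proportional to data size
weights = num_points / num_points.sum()
theta_global = sum(w * theta for w, theta in zip(weights, client_params))

The server would then broadcast theta_global back to the participating clients for the next round.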
However, the prevention of catastrophic forgetting at the client level is a problem orthogonal to federated learning.\nThus we focus on challenges that newly arise in this federated continual learning setting. In the federated continual learning framework, the aggregation of the parameters into a global parameter θ_G allows inter-client knowledge transfer across clients, since a task T_i^{(q)} learned at client c_i at round q may be similar or related to T_j^{(r)} learned at client c_j at round r. Yet, using a single aggregated parameter θ_G may be suboptimal in achieving this goal, since knowledge from irrelevant tasks may not be useful, or may even hinder the training at each client by altering its parameters in incorrect directions, which we describe as inter-client interference. Another problem that is also practically important is communication efficiency. Both the parameter transmission from the client to the server and from the server to the client incur large communication costs, which is problematic in the continual learning setting, since the clients may train on possibly unlimited streams of tasks." }, { "heading": "3.3 FEDERATED WEIGHTED INTER-CLIENT TRANSFER", "text": "How can we then maximize the knowledge transfer between clients while minimizing the inter-client interference and the communication cost? We now describe our model, Federated Weighted Inter-client Transfer (FedWeIT), which can resolve these two problems that arise with a naive combination of continual learning approaches and the federated learning framework.\nThe main cause of the problems, as briefly alluded to earlier, is that the knowledge of all tasks learned at multiple clients is stored in a single set of parameters θ_G. However, for the knowledge transfer to be effective, each client should selectively utilize only the knowledge of the relevant tasks trained at other clients. This selective transfer is also the key to minimizing inter-client interference, since it disregards the knowledge of irrelevant tasks that may interfere with learning.\nWe tackle this problem by decomposing the parameters into three different types of parameters with different roles: global parameters (θ_G) that capture the global and generic knowledge across all clients, local base parameters (B) which capture generic knowledge for each client, and task-adaptive parameters (A) for each specific task per client, motivated by Yoon et al. (2020). The set of model parameters θ_c^{(t)} for task t at continual learning client c_c is then defined as follows:\nθ_c^{(t)} = B_c^{(t)} ⊙ m_c^{(t)} + A_c^{(t)} + Σ_{i∈C\\c} Σ_{j<|t|} α_{i,j}^{(t)} A_i^{(j)}    (1)\nwhere B_c^{(t)} ∈ {R^{I_l×O_l}}_{l=1}^{L} is the set of base parameters of the c-th client, shared across all tasks in the client, m_c^{(t)} ∈ {R^{O_l}}_{l=1}^{L} is the set of sparse vector masks which adaptively transform B_c^{(t)} for task t, and A_c^{(t)} ∈ {R^{I_l×O_l}}_{l=1}^{L} is the set of sparse task-adaptive parameters at client c_c. Here, L is the number of layers in the neural network, and I_l, O_l are the input and output dimensions of the weights at layer l, respectively.\nThe first term allows selective utilization of the global knowledge. We want the base parameter B_c^{(t)} at each client to capture generic knowledge across all tasks and all clients. In Figure 2 (a), we initialize it at each round with the global parameter from the previous round, θ_G^{(t−1)}, which aggregates the parameters sent from the clients. This allows B_c^{(t)} to also benefit from the global knowledge about all the tasks. 
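A minimal numpy sketch of the composition in Eq. (1) for a single layer; the shapes, sparsity levels, and attention values are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
I, O = 64, 32                                    # input/output dims of one layer
B = rng.standard_normal((I, O))                  # base parameter B_c^{(t)}
m = (rng.random(O) < 0.3).astype(float)          # sparse mask m_c^{(t)} in R^{O}
A = rng.standard_normal((I, O)) * (rng.random((I, O)) < 0.03)  # sparse task-adaptive A_c^{(t)}

# task-adaptive parameters received from other clients, with learned attention alpha
A_others = [rng.standard_normal((I, O)) * (rng.random((I, O)) < 0.03) for _ in range(3)]
alpha = np.array([0.5, 0.3, 0.2])

# Eq. (1): theta = B (masked elementwise over output dims) + A + sum_j alpha_j A_j
theta = B * m + A + sum(a * Aj for a, Aj in zip(alpha, A_others))

Note that the mask broadcasts over rows, so it selects output dimensions of B, matching m_c^{(t)} ∈ R^{O_l}.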
However, since θ_G^{(t−1)} also contains knowledge irrelevant to the current task, instead of using it as is, we learn the sparse mask m_c^{(t)} to select only the parameters relevant to the given task. This sparse parameter selection helps minimize inter-client interference, and also allows for efficient communication. The second term is the task-adaptive parameter A_c^{(t)}. Since we additively decompose the parameters, this term learns to capture knowledge about the task that is not captured by the first term, and thus captures knowledge specific to the task T_c^{(t)}. The final term describes weighted inter-client knowledge transfer. We have a set of parameters transmitted from the server, which contains all task-adaptive parameters from all the clients. To selectively utilize these indirect experiences from other clients, we further allocate an attention α_c^{(t)} on these parameters, to take a weighted combination of them. By learning this attention, each client can select only the relevant task-adaptive parameters that help learn the given task. Although we design A_i^{(j)} to be highly sparse, using about 2-3% of the memory of the full parameter in practice, sending all task knowledge is not desirable. Thus we only transmit the task-adaptive parameters of the previous task (t−1), which we empirically find to achieve good results in practice.\nTraining. We learn the decomposable parameter θ_c^{(t)} by optimizing the following objective:\nminimize_{B_c^{(t)}, m_c^{(t)}, A_c^{(1:t)}, α_c^{(t)}}  L(θ_c^{(t)}; T_c^{(t)}) + λ_1 Ω({m_c^{(t)}, A_c^{(1:t)}}) + λ_2 Σ_{i=1}^{t−1} ‖ΔB_c^{(t)} ⊙ m_c^{(i)} + ΔA_c^{(i)}‖_2^2,    (2)\nwhere L is a loss function and Ω(·) is a sparsity-inducing regularization term on all task-adaptive parameters and the masking variables (we use ℓ1-norm regularization), to make them sparse. The second regularization term is used for the retroactive update of the past task-adaptive parameters: it helps the task-adaptive parameters maintain the original solutions for their target tasks by reflecting the change of the base parameter (see the sketch after Figure 3 below). Here, ΔB_c^{(t)} = B_c^{(t)} − B_c^{(t−1)} is the difference between the base parameter at the current and previous timestep, and ΔA_c^{(i)} is the difference between the task-adaptive parameter for task i at the current and previous timestep. This regularization is essential for preventing catastrophic forgetting. λ_1 and λ_2 are hyperparameters controlling the effect of the two regularizers." }, { "heading": "3.4 EFFICIENT COMMUNICATION VIA SPARSE PARAMETERS", "text": "FedWeIT learns via server-to-client communication. As discussed earlier, a crucial challenge here is to reduce the communication cost. We describe what happens at the client and at the server at each step.\nAlgorithm 1 Federated Weighted Inter-client Transfer\ninput: Dataset {D_c^{(1:t)}}_{c=1}^{C}, and global parameter θ_G\noutput: {B_c, m_c^{(1:t)}, α_c^{(1:t)}, A_c^{(1:t)}}_{c=1}^{C}\n1: Initialize B_c to θ_G for all c ∈ C ≡ {1, ..., C}\n2: for task t = 1, 2, ... do\n3:   for round r = 1, 2, ..., R do\n4:     Transmit B̂_c^{(t,r)} and A_c^{(t−1,R)} of client c_c to the server\n5:     Compute θ_G^{(r)} ← (1/|C|) Σ_{c∈C} B̂_c^{(t,r)}\n6:     Distribute θ_G^{(r)} and {A_j^{(t−1,R)}}_{j∈C} to client c\n7:     Minimize Eq. (2) to solve each local CL problem\n8:   end for\n9: end for\nFigure 3: Configuration of task sequences: We first split a dataset D into multiple sub-tasks in a non-IID manner ((a) and (b)). Then, we distribute them to multiple clients (C#). Mixed tasks from multiple datasets (colored circles) are distributed across all clients ((c)). 
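To make the objective in Eq. (2) concrete, here is a minimal numpy sketch of its two regularization terms; the task loss L is assumed computed elsewhere, and the helper name and argument layout are illustrative assumptions:

import numpy as np

def fedweit_penalties(B, m_list, A_list, B_prev, A_prev_list, lam1, lam2):
    """Regularizers of Eq. (2): l1 sparsity plus the retroactive drift penalty."""
    t = len(m_list)                               # current task index (1-based)
    # lambda1 * Omega: l1-norm on the current mask and all task-adaptive parameters
    sparsity = np.abs(m_list[-1]).sum() + sum(np.abs(A).sum() for A in A_list)
    # lambda2 * sum_{i<t} || dB * m_i + dA_i ||_2^2, with dB = B - B_prev
    dB = B - B_prev
    drift = sum(np.sum((dB * m_list[i] + (A_list[i] - A_prev_list[i])) ** 2)
                for i in range(t - 1))
    return lam1 * sparsity + lam2 * drift

In an actual training loop these penalties would be added to the task loss and minimized jointly over B, the masks, the task-adaptive parameters, and the attention weights, as in Algorithm 1.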
Client: At each round r, each client c_c partially updates its base parameter with the nonzero components of the global parameter sent from the server; that is, B_c(n) = θ_G(n), where n is a nonzero element of the global parameter. After training the model using Eq. (2), it obtains a sparsified base parameter B̂_c^{(t)} = B_c^{(t)} ⊙ m_c^{(t)} and the task-adaptive parameter A_c^{(t)} for the new task, both of which are sent to the server at a smaller cost compared to naive FCL baselines. While naive FCL baselines require |C| × R × |θ| for client-to-server communication, FedWeIT requires |C| × (R × |B̂| + |A|), where R is the number of communication rounds per task and |·| is the number of parameters.\nServer: The server first aggregates the base parameters sent from all the clients by taking a weighted average of them: θ_G = (1/|C|) Σ_{i∈C} B̂_i^{(t)}. Then, it broadcasts θ_G to all the clients. The task-adaptive parameters of task t−1, {A_i^{(t−1)}}_{i∈C\\c}, are broadcast only once per client during the training of task t. While naive FCL baselines require |C| × R × |θ| for server-to-client communication, FedWeIT requires |C| × (R × |θ_G| + (|C| − 1) × |A|), in which θ_G and A are highly sparse. We describe the FedWeIT procedure in Algorithm 1. For a detailed version of the algorithm, please see Section D in the appendix." }, { "heading": "4 EXPERIMENTS", "text": "We validate FedWeIT under different configurations of task sequences, built from two datasets, namely Overlapped-CIFAR-100 and NonIID-50, against relevant baselines. 1) Overlapped-CIFAR-100: We group the 100 classes of the CIFAR-100 dataset into 20 non-IID superclass tasks. Then, we randomly sample 10 tasks out of the 20 and split their instances to create a task sequence for each of the clients, with overlapping tasks across clients. 2) NonIID-50: We use the following eight benchmark datasets: MNIST (LeCun et al., 1998), CIFAR-10/-100 (Krizhevsky & Hinton, 2009), SVHN (Netzer et al., 2011), Fashion-MNIST (Xiao et al., 2017), Not-MNIST (Bulatov, 2011), FaceScrub (Ng & Winkler, 2014), and TrafficSigns (Stallkamp et al., 2011). We split the classes in the 8 datasets into 50 non-IID tasks, each of which is composed of 5 classes that are disjoint from the classes used for the other tasks. This is a large-scale experiment, containing 280,000 images of 293 classes from 8 heterogeneous datasets. After generating and processing the tasks, we randomly distribute them to multiple clients, as illustrated in Figure 3.\nExperimental setup We use a modified version of LeNet (LeCun et al., 1998) for the experiments with both the Overlapped-CIFAR-100 and NonIID-50 datasets. Further, we use ResNet-18 (He et al., 2016) with the NonIID-50 dataset. We follow the other experimental setups of Serrà et al. (2018) and Yoon et al. (2020). For detailed descriptions of the task configuration and the hyperparameters used, please see Section B in the appendix. Also, for more ablation studies, please see Section C in the appendix.\nBaselines and our model 1) STL: Single Task Learning at each arriving task. 2) Local-EWC: Individual continual learning with EWC (Kirkpatrick et al., 2017) per client. 3) Local-APD: Individual continual learning with APD (Yoon et al., 2020) per client. 4) FedProx: FCL using the FedProx (Li et al., 2018) algorithm. 5) Scaffold: FCL using the Scaffold (Karimireddy et al., 2020) algorithm. 6) FedCurv: FCL using the FedCurv (Shoham et al., 2019) algorithm. 7) FedProx-[model]: FCL trained using the FedProx algorithm with [model]. 8) FedWeIT: Our FedWeIT algorithm." 
}, { "heading": "4.1 EXPERIMENTAL RESULTS", "text": "We first validate our model on both Overlapped-CIFAR-100 and NonIID-50 task sequences against single task learning (STL), continual learning (EWC, APD), federated learning (FedProx, Scaffold, FedCurv), and naive federated continual learning (FedProx-based) baselines. Table 1 shows the final average per-task performance after the completion of (federated) continual learning on both datasets. We observe that FedProx-based federated continual learning (FCL) approaches degenerate the performance of continual learning (CL) methods over the same methods without federated learning. This is because the aggregation of all client parameters that are learned on irrelevant tasks results in severe interference in the learning for each task, which leads to catastrophic forgetting and suboptimal task adaptation. Scaffold achieves poor performance on FCL, as its regularization on the local gradients is harmful for FCL, where all clients learn from a different task sequences. While FedCurv reduces inter-task disparity in parameters, it cannot minimize inter-task interference, which results it to underperform single-machine CL methods. On the other hand, FedWeIT significantly outperforms both single-machine CL baselines and naive FCL baselines on both datasets. Even with larger number of clients (C = 100), FedWeIT consistently outperforms all baselines (Figure 4). This improvement largely owes to FedWeIT’s ability to selectively utilize the knowledge from other clients to rapidly adapt to the target task, and obtain better final performance (Figure 4 Left).\nThe fast adaptation to new task is another clear advantage of inter-client knowledge transfer. To further demonstrate the practicality of our method with larger networks, we experiment on Non-IID dtaset with ResNet-18 (Table 2), on which FedWeIT still significantly outperforms the strongest baseline (FedProx-APD) while using fewer parameters. Also, our model is not sensitive to the hyperparameters λ1 and λ2, if they are within reasonable scales (Figure 6 Left).\nEfficiency of FedWeIT We also report the accuracy as a function of network capacity in Table 1, 2, which we measure by the number of parameters used. We observe that FedWeIT obtains much higher accuracy while utilizing less number of parameters compared to FedProx-APD. This efficiency mainly comes from the reuse of task-adaptive parameters from other clients, which is not possible with single-machine CL methods or naive FCL methods. We also examine the communication cost (the size of non-zero parameters transmitted) of each method. Table 1 reports both the client-to-server (C2S) / server-to-client (S2C) communication cost at training each task. FedWeIT, uses only 30%\nand 3% of parameters for B̂ and A of the dense models respectively. We observe that FedWeIT is significantly more communication-efficient than FCL baselines although it broadcasts task-adaptive parameters, due to high sparsity of the parameters. Figure 5 (a) shows the accuracy as a function of C2S cost according to a transmission of top-κ% informative parameters. Since FedWeIT selectively utilizes task-specific parameters learned from other clients, it results in superior performance over APD-baselines especially with sparse communication of model parameters.\nCatastrophic forgetting Further, we examine how the performance of the past tasks change during continual learning, to see the severity of catastrophic forgetting with each method. 
Catastrophic forgetting Further, we examine how the performance on the past tasks changes during continual learning, to see the severity of catastrophic forgetting with each method. Figure 6 Left shows the performance of FedWeIT and the FCL baselines on the 6th and 8th tasks, at the end of training for later tasks. We observe that naive FCL baselines suffer from more severe catastrophic forgetting than local continual learning with EWC, because of the inter-client interference, where the knowledge of irrelevant tasks from other clients overwrites the knowledge of the past tasks. Contrarily, our model shows no sign of catastrophic forgetting. This is mainly due to the selective utilization of the prior knowledge learned from other clients through the global/task-adaptive parameters, which allows it to effectively alleviate inter-client interference. FedProx-APD also does not suffer from catastrophic forgetting, but it yields inferior performance due to ineffective knowledge transfer. We also report Backward Transfer (BWT), a measure of catastrophic forgetting, for all models (the more positive, the better). We provide the details of BWT in Section B of the appendix.\nWeighted inter-client knowledge transfer By analyzing the attention α in Eq. (1), we examine which task parameters from other clients each client selects. Figure 5 (b) shows examples of the attention weights learned for the 0th split of MNIST and the 10th split of CIFAR-100. We observe that large attention weights are allocated to the task parameters from the same dataset (CIFAR-100 utilizes parameters from CIFAR-100 tasks with disjoint classes), or from a similar dataset (MNIST utilizes parameters from TrafficSigns and SVHN). This shows that FedWeIT effectively selects beneficial parameters to maximize inter-client knowledge transfer. This is an impressive result, since it does not know which datasets the parameters were trained on." }, { "heading": "5 CONCLUSION", "text": "We tackled a novel problem of federated continual learning, whose goal is to continuously learn local models at each client while allowing them to utilize indirect experience (task knowledge) from other clients. This poses new challenges such as inter-client knowledge transfer and the prevention of inter-client interference between irrelevant tasks. To tackle these challenges, we additively decomposed the model parameters at each client into global parameters that are shared across all clients, and sparse local task-adaptive parameters that are specific to each task. Further, we allowed each model to selectively update the global task-shared parameters and to selectively utilize the task-adaptive parameters from other clients. The experimental validation of our model under varying task similarity across clients, against existing federated learning and continual learning baselines, shows that our model significantly outperforms the baselines with a reduced communication cost. We believe that federated continual learning is a practically important topic of large interest to the research communities of both continual learning and federated learning, and that it will lead to new research directions." }, { "heading": "A APPENDIX", "text": "Organization The appendix is organized as follows: In Section B, we further describe the experimental details, including the network architecture, hyperparameter configurations, forgetting measures, and datasets. We then report additional experimental results in Section C, on the effect of the communication frequency (Section C.1), together with an additional ablation study of the model components on the Overlapped-CIFAR-100 dataset (Section C.2). We include a detailed algorithm for our FedWeIT in Section D." 
}, { "heading": "B EXPERIMENTAL DETAILS", "text": "We further provide the experimental settings in detail, including the descriptions of the network architectures, hyperparameters, and dataset configuration.\nNetwork architecture We utilize a modified version of LeNet and a conventional ResNet-18 as the backbone network architectures for validation. In the LeNet, the first two layers are convolutional neural layers of 20 and 50 filters with the 5 × 5 convolutional kernels, which are followed by the two fully-connected layers of 800 and 500 units each. Rectified linear units activations and local response normalization are subsequently applied to each layers. We use 2 × 2 max-pooling after each convolutional layer. All layers are initialized based on the variance scaling method. Detailed description of the architecture for LeNet is given in Table 3.\nConfigurations We use an Adam optimizer with adaptive learning rate decay, which decays the learning rate by a factor of 3 for every 5 epochs with no consecutive decrease in the validation loss. We stop training in advance and start learning the next task (if available) when the learning rate reaches ρ. The experiment for LeNet with 5 clients, we initialize by 1e−3 × 13 at the beginning of each new task and ρ = 1e−7. Mini-batch size is 100, the rounds per task is 20, an the epoch per round is 1. The setting for ResNet-18 is identical, excluding the initial learning rate, 1e−4. In the case of\nexperiments with 20 and 100 clients, we set the same settings except reducing minibatch size from 100 to 10 with an initial learning rate 1e−4. We use client fraction 0.25 and 0.05, respectively, at each communication round. we set λ1 = [1e−1, 4e−1] and λ2 = 100 for all experiments. Further, we use µ = 5e−3 for FedProx, λ = [1e−2, 1.0] for EWC and FedCurv. We initialize the attention parameter α(t)c as sum to one, α (t) c,j ← 1/|α (t) c |.\nBackward-transfer (BWT) Backward transfer (BWT) is a measure for catastrophic forgetting. BWT compares the performance disparity of previous tasks after learning current task as follows:\nBWT = 1 T − 1 ∑ i<T P (T ) i − P (i) i , (3)\nwhere P (T )i is the performance of task i after task T is learned (i < T ). Thus, a large negative backward transfer value indicates that the performance has been substantially reduced, in which case catastrophic forgetting has happened.\nFor NonIID-50 dataset, we utilize 8 heterogenous datasets and create 50 non-iid tasks in total as shown in Table 4. Then we arbitrarily select 10 tasks without duplication and distribtue them to 5 clients. The average performance of single task learning on the dataset is 85.78± 0.17(%), measured by our base LeNet architecture.\nDatasets We create both Overlapped-CIFAR-100 and NonIID-50 datasets. For Overlapped-CIFAR100, we generate 20 non-iid tasks based on 20 superclasses, which hold 5 subclasses. We split instances of 20 tasks according to the number of clients (5, 20, and 100) and then distribute the tasks across all clients." }, { "heading": "C ADDITIONAL EXPERIMENTAL RESULTS", "text": "We further include a quantitative analysis about the communication round frequency and additional experimental results across the number of clients.\nC.1 EFFECT OF THE COMMUNICATION FREQUENCY\nWe provide an analysis on the effect of the communication frequency by comparing the performance of the model, measured by the number of training epochs per communication round. We run the 4 different FedWeIT with 1, 2, 5, and 20 training epochs per round. 
Figure 7 shows the performance of our FedWeIT variants. As the clients update the model parameters more frequently through communication with the central server, the model achieves higher performance while maintaining a smaller network capacity, since frequent communication lets the model efficiently update its parameters while transferring inter-client knowledge. However, this requires much heavier communication costs than the models with sparser communication. For example, the model trained for 1 epoch per round may incur about 16.9 times larger total communication cost than the model trained for 20 epochs per round. Hence, there is a trade-off between the model performance of federated continual learning and the communication efficiency, whereas the FedWeIT variants consistently outperform the (federated) continual learning baselines.\nC.2 ABLATION STUDY FOR MODEL COMPONENTS\nWe perform an ablation study to analyze the role of each component of our FedWeIT. We compare the performance of four different variations of our model. w/o B communication describes the model that does not transfer the base parameter B and only communicates the task-adaptive ones. w/o A communication is the model that does not communicate the task-adaptive parameters. w/o A is the model that trains only with sparse transmission of the local base parameter, and w/o m is the model without the sparse vector mask.\nAs shown in Table 6, without communicating B or A, the model yields significantly lower performance compared to the full model, since it does not benefit from inter-client knowledge transfer. The model w/o A obtains very low performance due to catastrophic forgetting, and the model w/o the sparse mask m achieves lower accuracy with larger capacity and cost, which demonstrates the importance of performing selective transmission." }, { "heading": "D DETAILED ALGORITHM FOR FEDWEIT", "text": "Algorithm 2 Algorithm for FedWeIT\ninput: Dataset {D_c^{(1:t)}}_{c=1}^{C}, and global parameter θ_G\noutput: {B_c, m_c^{(1:t)}, α_c^{(1:t)}, A_c^{(1:t)}}_{c=1}^{C}\n1: Initialize B_c to θ_G for all c ∈ C ≡ {1, ..., C}\n2: for task t = 1, 2, ... do\n3:   for round r = 1, 2, ..., R do\n4:     Select communicable clients C^{(r)} ⊆ C\n5:     if r = 1 then\n6:       A_c^{(t−1,R)} and B̂_c^{(t,r)} of clients c ∈ C^{(r)} are transmitted to the central server\n7:       Set a new knowledge base kb^{(t−1)} = {A_j^{(t−1,R)}}_{j∈C^{(1)}}\n8:     else\n9:       B̂_c^{(t,r)} of clients c ∈ C^{(r)} are transmitted to the central server\n10:    end if\n11:    Update θ_G^{(r)} ← (1/|C^{(r)}|) Σ_{c∈C^{(r)}} B̂_c^{(t,r)}\n12:    Distribute θ_G^{(r)} and kb^{(t−1)} to client c ∈ C^{(r)} if c_c meets kb^{(t−1)} for the first time; otherwise distribute only θ_G^{(r)}\n13:    Minimize Eq. (2) to solve each local CL problem\n14:  end for\n15: end for" }, { "heading": "E FEDWEIT FOR ASYNCHRONOUS FEDERATED CONTINUAL LEARNING", "text": "Algorithm 3 Algorithm for Asynchronous FedWeIT\ninput: Dataset {D_c^{(1:t)}}_{c=1}^{C}, and global parameter θ_G\noutput: {B_c, m_c^{(1:t)}, α_c^{(1:t)}, A_c^{(1:t)}}_{c=1}^{C}\n1: Initialize B_c to θ_G for all c ∈ C ≡ {1, ..., C}\n2: The knowledge base kb ← {}\n3: for round r = 1, 2, ... do
4: if all clients have finished training then\n5:   break\n6: else\n7:   Select communicable clients C^{(r)} ⊆ C\n8:   for c ∈ C^{(r)} do\n9:     if a new task t′ has arrived at client c_c then\n10:      Update the knowledge base kb ← kb ∪ {A_c^{(t′−1)}} at the central server\n11:    end if\n12:    B̂_c^{(r)} is transmitted from client c_c to the central server\n13:  end for\n14:  Update θ_G^{(r)} ← (1/|C^{(r)}|) Σ_{c∈C^{(r)}} B̂_c^{(r)}\n15:  if a client c_c ∈ C^{(r)} is still learning then\n16:    Distribute θ_G^{(r)} and kb′ ⊆ kb to client c_c if a new task has arrived; otherwise distribute only θ_G^{(r)}\n17:    Minimize Eq. (2) to solve the local CL problems at client c_c\n18:  end if\n19: end if\n20: end for\nWe now consider FedWeIT under the asynchronous federated continual learning scenario, where there is no synchronization across clients for each task. This is a more realistic scenario, since each task may require a different number of training rounds to converge during federated continual learning. Here, asynchronous implies that each task requires different training costs (i.e., time, epochs, or rounds). Under the asynchronous federated learning scenario, FedWeIT transfers any available task-adaptive parameters from the knowledge base (kb′) to each client. We provide the detailed algorithm in Algorithm 3. In Table 7 and Figure 10, we plot the average test accuracy over all tasks during synchronous/asynchronous federated continual learning. As shown in Figure 10, with synchronous FedWeIT, different tasks across clients at the same timestep require the same number of training rounds, and clients receive new tasks and task-adaptive parameters from the knowledge base simultaneously. On the other hand, with asynchronous FedWeIT, each task requires a different number of training rounds, and clients receive new tasks and task-adaptive parameters in an asynchronous manner. The results in Table 7 show that the performance of asynchronous FedWeIT is almost similar to that of the synchronous FedWeIT." } ]
2020
null
SP:517da79468af9d50729358716877de22c41431a9
[ "This work considers the effect of sparsification, quantization and non-linear transormations on the spectrum of a random matrix with respect to performance in downstream applications like clustering. Eigen decomposition of large matrices is computationally very expensive and therefore methods like sparsification is used in order to reduce the computational complexity, while adding some amount of error. Therefore, theoretical bounds which quantize the amount of deviation because of such processing steps is essential to understand how much/little degraded the performance of machine learning applications are because of this step. This work considers the important case of spectral clustering with a 2 class mixture of subexponential models using " ]
Given a large data matrix, sparsifying, quantizing, and/or performing other entrywise nonlinear operations can have numerous benefits, ranging from speeding up iterative algorithms for core numerical linear algebra problems to providing nonlinear filters to design state-of-the-art neural network models. Here, we exploit tools from random matrix theory to make precise statements about how the eigenspectrum of a matrix changes under such nonlinear transformations. In particular, we show that very little change occurs in the informative eigenstructure, even under drastic sparsification/quantization, and consequently that very little downstream performance loss occurs when working with very aggressively sparsified or quantized spectral clustering problems. We illustrate how these results depend on the nonlinearity, we characterize a phase transition beyond which spectral clustering becomes possible, and we show when such nonlinear transformations can introduce spurious non-informative eigenvectors.
[ { "affiliations": [], "name": "Zhenyu Liao" }, { "affiliations": [], "name": "Romain Couillet" }, { "affiliations": [], "name": "Michael W. Mahoney" } ]
[ { "authors": [ "Dimitris Achlioptas", "Frank McSherry" ], "title": "Fast computation of low-rank matrix approximations", "venue": "Journal of the ACM (JACM),", "year": 2007 }, { "authors": [ "Zhidong Bai", "Jack W Silverstein" ], "title": "Spectral analysis of large dimensional random matrices, volume 20", "venue": null, "year": 2010 }, { "authors": [ "Jinho Baik", "Gérard Ben Arous", "Sandrine" ], "title": "Péché. Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices", "venue": "The Annals of Probability,", "year": 2005 }, { "authors": [ "Xiuyuan Cheng", "Amit Singer" ], "title": "The spectrum of random inner-product kernel matrices", "venue": "Random Matrices: Theory and Applications,", "year": 2013 }, { "authors": [ "Tarin Clanuwat", "Mikel Bober-Irizar", "Asanobu Kitamoto", "Alex Lamb", "Kazuaki Yamamoto", "David Ha" ], "title": "Deep learning for classical japanese literature, 2018", "venue": null, "year": 2018 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "ImageNet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Yen Do", "Van Vu" ], "title": "The spectrum of random kernel matrices: universality results for rough and varying kernels", "venue": "Random Matrices: Theory and Applications,", "year": 2013 }, { "authors": [ "Edgar Dobriban", "Stefan Wager" ], "title": "High-dimensional asymptotics of prediction: Ridge regression and classification", "venue": "The Annals of Statistics,", "year": 2018 }, { "authors": [ "Zhen Dong", "Zhewei Yao", "Amir Gholami", "Michael W Mahoney", "Kurt Keutzer" ], "title": "HAWQ: Hessian aware quantization of neural networks with mixed-precision", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Petros Drineas", "Michael W Mahoney" ], "title": "RandNLA: randomized numerical linear algebra", "venue": "Communications of the ACM,", "year": 2016 }, { "authors": [ "Petros Drineas", "Anastasios Zouzias" ], "title": "A note on element-wise matrix sparsification via a matrixvalued Bernstein inequality", "venue": "Information Processing Letters,", "year": 2011 }, { "authors": [ "Noureddine El Karoui" ], "title": "On information plus noise kernel random matrices", "venue": "The Annals of Statistics,", "year": 2010 }, { "authors": [ "Zhou Fan", "Andrea Montanari" ], "title": "The spectral norm of random inner-product kernel matrices", "venue": "Probability Theory and Related Fields,", "year": 2019 }, { "authors": [ "Gene H. Golub", "Charles F. 
Van Loan" ], "title": "Matrix Computations", "venue": null, "year": 2013 }, { "authors": [ "Itay Hubara", "Matthieu Courbariaux", "Daniel Soudry", "Ran El-Yaniv", "Yoshua Bengio" ], "title": "Binarized Neural Networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Neural Tangent Kernel: Convergence and generalization in neural networks", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clement Hongler" ], "title": "The asymptotic spectrum of the Hessian of DNN throughout training", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Antony Joseph", "Bin Yu" ], "title": "Impact of regularization on spectral clustering", "venue": "The Annals of Statistics,", "year": 2016 }, { "authors": [ "Arun Kadavankandy", "Romain Couillet" ], "title": "Asymptotic gaussian fluctuations of spectral clustering eigenvectors", "venue": "IEEE 8th International Workshop on Computational Advances in MultiSensor Adaptive Processing (CAMSAP),", "year": 2019 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Ping Li", "Phan-Minh Nguyen" ], "title": "On random deep weight-tied autoencoders: Exact asymptotic analysis, phase transitions, and implications to training", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Zhenyu Liao", "Romain Couillet" ], "title": "Inner-product kernels are asymptotically equivalent to binary discrete kernels", "venue": "arXiv preprint arXiv:1909.06788,", "year": 2019 }, { "authors": [ "Xiaofan Lin", "Cong Zhao", "Wei Pan" ], "title": "Towards accurate binary convolutional neural network", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Fanghui Liu", "Xiaolin Huang", "Yudong Chen", "Johan AK Suykens" ], "title": "Random features for kernel approximation: A survey in algorithms, theory, and beyond", "venue": "arXiv preprint arXiv:2004.11154,", "year": 2020 }, { "authors": [ "Sifan Liu", "Edgar Dobriban" ], "title": "Ridge Regression: Structure, Cross-Validation, and Sketching", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Anna Lytova", "Leonid Pastur" ], "title": "Central limit theorem for linear eigenvalue statistics of random matrices with independent entries", "venue": "The Annals of Probability,", "year": 2009 }, { "authors": [ "Michael W Mahoney" ], "title": "Randomized Algorithms for Matrices and Data", "venue": "Foundations and Trends® in Machine Learning,", "year": 2011 }, { "authors": [ "Vladimir A Marcenko", "Leonid Andreevich Pastur" ], "title": "Distribution of eigenvalues for some sets of random matrices", "venue": "Mathematics of the USSR-Sbornik,", "year": 1967 }, { "authors": [ "Charles H Martin", "Michael W Mahoney" ], "title": "Traditional and heavy tailed self regularization in neural network models", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Charles H Martin", "Michael W Mahoney" ], "title": "Heavy-tailed universality predicts trends in test accuracies for very large pre-trained deep neural networks", "venue": "In Proceedings of the 2020 SIAM 
International Conference on Data Mining,", "year": 2020 }, { "authors": [ "Arthur Mensch", "Julien Mairal", "Bertrand Thirion", "Gaël Varoquaux" ], "title": "Stochastic subsampling for factorizing huge matrices", "venue": "IEEE Transactions on Signal Processing,", "year": 2017 }, { "authors": [ "Vinay Uday Prabhu" ], "title": "Kannada-MNIST: A new handwritten digits dataset for the Kannada language", "venue": "arXiv preprint arXiv:1908.01242,", "year": 2019 }, { "authors": [ "Ali Rahimi", "Benjamin Recht" ], "title": "Random features for large-scale kernel machines", "venue": "In Advances in neural information processing systems,", "year": 2008 }, { "authors": [ "Farbod Roosta-Khorasani", "Michael W Mahoney" ], "title": "Sub-sampled Newton methods", "venue": "Mathematical Programming,", "year": 2019 }, { "authors": [ "Walter Rudin" ], "title": "Principles of mathematical analysis, volume 3. McGraw-hill", "venue": "New York,", "year": 1964 }, { "authors": [ "Alaa Saade", "Florent Krzakala", "Lenka Zdeborová" ], "title": "Spectral clustering of graphs with the Bethe Hessian", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Mohamed El Amine Seddik", "Mohamed Tamaazousti", "Romain Couillet" ], "title": "A kernel random matrixbased approach for sparse PCA", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Mohamed El Amine Seddik", "Cosme Louart", "Mohamed Tamaazousti", "Romain Couillet" ], "title": "Random matrix theory proves that deep learning representations of GAN-data behave as gaussian mixtures", "venue": null, "year": 2001 }, { "authors": [ "Sheng Shen", "Zhen Dong", "Jiayu Ye", "Linjian Ma", "Zhewei Yao", "Amir Gholami", "Michael W Mahoney", "Kurt Keutzer" ], "title": "Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Jack W Silverstein", "ZD Bai" ], "title": "On the empirical distribution of eigenvalues of a class of large dimensional random matrices", "venue": "Journal of Multivariate analysis,", "year": 1995 }, { "authors": [ "Jack W Silverstein", "Sang-Il Choi" ], "title": "Analysis of the limiting spectral distribution of large dimensional random matrices", "venue": "Journal of Multivariate Analysis,", "year": 1995 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting. 
The Journal of Machine Learning Research", "venue": null, "year": 2014 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Terence Tao", "Van Vu", "Manjunath Krishnapur" ], "title": "Random matrices: Universality of ESDs and the circular law", "venue": "The Annals of Probability,", "year": 2010 }, { "authors": [ "Leena C Vankadara", "Debarghya Ghoshdastidar" ], "title": "On the optimality of kernels for high-dimensional clustering", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Dan Voiculescu" ], "title": "Addition of certain non-commuting random variables", "venue": "Journal of functional analysis,", "year": 1986 }, { "authors": [ "Ulrike Von Luxburg" ], "title": "A tutorial on spectral clustering", "venue": "Statistics and computing,", "year": 2007 }, { "authors": [ "Shusen Wang", "Fred Roosta", "Peng Xu", "Michael W Mahoney" ], "title": "GIANT: Globally improved approximate Newton method for distributed optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Eugene P. Wigner" ], "title": "Characteristic vectors of bordered matrices with infinite dimensions", "venue": "Annals of Mathematics,", "year": 1955 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017", "venue": null, "year": 2017 }, { "authors": [ "Tayeb Zarrouk", "Romain Couillet", "Florent Chatelain", "Nicolas Le Bihan" ], "title": "Performance-complexity trade-off in large dimensional statistics", "venue": "IEEE International Workshop on Machine Learning for Signal Processing (MLSP)", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Sparsifying, quantizing, and/or performing other entry-wise nonlinear operations on large matrices can have many benefits. Historically, this has been used to develop iterative algorithms for core numerical linear algebra problems (Achlioptas & McSherry, 2007; Drineas & Zouzias, 2011). More recently, this has been used to design better neural network models (Srivastava et al., 2014; Dong et al., 2019; Shen et al., 2020). A concrete example, amenable to theoretical analysis and ubiquitous in practice, is provided by spectral clustering, which can be solved by retrieving the dominant eigenvectors of XTX, for X = [x1, . . . ,xn] ∈ Rp×n a large data matrix (Von Luxburg, 2007). When the amount of data n is large, the Gram “kernel” matrix XTX can be enormous, impractical even to form and leading to computationally unaffordable algorithms. For instance, Lanczos iteration that operates through repeated matrix-vector multiplication suffers from an O(n2) complexity (Golub & Loan, 2013) and quickly becomes burdensome.\nOne approach to overcoming this limitation is simple subsampling: dividing X into subsamples of size εn, for some ε ∈ (0, 1), on which one performs parallel computation, and then recombining. This leads to computational gain, but at the cost of degraded performance, since each data point xi looses the cumulative effect of comparing to the whole dataset. An alternative cost-reduction procedure consists in uniformly randomly “zeroing-out” entries from the whole matrix XTX, resulting in a sparse matrix with only an ε fraction of nonzero entries. For spectral clustering, by focusing on the eigenspectrum of the “zeroed-out” matrix, Zarrouk et al. (2020) showed that the same computational gain can be achieved at the cost of a much less degraded performance: for n/p rather large, almost no degradation is observed down to very small values of ε (e.g., ε ≈ 2% for n/p & 100). Previous efforts showed that it is often advantageous to perform sparsification/quantization in a nonuniform manner, rather than uniformly (Achlioptas & McSherry, 2007; Drineas & Zouzias, 2011). The focus there, however, is often on (non-asymptotic bounds of) the approximation error between\nthe original and the sparsified/quantized matrices. This, however, does not provide a direct access to the actual performance for spectral clustering or other downstream tasks of interest, e.g., since the top eigenvectors are known to exhibit a phase transition phenomenon (Baik et al., 2005; Saade et al., 2014). That is, they can behave very differently from those of the original matrix, even if the matrix after treatment is close in operator or Frobenius norm to the original matrix.\nHere, we focus on a precise characterization of the eigenstructure of XTX after entry-wise nonlinear transformation such as sparsification or quantization, in the large n, p regime, by performing simultaneously non-uniform sparsification and/or quantization (down to binarization). We consider a simple mixture data model with x ∼ N (±µ, Ip) and let K ≡ f(XTX/ √ p)/ √ p, where f is an entry-wise thresholding/quantization operator (thereby zeroing-out/quantizing entries of XTX); and we prove that this leads to significantly improved performances, with the same computational cost, in spectral clustering as uniform sparsification, but for a much reduced cost in storage induced by quantization. The only (non-negligible) additional cost arises from the extra need for evaluating each entry of XTX. 
Our main technical contribution (of independent interest, e.g., for those interested in entry-wise nonlinear transformations of feature matrices) consists in using random matrix theory (RMT) to derive the large n, p asymptotics of the eigenspectrum of K = f(X^T X/√p)/√p for a wide range of functions f, and then comparing to previously-established results for uniform subsampling and sparsification in (Zarrouk et al., 2020). Experiments on real-world data further corroborate our findings.\nOur main contributions are the following.\n1. We derive the limiting eigenvalue distribution of K as n, p → ∞ (Theorem 1), and we identify: (a) the existence of non-informative and isolated eigenvectors of K for some f (Corollary 1); (b) in the absence of such eigenvectors, a phase transition in the dominant eigenvalue-eigenvector (λ̂, v̂) pair (Corollary 2): if the signal-to-noise ratio (SNR) ‖µ‖² of the data exceeds a certain threshold γ, then λ̂ becomes isolated from the main bulk (Von Luxburg, 2007; Joseph & Yu, 2016; Baik et al., 2005) and v̂ contains data class-structure information exploitable for clustering; if not, then v̂ contains only noise and is asymptotically orthogonal to the class-label vector.\n2. Letting f be a sparsification, quantization, or binarization operator, we obtain: (a) a selective non-uniform sparsification operator, such that X^T X can be drastically sparsified with very little degradation in clustering performance (Proposition 1 and Section 4.2), which significantly outperforms the random uniform sparsification scheme in (Zarrouk et al., 2020); (b) for a given matrix storage budget (i.e., a fixed number of bits to store K), an optimal design of the quantization/binarization operators (Proposition 2 and Section 4.3), the performances of which are compared against the original X^T X and its sparsified but not quantized version.\nFor spectral clustering, the surprisingly small performance drop, accompanied by a huge reduction in computational cost, contributes to improved algorithms for large-scale problems. More generally, our proposed analysis sheds light on the effect of entry-wise nonlinear transformations on the eigenspectra of data/feature matrices. Thus, looking forward (and perhaps more importantly, given the use of nonlinear transformations in designing modern neural network models, as well as the recent interest in applying RMT to neural network analyses (Dobriban et al., 2018; Li & Nguyen, 2018; Seddik et al., 2018; Jacot et al., 2019; Liu & Dobriban, 2019)), we expect that our analysis opens the door to improved analyses of computationally efficient methods for large dimensional machine learning and neural network models more generally." }, { "heading": "2 SYSTEM MODEL AND PRELIMINARIES", "text": "Basic setup. Let x_1, . . . , x_n ∈ R^p be independently drawn (not necessarily uniformly) from a two-class mixture of C_1 and C_2 with\nC_1 : x_i = −µ + z_i,    C_2 : x_i = +µ + z_i    (1)\nwith z_i ∈ R^p having i.i.d. zero-mean, unit-variance, κ-kurtosis, sub-exponential entries, µ ∈ R^p such that ‖µ‖² → ρ ≥ 0 as p → ∞, and v ∈ {±1}^n with [v]_i = −1 for x_i ∈ C_1 and +1 for x_i ∈ C_2.¹ The data matrix X = [x_1, . . . , x_n] ∈ R^{p×n} can be compactly written as X = Z + µv^T for Z = [z_1, . . . , z_n], so that ‖v‖ = √n and both Z and µv^T have operator norm of order O(√p) in the n ∼ p regime.\n¹The norm ‖·‖ is the Euclidean norm for vectors and the operator norm for matrices. 
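As a point of reference for the model in (1), here is a minimal numpy sketch of vanilla spectral clustering with the dense Gram matrix; the dimensions, the SNR value, and the rescaling by 1/p are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
p, n, rho = 256, 1024, 4.0                  # dimensions and SNR (illustrative)
mu = np.sqrt(rho / p) * np.ones(p)          # ||mu||^2 = rho
v = np.sign(rng.standard_normal(n))         # +-1 class labels
Z = rng.standard_normal((p, n))             # noise with zero-mean, unit-variance entries
X = Z + np.outer(mu, v)                     # model (1): x_i = [v]_i * mu + z_i
G = X.T @ X / p                             # (rescaled) dense Gram matrix
eigval, eigvec = np.linalg.eigh(G)
v_hat = eigvec[:, -1]                       # dominant eigenvector
# eigenvectors are sign-ambiguous, so take the better of the two orientations
acc = max(np.mean(np.sign(v_hat) == v), np.mean(np.sign(-v_hat) == v))
print(f"clustering accuracy: {acc:.3f}")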
In this setting, the Gram (or linear kernel) matrix XTX achieves optimal clustering performance on the mixture model (1); see Remark 1 below. However, it consists of a dense n× n matrix, which becomes quickly expensive to store or to perform computation on, as n increases.\nThus, we consider instead the following entry-wise nonlinear transformation of XTX: K = { δi 6=jf(x T i xj/ √ p)/ √ p }n i,j=1\n(2)\nfor f : R → R satisfying some regularity conditions (see Assumption 1 below), where δi 6=j equals 1 for i 6= j and equals 0 otherwise. The diagonal elements f(xTi xi/ √ p) (i) bring no additional information for clustering and (ii) do not scale properly for p large (xTi xi/ √ p = O( √ p) instead of O(1)). Thus, following (El Karoui, 2010; Cheng & Singer, 2013), they are discarded.\nMost of our technical results hold for rather generic functions f , e.g., those of interest beyond sparse quantized spectral clustering, but we are particularly interested in f with nontrivial numerical properties (e.g., promoting quantization and sparsity):\nSparsification: f1(t) = t · 1|t|>√2s (3)\nQuantization: f2(t) = 22−M (bt · 2M−2/ √\n2sc+ 1/2) · 1|t|≤√2s + sign(t) · 1|t|>√2s (4) Binarization: f3(t) = sign(t) · 1|t|>√2s . (5)\nHere, s ≥ 0 is some truncation threshold, and M ≥ 2 is a number of information bits.2 The visual representations of these fs are given in Figure 1-(left). For f3, taking s→ 0 leads to the sign function sign(t). In terms of storage, the quantization f2 consumes 2M−2 +1 bits per nonzero entry, while the binarization f3 takes values in {±1, 0} and thus consumes 1 bit per nonzero entry.\nRandom matrix theory. To provide a precise description of the eigenspectrum of K for the nonlinear f of interest, to be used in the context of spectral clustering, we will provide a large dimensional characterization for the resolvent of K, defined for z ∈ C \\ R+ as\nQ(z) ≡ (K− zIn)−1. (6) This matrix, which plays a central role in RMT analysis (Bai & Silverstein, 2010), will be used in two primary ways. First, the normalized trace 1n tr Q(z) is the so-called Stieltjes transform of the eigenvalue distribution of K, from which the eigenvalue distribution can be recovered, and be further used to characterize the phase transition beyond which spectral clustering becomes theoretically possible (Corollary 2). Second, for (λ̂, v̂), an “isolated” eigenvalue-eigenvector pair of K, and a ∈ Rn, a deterministic vector, by Cauchy’s integral formula, the “angle” between v̂ and a is given by |v̂Ta|2 = − 12πı ∮ Γ(λ̂)\naTQ(z)a dz, where Γ(λ̂) is a positively oriented contour surrounding λ̂ only. Letting a = v, this will be exploited to characterize the spectral clustering error rate (Proposition 1).\nFrom a technical perspective, unlike linear random matrix models, K (and thus Q(z)) involves nonlinear dependence between its entries. To break this difficulty, following the ideas of (Cheng & Singer, 2013), we exploit the fact that, by the central limit theorem, zTi zj/ √ p → N (0, 1) in distribution as p → ∞. As such, up to µvT, which is treated separately with a perturbation argument, the entries of K asymptotically behave like a family of dependent standard Gaussian variables to which f is applied. Expanding f in a series of orthogonal polynomials with respect to the Gaussian measure allows for “unwrapping” this dependence. A few words on the theory of orthogonal polynomials (Andrews et al., 1999) are thus convenient to pursue our analysis.\nOrthogonal polynomial framework. 
Orthogonal polynomial framework. For a probability measure µ, let {P_l(x), l ≥ 0} be the orthonormal polynomials with respect to ⟨f, g⟩ ≡ ∫ fg dµ, obtained by the Gram-Schmidt procedure on the monomials {1, x, x², . . .}, such that P_0(x) = 1, P_l is of degree l, and ⟨P_{l1}, P_{l2}⟩ = δ_{l1−l2}. By the Riesz-Fischer theorem (Rudin, 1964, Theorem 11.43), for any function f ∈ L²(µ), the set of square-integrable functions with respect to ⟨·, ·⟩, one can formally expand f as\nf(x) ∼ Σ_{l=0}^{∞} a_l P_l(x),    a_l = ∫ f(x) P_l(x) µ(dx)    (7)\nwhere \"f ∼ Σ_{l=0}^{∞} a_l P_l\" indicates that ‖f − Σ_{l=0}^{L} a_l P_l‖ → 0 as L → ∞, with ‖f‖² = ⟨f, f⟩.\nTo investigate the asymptotic behavior of K as n, p → ∞, we make the following assumption on f.\nAssumption 1 (Square-integrable in Gauss space). Let ξ_p = z_i^T z_j/√p and let P_{l,p}(x) be the orthonormal polynomials with respect to the measure µ_p of ξ_p. For f ∈ L²(µ_p), f(x) ∼ Σ_{l=0}^{∞} a_{l,p} P_{l,p}(x), with a_{l,p} as in (7), such that (i) Σ_{l=0}^{∞} a_{l,p} P_{l,p}(x) µ_p(dx) converges in L²(µ_p) to f(x) uniformly over large p; and (ii) as p → ∞, Σ_{l=1}^{∞} a_{l,p}² → ν and, for l = 0, 1, 2, a_{l,p} → a_l converges, with a_0 = 0.\nSince ξ_p → N(0, 1) in distribution, the parameters a_0, a_1, a_2 and ν are simply moments of the standard Gaussian measure involving f. More precisely, for ξ ∼ N(0, 1),\na_0 = E[f(ξ)],  a_1 = E[ξ f(ξ)],  √2 a_2 = E[ξ² f(ξ)] − a_0,  ν = E[f²(ξ)] − a_0² ≥ a_1² + a_2².    (8)\nImposing the condition a_0 = 0 simply discards the non-informative rank-one matrix a_0 1_n 1_n^T/√p from K. The three parameters (a_1, a_2, ν) are of crucial significance in determining the spectral behavior of K (see Theorem 1 below). The sparse f_1, quantized f_2, and binary f_3 of our primary interest all satisfy Assumption 1 (counterexamples exist, though, for example f(x) = e^{P(x)} for a polynomial P(x) of degree greater than two), with their corresponding a_2 = 0 (as we shall see in Corollary 1 below, this is important for the spectral clustering use case) and a_1, ν given in Figure 1 (right). With these preliminary comments, we are in position to present our main technical results." }, { "heading": "3 MAIN TECHNICAL RESULTS", "text": "Our main technical result, from which our performance-complexity trade-off analysis will follow, provides an asymptotic deterministic equivalent Q̄(z) for the random resolvent Q defined in (6). (A deterministic equivalent is a deterministic matrix Q̄(z) such that, for any deterministic sequence of matrices A_n ∈ R^{n×n} and vectors a_n, b_n ∈ R^n of bounded (spectral and Euclidean) norms, (1/n) tr A_n(Q(z) − Q̄(z)) → 0 and a_n^T(Q(z) − Q̄(z)) b_n → 0 almost surely as n, p → ∞. We denote this relation Q(z) ↔ Q̄(z).) This is given in the following theorem. The proof is in Appendix A.1.\nTheorem 1 (Deterministic equivalent). Let n, p → ∞ with p/n → c ∈ (0, ∞), let Q(z) be defined as in (6), and let ℑ[·] denote the imaginary part of a complex number. Then, under Assumption 1,\nQ(z) ↔ Q̄(z) = m(z) I_n − V Λ(z) V^T,\nΛ(z) = [ Θ(z) m²(z) ,  (v^T 1_n/n) Θ(z) Ω(z) m(z) ; (v^T 1_n/n) Θ(z) Ω(z) m(z) ,  (v^T 1_n/n)² Θ(z) Ω²(z) − Ω(z) ],\nwith √n V = [v, 1_n], Ω(z) = a_2²(κ−1) m³(z) / (2c² − a_2²(κ−1) m²(z)), Θ(z) = a_1‖µ‖² / (c + a_1 m(z)(1 + ‖µ‖²) + a_1‖µ‖² Ω(z)(v^T 1_n)²/n²), for κ the kurtosis of the entries of Z, and m(z) the unique solution, such that ℑ[m(z)] · ℑ[z] ≥ 0, to\n−1/m(z) = z + a_1² m(z)/(c + a_1 m(z)) + ((ν − a_1²)/c) m(z).    (9)
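As a numerical illustration, the fixed-point equation (9) can be solved iteratively, and the limiting density recovered from ℑ[m(x + ıy)]/π for small y > 0 (the inversion formula recalled in footnote 3 below). The sketch below uses Monte Carlo estimates of the Gaussian moments in (8) for the binarization f_3; the parameter values and the helper name stieltjes are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
c, s = 0.25, 0.5                                           # p/n ratio and threshold (illustrative)
f = lambda t: np.sign(t) * (np.abs(t) > np.sqrt(2 * s))    # binarization f3, Eq. (5)

xi = rng.standard_normal(10**6)                            # Monte Carlo estimates of Eq. (8)
a0 = np.mean(f(xi))                                        # vanishes for odd f
a1 = np.mean(xi * f(xi))
nu = np.mean(f(xi) ** 2) - a0 ** 2

def stieltjes(z, iters=500):
    # fixed-point iteration for m(z) in Eq. (9); typically converges for Im(z) > 0
    m = -1.0 / z
    for _ in range(iters):
        m = -1.0 / (z + a1**2 * m / (c + a1 * m) + (nu - a1**2) * m / c)
    return m

xs = np.linspace(-4, 4, 200)
density = np.imag([stieltjes(x + 1e-4j) for x in xs]) / np.pi   # Stieltjes inversion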
With these preliminary comments, we are in position to present our main technical results." }, { "heading": "3 MAIN TECHNICAL RESULTS", "text": "Our main technical result, from which our performance-complexity trade-off analysis will follow, provides an asymptotic deterministic equivalent Q̄(z) for the random resolvent Q, defined in (6). (A deterministic equivalent is a deterministic matrix Q̄(z) such that, for any deterministic sequence of matrices A_n ∈ R^{n×n} and vectors a_n, b_n ∈ R^n of bounded (spectral and Euclidean) norms, (1/n) tr A_n(Q(z) − Q̄(z)) → 0 and a_n^T(Q(z) − Q̄(z)) b_n → 0 almost surely as n, p → ∞. We denote this relation Q(z) ↔ Q̄(z).) This is given in the following theorem. The proof is in Appendix A.1.\nTheorem 1 (Deterministic equivalent). Let n, p → ∞ with p/n → c ∈ (0,∞), let Q(z) be defined in (6), and let ℑ[·] denote the imaginary part of a complex number. Then, under Assumption 1,\nQ(z) ↔ Q̄(z) = m(z) I_n − V Λ(z) V^T, Λ(z) = [ Θ(z) m²(z) , (v^T 1_n/n) Θ(z) Ω(z) m(z) ; (v^T 1_n/n) Θ(z) Ω(z) m(z) , (v^T 1_n)² Θ(z) Ω²(z)/n² − Ω(z) ],\nwith √n V = [v, 1_n], Ω(z) = a2²(κ−1) m³(z)/(2c² − a2²(κ−1) m²(z)), Θ(z) = a1‖µ‖²/(c + a1 m(z)(1 + ‖µ‖²) + a1‖µ‖² Ω(z)(v^T 1_n)²/n²), for κ the kurtosis of the entries of Z, and m(z) the unique solution, such that ℑ[m(z)] · ℑ[z] ≥ 0, to\n−1/m(z) = z + a1² m(z)/(c + a1 m(z)) + (ν − a1²) m(z)/c. (9)\nThe major implication of Theorem 1 is that the spectral behavior of the matrix K (e.g., its eigenvalue distribution and the isolated eigenpairs as discussed after (6)) depends on the nonlinear f only via the three parameters a1, a2, ν defined in (8). More concretely, the empirical spectral measure ω_n = (1/n) ∑_{i=1}^n δ_{λ_i(K)} of K has a deterministic limit ω as n, p → ∞, uniquely defined through its Stieltjes transform m(z) ≡ ∫ (t − z)^{−1} ω(dt) as the solution to (9).³ This limiting measure ω does not depend on the law of the independent entries of Z, so long as they are sub-exponential, with zero mean and unit variance. In particular, taking a1 = 0 in (9) gives the (rescaled) Wigner semicircle law ω (Wigner, 1955), and taking ν = a1² (i.e., a_l = 0 for l ≥ 2) gives the Marc̆enko-Pastur law ω (Marcenko & Pastur, 1967). See Remark 2 in Appendix A.2 for more discussions.\n³For m(z) the Stieltjes transform of a measure ω, ω can be obtained via ω([a, b]) = lim_{y↓0} (1/π) ∫_a^b ℑ[m(x + ıy)] dx for all a < b continuity points of ω.\nFigure 2: (Left) Histogram of eigenvalues of K (blue) versus the limiting spectrum and spikes (red). (Right) Eigenvectors of the largest (top) and second largest (bottom) eigenvalues of K (blue), versus the rescaled class label αv/√n (red, from Corollary 2). f(t) = sin(t) − 3 cos(t) + 3/√e, p = 800, n = 6 400, µ = 1.1 · 1_p/√p, v = [−1_{n/2}; 1_{n/2}] on Student-t data with κ = 5.\nSpurious non-informative spikes. Going beyond just the limiting spectral measure of K, Theorem 1 also shows that isolated eigenvalues (often referred to as spikes) may be found in the eigenspectrum of K at very specific locations. Such spikes and their associated eigenvectors are typically thought to provide information on the data class-structure (and they do when f is linear). However, when f is nonlinear, this is not always the case: it is possible that not all these spikes are “useful” in the sense of being informative about the class structure. This is made precise in the following, where we assume µ = 0, i.e., that there is no class structure. The proof is in Appendix A.2.\nCorollary 1 (Non-informative spikes). Assume µ = 0, κ ≠ 1 and a2 ≠ 0, so that, in the notations of Theorem 1, Θ(z) = 0 and Q̄(z) = m(z) I_n + Ω(z) 1_n 1_n^T/n for Ω(z) defined in Theorem 1. Then, if x± ≡ ±(1/a2) √(2/(κ−1)) satisfies a1 x± ≠ ±1 and (ν − a1²) x±² + a1² x±²/(1 + a1 x±)² < 1/c, two eigenvalues of K converge to z± = −1/(c x±) − a1² x±/(1 + a1 x±) − (ν − a1²) x±, away from the support of ω. If instead x± = ±1/a1 for a1 ≠ 0, then a single eigenvalue of K isolates, with limit z = −ν/a1 − a1(2 − c)/(2c).\nCorollary 1 (combined with Theorem 1) says that, while the limiting spectrum ω is universal with respect to the distribution of the entries of Z, the existence and position of the spurious non-informative spikes are not universal and depend on the kurtosis κ of the distribution. (See Figure 12 in Appendix A.2 for an example.) This is far from a mathematical curiosity. Given how nonlinear transformations are used in machine learning practice, being aware of the existence of spurious non-informative spikes in the eigenspectrum of K (well separated from the bulk of eigenvalues, but corresponding to random noise instead of signal), as a function of properties of the nonlinear f, is of fundamental importance for downstream tasks. For example, for spectral clustering, their associated eigenvectors may be mistaken as informative ones by spectral clustering algorithms, even in the complete absence of classes. This is confirmed by Figure 2, where two isolated eigenvalues (on the right side of the bulk) are observed, with only the second largest one corresponding to an eigenvector that contains class-label information. For further discussions, see Appendix A.2 and A.3.
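Numerically, m(z) in (9) can be obtained by a simple damped fixed-point iteration for z slightly above the real axis, and the limiting density then follows from the inversion formula of footnote 3. A minimal sketch of ours (the parameter values and the damping factor are illustrative heuristic choices):

```python
import numpy as np

def stieltjes(z, a1, nu, c, iters=2000, damp=0.5):
    """Damped fixed-point iteration for the solution m(z) of (9), with Im z > 0."""
    m = -1.0 / z
    for _ in range(iters):
        m_new = -1.0 / (z + a1**2 * m / (c + a1 * m) + (nu - a1**2) * m / c)
        m = damp * m + (1 - damp) * m_new
    return m

# Limiting spectral density of K at x: omega(x) = Im m(x + i*0+) / pi (footnote 3).
a1, nu, c = 0.5, 1.0, 2.0   # illustrative values; recall nu >= a1**2
xs = np.linspace(-3.0, 3.0, 401)
density = np.array([stieltjes(x + 1e-4j, a1, nu, c).imag / np.pi for x in xs])
```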
Informative spikes. From Theorem 1, we see that the eigenspectrum of K depends on f only via a1, a2, and ν. In particular, a1 and ν determine the limiting spectral measure ω. From Corollary 1, we see that a2 contributes by (i) introducing (at most two) non-informative spikes and (ii) reducing the ratio a1/ν (since ν = ∑_{i≥1} a_i²), thereby necessarily enlarging the support of ω (see Remark 2 in Appendix A.2 for more details). Taking a2 = 0 thus reduces the length of the support of ω and, as such, maximizes the “chance” of the appearance of an informative spike (the eigenvector of which is positively correlated with the label vector v). See Remark 1 below for a more precise statement.\nIn particular, by taking a2 = 0 and a1 ≠ 0, we obtain only informative spikes, and we can characterize a phase transition depending on the SNR ρ. The proof of the following is in Appendix A.3.\nCorollary 2 (Informative spike and a phase transition). For a1 > 0 and a2 = 0, let\nF(x) = x⁴ + 2x³ + (1 − cν/a1²) x² − 2cx − c, G(x) = (a1/c)(1 + x) + a1/x + ((ν − a1²)/a1) · 1/(1 + x), (10)\nand let γ be the largest real solution to F(γ) = 0. Then, under Assumption 1, we have √c ≤ γ ≤ √(cν)/a1, and the largest eigenpair (λ̂, v̂) of K satisfies\nλ̂ → λ = G(ρ) for ρ > γ, and λ = G(γ) for ρ ≤ γ; |v̂^T v|²/n → α = F(ρ)/(ρ(1 + ρ)³) for ρ > γ, and α = 0 for ρ ≤ γ; (11)\nalmost surely as n, p → ∞, p/n → c ∈ (0,∞), where we recall ρ ≡ lim_{p→∞} ‖µ‖² and ‖v‖ = √n.\nWithout loss of generality, we discuss only the case a1 > 0.⁴ For a1 > 0, both F(x) and G(x) are increasing functions on x ∈ (γ,∞). Then, as expected, both λ and α increase with the SNR ρ. Moreover, the phase transition point γ is an increasing function of c and of ν/a1². As such, the optimal f in the sense of the smallest phase transition point is the linear function f(t) = t, with a1 = ν = 1 and γ = √c. This recovers the classical random matrix result in (Baik et al., 2005)." }, { "heading": "4 CLUSTERING PERFORMANCE OF SPARSE AND QUANTIZED OPERATORS", "text": "We start, in Section 4.1, by providing a sharp asymptotic characterization of the clustering error rate and demonstrating the optimality of the linear function under (1). Then, in Section 4.2, we discuss the advantageous performance of the proposed selective sparsification approach (with f1) versus the uniform or subsampling approach studied previously in (Zarrouk et al., 2020). Finally, in Section 4.3, we derive the optimal truncation threshold sopt, for both quantized f2 and binary f3, so as to achieve an optimal performance-complexity trade-off for a given storage budget." }, { "heading": "4.1 PERFORMANCE OF SPECTRAL CLUSTERING", "text": "The technical results in Section 3 provide conditions under which K admits an informative eigenvector v̂ that is non-trivially correlated with the class label vector v (and thus that is exploitable for spectral clustering) in the n, p → ∞ limit. Since the exact (limiting) alignment |v^T v̂| is known, along with an additional argument on the normal fluctuations of v̂, we have the following result for the performance of the spectral clustering method.
The proof is in Appendix A.4.\nProposition 1 (Performance of spectral clustering). Let Assumption 1 hold, let a1 > 0, a2 = 0, and let Ĉ_i = sign([v̂]_i) be the estimate of the underlying class C_i of x_i, with the convention v̂^T v ≥ 0, for v̂ the top eigenvector of K. Then, the (average) misclassification rate satisfies (1/n) ∑_{i=1}^n δ_{Ĉ_i ≠ C_i} → (1/2) erfc(√(α/(2 − 2α))), almost surely, as n, p → ∞, for α ∈ [0, 1) defined in (11).\nRecall from Corollary 2 that, for a2 = 0, the nonlinear function f (e.g., f1, f2, f3) acts on the (statistical behavior of the) isolated eigenvector v̂, and thus on the spectral clustering performance per Proposition 1, only through the ratio ν/a1². It thus suffices to evaluate the ratio ν/a1² of different f and compare, for instance to that of the linear f(t) = t corresponding to the original X^T X matrix.\nDespite being asymptotic results valid in the n, p → ∞ limit, the results of Proposition 1 and Corollary 2 closely match empirical results for moderately large n, p only in the hundreds. This is illustrated in Figure 3. Proposition 1 further confirms that the misclassification rate, being a decreasing function of α, increases with ν/a1² (for c and ρ fixed). This leads to the following remark.\nRemark 1 (Optimality of linear function). Since both the phase transition point γ and the misclassification rate grow with ν/a1², the linear function f(t) = t with the minimal ν/a1² = 1 is optimal in the sense of (i) achieving the smallest SNR ρ or the largest ratio c = lim p/n (i.e., the fewest samples n) necessary to observe an informative (isolated) eigenvector, and (ii) upon existence of such an isolated eigenvector, reaching the lowest classification error rate.\nAccording to Remark 1, any f with ν/a1² > 1 induces performance degeneration (compared to the optimal linear function). However, by choosing f to be one of the functions f1, f2, f3 defined in (3)-(5), one may trade clustering performance optimality for reduced storage size and computational time. Figure 4 displays the theoretical decay in clustering performance and the gain in storage size of the sparse f1, quantized f2, and binary f3, when compared to the optimal but “dense” linear function. As s → 0, both performance and storage size under f1 naturally approach those of the linear function. This is unlike f2 or f3, which approach the sign function. For s ≫ 1, the performance under sparse f1 becomes comparable to that of binary f3 (which is significantly worse than quantized but non-sparse f2) but for a larger storage cost. In particular, using f2 or f3 in the setting of Figure 4, one can reduce the storage size by a factor of 32 or 64 (in the case of IEEE standard single- or double-precision floating-point format), at the price of a performance drop less than 1%.\n⁴Otherwise we could consider −K instead of K, and the largest eigenvalue becomes the smallest one.
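The asymptotic error rate of Proposition 1 is fully explicit given (10)-(11), so the theoretical curves of Figures 3-4 can be reproduced in a few lines. A sketch of ours: γ is the largest real root of the quartic F in (10), and, since a2 = 0, only the ratio ν/a1² matters, so we may set a1 = 1 and vary ν.

```python
import numpy as np
from math import erfc, sqrt

def phase_transition(a1, nu, c):
    """Largest real root gamma of the quartic F in (10)."""
    roots = np.roots([1.0, 2.0, 1.0 - c * nu / a1**2, -2.0 * c, -c])
    return max(r.real for r in roots if abs(r.imag) < 1e-10)

def alignment(rho, a1, nu, c):
    """Asymptotic |v_hat^T v|^2 / n from (11)."""
    if rho <= phase_transition(a1, nu, c):
        return 0.0
    F = rho**4 + 2 * rho**3 + (1 - c * nu / a1**2) * rho**2 - 2 * c * rho - c
    return F / (rho * (1 + rho)**3)

def error_rate(rho, a1, nu, c):
    """Asymptotic misclassification rate of Proposition 1."""
    a = alignment(rho, a1, nu, c)
    return 0.5 * erfc(sqrt(a / (2 - 2 * a)))

c, rho = 0.25, 2.25
print(error_rate(rho, 1.0, 1.0, c))   # linear f(t) = t, optimal per Remark 1
print(error_rate(rho, 1.0, 1.24, c))  # nu/a1^2 = 1.24, i.e., optimally binarized f3
```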
" }, { "heading": "4.2 COMPARISON TO UNIFORM SPARSIFICATION AND SUBSAMPLING", "text": "From Figure 4, we see that the classification error and storage gain of the sparse f1 increase monotonically as the truncation threshold s grows. For f1, the number of nonzero entries of K is approximately erfc(s) n² with truncation threshold s. Thus, the sparsity level ε_selec = erfc(s) ∈ [0, 1] can be defined and compared to uniform sparsification or subsampling approaches.\nRecall (from the introduction) that the cost of spectral clustering may be reduced by subsampling the whole dataset in 1/ε_sub chunks of n ε_sub data vectors each. Alternatively, as investigated recently in (Zarrouk et al., 2020), the cost can be reduced by uniformly zeroing-out X^T X with a symmetric random mask matrix B, with B_{ij} ∼ Bern(ε_unif) for 1 ≤ i < j ≤ n and B_{ii} = 0. On average, a proportion 1 − ε_unif of the entries of X^T X is set to zero, so that ε_unif ∈ [0, 1] controls the sparsity level (and thus the storage size as well as computational time). Similar to our Corollary 2, the associated eigenvector alignment α (and thus the clustering accuracy per Proposition 1) in both cases can be derived. Specifically, taking ε_unif = a1²/ν in (Zarrouk et al., 2020, Theorem 3.2), we obtain the same F(x) as in our Corollary 2 and therefore the same phase transition point γ and eigenvector alignment α. As for subsampling, its performance can be obtained by letting a1² = ν and changing c into c/ε_sub in the formulas of F(x) and G(x) of our Corollary 2. Consequently, the same clustering performance is achieved by either uniform or selective sparsification (using f1) with\nε_unif = a1²/ν = erfc(s) + 2s e^{−s²}/√π > erfc(s) = ε_selec, (12)\nand our proposed selective sparsification thus leads to strictly sparser matrices. Moreover, their ratio r(s) = erfc(s)/(erfc(s) + 2s e^{−s²}/√π) is a decreasing function of s and approximates as r(s) ∼ (1/2)(1 + s²)^{−1} for s ≫ 1,⁵ meaning that the gain in storage size and computational time is more significant as the matrix becomes sparser. This is depicted in Figure 5-(left).\nFixing α in Corollary 2 to achieve a given clustering performance level (via Proposition 1), one may then retrieve “equi-performance” curves in the (ε, ρ)-plane, for uniform sparsification, selective sparsification, and subsampling. This is displayed in Figure 5-(right), showing that a dramatic performance gain is achieved by the proposed selective sparsification f1. Besides, here for c = 2, as much as 80% sparsity could be obtained with selective sparsification at constant SNR ρ, with virtually no performance loss (red curves are almost flat on ε ∈ [0.2, 1]). This fails to hold for uniform sparsification (Zarrouk et al. (2020) obtain such a result only when c ≲ 0.1) or subsampling." }, { "heading": "4.3 OPTIMALLY QUANTIZED AND BINARIZED MATRICES", "text": "From Figure 4, we see that the classification errors of the quantized f2(M; s; t) and binarized f3(s; t) do not increase monotonically with the truncation threshold s. It can be shown (and also visually confirmed in Figure 4) that, for a given M ≥ 2, the ratio ν/a1² of both f2 and f3 is convex in s and has a unique minimum. This leads to the following optimal design result for f2 and f3, respectively, the proof of which is straightforward.\nProposition 2 (Optimal design of quantized and binarized functions). Under the assumptions and notations of Proposition 1, the classification error rate is minimized at s = s_opt, with\n1. s_opt the unique solution to a1(s_opt) ν′(s_opt) = 2 a1′(s_opt) ν(s_opt) for the quantized f2, with a1′(s) and ν′(s) the corresponding derivatives with respect to s in Figure 1-(right); and\n2. s_opt = exp(−s_opt²)/(2√π erfc(s_opt)) ≈ 0.43 for the binary f3, with level of sparsity ε ≈ 0.54.\nTherefore, the optimal threshold s_opt for quantized f2 or binary f3 under (1) is problem-independent, as it depends neither on ρ nor on c. In particular, note that (i) the binary f3(s_opt; ·) is consistently better than f(t) = sign(t), for which ν/a1² = π/2 ≈ 1.57 > 1.24; and (ii) the performance of quantized f2 can be worse, though very slightly, than that of binary f3 for small s, but significantly better for not-too-small s. These are visually confirmed in the left and middle displays of Figure 4.
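Both items above and the comparison in (12) are easy to verify numerically; a small sketch of ours, using the closed-form a1 and ν of the binary f3 discussed after (8):

```python
from math import erfc, exp, sqrt, pi

# Optimal threshold for binary f3 (Proposition 2, item 2), by fixed-point iteration.
s_opt = 0.5
for _ in range(100):
    s_opt = exp(-s_opt**2) / (2 * sqrt(pi) * erfc(s_opt))
print(s_opt, erfc(s_opt))  # ~0.43, with sparsity level eps = erfc(s_opt) ~ 0.54

# Equal-accuracy sparsity levels of selective vs. uniform sparsification, cf. (12).
for s in (0.5, 1.0, 2.0, 3.0):
    eps_selec = erfc(s)
    eps_unif = erfc(s) + 2 * s * exp(-s**2) / sqrt(pi)
    print(s, eps_selec, eps_unif, eps_selec / eps_unif)  # ratio decays like 1/(2 s**2)
```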
As already observed in Figure 4-(right), a significant gain in storage size can be achieved by using f2 or f3, versus the performance-optimal but dense linear function, with virtually no performance loss. Figure 6 compares the performance of the optimally designed f2 and f3 to the sparse f1 that has approximately the same storage size.⁶ A significant drop in classification error is observed by using quantization f2 or binarization f3 rather than sparsification f1. Also, the performances of f2 and f3 are extremely close to the theoretical optimum (met by f(t) = t). This is further confirmed by Figure 6-(right) where, for the optimal f2, the ratio ν/a1² gets close to 1, for all M ≥ 5. Figure 7 next evaluates the clustering performance, the proportion of nonzero entries in K, and the computational time of the top eigenvector, for sparse f1 and binary f3, versus linear f(t) = t, as a function of the truncation threshold s, on the popular MNIST dataset (LeCun et al., 1998). Depending on (the SNR ρ of) the task, up to 90% of the entries can be discarded almost “for free”. Moreover, the curves of the binary f3 appear strikingly close to those of the sparse f1, showing the additional advantage of using the former to further reduce the storage size of K. More empirical results on various datasets are provided in Appendix B to confirm our observations in Figure 7.\n⁵We use here the asymptotic expansion erfc(s) = (e^{−s²}/(s√π)) [1 + ∑_{k=1}^∞ (−1)^k · (1·3···(2k−1))/(2s²)^k].\n⁶We set the truncation threshold s of f1 such that erfc(s) = 3/64, so that the storage size of the sparse f1 (64 bits per nonzero entry) is the same as that of the quantized f2 with M = 3 (with 3 bits per nonzero entry), which is three times that of the binary f3." }, { "heading": "5 CONCLUDING REMARKS", "text": "We have evaluated performance-complexity trade-offs when sparsifying, quantizing, and binarizing a linear kernel matrix via a thresholding operator. Our main technical result characterizes the change in the eigenspectrum under these operations; and we have shown that, under an information-plus-noise model, sparsification and quantization, when carefully employed, maintain the informative eigenstructure and incur almost negligible performance loss in spectral clustering. Empirical results on real data demonstrate that these conclusions hold far beyond the present statistical model.\nThe proposed analysis can be extended in many ways, for instance by considering a multi-cluster and more involved model than (1) as in (Liao & Couillet, 2019) (i.e., a “generic” K-class Gaussian mixture N(µ_a, C_a) for a ∈ {1, . . .
,K}, which may help better interpret the empirical observations in Figure 7 and Appendix B), by focusing on more general kernels beyond the current inner-product type in (2), or by deriving non-asymptotic guarantees as in (Vankadara & Ghoshdastidar, 2020).\nOur results open the door to theoretical investigation of a broad range of cost-efficient linear algebra methods in machine learning, including subsampling techniques (Mensch et al., 2017; RoostaKhorasani & Mahoney, 2019), distributed optimization (Wang et al., 2018), randomized linear algebra algorithms (Mahoney, 2011; Drineas & Mahoney, 2016), and quantization for improved training and/or inference (Dong et al., 2019; Shen et al., 2020). Also, given recent interest in viewing neural networks from the perspective of RMT (Li & Nguyen, 2018; Seddik et al., 2018; Jacot et al., 2019; Liu & Dobriban, 2019; Martin & Mahoney, 2019; 2020), our results open the door to understanding and improving performance-complexity trade-offs far beyond kernel methods (Rahimi & Recht, 2008; Jacot et al., 2018; Liu et al., 2020), e.g., to sparse, quantized, or even binary neural networks (Hubara et al., 2016; Lin et al., 2017)." }, { "heading": "ACKNOWLEDGMENTS", "text": "We would like to acknowledge DARPA, IARPA (contract W911NF20C0035), NSF, and ONR via its BRC on RandNLA for providing partial support of this work. Our conclusions do not necessarily reflect the position or the policy of our sponsors, and no official endorsement should be inferred. Couillet’s work is partially supported by MIAI at University Grenoble-Alpes (ANR-19-P3IA-0003) and the HUAWEI LarDist project." }, { "heading": "A PROOFS AND RELATED DISCUSSIONS", "text": "Under the mixture model (1), the data matrix X ∈ Rp×n can be compactly written as X = Z + µvT, (13)\nfor Z ∈ Rp×n having i.i.d. zero-mean, unit-variance, κ-kurtosis, sub-exponential entries and v ∈ {±1}n so that ‖v‖ = √ n. Recall also the following notations:\nK = { δi6=jf(x T i xj/ √ p)/ √ p }n i,j=1 , Q(z) ≡ (K− zIn)−1. (14)\nA.1 PROOF OF THEOREM 1\nThe proof of Theorem 1 comes in the following two steps:\n1. show that the random quantities 1n tr AnQ(z) and a T nQ(z)bn of interest concentrate\naround their expectations in the sense that 1\nn tr An(Q(z)− E[Q(z)])→ 0, aTn(Q(z)− E[Q(z)])bn → 0, (15)\nalmost surely as n, p→∞; and 2. show that the sought-for deterministic equivalent Q̄(z) given in Theorem 1 is an asymptotic\napproximation for the expectation of the resolvent Q(z) defined in (6) in the sense that\n‖E[Q]− Q̄‖ → 0, (16) as n, p→∞.\nThe concentration of trace forms in the first item has been established in (Cheng & Singer, 2013; Do & Vu, 2013), and the bilinear forms follow similarly. Here we focus on the second item to show that ‖E[Q]− Q̄‖ → 0 in the large n, p limit. In the sequel, we use o(1) and o‖·‖(1) for scalars or matrices of (almost surely if being random) vanishing absolute values or operator norms as n, p→∞. To establish ‖E[Q]− Q̄‖ → 0, we need to show subsequently that:\n1. under (13), the random matrix K defined in (2) admits a spiked-model approximation, that is\nK = K̃0 + UΛU T + o‖·‖(1), (17)\nfor some full rank random (noise) matrix K̃0 and low rank (information) matrix UΛUT to be specified; and\n2. the matrix inverse (K̃0−UΛUT−zIn)−1 can be decomposed with the Woodbury identity, so that\nQ = (K̃0−UΛUT−zIn)−1 +o‖·‖(1) = Q̃0−Q̃0U(Λ−1 +UTQ̃0U)−1UTQ̃0 +o‖·‖(1), (18)\nwith Q̃0(z) ≡ (K̃0 − zIn)−1; and 3. 
the expectation of the right-hand side of (18) is close to Q̄ in the large n, p limit, allowing us to conclude the proof of Theorem 1.\nTo establish (17), we denote the “noise-only” null model with ‖µ‖ = 0 by writing K = K0 such that\n[K0]ij = δi 6=jf(z T i zj/ √ p)/ √ p. (19)\nWith a combinatorial argument, it has been shown in (Fan & Montanari, 2019) that∥∥∥∥K0 − K̃0 − a2√2 1p (ψ1Tn + 1nψT) ∥∥∥∥→ 0, (20)\nalmost surely as n, p → ∞, for K̃0 such that (K̃0 − zIn)−1 ≡ Q̃0(z) ↔ m(z)In and the random vector ψ ∈ Rn with its i-th entries given by\n[ψ]i = 1 √ p\n( ‖zi‖2 − E[‖zi‖2] ) =\n1 √ p (‖zi‖2 − p).\nConsider now the informative-plus-noise model K for X = Z +µvT as in (13) with [v]i = ±1 and ‖v‖ =\n√ n. It follows from (Liao & Couillet, 2019) that∥∥∥∥K−K0 − a1p [v ZTµ] [ ‖µ‖2 1 1 0 ] [ vT µTZ ]∥∥∥∥→ 0, (21) almost surely as n, p→∞.\nCombining (20) with (21), we obtain ‖K− K̃0 −UΛUT‖ → 0 almost surely as n, p→∞, with\nU = 1 √ p [1n, v, ψ, Z Tµ] ∈ Rn×4, Λ = 0 0 a2√ 2 0 0 a1‖µ‖2 0 a1 a2√\n2 0 0 0 0 a1 0 0 (22) and (K̃0 − zIn)−1 ≡ Q̃0(z)↔ m(z)In. By the Woodbury identity, we write\nQ = (K− zIn)−1 = (K̃0 + UΛUT − zIn)−1 + o‖·‖(1) = Q̃0 − Q̃0U(Λ−1 + UTQ̃0U)−1UTQ̃0 + o‖·‖(1) (23)\nwith\nΛ−1 + UTQ̃0U = m(z) c\nm(z) c vT1n n 0 0\nm(z) c vT1n n\nm(z) c 0 0\n0 0 (κ− 1)m(z)c 0 0 0 0 µT( 1pE[ZQ̃0Z T])µ + o‖·‖(1) where we use the fact that\nE[ψ] = 0, E[ψψT] = (κ− 1)In.\nWe need to evaluate the expectation 1pE[Z(K̃0 − zIn) −1ZT]. This is given in the following lemma. Lemma 1. Under the assumptions and notations of Theorem 1, we have 1\np E[Z(K̃0 − zIn)−1ZT] =\nm(z)\nc+ a1m(z) Ip + o‖·‖(1). (24)\nProof of Lemma 1. For Q̃0 = (K̃0−zIn)−1, we aim to approximate the expectation E[ZQ̃0ZT]/p. Consider first the case where the entries of Z are i.i.d. Gaussian, we can write the (i, i′) entry of E[ZQ̃0ZT] with Stein’s lemma (i.e., E[xf(x)] = E[f ′(x)] for x ∼ N (0, 1)) as\nE[ZQ̃0ZT]ii′ = n∑ j=1 E[Zij [Q̃0ZT]ji′ ] = n∑ j=1 E ∂[Q̃0Z T]ji′ ∂Zij\n= n∑ j=1 E\n[ [Q̃0]jjδii′ +\n∑ k=1 ∂[Q̃0]jk ∂Zij ZTki′\n] .\nWe first focus on the term ∂[Q̃0]jk∂Zij by writing\n∂[Q̃0]jk ∂Zij\n= − [ Q̃0\n∂K0 ∂Zij Q̃0 ] jk = n∑ l,m=1 −[Q̃0]jl ∂[K0]lm ∂Zij [Q̃0]mk\nwhere we recall [K0]ij = δi6=jf(ZTZ/ √ p)ij/ √ p so that for l 6= m we have\n∂[K0]lm ∂Zij = 1 p f ′(ZTZ/ √ p)lm ∂[ZTZ]lm ∂Zij = 1 p f ′(ZTZ/ √ p)lm(δjlZim + Z T liδjm)\nand ∂[K0]lm∂Zij = 0 for l = m. We get∑ j,k ∂[Q̃0]jk ∂Zij ZTki′ = − 1 p ∑ j,k,m [Q̃0]jjf ′(ZTZ/ √ p)jmZim[Q̃0]mkZ T ki′ − 1 p ∑ j,k,l [Q̃0]jlf ′(ZTZ/ √ p)ljZ T li[Q̃0]jkZ T ki′\n= −1 p [Z diag(f ′(ZTZ/ √ p)Q̃01n)Q̃0Z T]ii′ − 1 p [Z(Q̃0 f ′(ZTZ/ √ p))Q̃0Z T]ii′\nwhere f ′(ZTZ/ √ p) indeed represents f ′(ZTZ/ √ p)− diag(·) in both cases.\nFor the first term, since f ′(ZTZ/ √ p)− diag(·) = a11n1Tn +O‖·‖( √ p), we have\n1 p f ′(ZTZ/ √ p)Q̃01n = a1 p 1TnQ̃01n · 1n +O(p−1/2) = a1m(z) c 1n +O(p −1/2) (25)\nwhere O(p−1/2) is understood entry-wise. As a result,\n1 p Z diag(f ′(ZTZ/ √ p)Q̃01n)Q̃0Z T = a1m(z) c · 1 p ZQ̃0Z T + o‖·‖(1).\nFor the second term, since f ′(ZTZ/ √ p) has O(1) entries and ‖A B‖ ≤ √ n‖A‖∞‖B‖ for A,B ∈ Rn×n, we deduce that 1\np ‖Z(Q̃0 f ′(ZTZ/\n√ p))Q̃0Z T‖ = O(√p).\nAs a consequence, we conclude that\n1 p E[ZQ̃0ZT] = 1 p tr Q̃0 · Ip − a1m(z) c · 1 p E[ZQ̃0ZT] + o‖·‖(1)\nthat is 1\np E[ZQ̃0ZT] =\nm(z)\nc+ a1m(z) Ip + o‖·‖(1)\nwhere we recall that tr Q̃0/p = m(z)/c and thus the conclusion of Lemma 1 for the Gaussian case. The interpolation trick (Lytova & Pastur, 2009, Corollaray 3.1) can then be applied to extend the result beyond Gaussian distribution. 
This concludes the proof of Lemma 1.\nDenote A = (Λ−1 + UTQ̃0U)−1, it follows from Lemma 1 that\nE[UAUT] = 1\np (A111n1\nT n + A121nv T + A21v1 T n + A22vv T + A33(κ− 1)In + A44‖µ‖2In)\n= 1\np (A111n1\nT n + A121nv T + A21v1 T n + A22vv T) + o‖·‖(1)\nsince ‖µ‖ = O(1) and ‖v‖ = O( √ n). We thus deduce from (23) that\nQ(z)↔ Q̄(z) = m(z)In − cm2(z)V [ A11 A12 A21 A22 ] VT\nwith √ nV = [v, 1n]. Rearranging the expression we conclude the proof of Theorem 1.\nA.2 PROOF OF COROLLARY 1 AND RELATED DISCUSSIONS\nConsider the noise-only model by taking µ = 0 in Theorem 1. Then, we have K = K0 and Θ(z) = 0, so that\nQ̄(z) = m(z) + Ω(z) · 1 n 1n1 T n, Ω(z) = a22(κ− 1)m3(z) 2c2 − a22(κ− 1)m2(z)\n(26)\nwhere we recall m(z) is the solution to m(z) = − ( z + a21m(z)\nc+ a1m(z) + ν − a21 c m(z)\n)−1 . (27)\nSince the resolvent Q(z) is undefined for z ∈ R within the eigensupport of K that consists of (i) the main bulk characterized by the Stieltjes transform m(z) defined in (27) and (ii) the possible spikes, we need to find the poles of Q̄(z) but not those of m(z) to determine the asymptotic locations of the spikes that are away from the main bulk. Direct calculations show that the Stieltjes transforms of the possible non-informative spikes satisfy\nm± = ± √ 2\nκ− 1 c a2 (28)\nthat are in fact the poles of Ω(z), for a2 6= 0 and κ 6= 1. For κ = 1 or a2 = 0, Ω(z) has no (additional) poles, so that there is (almost surely) no spike outside the limiting spectrum.\nIt is however not guaranteed that z ∈ R corresponding to (28) isolates from the main bulk. To this end, we introduce the following characterization of the limiting spectral measure in (27), the proof of which follows from previous work. Corollary 3 (Limiting spectrum). Under the notations and conditions of Theorem 1, with probability one, the empirical spectral measure ωn = 1n ∑n i=1 δλi(K0) of the noise-only model K0 (and therefore that of K as a low rank additive perturbation of K0 via (21)) converges weakly to a probability measure ω of compact support as n, p → ∞, with ω uniquely defined through its Stieltjes transform m(z) solution to (27). Moreover,\n1. if we let supp(ω) be the support of ω, then\nsupp(ω) ∪ {0} = R \\ {x(m) |m ∈ R \\ {{−c/a1} ∪ {0}} and x′(m) > 0} (29) for x(m) the functional inverse of (27) explicitly given by\nx(m) = − 1 m − a\n2 1m c+ a1m − ν − a 2 1 c m, (30)\n2. the measure ω has a density and its support may have up to four edges, with the associated Stieltjes transforms given by the roots of x′(m) = 0, i.e.,\nx′(m) = 1 m2 − a\n2 1c (c+ a1m)2 − ν − a 2 1 c = 0. (31)\nThe limiting spectral measure ω of the null model K0 was first derived in (Cheng & Singer, 2013) for Gaussian distribution and then extended to sub-exponential distribution in (Do & Vu, 2013). The fact that a finite rank perturbation does not affect the limiting spectrum follows from (Silverstein & Bai, 1995, Lemma 2.6).\nThe characterization in (29) above follows the same idea as in (Silverstein & Choi, 1995, Theorem 1.1), which arises from the crucial fact that the Stieltjes transform m(x) = ∫ (t − x)−1ω(dt) of a measure ω is an increasing function on its domain of definition and so must be its functional inverse x(m) given explicitly in (30). 
In plain words, Corollary 3 tells us that (i) depending on the number of real solutions to (31), the support of ω may contain two disjoint regions with four edges, and (ii) x ∈ R is outside the support of ω if and only if its associated Stieltjes transform m satisfies x′(m) > 0, i.e., belonging to the increasing region of the functional inverse x(m) in (30). This is depicted in Figure 8, where for the same function f(t) = max(t, 0) − 1/ √ 2π with a1 = 1/2,\na2 = 1/(2 √ π) and ν = (π − 1)/(2π), we observe in the top display a single region of ω for c = 2 and in the bottom display two disjoint regions (with thus four edges) for c = 1/10. The corresponding (empirical) eigenvalue histograms and limiting laws are given in Figure 9. Note, in particular, that the local extrema of the functional inverse x(m) in Figure 8 characterize the (possibly up to four) edges of the support of ω in Figure 9.\nAccording to the discussion above, it remains to check the sign of x′(m) for m = ± √\n2 κ−1 c a2 to see if they correspond to isolated eigenvalues away from the support of ω. This, after some algebraic manipulations, concludes the proof of Corollary 1.\nDiscussions. The limiting spectral measure in Corollary 3 is indeed a “mix” between the popular Marc̆enko-Pastur and the Wigner’s semicircle law. Remark 2 (From Marc̆enko-Pastur to semicircle law). As already pointed out in (Fan & Montanari, 2019), here the limiting spectral measure ω is the so-called free additive convolution (Voiculescu, 1986) of the semicircle and Marc̆enko-Pastur laws, weighted respectively by a1 and √ ν − a21, i.e.,\nω = a1(ωMP,c−1 − 1) √ (ν − a21)/c · ωSC (32)\nwhere we denote a1(ωMP,c−1 − 1) the law of a1(x− 1) for x ∼ ωMP,c−1 and √\n(ν − a21)/c · ωSC the law of √ (ν − a21)/c · x for x ∼ ωSC . Figure 10 compares the eigenvalue distributions of K0\n2π,\nfor c = 2 (above, with two edges) and c = 1/10 (bottom, with four edges). The support of ω can be read on the vertical axes and the values of x such that x′(m) = 0 are marked in green.\nfor f(t) = a1t+ a2(t2 − 1)/ √\n2 (so that ν − a21 = a22) with different pairs of (a1, a2). We observe a transition from the Marc̆enko-Pastur law (in the left display, with a1 6= 0 and a2 = 0) to the semicircle law (in the right display, with a1 = 0 and a2 6= 0).\nRemark 2 tells us that, depending on the ratio ν/a21, the eigenspectrum of K exhibits a transition from the Marc̆enko-Pastur to semicircle-like shape. Note from Figure 1-(right) that, for the sparse f1, the ratio ν/a21 is an increasing function of the truncation threshold s and therefore, as the matrix K become sparser, it eigenspectrum changes from a Marc̆enko-Pastur-type (at s = 0) to be more semicircle-like. This is depicted in Figure 11 and similar conclusions hold for quantized f2 and binary f3 in the s ≥ sopt regime.\nAs discussed after Theorem 1 and in the proof above, while the limiting eigenvalue distribution ω is universal and independent of the law of the entries of Z, so long as they are independent, subexponential, of zero mean and unit variance, as commonly observed in RMT (Tao et al., 2010), this is no longer the case for the isolated eigenvalues. In particular, according to Corollary 1, the possible non-informative spikes do depend on the kurtosis κ of the distribution. 
In Figure 12 we observe a farther (left) spike for Student-t (with κ = 5 and is thus not sub-exponential) than Gaussian distribution (with κ = 3), while no spike can be observed for the symmetric Bernoulli distribution (that takes values ±1 with probability 1/2 so that κ = 1), with the same limiting eigenvalue distribution for f(t) = max(t, 0)− 1/ √ 2π.\nRemark 3 (Non-informative spike in-between). When the support of ω consists of two disjoint regions (e.g., in the right plot of Figure 9), a non-informative spike may appear between these two regions, with the associated Stieltjes transform m < −c/a1 in the setting of Figure 8-(bottom). This is only possibly when a1 √ 2\nκ−1 > a2. An example is provided in Figure 13.\nA.3 PROOF OF COROLLARY 2 AND RELATED DISCUSSIONS\nSimilar to our discussions in Section A.2, we need to find the zeros of det Λ(z), that are real solutions to H(x) = 0 with\nH(x) = a1a 2 2(κ−1)\n( (vT1n) 2\nn2 ρ− 1− ρ\n) m3(x)−a22c(κ−1)m2(x)+2a1c2(ρ+1)m(x)+2c3 = 0\n(33)\nfor m(z) the unique solution to (9) and ρ = limp ‖µ‖2. Note that\n1. for a1a22(κ− 1)( (vT1n) 2 n2 ρ− 1− ρ) 6= 0, there can be up to three spikes; 2. with a1 = 0 and a2 6= 0, we get m2(x) = 2c 2\na22(κ−1) and there are at most two spikes:\nthis is equivalent to the case of Corollary 1 with ρ = 0; in fact, taking a1 we discard the information in the signal µ, as has been pointed out in (Liao & Couillet, 2019);\n3. with a2 = 0 and a1 6= 0 we obtain m(x) = − ca1(ρ+1) , this is the case of Corollary 2.\nFor a given isolated eigenvalue-eigenvector pair (λ̂, v̂) (assumed to be of multiplicity one), the projection |v̂Tv|2 onto the label vector v can be evaluated via the Cauchy’s integral formula and our Theorem 1. More precisely, consider a positively oriented contour Γ that circles around only the isolated λ̂, we write\n1 n vTv̂v̂Tv = − 1 2πı ∮ Γ 1 n vT(K− zIn)−1v dz\n= − 1 2πı ∮ Γ 1 n vT(m(z)In −VΛ(z)VT)v dz + o(1)\n= 1\nn vTV\n( 1\n2πı ∮ Γ Λ(z) dz ) VTv + o(1) = 1 n vTV (ResΛ(z)) VTv + o(1)\n= [ 1 v\nT1n n ]( lim z→λ (z − λ) [\nΘ(z)m2(z) Θ(z)Ω(z) v T1n n m(z)\nΘ(z)Ω(z) v T1n n m(z) Θ(z)Ω 2(z) (vT1n) 2 n2 −Ω(z) ])[ 1 vT1n n ] + o(1)\nwhere we use Theorem 1 for the second line and recall that the asymptotic location λ of λ̂ is away from the support of limiting spectral measure ω so that − 12πı ∮ Γ m(z) dz = 0 in the third line.\nInterestingly, note at this point that taking vT1n = 0 or a2 = 0 (so that Ω(z) = 0) leads to the same simplification\n1 n |vTv̂|2 = lim z→λ (z − λ)Θ(z)m2(z) + o(1) = lim z→λ (z − λ) a1ρm\n2(z)\nc+ a1m(z)(1 + ρ) + o(1) (34)\n= a1ρ\n1 + ρ\nm2(λ) m′(λ) + o(1) = a1ρ 1 + ρ\n( 1− a 2 1cm 2(λ)\n(c+ a1m(λ))2 − ν − a 2 1 c m2(λ)\n) + o(1) (35)\nwith l’Hospital’s rule and the fact that m′(z) = (\n1 m2(z) − a21c (c+a1m(z))2 − ν−a 2 1 c\n)−1 by differenti-\nating (9). The particularly means that, in the absence of the (noisy) non-informative spikes due to a2 6= 0 or in the case of balanced class vT1n = 0 (in fact “almost” balanced class vT1n = o(n)), we obtain the same asymptotic alignment (with respect to v) for the informative eigenvector. However, there may appear spurious non-informative and isolated eigenvectors in the latter case.\nIn the setting of Corollary 2 with a1 > 0 and a2 = 0, with the substitution m(λ) = mρ = − ca1(ρ+1) into (35) and then the change-of-variable m = − ca1 1 1+x , we obtain the expression of F (x) in Corollary 2. 
The phase transition condition can be similarly obtained, as discussed in Section A.2, by checking the sign of the derivative of the functional inverse x′(m) as in Corollary 3. This concludes the proof of Corollary 2.\nDiscussions. Note that, while with either a2 = 0 or vT1n = 0 we obtain the same expression for the projection |vTv̂|2, the possible spike of interest λ̂ (and its asymptotic location λ) in these two scenarios can be rather different. More precisely,\n1. with a2 = 0, there is a single possible spike λ̂ with m(λ) = mρ = − ca1(ρ+1) ;\n2. with vT1n = 0, there can be up to three spikes that correspond to mρ = − ca1(ρ+1) and m± = ± ca2 √ 2 κ−1 .\nThis observation leads to the following remark. Remark 4 (Noisy top eigenvector with a2 6= 0). For vT1n = 0 and a2 6= 0, one may have m− = − ca2 √ 2 κ−1 > − c a1(ρ+1) = mρ for instance with large a2 and small a1. Since m(x) is an increasing function, the top eigenvalue-eigenvector pair of K can be non-informative, independent of the SNR ρ, and totally useless for clustering purposes. An example is provided in Figure 2 where one observes that (i) the largest spike (on the right-hand side) corresponds to a noisy eigenvector while the second largest spike has its eigenvector positively aligned with the label vector v; and (ii) the theoretical prediction of the eigen-alignment α in Corollary 2 still holds here due to vT1n = 0. This extends our Corollary 1 to the signal-plus-noise scenario and confirms the advantage and necessity of taking a2 = 0.\nAs a side remark, in contrast to Remark 3 and Figure 13, where we observe that the non-informative spike can be lying between the two disjoint regions of the limiting measure ω, in the case of a1 > 0, the informative spike mρ = − ca1(ρ+1) can only appear on the right-hand side of the support of ω, since − ca1 < − c a1(ρ+1) < 0 for ρ = limp ‖µ‖2 ≥ 0. See Figure 8-(bottom) for an illustration.\nA.4 PROOF OF PROPOSITION 1\nNote that, for v̂ the top isolated eigenvector of K, with Corollary 2 we can write\nv̂ = √ αv/ √ n+ σw (36)\nfor some σ ∈ R, w ∈ Rn a zero-mean random vector, orthogonal to v, and of unit norm. To evaluate the asymptotic clustering performance in the setting of Proposition 1 (i.e., with the estimate Ĉi = sign([v̂]i) for v̂Tv ≥ 0), we need to assess the probability Pr(sign([v̂]i) < 0) for xi ∈ C1 and Pr(sign([v̂]i) > 0) for xi ∈ C2 (recall that the class-label [v]i = −1 for xi ∈ C1 and [v]i = +1 for xi ∈ C2), and it thus remains to derive σ. Note that\n1 = v̂Tv̂ = α+ 2σ √ αwTv/ √ n+ σ2 = α+ σ2 + o(1) (37)\nwhere we recall ‖v‖ = √ n, which, together with an argument on the normal fluctuations of v̂ (Kadavankandy & Couillet, 2019), concludes the proof." }, { "heading": "B ADDITIONAL EMPIRICAL RESULTS ON REAL-WORLD DATASETS", "text": "In Figure 14, we compare the clustering performance, the level of sparsity, and the computational time of the top eigenvector, of the sparse function f1 and quantized f2 with M = 2 (so 2 bits per nonzero entry), on the MNIST dataset. We see that, different from the binary f3 with which small entries of K are set to zero, the quantized function f2, by letting the small entries of K to take certain nonzero values, yields surprisingly good performance on the MNIST dataset. This performance gain comes, however, at the price of somewhat heavy computational burden that is approximately the same as the original dense matrix XTX, since we lose the sparsity with f2, see Figure 1-(left). 
This may be interpreted as a trade-off between storage size and computational time.\nAlso, from the left and middle displays of Figure 7 and Figure 14, we see that for MNIST data, while the classification error rates on the pair of digits (0, 1) can be as low as 1%, the performances on the pair (5, 6) are far from satisfactory, with the linear f(t) = t and the proposed f1, f2 or f3. This is the limitation of the proposed statistical model in (1), which only takes into account the first-order discriminative statistics. Indeed, it has been shown in (Liao & Couillet, 2019) that, taking a2 = 0 (as\nin the case of the proposed f1, f2 and f3) asymptotically discards the second-order discriminative statistics in the covariance structure, and may thus result in suboptimal performance in the case of non-identity covariance. It would be of future interest to extend the current analysis to the “generic” Gaussian mixture classification: N (µ1,C1) versus N (µ2,C2) by considering the impact of (i) asymmetric means µ1 and µ2 6= −µ1 and (ii) statistical information in the covariance structure C1 versus C2 and (iii) possibly a multi-class mixture model with number of classes K ≥ 3.\nFigure 15 compares the clustering performances of the proposed f1, f2, and f3 on other MNISTlike datasets including the Fashion-MNIST (Xiao et al., 2017), Kuzushiji-MNIST (Clanuwat et al., 2018), and Kannada-MNIST (Prabhu, 2019) datasets. Then, Figure 16 compares the performances on the representations of the ImageNet dataset (Deng et al., 2009) from the popular GoogLeNet (Szegedy et al., 2015) of feature dimension p = 2 048. On various real-world data or features, we made similar observations as in the case of MNIST data in Figure 7 and Figure 14: the performances of sparse f1 and binary f3 are very similar and generally degrade as the threshold s becomes large, while the quantized f2 yields consistently good performances that are extremely close to that of the linear function. This is in line with the (theoretically sustained) observation in (Seddik et al., 2020) that the “deep” representations of real-world datasets behave, in the large n, p regime, very similar to simple Gaussian mixtures, thereby conveying a strong practical motivation for the present analysis." } ]
2021
SPARSE QUANTIZED SPECTRAL CLUSTERING
SP:351283f0a33b5d1eb8d54d4fb74e93f0505b051c
[ "The authors argue that the world consists of largely independent causal mechanisms that sparsely interact. The authors propose a new kind of recurrent network (RIM) that presumably distills this world view into inductive biases. RIMs consist of largely independent recurrent modules that are sparsely activated and interact through soft attention. They tested RIMs on a number of supervised and reinforcement learning tasks, and showed that RIMs perform better than LSTMs and several other more recently proposed networks (e.g., Differential Neural Computers, Relational Memory Core, etc.) in these tasks. In particular, RIMs can generalize more naturally than other networks to out-of-distribution test set in presumably modular tasks.", "This paper proposes a neural network architecture consisting of multiple independent recurrent modules that interact sparingly. These independent modules are not all used simultaneously, a subset of them is active at each time step. This subset of active modules is chosen through an attention mechanism. The idea behind this architecture is that it would allow the different modules to specialize in different mechanisms and that would allow compositionality. The empirical results suggest that the proposed approach is able to generalize better than traditional architectures (which all have the implicit assumption that all processes interact)." ]
Learning modular structures which reflect the dynamics of the environment can lead to better generalization and robustness to changes which only affect a few of the underlying causes. We propose Recurrent Independent Mechanisms (RIMs), a new recurrent architecture in which multiple groups of recurrent cells operate with nearly independent transition dynamics, communicate only sparingly through the bottleneck of attention, and are only updated at time steps where they are most relevant. We show that this leads to specialization amongst the RIMs, which in turn allows for dramatically improved generalization on tasks where some factors of variation differ systematically between training and evaluation.
[]
[ { "authors": [ "Jacob Andreas", "Marcus Rohrbach", "Trevor Darrell", "Dan Klein" ], "title": "Neural module networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Dzmitry Bahdanau", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "arXiv preprint arXiv:1409.0473,", "year": 2014 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Eric B Baum", "David Haussler" ], "title": "What size net gives valid generalization? In Advances in neural information processing", "venue": null, "year": 1989 }, { "authors": [ "Yoshua Bengio" ], "title": "The consciousness prior", "venue": "arXiv preprint arXiv:1709.08568,", "year": 2017 }, { "authors": [ "Yoshua Bengio", "Tristan Deleu", "Nasim Rahaman", "Rosemary Ke", "Sébastien Lachapelle", "Olexa Bilaniuk", "Anirudh Goyal", "Christopher Pal" ], "title": "A meta-transfer objective for learning to disentangle causal mechanisms", "venue": null, "year": 1901 }, { "authors": [ "Léon Bottou", "Patrick Gallinari" ], "title": "A framework for the cooperation of learning algorithms", "venue": "In Advances in neural information processing systems,", "year": 1991 }, { "authors": [ "Matthew Botvinick", "Todd Braver" ], "title": "Motivation and cognitive control: from behavior to neural mechanism", "venue": "Annual review of psychology,", "year": 2015 }, { "authors": [ "Michael M Bronstein", "Joan Bruna", "Yann LeCun", "Arthur Szlam", "Pierre Vandergheynst" ], "title": "Geometric deep learning: going beyond euclidean data", "venue": "IEEE Signal Processing Magazine,", "year": 2017 }, { "authors": [ "Maxime Chevalier-Boisvert", "Lucas Willems" ], "title": "Minimalistic gridworld environment for openai gym", "venue": "https://github.com/maximecb/gym-minigrid,", "year": 2018 }, { "authors": [ "Maxime Chevalier-Boisvert", "Dzmitry Bahdanau", "Salem Lahlou", "Lucas Willems", "Chitwan Saharia", "Thien Huu Nguyen", "Yoshua Bengio" ], "title": "Babyai: First steps towards grounded language learning with a human in the loop", "venue": "arXiv preprint arXiv:1810.08272,", "year": 2018 }, { "authors": [ "Junyoung Chung", "Kyle Kastner", "Laurent Dinh", "Kratarth Goel", "Aaron C Courville", "Yoshua Bengio" ], "title": "A recurrent latent variable model for sequential data", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Junyoung Chung", "Sungjin Ahn", "Yoshua Bengio" ], "title": "Hierarchical multiscale recurrent neural networks", "venue": "arXiv preprint arXiv:1609.01704,", "year": 2016 }, { "authors": [ "Paul Cisek", "John F Kalaska" ], "title": "Neural mechanisms for interacting with a world full of action choices", "venue": "Annual review of neuroscience,", "year": 2010 }, { "authors": [ "Emily Denton", "Rob Fergus" ], "title": "Stochastic video generation with a learned prior", "venue": "arXiv preprint arXiv:1802.07687,", "year": 2018 }, { "authors": [ "Robert Desimone", "Jody Duncan" ], "title": "Neural mechanisms of selective visual attention", "venue": "Annual Review of Neuroscience,", "year": 1995 }, { "authors": [ "A. 
Dickinson" ], "title": "Actions and habits: the development of behavioural autonomy", "venue": "Philosophical Transactions of the Royal Society B: Biological Sciences,", "year": 1985 }, { "authors": [ "Salah El Hihi", "Yoshua Bengio" ], "title": "Hierarchical recurrent neural networks for long-term dependencies", "venue": "In Advances in neural information processing systems,", "year": 1996 }, { "authors": [ "Chrisantha Fernando", "Dylan Banarse", "Charles Blundell", "Yori Zwols", "David Ha", "Andrei A Rusu", "Alexander Pritzel", "Daan Wierstra" ], "title": "Pathnet: Evolution channels gradient descent in super neural networks", "venue": "arXiv preprint arXiv:1701.08734,", "year": 2017 }, { "authors": [ "James J Gibson" ], "title": "The theory of affordances", "venue": "Hilldale, USA,", "year": 1977 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Anirudh Goyal", "Riashat Islam", "Daniel Strouse", "Zafarali Ahmed", "Matthew Botvinick", "Hugo Larochelle", "Sergey Levine", "Yoshua Bengio" ], "title": "Infobot: Transfer and exploration via the information bottleneck", "venue": "arXiv preprint arXiv:1901.10902,", "year": 2019 }, { "authors": [ "Anirudh Goyal", "Shagun Sodhani", "Jonathan Binas", "Xue Bin Peng", "Sergey Levine", "Yoshua Bengio" ], "title": "Reinforcement learning with competitive ensembles of information-constrained primitives", "venue": "arXiv preprint arXiv:1906.10667,", "year": 2019 }, { "authors": [ "Alex Graves", "Greg Wayne", "Ivo Danihelka" ], "title": "Neural turing machines", "venue": "arXiv preprint arXiv:1410.5401,", "year": 2014 }, { "authors": [ "Alex Graves", "Greg Wayne", "Malcolm Reynolds", "Tim Harley", "Ivo Danihelka", "Agnieszka GrabskaBarwińska", "Sergio Gómez Colmenarejo", "Edward Grefenstette", "Tiago Ramalho", "John Agapiou" ], "title": "Hybrid computing using a neural network with dynamic external memory", "venue": null, "year": 2016 }, { "authors": [ "Danijar Hafner", "Timothy Lillicrap", "Ian Fischer", "Ruben Villegas", "David Ha", "Honglak Lee", "James Davidson" ], "title": "Learning latent dynamics for planning from pixels", "venue": "arXiv preprint arXiv:1811.04551,", "year": 2018 }, { "authors": [ "Mikael Henaff", "Jason Weston", "Arthur Szlam", "Antoine Bordes", "Yann LeCun" ], "title": "Tracking the world state with recurrent entity networks", "venue": "arXiv preprint arXiv:1612.03969,", "year": 2016 }, { "authors": [ "Geoffrey E Hinton", "Sara Sabour", "Nicholas Frosst" ], "title": "Matrix capsules with em routing", "venue": null, "year": 2018 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Robert A Jacobs", "Michael I Jordan", "Steven J Nowlan", "Geoffrey E Hinton" ], "title": "Adaptive mixtures of local experts", "venue": "Neural computation,", "year": 1991 }, { "authors": [ "Yacine Jernite", "Edouard Grave", "Armand Joulin", "Tomas Mikolov" ], "title": "Variable computation in recurrent neural networks", "venue": "arXiv preprint arXiv:1611.06188,", "year": 2016 }, { "authors": [ "Nan Rosemary Ke", "Anirudh Goyal", "Olexa Bilaniuk", "Jonathan Binas", "Michael C Mozer", "Chris Pal", "Yoshua Bengio" ], "title": "Sparse attentive backtracking: Temporal credit assignment through reminding", 
"venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Diederik Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Thomas Kipf", "Ethan Fetaya", "Kuan-Chieh Wang", "Max Welling", "Richard Zemel" ], "title": "Neural relational inference for interacting systems", "venue": "arXiv preprint arXiv:1802.04687,", "year": 2018 }, { "authors": [ "Louis Kirsch", "Julius Kunze", "David Barber" ], "title": "Modular networks: Learning to decompose neural computation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Wouter Kool", "Matthew Botvinick" ], "title": "Mental labour", "venue": "Nature human behaviour,", "year": 2018 }, { "authors": [ "Ilya Kostrikov" ], "title": "Pytorch implementations of reinforcement learning algorithms", "venue": "https://github. com/ikostrikov/pytorch-a2c-ppo-acktr-gail,", "year": 2018 }, { "authors": [ "Jan Koutnik", "Klaus Greff", "Faustino Gomez", "Juergen Schmidhuber" ], "title": "A clockwork rnn", "venue": "arXiv preprint arXiv:1402.3511,", "year": 2014 }, { "authors": [ "David Krueger", "Tegan Maharaj", "János Kramár", "Mohammad Pezeshki", "Nicolas Ballas", "Nan Rosemary Ke", "Anirudh Goyal", "Yoshua Bengio", "Aaron Courville", "Chris Pal" ], "title": "Zoneout: Regularizing rnns by randomly preserving hidden activations", "venue": "arXiv preprint arXiv:1606.01305,", "year": 2016 }, { "authors": [ "Shuai Li", "Wanqing Li", "Chris Cook", "Ce Zhu", "Yanbo Gao" ], "title": "Independently recurrent neural network (indrnn): Building a longer and deeper rnn", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer sentinel mixture models", "venue": "arXiv preprint arXiv:1609.07843,", "year": 2016 }, { "authors": [ "Daniel Neil", "Michael Pfeiffer", "Shih-Chii Liu" ], "title": "Phased lstm: Accelerating recurrent network training for long or event-based sequences", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Giambattista Parascandolo", "Niki Kilbertus", "Mateo Rojas-Carulla", "Bernhard Schölkopf" ], "title": "Learning independent causal mechanisms", "venue": "In Proceedings of the 35th International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Judea Pearl" ], "title": "Causality: Models, Reasoning, and Inference", "venue": null, "year": 2009 }, { "authors": [ "Jonas Peters", "Dominik Janzing", "Bernhard Schölkopf" ], "title": "Elements of Causal Inference - Foundations and Learning Algorithms", "venue": null, "year": 2017 }, { "authors": [ "David Raposo", "Adam Santoro", "David Barrett", "Razvan Pascanu", "Timothy Lillicrap", "Peter Battaglia" ], "title": "Discovering objects and their relations from entangled scene representations", "venue": "arXiv preprint arXiv:1702.05068,", "year": 2017 }, { "authors": [ "Eric Ronco", "Henrik Gollee", "Peter J Gawthrop" ], "title": "Modular neural networks and self-decomposition", "venue": "Technical Report CSC-96012,", "year": 1997 }, { "authors": [ "Clemens Rosenbaum", "Tim Klinger", "Matthew Riemer" ], "title": "Routing networks: Adaptive selection of non-linear functions for multi-task learning", "venue": "arXiv preprint arXiv:1711.01239,", "year": 2017 }, { "authors": [ "Clemens Rosenbaum", "Ignacio 
Cases", "Matthew Riemer", "Tim Klinger" ], "title": "Routing networks and the challenges of modular and compositional computation", "venue": null, "year": 1904 }, { "authors": [ "Sara Sabour", "Nicholas Frosst", "Geoffrey E Hinton" ], "title": "Dynamic routing between capsules", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Adam Santoro", "David Raposo", "David G Barrett", "Mateusz Malinowski", "Razvan Pascanu", "Peter Battaglia", "Timothy Lillicrap" ], "title": "A simple neural network module for relational reasoning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "The graph neural network model", "venue": "IEEE Transactions on Neural Networks,", "year": 2008 }, { "authors": [ "Jürgen Schmidhuber" ], "title": "One big net for everything", "venue": "arXiv preprint arXiv:1802.08864,", "year": 2018 }, { "authors": [ "Bernhard Schölkopf", "Dominik Janzing", "Jonas Peters", "Eleni Sgouritsa", "Kun Zhang", "Joris Mooij" ], "title": "On causal and anticausal learning", "venue": "Proceedings of the 29th International Conference on Machine Learning (ICML),", "year": 2012 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Noam Shazeer", "Azalia Mirhoseini", "Krzysztof Maziarz", "Andy Davis", "Quoc Le", "Geoffrey Hinton", "Jeff Dean" ], "title": "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer", "venue": "arXiv preprint arXiv:1701.06538,", "year": 2017 }, { "authors": [ "Herbert A Simon" ], "title": "The architecture of complexity", "venue": "In Facets of systems science,", "year": 1991 }, { "authors": [ "Shagun Sodhani", "Anirudh Goyal", "Tristan Deleu", "Yoshua Bengio", "Sergey Levine", "Jian Tang" ], "title": "Learning powerful policies by using consistent dynamics model", "venue": null, "year": 1906 }, { "authors": [ "Andrea Tacchetti", "H Francis Song", "Pedro AM Mediano", "Vinicius Zambaldi", "Neil C Rabinowitz", "Thore Graepel", "Matthew Botvinick", "Peter W Battaglia" ], "title": "Relational forward models for multi-agent learning", "venue": "arXiv preprint arXiv:1809.11044,", "year": 2018 }, { "authors": [ "Yee Teh", "Victor Bapst", "Wojciech M Czarnecki", "John Quan", "James Kirkpatrick", "Raia Hadsell", "Nicolas Heess", "Razvan Pascanu" ], "title": "Distral: Robust multitask reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Sjoerd Van Steenkiste", "Michael Chang", "Klaus Greff", "Jürgen Schmidhuber" ], "title": "Relational neural expectation maximization: Unsupervised discovery of objects and their interactions", "venue": "arXiv preprint arXiv:1802.10353,", "year": 2018 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Nicholas 
Watters", "Daniel Zoran", "Theophane Weber", "Peter Battaglia", "Razvan Pascanu", "Andrea Tacchetti" ], "title": "Visual interaction networks: Learning a physics simulator from video", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ronald J Williams", "David Zipser" ], "title": "A learning algorithm for continually running fully recurrent neural networks", "venue": "Neural computation,", "year": 1989 }, { "authors": [ "2017 Santoro et al", "2017 Gilmer et al", "Van Steenkiste" ], "title": "One can also view our proposed model as a relational graph neural network, where nodes are parameterized as individual RIMs and edges are parameterized by the attention mechanism. Though, its important to emphasize that the topology of the graph induced in the proposed model is dynamic, while in most graph neural networks the topology is fixed", "venue": "Kipf et al.,", "year": 2018 }, { "authors": [ "BlockLSTM. C" ], "title": "BOUNCING BALLS We use the bouncing-ball dataset from (Van Steenkiste et al., 2018). The dataset consists of 50,000 training examples and 10,000 test examples showing ∼50 frames of either 4 solid balls bouncing in a confined square geometry, 6-8 balls bouncing in a confined geometry, or 3 balls bouncing in a confined geometry with a random", "venue": null, "year": 2018 }, { "authors": [ "Van Steenkiste" ], "title": "system is rolled out for the next 15 time steps, computing the binary cross entropy between the prediction and the true balls at each instant", "venue": "OCCLUSION In Fig. 13,", "year": 2018 }, { "authors": [ "Rusu" ], "title": "Atari games averaged over 3 trials per game. In both cases PPO was used with the exact same settings with the only change being the choice of the recurrent architecture (RIMs with kA = 5). C.12.1 TRANSFER ON ATARI As a very preliminary result, we investigate feature transfer between randomly selected Atari games. In order to study", "venue": null, "year": 2016 } ]
[ { "heading": null, "text": "Learning modular structures which reflect the dynamics of the environment can lead to better generalization and robustness to changes which only affect a few of the underlying causes. We propose Recurrent Independent Mechanisms (RIMs), a new recurrent architecture in which multiple groups of recurrent cells operate with nearly independent transition dynamics, communicate only sparingly through the bottleneck of attention, and are only updated at time steps where they are most relevant. We show that this leads to specialization amongst the RIMs, which in turn allows for dramatically improved generalization on tasks where some factors of variation differ systematically between training and evaluation." }, { "heading": "1 INTRODUCTION", "text": "Physical processes in the world often have a modular structure, with complexity emerging through combinations of simpler subsystems. Machine learning seeks to uncover and use regularities in the physical world. Although these regularities manifest themselves as statistical dependencies, they are ultimately due to dynamic processes governed by physics. These processes are often independent and only interact sparsely. For instance, we can model the motion of two balls as separate independent mechanisms even though they are both gravitationally coupled to Earth as well as (weakly) to each other. They may, however, occasionally strongly interact via collisions.\nThe notion of independent or autonomous mechanisms has been influential in the field of causal inference, where it is applied not only to dynamic processes but also to time independent datasets. For instance, it has been argued that the conditional distribution of the average annual temperature given the altitude of a place is an abstraction of a causal mechanism (subsuming complex physical processes involving air pressure, etc.) that is independent of the distribution of the altitudes of settlements (Peters et al., 2017), and will thus apply invariantly for, say, different countries in the same climate zone with different altitude distributions.\nA complex generative model, temporal or not, can be thought of as the composition of independent mechanisms or “causal” modules. In the causality community, this is often considered a prerequisite of being able to perform localized interventions upon variables determined by such models (Pearl, 2009). It has been argued that the individual modules tend to remain robust or invariant even as other modules change, e.g., in the case of distribution shift (Schölkopf et al., 2012; Peters et al., 2017). One may hypothesize that if a brain is able to solve multiple problems beyond a single i.i.d. (independent and identically distributed) task, it would be economical to learn structures aligned with this, by learning independent mechanisms that can flexibly be reused, composed and re-purposed.\nIn the dynamic setting, we think of an overall system being assayed as composed of a number of fairly independent subsystems that evolve over time, responding to forces and interventions. A learning agent then need not devote equal attention to all subsystems at all times: only those aspects that significantly interact need to be considered jointly when taking a decision or forming a plan (Bengio, 2017). Such sparse interactions can reduce the difficulty of learning since few interactions need to be considered at a time, reducing unnecessary interference when a subsystem is adapted. 
Models learned this way may be more likely to capture the compositional generative (or causal) structure of the world, and thus better generalize across tasks where a (small) subset of mechanisms changes while most of them remain invariant (Simon, 1991; Peters et al., 2017; Parascandolo et al., 2018). The central question motivating our work is how a machine learning approach can learn independent but sparsely interacting recurrent mechanisms in order to benefit from such modularity." }, { "heading": "2 RECURRENT INDEPENDENT MECHANISMS WITH SPARSE INTERACTIONS", "text": "Our approach to modelling a dynamical system of interest divides the overall model into $k$ small subsystems (or modules), each of which is recurrent in order to be able to capture dynamics. We refer to these subsystems as Recurrent Independent Mechanisms (RIMs), where each RIM has distinct functions that are learned automatically from data¹. We refer to RIM $k$ at time step $t$ as having state $h_{t,k}$, where $t = 1, \ldots, T$. Each RIM has parameters $\theta_k$, which are shared across all time steps.
At a high level (see Fig. 1), we want each RIM to have its own independent dynamics operating by default, and occasionally to interact with other relevant RIMs and with selected elements of the encoded input. The total number of parameters can be kept small since RIMs can specialize on simple sub-problems, similar to Parascandolo et al. (2018). This specialization and modularization not only has computational and statistical advantages (Baum & Haussler, 1989; Bengio et al., 2019), but also prevents individual RIMs from dominating and modelling complex, composite mechanisms. We expect this to lead to more robust systems than training one big homogeneous neural network (Schmidhuber, 2018). Moreover, modularity also has the desirable implication that a RIM should maintain its own independent functionality even as other RIMs are changed. A more detailed account of the desiderata for the model is given in Appendix A." }, { "heading": "2.1 INDEPENDENT RIM DYNAMICS", "text": "Now, consider the default transition dynamics which we apply for each RIM independently and during which no information passes between RIMs. We use $\tilde{h}$ for the hidden state after the independent dynamics are applied (and before attention is applied). First, for the RIMs which are not activated (we refer to the activated set as $\mathcal{S}_t$), the hidden state remains unchanged:
$\tilde{h}_{t+1,k} = h_{t,k} \quad \forall k \notin \mathcal{S}_t.$ (1)
Note that the gradient still flows through a RIM on a step where it is not activated. For the RIMs that are activated, we run a per-RIM independent transition dynamics. The form of this is somewhat flexible, but in this work we opted to use either a GRU (Chung et al., 2015) or an LSTM (Hochreiter & Schmidhuber, 1997). We generically refer to these independent transition dynamics as $D_k$, and we emphasize that each RIM has its own separate parameters. Aside from being RIM-specific, the internal operation of the LSTM and GRU remains unchanged, and the active RIMs are updated by
$\tilde{h}_{t+1,k} = D_k(h_{t,k}) = \mathrm{LSTM}(h_{t,k}, A^{(in)}_k; \theta^{(D)}_k) \quad \forall k \in \mathcal{S}_t$ (2)
as a function of the attention mechanism $A^{(in)}_k$ applied to the current input, described in the next two subsections, after explaining the key-value mechanism used to select arguments for this update. A minimal code sketch of this selective update is given at the end of this subsection.
¹Note that we are using the term mechanism both for the mechanisms that make up the world’s dynamics as well as for the computational modules that we learn to model those mechanisms."
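As promised above, here is a minimal PyTorch-style sketch of the selective update in Eqs. (1)–(2). It is illustrative only: the framework choice, tensor layout, and names such as `IndependentDynamics` and `active_mask` are simplifying assumptions rather than the exact implementation, and $D_k$ is realized as one `nn.LSTMCell` per RIM.

```python
import torch
import torch.nn as nn

class IndependentDynamics(nn.Module):
    """Default per-RIM transition dynamics (Eqs. 1-2): a separate LSTMCell
    (parameters theta_k^(D)) for each RIM; only RIMs in S_t are updated."""

    def __init__(self, num_rims, input_size, hidden_size):
        super().__init__()
        self.rim_cells = nn.ModuleList(
            [nn.LSTMCell(input_size, hidden_size) for _ in range(num_rims)]
        )

    def forward(self, attended_input, h, c, active_mask):
        # attended_input: (batch, num_rims, input_size)  -- A_k^(in) per RIM
        # h, c:           (batch, num_rims, hidden_size) -- RIM states
        # active_mask:    (batch, num_rims), 1.0 for k in S_t else 0.0
        h_new, c_new = [], []
        for k, cell in enumerate(self.rim_cells):
            hk, ck = cell(attended_input[:, k], (h[:, k], c[:, k]))
            h_new.append(hk)
            c_new.append(ck)
        h_new = torch.stack(h_new, dim=1)
        c_new = torch.stack(c_new, dim=1)
        m = active_mask.unsqueeze(-1)
        # Eq. (1): inactive RIMs keep their previous state unchanged,
        # while gradients can still flow through them.
        return m * h_new + (1 - m) * h, m * c_new + (1 - m) * c
```

Note how the masked update realizes the remark after Eq. (1): inactive RIMs are carried over unchanged yet remain differentiable.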
}, { "heading": "2.2 KEY-VALUE ATTENTION TO PROCESS SETS OF NAMED INTERCHANGEABLE VARIABLES", "text": "Each RIM should be activated and updated when the input is relevant to it. We thus utilize competition to allocate representational and computational resources. As argued by Parascandolo et al. (2018), this tends to produce independence among learned mechanisms, provided the training data has been generated by a set of independent physical mechanisms. In contrast to Parascandolo et al. (2018), we use an attention mechanism for this purpose. In doing so, we are inspired by findings from experimental psychology in the study of the interplay of top-down attention and bottom-up information flow, conceptualized in the biased competition theory of selective attention (Desimone & Duncan, 1995): A brain’s capacity for parallel processing of complex entities is limited, and many brain systems representing visual information use competition (operating in parallel across the visual field) to allocate resources, often biased by feedback from higher brain areas.\nThe introduction of content-based soft-attention mechanisms (Bahdanau et al., 2014) has opened the door to neural networks which operate on sets of typed interchangeable objects. This idea has been remarkably successful and widely applied to most recent Transformer-style multi-head dot product self attention models (Vaswani et al., 2017; Santoro et al., 2018), achieving new state-of-the-art results in many tasks. Soft-attention uses the product of a query (or read key) Q of dimensionality Nr × d matrix Q, and d dimension of each key) to a set of No objects each associated with a key (or write-key) matrix KT (No × d), and after normalization with a softmax yields outputs in the convex hull of the values (or write-values) Vi (row i of matrix V ). Its result is computed as\nAttention(Q,K, V ) = softmax ( QKT√\nd\n) V,\nwhere the softmax is applied to each row of its argument matrix, yielding a set of convex weights. As a result, one obtains a convex combination of the values V . If the attention is focused on one element for a particular row (i.e., the softmax is saturated), this simply selects one of the objects and copies its value to row j of the result. Note that the d dimensions in the key can be split into heads which then have their attention matrix and write values computed separately.\nWhen the inputs and outputs of each RIM are a set of objects or entities (each associated with a key and value vector), the RIM processing becomes a generic object-processing machine which can operate on “variables” in a sense analogous to variables in a programming language: as interchangeable arguments of functions. Because each object has a key embedding (which one can understand both as a name and as a type), the same RIM processing can be applied to any variable which fits an expected \"distributed type\" (specified by a query vector). Each attention head then corresponds to a typed argument of the function computed by the RIM. When the key of an object matches the query, it can be used as input for the RIM. Whereas in regular neural networks (without attention) neurons operate on fixed variables (the neurons which are feeding them from the previous layer), the key-value attention mechanisms make it possible to select on the fly which variable instance (i.e. which entity or object) is going to be used as input for each of the arguments of the RIM dynamics, with a different set of query embeddings for each RIM. 
These candidate inputs can come from the external input or from the output of other RIMs. So, if the individual RIMs can represent these “functions with typed arguments,” then they can “bind” to whatever input is currently available and best suited according to its attention score: the “input attention” mechanism would look at the candidate input object’s key and evaluate if its “type” matches with what this RIM expects (specified in the query)." }, { "heading": "2.3 SELECTIVE ACTIVATION OF RIMS AS A FORM OF TOP-DOWN MODULATION", "text": "The proposed model learns to dynamically select those RIMs for which the current input is relevant. We give each RIM the choice between attending to the actual input instances or a special null input. The null input consists entirely of zeros and thus contains no information. At each step, we select the top-$k_A$ (out of $k_T$) RIMs in terms of their value of the softmax for the real input. Intuitively, the RIMs must compete on each step to read from the input, and only the RIMs that win this competition will be able to read from the input and have their state updated.
In our use of key-value attention, the queries come from the RIMs, while the keys and values come from the current input. The mechanics of this attention mechanism follow from the Transformer (Vaswani et al., 2017) and the RMC (Santoro et al., 2018), with the modification that the parameters of the attention mechanism itself are separate for each RIM. The input attention for a particular RIM is described as follows. The input $x_t$ at time $t$ is seen as a set of elements, structured as rows of a matrix (for image data, it can be the output of the CNN). We first concatenate a row full of zeros, to obtain
$X = \varnothing \oplus x_t.$ (3)
$\oplus$ refers to the row-level concatenation operator. Then, linear transformations are used to construct keys ($K = XW^e$, one per input element and for the null element), values ($V = XW^v$, again one per element), and queries ($Q = RW^q_k$, one per RIM attention head), where $R$ is a matrix with each row $r_i$ corresponding to the hidden state of an individual RIM (i.e., $h_{t,k}$). $W^v$ is a simple matrix mapping from an input element to the corresponding value vector for the weighted attention, and $W^e$ is similarly a weight matrix which maps the input to the keys. $W^q_k$ is a per-RIM weight matrix which maps from the RIM’s hidden state to its queries. The attention thus is
$A^{(in)}_k = \mathrm{softmax}\left(\frac{RW^q_k (XW^e)^T}{\sqrt{d_e}}\right) XW^v, \quad \text{where } \theta^{(in)}_k = (W^q_k, W^e, W^v).$ (4)
Based on the softmax values in (4), we select the top $k_A$ RIMs (out of the total $k_T$ RIMs) to be activated for each step, which have the least attention on the null input (and thus put the highest attention on the input), and we call this set $\mathcal{S}_t$. Since the queries depend on the state of the RIMs, this enables individual RIMs to attend only to the part of the input that is relevant for that particular RIM, thus enabling selective attention based on a top-down attention process (see Fig. 1). In practice, we use multi-headed attention; this does not change the essential computation, but when we use it for the input attention we compute RIM activation by averaging the attention scores over the heads. A code sketch of this selection step is given after the next paragraph." }, { "heading": "2.4 COMMUNICATION BETWEEN RIMS", "text": "Although the RIMs operate independently by default, the attention mechanism allows sharing of information among the RIMs. Specifically, we allow the activated RIMs to read from all other RIMs (activated or not). 
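As referenced above, the null-input competition of Sec. 2.3 can be sketched as follows. This is again an illustrative PyTorch-style fragment: the batched shapes, the single attention head, and the weight names `Wq`, `We`, `Wv` are our own simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def input_attention_and_selection(x, h, Wq, We, Wv, k_active):
    # x: (batch, No, d_in) input elements; h: (batch, num_rims, d_h) RIM states.
    # Wq: (num_rims, d_h, d_e) per-RIM query weights; We: (d_in, d_e); Wv: (d_in, d_v).
    null_row = torch.zeros(x.shape[0], 1, x.shape[-1], device=x.device)
    X = torch.cat([null_row, x], dim=1)                      # Eq. (3): prepend the null input
    K, V = X @ We, X @ Wv                                    # keys/values, one per element
    Q = torch.einsum('bkd,kdf->bkf', h, Wq)                  # per-RIM queries
    scores = torch.einsum('bkf,bnf->bkn', Q, K) / K.shape[-1] ** 0.5
    attn = F.softmax(scores, dim=-1)
    A_in = attn @ V                                          # Eq. (4): attended input per RIM
    # Activate the k_active RIMs that pay the least attention to the null row.
    idx = torch.topk(-attn[:, :, 0], k_active, dim=1).indices
    mask = torch.zeros_like(attn[:, :, 0]).scatter_(1, idx, 1.0)
    return A_in, mask
```

Selecting the RIMs with the least attention on the null row is exactly the top-$k_A$ competition described above.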
The intuition behind letting activated RIMs read from all other RIMs is that non-activated RIMs are not related to the current input, so their value should not change. However, they may still store contextual information that is relevant for activated RIMs. For this communication between RIMs, we use a residual connection as in (Santoro et al., 2018) to prevent vanishing or exploding gradients over long sequences.
$Q_{t,k} = \tilde{W}^q_k \tilde{h}_{t,k}, \quad \forall k \in \mathcal{S}_t$ (5)
$K_{t,k} = \tilde{W}^e_k \tilde{h}_{t,k}, \quad \forall k$ (6)
$V_{t,k} = \tilde{W}^v_k \tilde{h}_{t,k}, \quad \forall k$ (7)
$h_{t+1,k} = \mathrm{softmax}\left(\frac{Q_{t,k}(K_{t,:})^T}{\sqrt{d_e}}\right)V_{t,:} + \tilde{h}_{t,k} \quad \forall k \in \mathcal{S}_t, \quad \text{where } \theta^{(c)}_k = (\tilde{W}^q_k, \tilde{W}^e_k, \tilde{W}^v_k).$ (8)
As in the Transformer and RMC (Vaswani et al., 2017; Santoro et al., 2018), we use multiple heads (as in the input attention of Sec. 2.3) by producing different sets of queries, keys, and values to compute a linear transformation for each head (different heads have different parameters)." }, { "heading": "2.5 VARIATIONS ON THE RIMS ARCHITECTURE", "text": "The RIMs architecture that we study is highly homogeneous, and generally the only hyperparameters are the number of RIMs $k_T$ and the number of RIMs activated on each time step, $k_A$. All of the datasets that we consider are temporal, yet there is a distinction between datasets where the input on each time step is highly structured (such as a video, where each time step is an image) and where this is not the case (such as language modeling, where each step is a word or character). In the former case, we can get further improvements by making the activation of RIMs not just sparse across time but also sparse across the (spatial) structure." }, { "heading": "3 RELATED WORK", "text": "Neural Turing Machine (NTM) and Relational Memory Core (RMC): the NTM (Graves et al., 2014a) consists of a sequence of independent memory cells, and uses an attention mechanism while performing targeted read and write operations. This shares a key idea with RIMs: that input information should only impact a sparse subset of the memory by default, while keeping most of the memory unaltered. RMC (Santoro et al., 2018) uses a multi-head attention mechanism to share information between multiple memory elements. We encourage the RIMs to remain separate as much as possible, whereas Santoro et al. (2018) allow information between elements to flow on each step in an unconstrained way. Instead, each RIM has its own default dynamics, while in RMC, all the processes interact with each other.
Separate Recurrent Models: EnTNet (Henaff et al., 2016) and IndRNN (Li et al., 2018) can be viewed as a set of separate recurrent models. In IndRNN, each recurrent unit has completely independent dynamics, whereas EntNet uses an independent gate for writing to each memory slot. RIMs use different recurrent models (with separate parameters), but we allow the RIMs to communicate with each other sparingly using an attention mechanism.
Modularity and Neural Networks: In modular neural networks, a network is composed of several neural modules, where each module is meant to perform a distinct function; the network can hence be seen as a combination of experts (Jacobs et al., 1991; Bottou & Gallinari, 1991; Ronco et al., 1997; Reed & De Freitas, 2015; Andreas et al., 2016; Parascandolo et al., 2018; Rosenbaum et al., 2017; Fernando et al., 2017; Shazeer et al., 2017; Kirsch et al., 2018; Rosenbaum et al., 2019) routing information through a gated activation of layers. These works generally assume that only a single expert is active at a particular time step. 
In the proposed method, multiple RIMs can be active, interact and share information.
Computation on demand: There are various architectures (El Hihi & Bengio, 1996; Koutnik et al., 2014; Chung et al., 2016; Neil et al., 2016; Jernite et al., 2016; Krueger et al., 2016) where parts of the LSTM’s hidden state are kept dormant at times. The major differences as compared to the proposed architecture are that (a) we modularize the dynamics of recurrent cells (using RIMs), and (b) we also control the inputs of each module (using transformer-style attention), while many previous gating methods did not control the inputs of each module, but only whether they should be executed or not." }, { "heading": "4 EXPERIMENTS", "text": "The main goal of our experiments is to show that the use of RIMs improves generalization across changing environments and/or in modular tasks, and to explore how it does so. Our goal is not to outperform highly optimized baselines; rather, we want to show the versatility of our approach by applying it to a range of diverse tasks, focusing on tasks that involve a changing environment. We organize our results by the capabilities they illustrate: we address generalization based on temporal patterns, based on objects, and finally consider settings where both of these occur together." }, { "heading": "4.1 RIMS IMPROVE GENERALIZATION BY SPECIALIZING OVER TEMPORAL PATTERNS", "text": "We first show that when RIMs are presented with sequences containing distinct temporal patterns, they are able to specialize so that different RIMs are activated on different patterns. As a result, RIMs are able to generalize well when we modify a subset of the patterns (especially those unrelated to the class label) while most recurrent models fail to generalize well to these variations.
4.1.1 COPYING TASK
As an example of out-of-distribution generalization, we find that using RIMs we can extend the length of the dormant (all-zeros) phase of the copying task from 50 during training to 200 during testing and retain perfect performance (Table 1), whereas baseline methods including LSTM, NTM, and RMC substantially degrade. In addition, we find that this result is robust to the number of RIMs used as well as to the number of RIMs activated per step. Our results (Appendix C.5) show that communication between different RIMs as well as input attention is necessary to achieve good generalization. We consider this preliminary evidence that RIMs can specialize over distinct patterns in the data and improve generalization to settings where these patterns change." }, { "heading": "4.1.2 SEQUENTIAL MNIST RESOLUTION TASK", "text": "RIMs are motivated by the hypothesis that generalization performance can be improved by having modules which only activate on relevant parts of the sequence. For further evidence that RIMs can achieve this out-of-distribution, we consider the task of classifying MNIST digits as sequences of pixels (Krueger et al., 2016) and assay generalization to images of resolutions different from those seen during training. Our intuition is that the RIMs model should have distinct subsets of the RIMs activated for pixels on the digit and for empty pixels. As a result, RIMs should generalize better to greater resolutions by keeping the RIMs which store pixel information dormant over the empty regions of the image.
Results: Table 1 shows the result of the proposed model on the Sequential MNIST Resolution Task. If the train and test sequence lengths agree, both models achieve comparable test set performance. 
However, the RIMs model was relatively robust to changing the sequence length (by changing the image resolution), whereas the LSTM performance degraded more severely. This can be seen as a more involved analogue of the copying task, as MNIST digits contain large empty regions. It is essential that the model be able to store information and pass gradients through these regions. The RIMs outperform strong baselines such as Transformers, EntNet, RMC, as well as the Differentiable Neural Computer (DNC) (Graves et al., 2016)." }, { "heading": "4.2 RIMS LEARN TO SPECIALIZE OVER OBJECTS AND GENERALIZE BETWEEN THEM", "text": "We have presented evidence that RIMs can specialize over temporal patterns. We now turn our attention to showing that RIMs can specialize to objects, and show improved generalization to settings where we add or remove objects at test time." }, { "heading": "4.2.1 BOUNCING BALL ENVIRONMENT", "text": "We consider a synthetic “bouncing balls” task in which multiple balls (of different masses and sizes) move using basic Newtonian physics (Van Steenkiste et al., 2018). What makes this task particularly suited to RIMs is that the balls move independently most of the time, except when they collide. During training, we predict the next frame at each time step using teacher forcing (Williams & Zipser, 1989). We can then use this model to generate multi-step rollouts.
As a preliminary experiment, we train on sequences of length 51 (the previous standard), using a binary cross-entropy loss when predicting the next frame. We consider an LSTM as the baseline. We then produce rollouts, finding that RIMs are better able to predict future motion (examples in Figure 3 and Figure 10 in the Appendix, and quantitative comparisons in Figure 4).
We take this further by evaluating RIMs on environments where the setup is different from the training setup. First we consider training with 4 balls and evaluating on an environment with 6-8 balls. Second, we consider training with 6-8 balls and evaluating with just 4 balls. Robustness in these settings requires a degree of invariance w.r.t. the number of balls.
In addition, we consider a task where we train on 4 balls and then evaluate on sequences where part of the visual space is occluded by a “curtain”. This allows us to assess the model's ability to track (or remember) balls through the occluding region. Our experimental results on these generalization tasks (Figure 4) show that RIMs substantially improve over an LSTM baseline. We found that increasing the capacity of the LSTM from 256 to 512 units did not substantially change the performance gap, suggesting that the improvement from RIMs is not primarily a result of increased capacity." }, { "heading": "4.2.2 ENVIRONMENT WITH NOVEL DISTRACTORS", "text": "We next consider an object-picking reinforcement learning task from BabyAI (Chevalier-Boisvert et al., 2018) in which an agent must retrieve a specific object in the presence of distractors. We use a partially observed formulation of the task, where the agent only sees a small number of squares ahead of it. These tasks are difficult to solve with standard RL algorithms (Chevalier-Boisvert et al., 2018), due to (1) the partial observability of the environment and (2) the sparsity of the reward, given that the agent receives a reward only after reaching the goal. During evaluation, we introduce new distractors to the environment which were not observed during training.
Figure 5 shows that RIMs outperform LSTMs on this task (details in the appendix). 
When evaluating with known distractors, the RIMs model achieves perfect performance while the LSTM struggles. When evaluating in an environment with novel unseen distractors, the RIMs model does not achieve perfect performance but still outperforms the LSTM. An LSTM with a single memory flow may struggle to keep the distracting elements separate from elements which are necessary for the task, while the RIMs model uses attention to control which RIMs receive information at each step as well as what information they receive (as a function of their hidden state). This \"top-down\" bias results in a diminished representation of the distractor, not only enhancing the target visual information, but also suppressing irrelevant information. The notion that enhancement of the relevant information necessarily results in suppression of irrelevant information is fundamental to biased competition theory (Desimone & Duncan, 1995)." }, { "heading": "4.3 RIMS IMPROVE GENERALIZATION IN COMPLEX ENVIRONMENTS", "text": "We have investigated how RIMs use specialization to improve generalization when important factors of variation in the data change. While these improvements have often been striking, this raises a question: what factors of variation should be changed between training and evaluation? One setting where factors of variation change naturally is in reinforcement learning, as the data received from an environment changes as the agent learns and improves. We conjecture that when applied to reinforcement learning, an agent using RIMs may be able to learn faster as its specialization leads to improved generalization to previously unseen aspects of the environment.
To investigate this we use an RL agent trained using Proximal Policy Optimization (PPO) (Schulman et al., 2017) with a recurrent network producing the policy. We employ an LSTM as a baseline, and compare results to the RIMs architecture. This was a simple drop-in replacement and did not require changing any of the hyperparameters for PPO. We experiment on the whole suite of Atari games and find that simply replacing the LSTM with RIMs greatly improves performance (Figure 6).
There is also an intriguing connection between the selective activation in RIMs and the concept of affordances from cognitive psychology (Gibson, 1977; Cisek & Kalaska, 2010). To perform well in environments with a dynamic combination of risks and opportunities, an agent should be ready to adapt immediately, releasing into execution actions which are at least partially prepared. This suggests agents should process sensory information in a contextual manner, building representations of potential actions that the environment currently affords. For instance, in Demon Attack, one of the games where RIMs exhibit strong performance gains, the agent must quickly choose between targeting distant aliens to maximize points and avoiding fire from close-by aliens to avoid destruction (indeed both types of aliens are always present, but which is relevant depends on the player’s position). We hypothesize that in cases like this, selective activation of RIMs allows the agent to rapidly adapt its information processing to the types of actions relevant to the current context." }, { "heading": "4.4 ABLATIONS", "text": "Role of Top-Down Modulation: Removing Input Attention. We study the scenario where we remove the input attention process (Section 2.3) but still allow communication between RIMs (Section 2.4). 
We train this ablated agent on 30 Atari games for 30M time steps each and compare its performance with the normal RIMs-PPO agent. We find that the full RIMs agent still outperforms the ablated agent on 11 out of 30 games, while on 1 game (Frostbite) the ablated agent substantially improves performance. For more details regarding the training curves, refer to Fig. 25 (in the Appendix).
Importance of communication between RIMs: For copying, we performed an ablation where we removed the communication between RIMs. We also varied the number of RIMs as well as the number of activated RIMs (Table 5). We found that the communication between RIMs is essential for good performance. We found similar results for the sequential MNIST resolution task.
Importance of sparsity of activation of the RIMs: For the copying task, as well as for the sequential MNIST changed-resolution task, we performed an ablation where we kept all RIMs active for all time steps (Table 5). We found that we were not able to achieve strong generalization as compared to the best-performing RIMs model. On Atari we found that using $k_A = 5$ slightly improved results compared with $k_A = 4$, but both had similar performance across the vast majority of games, suggesting that the $k_A$ hyperparameter is reasonably flexible in practice.
Varying the number of attention heads for communication: Here, we study what happens if the output of RIMs only has one 'object' rather than multiple ones (Section 2.2). The intuition is that RIM processing can be applied to any “head” which matches the query of an individual RIM. So, having more heads should help, as different heads could be used by different RIMs, rather than every RIM competing for the same head. We study this in the context of bouncing balls. We found that using multiple heads improves the performance, thus validating our hypothesis (Sec. 2.2). See Appendix C.11 for details.
Randomly Dropping Out RIMs: Modular structures are aggregates of mechanisms that can perform functions without affecting the remainder of the system, and interact as needed. To what extent are trained RIMs able to model meaningful phenomena when other RIMs are removed? We performed an experiment on moving MNIST digits where we train normally and “drop out” a random RIM at test time. We found that in the absence of selective activation (i.e., when $k_A = k_T$; Section C.13) the performance degraded very badly, but the performance degrades much less with selective activation. See Appendix C.13 for details." }, { "heading": "5 CONCLUSION", "text": "Many systems of interest comprise multiple dynamical processes that operate relatively independently and only occasionally have meaningful interactions. Despite this, most machine learning models employ the opposite inductive bias, i.e., that all processes interact. This can lead to poor generalization (if data is limited) and lack of robustness to changing task distributions. We have proposed a new architecture, Recurrent Independent Mechanisms (RIMs), in which we learn multiple recurrent modules that are independent by default, but interact sparingly. Our positive experimental results lend support to the consciousness prior (Bengio, 2017), i.e., the importance of computational elements which focus on a few mechanisms at a time in order to determine how a high-level state evolves over time, with many aspects of the state not being affected by these attentive dynamics (i.e., following default dynamics). 
For the purposes of this paper, we note that the notion of RIMs is not limited to the particular architecture employed here. The latter is used as a vehicle to assay and validate our overall hypothesis (cf. Appendix A), but better architectures for the RIMs model can likely be found." }, { "heading": "A DESIDERATA FOR RECURRENT INDEPENDENT MECHANISMS", "text": "We have laid out a case for building models composed of modules which by default operate independently and can interact in a limited manner. Accordingly, our approach to modelling the dynamics of the world starts by dividing the overall model into small subsystems (or modules), referred to as Recurrent Independent Mechanisms (RIMs), with distinct functions learned automatically from data. Our model encourages sparse interaction, i.e., we want most RIMs to operate independently and follow their default dynamics most of the time, only rarely sharing information. Below, we lay out desiderata for modules to capture modular dynamics with sparse interactions.
Competitive Mechanisms: Inspired by the observations in the main paper, we propose that RIMs utilize competition to allocate representational and computational resources. As argued by (Parascandolo et al., 2018), this tends to produce independence among learned mechanisms if the training data has been generated by independent physical mechanisms.
Top-Down Attention: The points mentioned in Section 2 in principle pertain to synthetic and natural intelligent systems alike. Hence, it is not surprising that they also appear in neuroscience. For instance, suppose we are looking for a particular object in a large scene, using limited processing capacity. The biased competition theory of selective attention conceptualizes basic findings of experimental psychology and neuroscience (Desimone & Duncan, 1995): our capacity for parallel processing of, and reasoning with, high-level concepts is limited, and many brain systems representing visual information use competition to allocate resources. Competitive interactions among multiple objects occur automatically and operate in parallel across the visual field. Second, the principle of selectivity amounts to the idea that a perceiver has the ability to filter out unwanted information and selectively process the rest of the information. Third, top-down bias originating from higher brain areas enables us to selectively devote resources to input information that may be of particular interest or relevance. This may be accomplished by units matching the internal model of an object or process of interest being pre-activated and thus gaining an advantage during the competition of brain mechanisms.
Sparse Information Flow: Each RIM's dynamics should only be affected by RIMs which are deemed relevant. The fundamental challenge is centered around establishing sensible communication between RIMs. In the presence of noisy or distracting information, a large subset of RIMs should stay dormant, and not be affected by the noise. This way, training an ensemble of these RIMs can be more robust to out-of-distribution or distractor observations than training one big homogeneous neural network (Schmidhuber, 2018).
Modular Computation Flow and Modular Parameterization: Each RIM should have its own dynamics operating by default, in the absence of interaction with other RIMs. The total number of parameters (i.e., weights) can be reduced since the RIMs can specialize on simple sub-problems, similar to (Parascandolo et al., 2018). 
This can speed up computation and improve the generalisation ability of the system (Baum & Haussler, 1989). The individual RIMs in the ensemble should also be simple, to prevent individual RIMs from dominating and modelling complex, composite mechanisms. We refer to a parameterization as modular if most parameters are associated with individual RIMs only. This has the desirable property that a RIM should maintain its own independent functionality even as other RIMs are changed (due to its behavior being determined by its own self-contained parameters)." }, { "heading": "B EXTENDED RELATED WORK", "text": "The present section provides further details on related work, thus extending Section 3.
Neural Turing Machine (NTM). The NTM (Graves et al., 2014a) has a Turing-machine-inspired memory with a sequence of independent memory cells, and uses an attention mechanism to move heads over the cells while performing targeted read and write operations. This shares a key idea with RIMs: that input information should only impact a sparse subset of the memory by default, while keeping most of the memory unaltered. The RIM model introduces the idea that each RIM has its own independent dynamics, whereas in the NTM the mechanism for updating memory cells is shared.
Relational RNN. The Relational Models paper (Santoro et al., 2018) is based on the idea of using a multi-head attention mechanism to share information between multiple parts of memory. It is related to our idea, but a key difference is that we encourage the RIMs to remain separate as much as possible, whereas (Santoro et al., 2018) allows information between the parts to flow on each step (in effect making the part distribution only relevant to a particular step). Additionally, RIMs have the notion of each RIM having its own independent transition dynamics which operate by default, whereas the Relational RNN only does computation and updating of the memory using attention.
Sparse Attentive Backtracking (SAB). The SAB architecture (Ke et al., 2018) explores RNNs with self-attention across time steps as well as variants where the attention is sparse in the forward pass and where the gradient is sparse in the backward pass. It shares the motivation of using sparse attention to keep different pieces of information separated, but differs from the RIMs model in that it considers separation between time steps rather than separation between RIMs.
Independently Recurrent Neural Network (IndRNN). The IndRNN (Li et al., 2018) replaces the full transition matrix in a vanilla RNN (between time steps) with a diagonal transition weight matrix. In other words, each recurrent unit has completely independent dynamics. Intriguingly they show that this gives much finer control over the gating of information, and allows such an RNN to learn long-term dependencies without vanishing or exploding gradients. Analysis of the gradients shows that having smaller recurrent transition matrices mitigates the vanishing and exploding gradient issue. This may provide further explanation for why RIMs perform well on long sequences.
Consciousness Prior (Bengio, 2017): This is based on the assumption of a sparse graphical model describing the interactions between high-level variables, using gating mechanisms to select only a subset of high-level variables to interact at any particular time. This is closely related to our work in the sense that the high-level abstract representation is based on the representations of the RIMs, which are activated sparsely and interact sparsely. 
Our paper thus helps to validate the consciousness prior idea.
Recurrent Entity Networks: EnTNet (Henaff et al., 2016) can be viewed as a set of separate recurrent models whose hidden states store the memory slots. These hidden states are either fixed by the gates, or modified through a simple RNN-style update. Moreover, EntNet uses an independent gate for writing to each memory slot. Our work is related in the sense that we also have different recurrent models (i.e., RIMs, where each RIM has different parameters), but we allow the RIMs to communicate with each other sparingly using an attention mechanism.
Capsules and Dynamic Routing: EM Capsules (Hinton et al., 2018) and the preceding Dynamic Capsules (Sabour et al., 2017) use the poses of parts and learned part → object relationships to vote for the poses of objects. When multiple parts cast very similar votes, the object is assumed to be present, which is facilitated by an interactive inference (routing) algorithm.
Relational Graph Based Methods: Recent graph-based architectures have studied combinatorial generalization in the context of modeling dynamical systems like physics simulation, multi-object scenes, and motion-capture data, as well as multi-agent systems (Scarselli et al., 2008; Bronstein et al., 2017; Watters et al., 2017; Raposo et al., 2017; Santoro et al., 2017; Gilmer et al., 2017; Van Steenkiste et al., 2018; Kipf et al., 2018; Battaglia et al., 2018; Tacchetti et al., 2018). One can also view our proposed model as a relational graph neural network, where nodes are parameterized as individual RIMs and edges are parameterized by the attention mechanism. However, it is important to emphasize that the topology of the graph induced in the proposed model is dynamic, while in most graph neural networks the topology is fixed.
Default Behaviour: Our work is also related to work in behavioural research that deals with two modes of decision making (Dickinson, 1985; Botvinick & Braver, 2015; Kool & Botvinick, 2018): an automatic system that relies on habits, and a controlled system that uses privileged information for decision making. The proposed model also has two modes of input processing: RIMs that activate use external sensory information and are hence analogous to the controlled system, while RIMs that do not activate are analogous to the habit-based system. There is work in reinforcement learning on learning default policies, which have been shown to improve transfer and generalization in multi-task RL (Teh et al., 2017; Goyal et al., 2019a). The proposed method is different in the sense that we are not trying to learn default policies which affect the environment; instead, we want to learn mechanisms which try to understand the environment. State-dependent activation of different primitive policies was also studied in (Goyal et al., 2019b); the authors showed that they can learn different primitives, but they also assume that only a single primitive can be active at a particular time step. Also, note that primitive policies try to affect the environment, whereas mechanisms try to understand the environment." }, { "heading": "C EXPERIMENTAL DETAILS AND HYPERPARAMETERS", "text": "" }, { "heading": "C.1 RIMS IMPLEMENTATION", "text": "The RIMs model consists of three main components: the input attention, the process for selecting activated RIMs, and the communication between RIMs. 
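Before describing each component in detail, the following schematic sketch shows how one full time step might put the three components together. It reuses the illustrative fragments from Sections 2.1–2.3, assumes the value dimension in the communication attention equals the hidden dimension so the residual addition of Eq. (8) is well-defined, and all names are illustrative assumptions rather than the exact implementation (which, as described below, batches the per-RIM matrix multiplies).

```python
import torch

def rims_step(x, h, c, p, k_active):
    # 1) Input attention and competition over the null input (Sec. 2.3).
    A_in, mask = input_attention_and_selection(x, h, p.Wq_in, p.We_in, p.Wv_in, k_active)
    # 2) Independent per-RIM dynamics; inactive RIMs keep their state (Sec. 2.1).
    h_tilde, c = independent_dynamics(A_in, h, c, mask)
    # 3) Sparse communication with a residual connection (Eqs. 5-8); the value
    #    dimension is assumed equal to the hidden dimension for the residual add.
    Q = torch.einsum('bkd,kdf->bkf', h_tilde, p.Wq_c)
    K = torch.einsum('bkd,kdf->bkf', h_tilde, p.We_c)
    V = torch.einsum('bkd,kdf->bkf', h_tilde, p.Wv_c)
    attn = torch.softmax(Q @ K.transpose(-2, -1) / K.shape[-1] ** 0.5, dim=-1)
    h_comm = attn @ V + h_tilde
    m = mask.unsqueeze(-1)
    # Only the active RIMs read from the others and get updated.
    return m * h_comm + (1 - m) * h_tilde, c
```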
The input attention closely follows the attention mechanism of (Santoro et al., 2018) but with a significant modification: all of the weights within the attention mechanism are separate per block. Thus we remove the normal linear layers and replace them with a batch matrix multiplication over the RIMs (as each block has its own weight matrix). Note that the read-key (or query) is a function of the hidden state of each RIM.
For selecting activated RIMs, we compute the top-k attention weight on the null input over the RIMs. We then select the activated RIMs by using a mask.
We compute the independent dynamics over all RIMs by using a separate LSTM for each RIM. Following this, we compute the communication between RIMs as a multi-head attention (Santoro et al., 2018), with the earlier-discussed modification of having separate weight parameters for each block, and the addition of a skip-connection around the attention mechanism. This attention mechanism used 4 heads and in general used a key size and value size of 32. We computed the updates for all RIMs but used the activated-block mask to selectively update only the activated subset of the RIMs.
The use of RIMs introduces two additional hyperparameters over an LSTM/GRU: the number of RIMs and the number of activated RIMs per step. We also observed that having too few activated RIMs tends to hurt optimization and having too many activated RIMs attenuates the improvements to generalization. For the future it would be interesting to explore dynamic ways of controlling how many RIMs to activate." }, { "heading": "C.2 DETAILED MODEL HYPERPARAMETERS", "text": "Table 3 lists the different hyperparameters." }, { "heading": "C.3 FUTURE ARCHITECTURAL CHANGES", "text": "We have not conducted systematic optimizations of the proposed architecture. We believe that even principled hyperparameter tuning may significantly improve performance for many of the tasks we have considered in the paper. We briefly mention a few architectural changes which we have studied:
• On the output side, we concatenate the representations of the different RIMs, and use the concatenated representation for learning a policy (in RL experiments) or for predicting the input at the next time step (for bouncing balls as well as all other experiments). We empirically found that adding another layer of (multi-headed) key-value attention on the output seems to improve the results. We have not included this change in the experiments reported here.
• In our experiments, we shared the same decoder for all the RIMs, i.e., we concatenate the representations of different RIMs and feed the concatenated representations to the decoder. In the future it would be interesting to explore separate decoders for different RIMs." }, { "heading": "C.4 LANGUAGE MODELING", "text": "Approach | Num. Parameters | Train PPL | Valid PPL | Test PPL
LSTM (2-layer) | 21.2M | 39.78 | 109.25 | 102.53
Relational Memory (Santoro et al., 2018) | 11M | n/a | 112.77 | 107.21
RIMs (2-layer, $k_T$ = 6, $k_A$ = 6) | 23.7M | 41.27 | 103.60 | 98.66
We investigate the task of word-based language modeling. We ran experiments on the wikitext-2 dataset (Merity et al., 2016). We ran each experiment for a fixed 100 epochs. These results are in Table 4. Our goal in this experiment is to demonstrate the breadth of the approach by showing that RIMs perform well even on datasets which are noisy and drawn from the real world." 
}, { "heading": "C.5 COPYING TASK", "text": "We used a learning rate of 0.001 with the Adam Optimizer and trained each model for 150 epochs (unless the model was stuck, we found that this was enough to bring the training error close to zero). For the RIMs model we used 600 units split across 6 RIMs (100 units per block). For the LSTM we used a total of 600 units. We did not explore this extensively but we qualitatively found that the results on copying were not very sensitive to the exact number of units.\nThe sequences to be copied first have 10 random digits (from 0-8), then a span of zeros of some length, followed by a special indicator “9” in the input which instructs the model to begin outputting the copied sequence.\nIn our experiments, we trained the models with “zero spans” of length 50 and evaluated on the model with “zero spans” of length 200. We note that all the ablations were run with the default parameters (i.e number of keys, values as for RIMs model) for 100 epochs. Tab. 5 shows the effect of two baselines as compared to the RIMs model (a) When we allow the input attention for activation of different RIMs but we dont allow different RIMs to communicate. (b) No Input attention, but we allow different RIMs to communicate with each other. Tab. 5 shows that the proposed method is better than both of these baselines. For copy task, we used 1 head in input attention, and 4 heads for RIMs communication. We note that even with 1 RIM, its not exactly same as a LSTM, because each RIM can still reference itself." }, { "heading": "C.6 SEQUENTIAL MNIST TASK", "text": "In this task we considered classifying binary MNIST digits by feeding the pixels to an RNN (in a fixed order scanning over the image). As the focus of this work is on generalization, we introduced a variant on this task where the training digits are at a resolution of 14 x 14 (sequence length of 196). We then evaluated on MNIST digits of different higher resolutions (16 x 16, 19 x 19, and 24 x 24). When re-scaling the images, we used the nearest-neighbor based down-scaling and performed binarization after re-scaling. We trained with a learning rate of 0.0001 and the Adam optimizer. For RIMs we used a total of 600 hidden units split across 6 RIMs (100 units per block). For the LSTM we used a total of 600 units. We ran proposed model as well as baselines for 100 epochs. For sequential MNIST task, we used 1 head in input attention, and 4 heads for RIMs communication.\nC.7 IMITATION LEARNING: ROBUSTNESS TO NOISE IN STATE DISTRIBUTION\nHere, we consider imitation learning where we have training trajectories generated from an expert (Table 6). We evaluate our model on continuous control tasks in Mujoco (in our case, Half-Cheetah) (Todorov et al., 2012). We take the rendered images as input and compared the proposed model with recurrent policy (i.e., LSTM). Since, using rendered image of the input does not tell anything about the velocity of the Half-Cheetah, it makes the task partially observable. In order to test how well the proposed model generalizes during test, we add some noise (in the joints of the half-cheetah body). As one can see, after adding noise LSTM baselines performs poorly. 
On the other hand, for the proposed model there is also a drop in performance, but not as bad as for the LSTM baseline.
We use the convolutional network from (Ha & Schmidhuber, 2018) as our encoder, a GRU (Chung et al., 2015) with 600 units as the deterministic path in the dynamics model, and implement all other functions as two fully connected layers of size 256 with ReLU activations. Since we use images as input, the task is partially observable. Hence, we concatenate the past 4 observations and then feed the concatenated observations to the GRU (or our model). For our model, we use 6 RIMs, each of size 100, and we set $k_A = 3$. We follow the same setting as in (Hafner et al., 2018; Sodhani et al., 2019). We also compare the proposed method to the baseline where we do not include input attention (or top-down attention). As Table 6 shows, there is a decline in performance if we do not use input attention, hence justifying the importance of the top-down input attention." }, { "heading": "C.8 GENERALIZATION TO DISTRACTORS: ALGORITHM IMPLEMENTATION DETAILS", "text": "We evaluate the proposed framework using Advantage Actor-Critic (A2C) to learn a policy $\pi_\theta(a|s, g)$ conditioned on the goal. To evaluate the performance of the proposed method, we use a range of maze multi-room tasks from the gym-minigrid framework (Chevalier-Boisvert & Willems, 2018) and the A2C implementation from (Chevalier-Boisvert & Willems, 2018). For the maze tasks, we used the agent's relative distance to the absolute goal position as the \"goal\".
For the maze environments, we use A2C with 48 parallel workers. Our actor and critic networks consist of two and three fully connected layers respectively, each of which has 128 hidden units. The encoder network is also parameterized as a neural network, which consists of 1 fully connected layer. We use RMSProp with an initial learning rate of 0.0007 to train the models. Due to the partially observable nature of the environment, we further use an LSTM to encode the state and summarize the past observations." }, { "heading": "C.9 MINIGRID ENVIRONMENTS FOR OPENAI GYM", "text": "The MultiRoom environments used for this research are part of MiniGrid, which is an open-source gridworld package². This package includes a family of reinforcement learning environments compatible with the OpenAI Gym framework. Many of these environments are parameterizable so that the difficulty of tasks can be adjusted (e.g., the size of rooms is often adjustable)." }, { "heading": "C.9.1 THE WORLD", "text": "In MiniGrid, the world is a grid of size N×N. Each tile in the grid contains exactly zero or one object. The possible object types are wall, door, key, ball, box and goal. Each object has an associated discrete color, which can be one of red, green, blue, purple, yellow and grey. By default, walls are always grey and goal squares are always green." }, { "heading": "C.9.2 REWARD FUNCTION", "text": "Rewards are sparse for all MiniGrid environments. In the MultiRoom environment, episodes are terminated with a positive reward when the agent reaches the green goal square. Otherwise, episodes are terminated with zero reward when a time step limit is reached. In the FindObj environment, the agent receives a positive reward if it reaches the object to be found, otherwise zero reward if the time step limit is reached.
The formula for calculating positive sparse rewards is 1 − 0.9 · (step_count / max_steps). 
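In code, this reward is simply the following (an illustrative helper, not part of the MiniGrid API):

```python
def minigrid_reward(step_count, max_steps, success):
    # Sparse reward: positive only on success, larger for faster episodes.
    return 1.0 - 0.9 * (step_count / max_steps) if success else 0.0
```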
That is, rewards are always between zero and one, and the quicker the agent can successfully complete an episode, the closer to 1 the reward will be. The max_steps parameter is different for each environment and varies depending on the size of each environment, with larger environments having a higher time step limit." }, { "heading": "C.9.3 ACTION SPACE", "text": "There are seven actions in MiniGrid: turn left, turn right, move forward, pick up an object, drop an object, toggle and done. For the purpose of this paper, the pick up, drop and done actions are irrelevant. The agent can use the turn left and turn right actions to rotate and face one of 4 possible directions (north, south, east, west). The move forward action makes the agent move from its current tile onto the tile in the direction it is currently facing, provided there is nothing on that tile, or that the tile contains an open door. The agent can open doors if they are right in front of it by using the toggle action.
²https://github.com/maximecb/gym-minigrid" }, { "heading": "C.9.4 OBSERVATION SPACE", "text": "Observations in MiniGrid are partial and egocentric. By default, the agent sees a square of 7×7 tiles in the direction it is facing. These include the tile the agent is standing on. The agent cannot see through walls or closed doors. The observations are provided as a tensor of shape 7×7×3. However, note that these are not RGB images. Each tile is encoded using 3 integer values: one describing the type of object contained in the cell, one describing its color, and a flag indicating whether doors are open or closed. This compact encoding was chosen for space efficiency and to enable faster training. The fully observable RGB image view of the environments shown in this paper is provided for human viewing." }, { "heading": "C.9.5 LEVEL GENERATION", "text": "The level generation in this task works as follows: (1) generate the layout of the map (X rooms of different sizes, each of at most size Y, plus the green goal); (2) add the agent to the map at a random location in the first room; (3) add the goal at a random location in the last room. A neural network parameterized as a CNN is used to process the visual observation.
We follow the same architecture as (Chevalier-Boisvert & Willems, 2018) but we replace the LSTM layer with BlockLSTM." }, { "heading": "C.10 BOUNCING BALLS", "text": "We use the bouncing-ball dataset from (Van Steenkiste et al., 2018). The dataset consists of 50,000 training examples and 10,000 test examples showing ∼50 frames of either 4 solid balls bouncing in a confined square geometry, 6-8 balls bouncing in a confined geometry, or 3 balls bouncing in a confined geometry with a random occluded region. In all cases, the balls bounce off the wall as well as off one another. We train the baselines as well as the proposed model for about 100 epochs, using a learning rate of 0.0007 and the Adam optimizer (Kingma & Ba, 2014). We use the same architecture for the encoder and decoder as in (Van Steenkiste et al., 2018). Our goal in this section is to give more thorough experimental results that were omitted from the main paper for the sake of brevity. Below, we highlight a few different results." }, { "heading": "C.10.1 DIFFERENT RIMS ATTEND TO DIFFERENT BALLS", "text": "In order to visualize what each RIM is doing, we associate each RIM with a different encoder. By performing spatial masking on the input, we can control the possible spatial input to each RIM. 
We use six non-overlapping horizontal strips and allow only 4 RIMs to be active at a time (shown in Fig. 8). The mask is a fixed mask of zeros with a band of ones that is multiplied by the input to each encoder. Therefore, each of the 6 encoders gets 1/6th of the input. The goal was to see how the RIM activation patterns changed/correlated with the locations of the balls. We find that early in training, the RIMs’ activations are strongly correlated with the locations of the 4 balls. However, after training has proceeded for some time this correlation deteriorates. This is likely because the predictable dynamics of the system do not necessitate constant attention." }, { "heading": "C.10.2 COMPARISON WITH LSTM BASELINES", "text": "In Figures 9, 10, 11, and 12 we highlight different baselines and how these compare to the proposed RIMs model." }, { "heading": "C.10.3 OCCLUSION", "text": "In Fig. 13, we show the performance of RIMs on the curtain dataset. We find RIMs are able to track balls through the occlusion without difficulty. Note that the LSTM baseline is also able to track the ball through the “invisible” curtain." }, { "heading": "C.10.4 STUDY OF TRANSFER", "text": "It is interesting to ask how models trained on a dataset with 6-8 balls perform on a dataset with 4 balls. In Fig. 14 we show predictions during feed-in and rollout phases." }, { "heading": "C.11 ABLATIONS", "text": "We present one ablation in addition to the ones in Section 4.4. In this experiment, we study the effect of input attention (i.e., top-down attention) as well as the use of multi-headed key-value attention. We compare the proposed model (with input attention as well as multi-headed key-value attention) with 2 baselines: (a) we remove the input attention and force all the RIMs to communicate with each other; (b) we use 1 head for key-value attention instead of multi-headed key-value attention. Results comparing the proposed model with these two baselines are shown in Fig. 15.
In Fig. 16, we show the predictions that result from the model with only one active head." }, { "heading": "C.12 ATARI", "text": "We used the open-source implementation of PPO from (Kostrikov, 2018) with default parameters. We ran the proposed algorithm with 6 RIMs and kept the number of activated RIMs at 4 or 5. We did not do any hyper-parameter search for the Atari experiments." }, { "heading": "C.12.1 TRANSFER ON ATARI", "text": "As a very preliminary result, we investigate feature transfer between randomly selected Atari games. In order to study this question, we follow the experimental protocol of Rusu et al. (2016).
We start by training RIMs on three source games (Pong, River Raid, and Seaquest) and test if the learned features transfer to a different subset of randomly selected target games (Alien, Asterix, Boxing, Centipede, Gopher, Hero, James Bond, Krull, Robotank, Road Runner, Star Gunner, and Wizard of Wor). We observe that RIMs result in positive transfer in 9 out of 12 target games, with three cases of negative transfer. On the other hand, progressive networks (Rusu et al., 2016) result in positive transfer in 8 out of 12 target games, and two cases of negative transfer. We also compare to an LSTM baseline, which yields positive transfer in 3 of 12 games." }, { "heading": "C.13 BOUNCING MNIST: DROPPING OUT RIMS", "text": "We use the Stochastic Moving MNIST (SM-MNIST) (Denton & Fergus, 2018) dataset, which consists of sequences of frames of size 64 × 64, containing one or two MNIST digits moving and bouncing off the walls. 
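Before giving the training details, here is an illustrative sketch of the test-time knockout protocol used throughout this subsection; `model.init_state` and `model.step` are assumed interfaces, and the per-RIM state layout is our own convention rather than the exact implementation.

```python
import torch

def evaluate_with_rim_knockout(model, frames, drop_k, warmup=5, horizon=25):
    # Feed ground truth for `warmup` frames, then roll out `horizon` frames,
    # zeroing the hidden state of RIM `drop_k` after every step.
    h = model.init_state(batch_size=frames.shape[0])   # (batch, num_rims, d_h)
    x = frames[:, 0]
    for t in range(warmup):
        x, h = model.step(frames[:, t], h)
        h[:, drop_k] = 0.0
    rollout = []
    for _ in range(horizon):
        x, h = model.step(x, h)
        h[:, drop_k] = 0.0
        rollout.append(x)
    return torch.stack(rollout, dim=1)
```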
Training sequences are generated on the fly by sampling two different MNIST digits from the training set (60k total digits) and two distinct trajectories.\nHere, we show the effect of masking out a particular RIM and study the effect of the masking on the ensemble of RIMs. Ideally, we would want different RIMs not to co-adapt with each other, so masking out a particular RIM should not substantially affect the dynamics of the entire model. We show qualitative comparisons in Fig. 19, 20, 21, 22, 23. In each of these figures, the model gets the ground-truth image as input for the first 5 time steps, and is then asked to simulate the dynamics for the next 25 time steps. We find that sparsity is needed; otherwise, different RIMs co-adapt with each other (e.g., see Fig. 20, 22, 23). We tried similar masking experiments for different models like RMC, Transformers, EntNet (which learns a mixture of experts), and LSTMs, but all of them failed to do anything meaningful after masking. We suspect this is partly due to learning a homogeneous network." }, { "heading": "C.13.1 ATARI RESULTS: COMPARISON WITH LSTM-PPO", "text": "" }, { "heading": "C.13.2 ATARI RESULTS: NO INPUT ATTENTION", "text": "Here we compare the proposed method to the baseline where we don't use input attention and force the different RIMs to communicate with each other at all time steps." } ]
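To make the drop-off experiments above concrete, here is a minimal, hypothetical sketch of masking out one RIM during rollout. The names (`step_fn`, the hidden-state layout `(batch, num_rims, rim_dim)`) are illustrative assumptions, not the authors' actual API; the point is only that one module's hidden state is zeroed so its contribution to the ensemble is removed.

```python
import torch

def rollout_with_dropped_rim(step_fn, frames, hidden, dropped_rim, horizon=25):
    """Feed in ground-truth frames, then roll out while masking one RIM.

    step_fn:     callable (frame, hidden) -> (prediction, new_hidden)
    frames:      tensor (T_feed, batch, C, H, W) of ground-truth inputs
    hidden:      tensor (batch, num_rims, rim_dim)
    dropped_rim: index of the module to silence
    """
    mask = torch.ones_like(hidden)
    mask[:, dropped_rim, :] = 0.0          # permanently silence one module

    pred = None
    for t in range(frames.shape[0]):       # feed-in phase (ground truth)
        pred, hidden = step_fn(frames[t], hidden * mask)

    preds = []
    for _ in range(horizon):               # rollout phase (self-feeding)
        pred, hidden = step_fn(pred, hidden * mask)
        preds.append(pred)
    return torch.stack(preds)
```

If the modules have not co-adapted, predictions from this masked rollout should stay close to the unmasked ones, which is the qualitative comparison made in Fig. 19-23.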
2019
RECURRENT INDEPENDENT MECHANISMS
SP:6283566c3a3868af63b1721a727aad52eab1aec8
[ "The paper proposes a deep reinforcement learning algorithm of advantage actor-critic (A2C), which firstly learns the causal structure of the environment and then leverages the learned causal information to assist policy learning. The causal structure is computed by calculating Average Causal Effect (ACE) between different categories of entities, and the authors assume that an intrinsic reward is given to encourage the agent to interact more with critical entities of the causal graph. Two experiments were conducted on simulation environments (Shepherd and Ant tasks) and demonstrated the effectiveness in obtaining the rewards and interpretable and accurate detection of the true graph." ]
Reinforcement Learning (RL) has shown great potential for dealing with sequential decision-making problems. However, most RL algorithms do not explicitly consider the relations between entities in the environment. This makes policy learning suffer in terms of efficiency, effectiveness and interpretability. In this paper, we propose a novel deep reinforcement learning algorithm, which first learns the causal structure of the environment and then leverages the learned causal information to assist policy learning. The proposed algorithm learns a graph to encode the environmental structure by calculating the Average Causal Effect (ACE) between different categories of entities, and an intrinsic reward is given to encourage the agent to interact more with entities belonging to top-ranked categories, which significantly boosts policy learning. Several experiments are conducted on a number of simulation environments to demonstrate the effectiveness and better interpretability of our proposed method.
[]
[ { "authors": [ "William Agnew", "Pedro Domingos" ], "title": "Relevance-guided modeling of object dynamics for reinforcement learning", "venue": null, "year": 2003 }, { "authors": [ "Prithviraj Ammanabrolu", "Mark Riedl" ], "title": "Playing text-adventure games with graph-based deep reinforcement learning", "venue": "In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,", "year": 2019 }, { "authors": [ "Christopher Berner", "Greg Brockman", "Brooke Chan", "Vicki Cheung", "Przemysław Debiak", "Christy Dennison", "David Farhi", "Quirin Fischer", "Shariq Hashme", "Chris Hesse" ], "title": "Dota 2 with large scale deep reinforcement learning", "venue": "arXiv preprint arXiv:1912.06680,", "year": 2019 }, { "authors": [ "Aditya Chattopadhyay", "Piyushi Manupriya", "Anirban Sarkar", "Vineeth N Balasubramanian" ], "title": "Neural network attributions: A causal perspective", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Kaiming He", "Georgia Gkioxari", "Piotr Dollár", "Ross Girshick" ], "title": "Mask r-cnn", "venue": "In IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Nicolas Heess", "Gregory Wayne", "David Silver", "Timothy P. Lillicrap", "Tom Erez", "Yuval Tassa" ], "title": "Learning continuous control policies by stochastic value gradients", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2015 }, { "authors": [ "Pieter-Jan Kindermans", "Sara Hooker", "Julius Adebayo", "Maximilian Alber", "Kristof T Schütt", "Sven Dähne", "Dumitru Erhan", "Been Kim" ], "title": "The (un) reliability of saliency methods", "venue": "In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning,", "year": 2019 }, { "authors": [ "Shingo Mabu", "Kotaro Hirasawa", "Jinglu Hu" ], "title": "A graph-based evolutionary algorithm: Genetic network programming (GNP) and its extension using reinforcement learning", "venue": "Evolutionary Computation,", "year": 2007 }, { "authors": [ "Sridhar Mahadevan", "Mauro Maggioni" ], "title": "Proto-value functions: A laplacian framework for learning representation and control in markov decision processes", "venue": "Journal of Machine Learning Research,", "year": 2007 }, { "authors": [ "Jan Hendrik Metzen" ], "title": "Learning graph-based representations for continuous reinforcement learning domains. In Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2013, Prague, Czech Republic", "venue": "September 23-27,", "year": 2013 }, { "authors": [ "V Mnih", "K Kavukcuoglu", "D Silver", "A.A. Rusu", "J Veness", "M.G. Bellemare", "A Graves", "M Riedmiller", "A.K. Fidjeland", "G Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Anusha Nagabandi", "Gregory Kahn", "Ronald S. 
Fearing", "Sergey Levine" ], "title": "Neural network dynamics for model-based deep reinforcement learning with model-free fine-tuning", "venue": "In International Conference on Robotics and Automation (ICRA),", "year": 2018 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2015 }, { "authors": [ "Martin A. Riedmiller", "Roland Hafner", "Thomas Lampe", "Michael Neunert", "Jonas Degrave", "Tom Van de Wiele", "Vlad Mnih", "Nicolas Heess", "Jost Tobias Springenberg" ], "title": "Learning by playing solving sparse reward tasks from scratch", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Nicolas Heess", "Jost Tobias Springenberg", "Josh Merel", "Martin A. Riedmiller", "Raia Hadsell", "Peter W. Battaglia" ], "title": "Graph networks as learnable physics engines for inference and control", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Wenjie Shi", "Zhuoyuan Wang", "Shiji Song", "Gao Huang" ], "title": "Self-supervised discovering of causal features: Towards interpretable reinforcement learning", "venue": "arXiv preprint arXiv:2003.07069,", "year": 2020 }, { "authors": [ "Farzaneh Shoeleh", "Masoud Asadpour" ], "title": "Graph based skill acquisition and transfer learning for continuous reinforcement learning domains", "venue": "Pattern Recognition Letters,", "year": 2017 }, { "authors": [ "D Silver", "A. Huang", "C.J. Maddison", "A Guez", "L Sifre", "den Driessche G Van", "J Schrittwieser", "I Antonoglou", "V Panneershelvam", "M Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. Nature,", "year": 2016 }, { "authors": [ "David Silver", "Richard S. Sutton", "Martin Müller" ], "title": "Sample-based learning and search with permanent and transient memories", "venue": "In International Conference on Machine Learning (ICML),", "year": 2008 }, { "authors": [ "Mukund Sundararajan", "Ankur Taly", "Qiqi Yan" ], "title": "Axiomatic attribution for deep networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Richard S. Sutton" ], "title": "Dyna, an integrated architecture for learning, planning, and reacting", "venue": "SIGART Bulletin,", "year": 1991 }, { "authors": [ "Richard S Sutton", "Andrew G Barto" ], "title": "Reinforcement learning: An introduction", "venue": "MIT press,", "year": 2018 }, { "authors": [ "Varun Kumar Vijay", "Abhinav Ganesh", "Hanlin Tang", "Arjun Bansal" ], "title": "Generalization to novel", "venue": "Processing Systems (NIPS),", "year": 2000 }, { "authors": [ "Tingwu Wang", "Renjie Liao", "Jimmy Ba", "Sanja Fidler" ], "title": "starcraft ii using multi-agent reinforcement learning", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) is a powerful approach towards dealing with sequential decisionmaking problems. Combined with deep neural networks, deep reinforcement learning (DRL) has been applied in a variety of fields such as playing video games (Mnih et al., 2015; Vinyals et al., 2019; Berner et al., 2019), mastering the game of Go (Silver et al., 2016) and robotic control (Riedmiller et al., 2018). However, current DRL algorithms usually learn a black box policy approximated by a deep neural network directly using the state transitions and reward signals, without explicitly understanding the structure information of the environment.\nCompared with DRL agents, an important reason why humans are believed to be better at learning is the ability to build model on the relations between entities in the environment and then reason based on it. This ability is an important component of human cognition (Spelke & Kinzler, 2007). As the learning process continues, through interactions with the environment and observations of it, human can gradually understand its actions’ causal effects on the entities as well as the relations between entities and then reason based on them to figure it out the most important actions to take in order to improve the efficiency. In scenarios that contain multiple entities with complicated relations, optimal policy may be obtained only when the structured relation information is captured and exploited. However, most current DRL algorithms do not consider structured relation information explicitly. The knowledge learned by an agent is implicitly entailed in the policy or action-value function, which are usually unexplainable neural networks. Therefore, whether the relations are well understood and exploited by the agent is unknown. When the environment is with high complexity, blackbox learning of policies suffers from low efficiency, while policy learning over explicit representation of entity relations can significantly boost the learning efficiency. Based on the fact that entities in an environment are often not independent but causally related, we argue that disentangling the learning task into two sequential tasks, namely relational structure learning and policy learning, and leveraging an explicit environmental structure model to facilitate the policy learning process of DRL agents are expected to boost the performance. With the learned relational structure information, the agent performs exploration with a tendency of prioritizing interaction with critical entities, which is encouraged by intrinsic rewards, to learn optimal policy effectively.\nTaking this inspiration, we propose a deep reinforcement learning algorithm which firstly learns the relations between entities and then recognize critical entity categories and develop an intrinsic reward based approach to improve policy learning efficiency and explainability. The proposed algo-\nrithm learns a graph to encode the relation information between categories of entities, by evaluating causal effect of one category of entities to another. Thereafter, intrinsic reward based on the learned graph is given to an agent to encourage it to prioritize interaction with entities belonging to important categories (the categories that are root causes in the graph). Previous works also use graphs to provide additional structured information for the agent to assist policy learning (Wang et al., 2018; Vijay et al., 2019). 
However, graphs leveraged by these works are provided by humans and thus rely heavily on prior knowledge. Compared with their methods, our algorithm overcomes the deficiency that the graph cannot be generated automatically. Our approach requires no prior knowledge and can be combined with existing policy-based or value-based DRL algorithms to boost their learning performance. The key contributions of this work are summarized as follows:\n• We propose a novel causal RL framework that decomposes the whole task into structure learning and causal-structure-aware policy learning.\n• The learned causal information is leveraged by giving a causality-based intrinsic reward to the agent, to encourage it to interact with entities belonging to critical categories for accomplishing the task.\n• We design two new game tasks which contain multiple entities with causal relations, as benchmarks to be released to the community. The new benchmarks are designed in such a way that categories of objects are causally related. Experiments are conducted on our designed simulation environments, which show that our algorithm achieves state-of-the-art performance and can facilitate the learning process of DRL agents under other algorithmic frameworks.\nThe paper is organized as follows. In Section 2, we introduce deep reinforcement learning and the Average Causal Effect (ACE), which are key components of this work. Then we illustrate our algorithm in Section 3 in detail. In Section 4, we show the experimental results on the designed environments to demonstrate the effectiveness of our framework. In Section 5, we introduce previous works that relate to our method. Finally, conclusions and future work are provided in Section 6." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 DEEP REINFORCEMENT LEARNING", "text": "An MDP can be defined by a 5-tuple $(S, A, P, R, \gamma)$, where S is the state space, A is the action space, P is the transition function, R is the reward function and $\gamma$ is the discount factor (Sutton & Barto, 2018). An RL agent observes a state $s_t \in S$ at time step t. Then it selects an action $a_t$ from the action space A following a policy $\pi(a_t|s_t)$, which is a mapping from the state space to the action space. After taking the action, the agent receives a scalar reward $r_t$ according to $R(s_t, a_t)$. Then the agent transits to the next state $s_{t+1}$ according to the state transition probability $P(s_{t+1}|s_t, a_t)$. An RL agent aims to learn a policy that maximizes the cumulative discounted reward, which can be formulated as $R_t = \sum_{k=0}^{T} \gamma^k r_{t+k}$, where T is the length of the whole episode. In the process of learning an optimal policy, an RL agent generally approximates the state-value function $V_\pi(s)$ or the action-value function $Q_\pi(s, a)$. The state-value function is the expected cumulative future discounted reward from a state with actions sampled from a policy $\pi$:\n$V_\pi(s) = \mathbb{E}_\pi\big[\sum_{k=0}^{T} \gamma^k r_{t+k} \mid S_t = s\big]$. (1)\nDeep Reinforcement Learning (DRL), which combines Deep Neural Networks (DNNs) with RL, can be an effective way to deal with high-dimensional state spaces. It benefits from the representation ability of DNNs, which enable automatic feature engineering and end-to-end learning through gradient descent.\nSeveral effective algorithms have been proposed in the literature, and we use A2C in this paper as our basic algorithm, which is a synchronous version of A3C (Mnih et al., 2016). A2C belongs to the family of actor-critic algorithms (Sutton et al., 2000). 
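As a small illustration of the return definition above (not from the paper itself), the discounted return $R_t = \sum_{k=0}^{T}\gamma^k r_{t+k}$ can be computed for a whole episode with a single backward pass:

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute R_t = sum_k gamma^k * r_{t+k} for every step of one episode."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):   # accumulate from the episode end
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Example: a 4-step episode with a reward of 1 at the third step.
print(discounted_returns([0.0, 0.0, 1.0, 0.0], gamma=0.9))  # [0.81, 0.9, 1.0, 0.0]
```

Estimates of $V_\pi$ in Eq. (1) are averages of exactly these quantities, and A2C builds its advantage estimates on the same definitions.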
It directly optimizes the policy $\pi_\theta$ parameterized by $\theta$ to maximize the objective $J(\theta) = \mathbb{E}_\pi[\sum_{k=0}^{T} \gamma^k r_{t+k}]$ by taking steps in the direction of $\nabla_\theta J(\theta)$. The gradient of the policy can be written as:\n$\nabla_\theta J(\theta) = \mathbb{E}_\pi[\nabla_\theta \log \pi_\theta(a|s) A^\pi(s, a)]$, (2)\nwhere $A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s)$ is the advantage function. The advantage function can be estimated by the one-step TD error $\hat{A}^\pi(s_t, a_t) = r_t + \gamma V_\phi(s_{t+1}) - V_\phi(s_t)$, where $V_\phi(s)$ is the approximation of the state-value function $V^\pi(s)$ parameterized by $\phi$." }, { "heading": "2.2 CAUSAL NEURAL NETWORK ATTRIBUTIONS", "text": "Attributions are defined as the effect of an input feature on the prediction function's output (Sundararajan et al., 2017). Chattopadhyay et al. (2019) propose a neural network attribution methodology built from first principles of causality: they view the neural network as a Structural Causal Model (SCM), and propose a new method to compute the Average Causal Effect of an input neuron on an output neuron based on the do(·) calculus (Pearl, 2009). Definition 1. (Average Causal Effect). The Average Causal Effect (ACE) of a binary random variable x on another random variable y is commonly defined as $\mathbb{E}[y|do(x = 1)] - \mathbb{E}[y|do(x = 0)]$.\nWhile the above definition is for binary-valued random variables, the domain of the function learnt by neural networks is usually continuous. Given a neural network with input layer $l_1$ and output layer $l_n$, we hence measure the ACE of an input feature $x_i \in l_1$ with value $\alpha$ on an output feature $y \in l_n$ as: $ACE^{y}_{do(x_i = \alpha)} = \mathbb{E}[y|do(x_i = \alpha)] - baseline_{x_i}$. (3) Definition 2. (Causal Attribution). The causal attribution of input neuron $x_i$ for an output neuron y is defined as $ACE^{y}_{do(x_i = \alpha)}$.\nIn Equation 3, an ideal baseline would be any point along the decision boundary of the neural network, where predictions are neutral. However, Kindermans et al. (2019) showed that when a reference baseline is fixed to a specific value (such as a zero vector), attribution methods are not affine invariant. Therefore, Chattopadhyay et al. (2019) propose the average ACE of $x_i$ on y as the baseline value for $x_i$:\n$baseline_{x_i} = \mathbb{E}_{x_i}[\mathbb{E}_{y}[y|do(x_i = \alpha)]]$ (4)\nIn this paper, we use the causal attribution method to infer the relations between entities." }, { "heading": "3 METHOD", "text": "In this section, we present a novel DRL framework named CARE (CAusal RElation) that enables the agent to infer the causal relationships between entities. Figure 1 illustrates the overall framework of CARE. CARE adopts a two-stage training paradigm. In the first stage, the agent learns a model of the environment by minimizing the prediction error of the state transitions. Then, we calculate ACE values between categories of entities, which are used for constructing a causal graph G. When G is at hand, we are able to obtain the causal ordering of all categories, that is, a permutation such that nodes ranked lower cannot cause nodes ranked higher. This order is used to measure the importance of the categories, and an intrinsic reward is given based on the causal ordering. Specifically, the agent receives a one-step "enhanced" reward $r_t$ where:\n$r_t = r^{G}_t + r^{ext}_t$, (5)\nwhere $r^{ext}_t$ is the original extrinsic reward given by the environment, and $r^{G}_t$ is the intrinsic reward, designed by the learning algorithm to encourage the agent to maximize the effect of its behaviour on the change of states of critical entities. We describe the details in later sections.\nThis section is organized as follows. 
In Section 3.1, we first introduce category-oriented state factorization. In Section 3.2, we describe how to get the relation graph G, and in Section 3.3, we show how to calculate the intrinsic reward $r^{G}_t$." }, { "heading": "3.1 CATEGORY-ORIENTED STATE FACTORIZATION", "text": "We focus on environments which contain multiple categories of entities. The category of an entity is determined by the acting rules that govern the entity's actions. Each entity belongs to exactly one category, and the entities within one category share the same acting rules. An example of entity categories appears in the experimental section. Consider an environment consisting of two kinds of sheep: one is the ewe, which takes a random walk; the other is the lamb, which always follows the ewe. Then there are two categories of entities in the environment. In this paper, we infer the causal relations among categories. Another view is to infer the causal graph among all entities in the environment. However, a certain abstraction of entities is beneficial and simplifies the learning, because quite often in a dynamic and interactive environment, entities can pop up or disappear as the result of actions taken by the agent or of the environmental evolution. Therefore, maintaining a graph with changing nodes could be quite challenging, rendering the learning algorithm unnecessarily complicated. We thus choose category-level causality inference for scalability. The category of each entity is given as a prior, or generated by applying computer vision techniques such as unsupervised or pretrained object segmentation models (Agnew & Domingos, 2020) or shape and color analysis (Ren et al., 2015; He et al., 2017). We introduce a factored space to represent the states. The factored state space S consisting of entities of K categories is $S = S^1 \times \cdots \times S^K$, where $S^i$ is the state space of the i-th category. At time t, the state of the entities of the i-th category is $s^i_t \in S^i$, and the state $s_t \in S$ of all entities is composed of the local states of all categories, $s_t = [s^1_t, s^2_t, \ldots, s^K_t]$. This factorization ensures that each category of entities is independently represented. In this paper, the state is represented using a K-channel feature map, with each channel corresponding to one category of entities. More details can be found in the Appendix.\n3.2 CAUSAL RELATION ANALYSIS\nIn this section, we demonstrate how to obtain the causal graph. First, we learn a model of the environment, which predicts the next-step state of each category of entities. Thereafter, we perform average causal effect analysis between each pair of categories. Namely, conditioning on all other categories, we compute a measurement quantifying the influence of one category on the other. Based on this, we are able to recover the whole causal graph and the causal ordering of categories. Our hypothesis is that vertices with higher ranking are more important in the environment, and we will give intrinsic rewards based on the influence of the agent's actions on different categories of entities." }, { "heading": "Environment Model Learning", "text": "To learn the environment model, we first use a random agent to interact with the environment to collect a buffer of experience $B = \{(s_t, a_t, s_{t+1})\}_{t=1}^{T}$. It contains T tuples of states $s_t \in S$, actions $a_t \in A$, and follow-up states $s_{t+1} \in S$, which are reached after taking action $a_t$. Our goal is to predict the next-step state of each category of entities so as to understand the environment, without training the agent's policy. 
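A minimal sketch of this buffer-collection step, assuming a Gym-style `reset`/`step` environment interface that returns the K-channel factored state of Section 3.1 (the interface and names are assumptions for illustration):

```python
import random

def collect_buffer(env, num_steps, num_actions):
    """Collect B = {(s_t, a_t, s_{t+1})} with a uniformly random policy."""
    buffer = []
    state = env.reset()                            # K x H x W factored state
    for _ in range(num_steps):
        action = random.randrange(num_actions)     # random agent
        next_state, reward, done, _ = env.step(action)
        buffer.append((state, action, next_state))
        state = env.reset() if done else next_state
    return buffer
```

Note that the reward is ignored here: the buffer is used only to fit the transition model, not to train a policy.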
It should be noted that our model is only used to analyze the relations between categories of entities, i.e., the causal ordering of all entity categories, instead of using the model for planning as in model-based RL. Therefore, our method does not require extremely high model accuracy.\nOur model employs an encoder-decoder architecture (see Figure 2). The encoder is a CNN operating directly on observations from the environment, and the output of the encoder $z_t = Enc(s_t, a_t)$ is flattened to a vector. The decoder is composed of several deconvolutional layers. There are K decoders $Dec^1, \ldots, Dec^K$, each corresponding to a category of entities, which take the encoded vector as input and predict the next state of the corresponding category of entities:\n$\hat{s}^k_{t+1} = Dec^k(z_t)$ (6)\nThe model is trained by minimizing the following loss:\n$L = \sum_{k=1}^{K} d(s^k_{t+1}, \hat{s}^k_{t+1})$ (7)\nwhere $d(s^k_{t+1}, \hat{s}^k_{t+1})$ denotes the distance between the predicted next state $\hat{s}^k_{t+1}$ and the true one." }, { "heading": "Calculating ACE", "text": "After the environment model is trained, we can calculate the ACE values of each pair of categories following the method described in Section 2.2, which computes the ACE by intervening on the states of each category of entities. Specifically, the ACE of a category i on another category j can be calculated by:\n$ACE^{s^j_{t+1}}_{do(s^i_t = s^i_\tau)} = \mathbb{E}[s^j_{t+1}|do(s^i_t = s^i_\tau)] - baseline_{s^i_t}$ (8)\nHere $s^i_\tau \in S^i$ is the interventional state. $do(s^i_t = s^i_\tau)$ means that we manually set $s^i_t$ to another state $s^i_\tau$, known as the do-calculus in causal analysis. The $baseline_{s^i_t}$ is calculated by:\n$baseline_{s^i_t} = \mathbb{E}_{s^i_t}[\mathbb{E}_{s^j_{t+1}}[s^j_{t+1}|do(s^i_t = s^i_\tau)]]$ (9)\nBy definition, the interventional expectation $\mathbb{E}[s^j_{t+1}|do(s^i_t = s^i_\tau)]$ is written as\n$\mathbb{E}[s^j_{t+1}|do(s^i_t = s^i_\tau)] = \int s^j_{t+1}\, p(s^j_{t+1}|do(s^i_t = s^i_\tau))\, ds^j_{t+1}$ (10)\nComputing the integral is intractable because the exact distribution of $s^j_{t+1}$ is unknown. Thus, we approximate Equation 10 by empirical historical distribution sampling:\n$\mathbb{E}[s^j_{t+1}|do(s^i_t = s^i_\tau)] \approx \frac{1}{N}\sum_{(s_m, a_m, s_{m+1}) \in B_N} \hat{s}^j_{m+1}$ (11)\nwhere $\hat{s}^j_{m+1} = Dec^j(Enc(\dot{s}^{(i)}_m, a_m))$ is the predicted next state of category j and $\dot{s}^{(i)}_m = [s^1_m, \ldots, s^i_\tau, \ldots, s^K_m]$ is the interventional state which sets $s^i_m = s^i_\tau$ while leaving the other categories unchanged. $B_N \subseteq B$ is a batch of experience sampled from the buffer with sample size N. The maximal ACE value is used as the final effect of category i on category j:\n$ACE_{i \to j} = \max_{s^i_\tau \in S^i}\big(ACE^{s^j_{t+1}}_{do(s^i_t = s^i_\tau)}\big)$ (12)\nIn practice, it is also computed by sampling from the set of historical states of category i.\nAfter getting the pairwise ACE values, we are able to get the causal graph $G = (V, E)$ of all categories of entities. V is the set of all vertices and each vertex represents a category of entities. E is the set of all edges, and $e_{ij} \in E$ represents that category i causes category j. Let H be the $K \times K$ adjacency matrix of G, which is obtained by the edge-directing rule:\n$H_{ij} = \begin{cases} 1, & \text{if } ACE_{i \to j} > ACE_{j \to i} \\ 0, & \text{otherwise} \end{cases}$ (13)\nSince G is assumed to be a directed acyclic graph (DAG), there are no feedback loops or any path starting from a category i and leading back to itself. Consequently, there exists a causal ordering of all vertices. A causal ordering is a permutation $\mu$ of all vertex indices $\{1, \ldots, K\}$, where vertices ranked higher cannot be caused by ones ranked lower. The nodes ranked higher are hypothetically more critical entity categories for the task. This will be used for designing the intrinsic reward. Based on this, we define the criticality of an entity category. Definition 3. (Criticality of Entity Category). 
The criticality of an entity category is defined as the ranking of the category in the causal ordering $\mu$." }, { "heading": "3.3 INTRINSIC REWARD", "text": "We encourage the tendency to prioritize critical entities by giving intrinsic rewards to the agent in addition to the original extrinsic reward during policy learning. The basic idea is that actions that have a relatively large effect on entities whose category ranks higher in the causal ordering are rewarded. Based on the learned model described in Section 3.2, we define the effect $I_i(s_t, a_t)$ of the agent's behavior on the i-th category of entities, in analogy to the ACE, as:\n$I_i(s_t, a_t) = f_i(s_t, a_t) - \frac{1}{|A|}\sum_{a \in A} f_i(s_t, a)$ (14)\nHere $f_i(s_t, a_t) = Dec^i(Enc(s_t, a_t))$ denotes the learned model and $|A|$ is the size of the action space. This method calculates the effect of the agent's action on a certain category of entities, and the second term $\frac{1}{|A|}\sum_{a \in A} f_i(s_t, a)$ in Equation 14 serves as the baseline when calculating the ACE. The intrinsic reward is defined as $r^{G}_t = \sum_{i=1}^{K} r^{G,i}_t$. For each category, the intrinsic reward is:\n$r^{G,i}_t = \begin{cases} \beta_i, & \text{if } I_i(s_t, a_t) > \delta \\ 0, & \text{otherwise} \end{cases}$ (15)\nHere $\beta_i$ and $\delta$ are hyperparameters. It is constrained that the rewards of categories along the causal ordering are non-increasing, $\beta_{\mu_1} \ge \beta_{\mu_2} \ge \cdots \ge \beta_{\mu_K}$, where $\mu_i$ corresponds to position i in the causal ordering." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "In this section, we evaluate CARE on two test domains (see Figure 3), Shepherd and Ant, where an agent is required to have the ability of relational reasoning to learn an ideal policy. We compare CARE with a flat DRL method (A2C) and Relational DRL (Zambaldi et al., 2018), which is an A2C-based algorithm that uses multi-head attention to extract relations between entities. Experiment details can be found in the Appendix." }, { "heading": "4.1 SHEPHERD", "text": "In the Shepherd game (Figure 3, left), a shepherd is expected to drive the ewe and the lambs, which are randomly distributed in the ranch at the beginning, back into the sheepfold. The agent's objective is to finish the task using as little time as possible. Unless the shepherd gets close and drives the sheep, the ewe will walk around the ranch and the lambs will follow the ewe. The shepherd is viewed as the agent here, and it has five actions: up, down, left, right and drive. Each action makes the agent move one step in the corresponding direction, except for drive. If the agent takes the drive action, sheep near the agent will be driven to move towards the sheepfold. The game ends when all sheep are in the sheepfold or the time exceeds a fixed period. At every step, the agent receives a reward equal to the negative sum of the distances from each sheep to the sheepfold. Here we have K = 2 categories of entities.\nWe first evaluate our algorithm in this game. The experimental result is shown in Figure 4(a). The result shows that our method converges to a better policy with a higher mean episode return than the other methods. The flat A2C agent can also learn a relatively good policy with slightly worse performance compared with our method, but it takes a longer time. This result shows that understanding the causal relations and leveraging the learned relational information can significantly boost the learning process of DRL agents. 
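(As an aside on how this gain is produced: the intrinsic reward of Eqs. 14-15 can be sketched as below. `model(state, action)` is assumed to return the predicted next states of all K categories; reducing the tensor difference of Eq. 14 to a scalar via an L2 norm is our illustrative choice, not a detail fixed by the paper.)

```python
import numpy as np

def effect(model, state, action, num_actions, i):
    """I_i(s, a): deviation of category i's predicted next state from its
    action-averaged baseline (Eq. 14), summarized here by an L2 norm."""
    pred = model(state, action)[i]
    baseline = np.mean([model(state, a)[i] for a in range(num_actions)], axis=0)
    return np.linalg.norm(pred - baseline)

def intrinsic_reward(model, state, action, num_actions, beta, delta):
    """r^G_t = sum_i beta_i * 1[I_i(s_t, a_t) > delta] (Eq. 15); beta is
    non-increasing along the causal ordering."""
    return sum(b for i, b in enumerate(beta)
               if effect(model, state, action, num_actions, i) > delta)
```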
The performance of the Relational DRL algorithm does not meet our expectation, possibly because the attention mechanism does not capture the correct relations between the entities in the environment.\nThe causal graph learned by CARE is shown in Figure 6 (left). Here E, L and A denote Ewe, Lamb and Agent, respectively. The edge from Ewe to Lamb represents that lambs are attracted by the ewe. The two edges from the Agent to Ewe and Lamb mean that the ewe and lambs are driven by the agent. Although the agent is also considered a node, it is not ranked in the causal ordering. The ACE values for getting this graph are listed in Table 1.\nWe also evaluate the effect $I_i(s, a)$ calculated by Equation 14, which is the cornerstone of the intrinsic reward. We first sampled a state from the historical trajectories, and then manually set the agent to every grid cell in the field. At each cell, we calculate the agent's effect on the ewe category using Equation 14. Finally, we visualize the calculated effect in Figure 5. The value at coordinate (x, y) in the heatmap corresponds to the effect of the agent on the ewe when it is at position (x, y) and takes the action drive. As shown in Figure 5, the calculated $I_{ewe}(s, drive)$ is high only when the agent is near the ewe. This is because the ewe will be driven to move towards the sheepfold only when the shepherd and the sheep are close enough. This result shows that the effect of the agent on the target category of entities is well modeled. Moreover, this result also shows that we can choose the hyperparameter $\delta$ easily, because there is a large gap between the calculated $I_i(s, a)$ values depending on whether or not the agent's behavior affects the i-th category of entities.\nTable 1: ACE values in the Shepherd game (row category acting on column category).\nACE   | ewe      | lamb     | agent\newe   | −        | 1.52e-03 | 1.03e-06\nlamb  | 5.14e-05 | −        | 1.06e-06\nagent | 3.65e-05 | 6.44e-06 | −" }, { "heading": "4.2 ANT", "text": "In the Ant game (Figure 3, right), the agent is expected to kill all ants in the field. There are two queen ants and four worker ants at the beginning. Queen ants move around the field. Worker ants first go to the nearest food and then bring it to a queen ant. The queen ant obtains some energy by eating the food. If the queen ant's energy exceeds a threshold, it will generate a new worker ant. Food continues to be produced at fixed positions. For this environment, there are K = 3 categories of entities. The agent has five actions: up, down, left and right, each of which makes the agent move one step in the corresponding direction, and attack, which kills an ant near the agent if one exists. The game ends when all ants are killed or the time span exceeds a fixed period. The agent receives a reward of +1 if it kills an ant, whether it is a worker ant or a queen ant. At the end of the episode, the agent receives a reward of $-(10 \times n + 100 \times m)$, where n and m denote the numbers of remaining worker ants and queen ants, respectively.\nAn optimal policy should prioritize the task of killing the queen ants. Otherwise, worker ants will be continuously produced by the queen ants and the number of ants grows very fast. We evaluate our algorithm in this game, comparing it to flat A2C and Relational DRL. The experimental results are given in Figure 4(b). In this game, we observe that our method learns a policy that kills the queen ants first and then the other ants. 
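(For concreteness, the edge-directing rule of Eq. 13 and the causal ordering used throughout these experiments can be sketched as follows; this is an illustration, not the authors' released code. With the Table 1 values it recovers the Ewe → Lamb edge.)

```python
import numpy as np

def build_causal_graph(ace):
    """ace[i, j] = ACE_{i->j}; H[i, j] = 1 iff ACE_{i->j} > ACE_{j->i} (Eq. 13)."""
    K = ace.shape[0]
    H = np.zeros((K, K), dtype=int)
    for i in range(K):
        for j in range(K):
            if i != j and ace[i, j] > ace[j, i]:
                H[i, j] = 1
    return H

def causal_ordering(H):
    """Topological sort of the DAG: higher-ranked nodes cannot be caused
    by lower-ranked ones."""
    K, order = H.shape[0], []
    indegree = H.sum(axis=0).tolist()
    frontier = [v for v in range(K) if indegree[v] == 0]
    while frontier:
        v = frontier.pop()
        order.append(v)
        for u in range(K):
            if H[v, u]:
                indegree[u] -= 1
                if indegree[u] == 0:
                    frontier.append(u)
    return order

# Shepherd example (0 = ewe, 1 = lamb), using the values from Table 1:
ace = np.array([[0.0, 1.52e-3], [5.14e-5, 0.0]])
H = build_causal_graph(ace)
print(H, causal_ordering(H))   # edge ewe -> lamb; ordering [ewe, lamb]
```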
However, flat A2C and Relational DRL both learn a policy that keeps the queen ants alive but stays at a certain position around the food to wait for and kill the worker ants coming for food.\nWe show the learned causal graph in Figure 6 (right). Q, W, F and A denote Queen, Worker, Food and Agent, respectively. The causal ordering is [Q, F, W] and the calculated ACE values can be found in Table 2.\nSince our algorithm has a graph learning procedure, we also record the running times of the algorithms in Table 3. Our algorithm takes a longer time than A2C for the same number of training steps. However, given the performance gain, we think the cost in time is reasonable: our algorithm learns a better policy, which is not obtainable by the other algorithms. The performance gap is especially obvious in the Ant game." }, { "heading": "5 RELATED WORK", "text": "Our framework learns a graph, and the graph entails the relationships between entities in the environment. Compared with model-based reinforcement learning algorithms (Sutton, 1991; Silver et al., 2008; Heess et al., 2015; Nagabandi et al., 2018), which usually learn environment dynamics and plan or learn on the learned dynamics to reduce the interaction with the real environment, our method focuses on learning the relations between entities rather than environment dynamics. The learned causal graph is used to order the categories of entities. Mahadevan & Maggioni (2007); Mabu et al. (2007); Metzen (2013); Shoeleh & Asadpour (2017); Sanchez-Gonzalez et al. (2018) also use graphs to learn a representation of the environment. However, these methods still focus on learning environment dynamics, and thus these problems are usually solved via model-based RL.\nThe learned graph can be viewed as a structural description of the environment. Applying structural knowledge of environments in RL has been studied in previous works. Wang et al. (2018) explicitly model the structure of an agent as a graph and use a GNN to approximate a policy. Vijay et al. (2019) build a knowledge graph as a prior for the agent, which illustrates different relations between entities in the environment. However, the graphs leveraged by these two works are priors provided by humans. Compared with these works, our algorithm supports automatic graph learning and requires no human prior knowledge. Ammanabrolu & Riedl (2019) proposed KG-DQN, which constructs a knowledge graph to represent the environment and uses Graph Neural Networks to extract features of the graph. This work nevertheless only applies to text-adventure games, because their knowledge graph can only be generated from natural language. Zambaldi et al. (2018) use multi-head attention to extract relations between entities. However, their method approaches the problem at the level of entities instead of categories. Notice that our model deploys an encoder-decoder structure for processing the input signals. A similar structure is used by Shi et al. (2020), known as a self-supervised interpretable network, for extracting task-relevant attention masks which serve as interpretable features for the agent's decisions. Agnew & Domingos (2020) use object-centric state representations and exploit the object interactions and dynamics to identify task-relevant object representations." }, { "heading": "6 CONCLUSIONS", "text": "In this paper, we propose a novel deep reinforcement learning algorithm, which first learns the environmental causal structure and then leverages the learned relational information to assist policy learning. 
Experimental results show that our algorithm has good performance, indicating that incorporating the environmental structure for reasoning is a promising research direction. Future work includes studying environments with dynamic graphs, and improving the training efficiency of the framework." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 EXPERIMENTAL SETTINGS", "text": "" }, { "heading": "State Representation", "text": "In both the Shepherd game and the Ant game, the whole game field is a grid world of size 18 × 18. The game state is represented by a tensor of size K × 18 × 18. Each channel represents the state of one category of entities." }, { "heading": "Environmental model", "text": "The encoder is composed of three convolutional layers. The first, second and third layers have 64, 128 and 256 3 × 3 filters with stride 2, respectively, each followed by a Batch Normalization layer. The output of the encoder is flattened to a vector and then passed through two fully connected layers, both with 512 hidden units. The action is encoded into a vector of size 32 by three FC layers with 100 hidden units and is then concatenated with the output of the encoder. Each decoder takes the output of the encoder as input and passes it through four fully connected layers, with 900, 300, 512 and 512 hidden units, respectively. Then the output of the FC layers is taken as the input of three deconvolutional layers. The first two layers have 128 and 64 3 × 3 filters with stride 2, respectively. The last layer has num_categories 4 × 4 filters, also with stride 2. Each deconvolutional layer is also followed by a Batch Normalization layer.\nWhen training the model, we collect a buffer with 5000 episodes. We use the Adam optimizer to train the model with a learning rate of 1e-4 and a batch size of 32." }, { "heading": "Parameter Setting of RL", "text": "CARE and flat A2C use the same network architecture. The actor and critic share the same first two convolutional layers and an FC layer. The two convolutional layers have 64 and 32 3 × 3 filters with stride 2, respectively. The FC layer has 128 hidden units. The critic and the actor both take the output of the shared part and pass it through two FC layers with 512 hidden units. Finally, they output the value and the action distribution. For the Relational DRL algorithm, we use an open-source implementation1. Other parameters are listed as follows:\n1https://github.com/mavischer/DRRL" } ]
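A condensed PyTorch sketch of the environment model specified in the appendix above (encoder CNN plus one decoder per category). Layer sizes are simplified relative to the appendix, and the linear decoders stand in for the deconvolutional stack; it is an illustration under those assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class EnvModel(nn.Module):
    def __init__(self, num_categories, num_actions, hidden=512, grid=18):
        super().__init__()
        self.grid = grid
        self.encoder = nn.Sequential(          # input: K x 18 x 18 state tensor
            nn.Conv2d(num_categories, 64, 3, stride=2), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2), nn.BatchNorm2d(128), nn.ReLU(),
            nn.Flatten(),
        )
        self.action_embed = nn.Linear(num_actions, 32)   # encode one-hot action
        self.fuse = nn.LazyLinear(hidden)                # fuse state and action codes
        self.decoders = nn.ModuleList(                   # one decoder per category
            [nn.Linear(hidden, grid * grid) for _ in range(num_categories)]
        )

    def forward(self, state, action_onehot):
        z = torch.cat([self.encoder(state), self.action_embed(action_onehot)], dim=1)
        z = torch.relu(self.fuse(z))
        preds = [dec(z).view(-1, 1, self.grid, self.grid) for dec in self.decoders]
        return torch.cat(preds, dim=1)         # predicted s_{t+1}, one channel each

def model_loss(pred, target):
    """Eq. 7 with squared error as the distance d."""
    return ((pred - target) ** 2).mean()
```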
2020
null
SP:d98564657b7cd55efa243520c5bd7ef8be405a26
[ "This work studies the adaptive proximal gradient descent method, and specifically studies the group sparsity. To encourage the group sparsity, a regularizer which is a combination of $\\ell_1$ norm, block $\\ell_1$ norm and $\\ell_2$ norm square is used. This paper gives the update rule of the proximal gradient with the specific regularizer. After proposing the update rule, the paper analyzes the convergence and regret guarantee of the algorithm." ]
We develop a novel framework that adds the regularizers to a family of adaptive optimizers in deep learning, such as MOMENTUM, ADAGRAD, ADAM, AMSGRAD and ADAHESSIAN, and thereby create a new class of optimizers, which are named GROUP MOMENTUM, GROUP ADAGRAD, GROUP ADAM, GROUP AMSGRAD and GROUP ADAHESSIAN, etc., accordingly. We establish theoretically proven convergence guarantees in the stochastic convex settings, based on primal-dual methods. We evaluate the regularization effect of our new optimizers on three large-scale real-world ad click datasets with state-of-the-art deep learning models. The experimental results reveal that, compared with the original optimizers with a post-processing procedure that uses the magnitude pruning method, the performance of the models can be significantly improved at the same sparsity level. Furthermore, in comparison to the cases without magnitude pruning, our methods can achieve extremely high sparsity with significantly better or highly competitive performance.
[]
[ { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Thore Graepel", "Joaquin Quiñonero Candela", "Thomas Borchert", "Ralf Herbrich" ], "title": "Web-scale bayesian click-through rate prediction for sponsored search advertising in microsoft’s bing search engine", "venue": "Proceedings of the 27th International Conference on Machine Learning", "year": 2010 }, { "authors": [ "Diederik P. Kingma", "Jimmy Lei Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Proceedings of the 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Nick Littlestone" ], "title": "From on-line to batch learning", "venue": "Proceedings of the Second Annual Workshop on Computational Learning Theory, COLT", "year": 1989 }, { "authors": [ "H. Brendan McMahan", "Matthew J. Streeter" ], "title": "Adaptive bound optimization for online convex optimization", "venue": "In COLT 2010 - The 23rd Conference on Learning Theory, Haifa, Israel, June", "year": 2010 }, { "authors": [ "H. Brendan McMahan", "Gary Holt", "D. Sculley", "Michael Young", "Dietmar Ebner", "Julian Grady", "Lan Nie", "Todd Phillips", "Eugene Davydov", "Daniel Golovin", "Sharat Chikkerur", "Dan Liu", "Martin Wattenberg", "Arnar Mar Hrafnkelsson", "Tom Boulos", "Jeremy Kubica" ], "title": "Ad click prediction: a view from the trenches", "venue": "In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2013 }, { "authors": [ "Yurii E. Nesterov" ], "title": "Smooth minimization of non-smooth functions", "venue": "Math. Program.,", "year": 2005 }, { "authors": [ "Yurii E. Nesterov" ], "title": "Primal-dual subgradient methods for convex problems", "venue": "Math. Program.,", "year": 2009 }, { "authors": [ "Xiuyan Ni", "Yang Yu", "Peng Wu", "Youlin Li", "Shaoliang Nie", "Qichao Que", "Chao Chen" ], "title": "Feature selection for facebook feed ranking system via a group-sparsity-regularized training algorithm", "venue": "In Proceedings of the 28th ACM International Conference on Information and Knowledge Management,", "year": 2019 }, { "authors": [ "Boris T. Polyak" ], "title": "Some methods of speeding up the convergence of iteration methods", "venue": "USSR Computational Mathematics and Mathematical Physics,", "year": 1964 }, { "authors": [ "Yanru Qu", "Han Cai", "Kan Ren", "Weinan Zhang", "Yong Yu", "Ying Wen", "Jun Wang" ], "title": "Product-based neural networks for user response prediction", "venue": "IEEE 16th International Conference on Data Mining,", "year": 2016 }, { "authors": [ "Sashank J. Reddi", "Satyen Kale", "Sanjiv Kumar" ], "title": "On the convergence of adam and beyond", "venue": "In Proceedings of the 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Herbert Robbins", "Sutton Monro" ], "title": "A stochastic approximation method", "venue": "The annals of mathematical statistics,", "year": 1951 }, { "authors": [ "R. Tyrrell Rockafellar" ], "title": "Convex Analysis. 
", "venue": "Princeton Landmarks in Mathematics and Physics, Princeton University Press,", "year": 1970 }, { "authors": [ "Simone Scardapane", "Danilo Comminiello", "Amir Hussain", "Aurelio Uncini" ], "title": "Group sparse regularization for deep neural networks", "venue": "Neurocomputing, 241:43–52,", "year": 2016 }, { "authors": [ "Ruoxi Wang", "Bin Fu", "Gang Fu", "Mingliang Wang" ], "title": "Deep & cross network for ad click predictions", "venue": "In Proceedings of the ADKDD'17,", "year": 2017 }, { "authors": [ "Lin Xiao" ], "title": "Dual averaging method for regularized stochastic learning and online optimization", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Haiqin Yang", "Zenglin Xu", "Irwin King", "Michael R. Lyu" ], "title": "Online learning for group lasso", "venue": "In Proceedings of the 27th International Conference on Machine Learning,", "year": 2010 }, { "authors": [ "Zhewei Yao", "Amir Gholami", "Sheng Shen", "Kurt Keutzer", "Michael W. Mahoney" ], "title": "ADAHESSIAN: an adaptive second order optimizer for machine learning", "venue": "CoRR, abs/2006.00719,", "year": 2020 }, { "authors": [ "Matthew D. Zeiler" ], "title": "Adadelta: An adaptive learning rate method", "venue": "CoRR, abs/1212.5701,", "year": 2012 } ]
[ { "heading": "1 INTRODUCTION", "text": "With the development of deep learning, deep neural network (DNN) models have been widely used in various machine learning scenarios such as search, recommendation and advertisement, and achieved significant improvements. In the last decades, different kinds of optimization methods based on the variations of stochastic gradient descent (SGD) have been invented for training DNN models. However, most optimizers cannot directly produce sparsity which has been proven effective and efficient for saving computational resource and improving model performance especially in the scenarios of very high-dimensional data. Meanwhile, the simple rounding approach is very unreliable due to the inherent low accuracy of these optimizers.\nIn this paper, we develop a new class of optimization methods, that adds the regularizers especially sparse group lasso to prevalent adaptive optimizers, and retains the characteristics of the respective optimizers. Compared with the original optimizers with the post-processing procedure which use the magnitude pruning method, the performance of the models can be significantly improved on the same sparsity level. Furthermore, in comparison to the cases without magnitude pruning, the new optimizers can achieve extremely high sparsity with significantly better or highly competitive performance. In this section, we describe the two types of optimization methods, and explain the motivation of our work." }, { "heading": "1.1 ADAPTIVE OPTIMIZATION METHODS", "text": "Due to the simplicity and effectiveness, adaptive optimization methods (Robbins & Monro, 1951; Polyak, 1964; Duchi et al., 2011; Zeiler, 2012; Kingma & Ba, 2015; Reddi et al., 2018; Yao et al., 2020) have become the de-facto algorithms used in deep learning. There are multiple variants, but they can be represented using the general update formula (Reddi et al., 2018):\nxt+1 = xt − αtmt/ √ Vt, (1)\nwhere αt is the step size, mt is the first moment term which is the weighted average of gradient gt and Vt is the so called second moment term that adjusts updated velocity of variable xt in each direction. Here, √ Vt := V 1/2 t , mt/ √ Vt := √ Vt −1 ·mt. By setting different mt, Vt and αt , we can derive different adaptive optimizers including MOMENTUM (Polyak, 1964), ADAGRAD (Duchi et al., 2011), ADAM (Kingma & Ba, 2015), AMSGRAD (Reddi et al., 2018) and ADAHESSIAN (Yao et al., 2020), etc. See Table 1." }, { "heading": "1.2 REGULARIZED OPTIMIZATION METHODS", "text": "Follow-the-regularized-leader (FTRL) (McMahan & Streeter, 2010; McMahan et al., 2013) has been widely used in click-through rates (CTR) prediction problems, which adds `1-regularization (lasso) to logistic regression and can effectively balance the performance of the model and the sparsity of features. The update formula (McMahan et al., 2013) is:\nxt+1 = arg min x\ng1:t · x+ 1\n2 t∑ s=1 σs‖x− xs‖22 + λ1‖x‖1, (2)\nwhere g1:t = ∑ t s=1gs, 1 2 ∑t s=1 σs‖x− xs‖22 is the strong convex term that stabilizes the algorithm and λ1‖x‖1 is the regularization term that produces sparsity. However, it doesn’t work well in DNN models since one input feature can correspond to multiple weights and lasso only can make single weight zero hence can’t effectively delete zeros features.\nTo solve above problem, Ni et al. (2019) adds the `21-regularization (group lasso) to FTRL, which is named G-FTRL. Yang et al. 
(2010) conduct research on a group lasso method for online learning that adds $\ell_{21}$-regularization to the Dual Averaging (DA) algorithm (Nesterov, 2009), which is named DA-GL. Even so, these two methods cannot be applied to other optimizers. Different scenarios call for different optimizers in deep learning. For example, MOMENTUM (Polyak, 1964) is typically used in computer vision; ADAM (Kingma & Ba, 2015) is used for training transformer models for natural language processing; and ADAGRAD (Duchi et al., 2011) is used for recommendation systems. If we want to produce sparsity of the model in some scenario, we have to change the optimizer, which probably influences the performance of the model." }, { "heading": "1.3 MOTIVATION", "text": "Eq. (1) can be rewritten into this form:\n$x_{t+1} = \arg\min_x \; m_t \cdot x + \frac{1}{2\alpha_t}\|(\sqrt{V_t})^{\frac{1}{2}}(x - x_t)\|_2^2$. (3)\nFurthermore, we can rewrite Eq. (3) into\n$x_{t+1} = \arg\min_x \; m_{1:t} \cdot x + \sum_{s=1}^{t} \frac{1}{2\alpha_s}\|Q_s^{\frac{1}{2}}(x - x_s)\|_2^2$, (4)\nwhere $m_{1:t} = \sum_{s=1}^{t} m_s$ and $\sum_{s=1}^{t} Q_s/\alpha_s = \sqrt{V_t}/\alpha_t$. It is easy to prove that Eq. (3) and Eq. (4) are equivalent using induction. The matrices $Q_s$ can be interpreted as generalized learning rates. To the best of our knowledge, $V_t$ in Eq. (1) is diagonal for all the adaptive optimization methods, for computational simplicity. Therefore, we consider $Q_s$ to be diagonal matrices throughout this paper.\nWe find that Eq. (4) is similar to Eq. (2) except for the regularization term. Therefore, we add the regularization term $\Psi(x)$ to Eq. (4), which is the sparse group lasso penalty, also including $\ell_2$-regularization that can diffuse the weights of neural networks. The concrete formula is:\n$\Psi_t(x) = \sum_{g=1}^{G}\big(\lambda_1 \|x^g\|_1 + \lambda_{21}\sqrt{d_{x^g}}\,\|A_t^{\frac{1}{2}} x^g\|_2\big) + \lambda_2 \|x\|_2^2$, (5)\nwhere $\lambda_1, \lambda_{21}, \lambda_2$ are the regularization parameters of $\ell_1$, $\ell_{21}$, $\ell_2$ respectively, G is the total number of groups of weights, $x^g$ are the weights of group g and $d_{x^g}$ is the size of group g. In DNN models, each group is defined as the set of outgoing weights from a unit, which can be an input feature, a hidden neuron, or a bias unit (see, e.g., Scardapane et al. (2016)). $A_t$ can be an arbitrary positive definite matrix satisfying $A_{t+1} \succeq A_t$, e.g., $A_t = I$. In Section 2.1, we let $A_t = \big(\sum_{s=1}^{t}\frac{Q_s^g}{2\alpha_s} + \lambda_2 I\big)$ just for solving the closed-form solution directly, where $Q_s^g$ is a diagonal matrix whose diagonal elements are the part of $Q_s$ corresponding to $x^g$. The ultimate update formula is:\n$x_{t+1} = \arg\min_x \; m_{1:t} \cdot x + \sum_{s=1}^{t}\frac{1}{2\alpha_s}\|Q_s^{\frac{1}{2}}(x - x_s)\|_2^2 + \Psi_t(x)$. (6)" }, { "heading": "1.4 OUTLINE OF CONTENTS", "text": "The rest of the paper is organized as follows. In Section 1.5, we introduce the necessary notations and technical background.\nIn Section 2, we present the closed-form solution of Eq. (6) and the algorithm of the general framework of adaptive optimization methods with sparse group lasso. We prove the algorithm is equivalent to adaptive optimization methods when the regularization terms vanish. In the end, we give two concrete examples of the algorithm.1\nIn Section 3, we derive the regret bounds of the method and the convergence rates.\nIn Section 4, we validate the performance of the new optimizers on public datasets.\nIn Section 5, we summarize the conclusion.\nAppendices A-B list the details of GROUP ADAM and GROUP ADAGRAD respectively. Appendices C-F contain the technical proofs of our main results, and Appendix G includes the details of the empirical results of Section 4.4." 
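Before moving to the notation, here is a minimal sketch of the penalty $\Psi_t(x)$ of Eq. (5) with $A_t = I$ (one of the admissible choices noted above), treating each embedding vector as a group; this is an illustration, not the authors' released code.

```python
import torch

def sparse_group_lasso(groups, lam1, lam21, lam2):
    """Psi(x) = sum_g (lam1*||x_g||_1 + lam21*sqrt(d_g)*||x_g||_2) + lam2*||x||_2^2."""
    penalty = 0.0
    for g in groups:                        # each g: 1-D tensor of group weights
        penalty = penalty + lam1 * g.abs().sum() \
                          + lam21 * (g.numel() ** 0.5) * g.norm(2)
    flat = torch.cat([g.flatten() for g in groups])
    return penalty + lam2 * (flat ** 2).sum()

# Example: rows of an embedding table as groups.
emb = torch.randn(5, 4)
print(sparse_group_lasso(list(emb), lam1=1e-4, lam21=1e-3, lam2=1e-5))
```

The $\ell_{21}$ term is what zeroes out whole groups (e.g., whole embedding vectors), while the $\ell_1$ term sparsifies individual weights and the $\ell_2$ term diffuses them.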
}, { "heading": "1.5 NOTATIONS AND TECHNICAL BACKGROUND", "text": "We use lowercase letters to denote scalars and vectors, and uppercase letters to denote matrices. We denote a sequence of vectors by subscripts, that is, x1, . . . , xt, and entries of each vector by an additional subscript, e.g., xt,i. We use the notation g1:t as a shorthand for ∑t s=1 gs. Similarly we\nwrite m1:t for a sum of the first moment mt, and f1:t to denote the function f1:t(x) = ∑t s=1 fs(x). Let Mt = [m1 · · ·mt] denote the matrix obtained by concatenating the vector sequence {mt}t≥1 andMt,i denote the i-th row of this matrix which amounts to the concatenation of the i-th component of each vector. The notation A 0 (resp. A 0) for a matrix A means that A is symmetric and positive semidefinite (resp. definite). Similarly, the notations A B and A B mean that A−B 0 andA−B 0 respectively, and both tacitly assume thatA andB are symmetric. Given A 0, we write A 12 for the square root of A, the unique X 0 such that XX = A (McMahan & Streeter (2010), Section 1.4).\nLet E be a finite-dimension real vector space, endowed with the Mahalanobis norm ‖ · ‖A which is denoted by ‖·‖A = √ 〈·, A·〉 as induced byA 0. Let E∗ be the vector space of all linear functions\non E . The dual space E∗ is endowed with the dual norm ‖ · ‖∗A = √ 〈·, A−1·〉.\nLet Q be a closed convex set in E . A continuous function h(x) is called strongly convex on Q with norm ‖ · ‖H ifQ ⊆ dom h and there exists a constant σ > 0 such that for all x, y ∈ Q and α ∈ [0, 1] we have\nh(αx+ (1− α)y) ≤ αh(x) + (1− α)h(y)− 1 2 σα(1− α)‖x− y‖2H .\n1To fulfill research interest of optimization methods, we will release the code in the future.\nThe constant σ is called the convexity parameter of h(x), or the modulus of strong convexity. We also denote by ‖ · ‖h = ‖ · ‖H . Further, if h is differential, we have\nh(y) ≥ h(x) + 〈∇h(x), y − x〉+ σ 2 ‖x− y‖2h.\nWe use online convex optimization as our analysis framework. On each round t = 1, . . . , T , a convex loss function ft : Q 7→ R is chosen, and we pick a point xt ∈ Q hence get loss ft(xt). Our goal is minimizing the regret which is defined as the quantity\nRT = T∑ t=1 ft(xt)−min x∈Q T∑ t=1 ft(x). (7)\nOnline convex optimization can be seen as a generalization of stochastic convex optimization. Any regret minimizing algorithm can be converted to a stochastic optimization algorithm with convergence rate O(RT /T ) using an online-to-batch conversion technique (Littlestone, 1989). In this paper, we assume Q ≡ E = Rn, hence we have E∗ = Rn. We write sTx or s · x for the standard inner product between s, x ∈ Rn. For the standard Euclidean norm, ‖x‖ = ‖x‖2 =√ 〈x, x〉 and ‖s‖∗ = ‖s‖2. We also use ‖x‖1 = ∑n i=1 |x(i)| and ‖x‖∞ = max i|x(i)| to denote `1-norm and `∞-norm respectively, where x(i) is the i-th element of x." }, { "heading": "2 ALGORITHM", "text": "" }, { "heading": "2.1 CLOSED-FORM SOLUTION", "text": "We will derive the closed-form solution of Eq. (6) with specific At and Algorithm 1 with slight modification in this section. We have the following theorem.\nTheorem 1. Given At = ( ∑t s=1 Qgs 2αs\n+ λ2I) of Eq. (5), zt = zt−1 + mt − Qtαt xt at each iteration t = 1, . . . , T and z0 = 0, the optimal solution of Eq. 
(6) is updated accordingly as follows:\n$x_{t+1} = \big(\sum_{s=1}^{t}\frac{Q_s}{\alpha_s} + 2\lambda_2 I\big)^{-1}\max\big(1 - \frac{\sqrt{d_{x^g_t}}\,\lambda_{21}}{\|\tilde{s}_t\|_2},\, 0\big)\, s_t$ (8)\nwhere the i-th element of $s_t$ is defined as\n$s_{t,i} = \begin{cases} 0 & \text{if } |z_{t,i}| \le \lambda_1, \\ \mathrm{sign}(z_{t,i})\lambda_1 - z_{t,i} & \text{otherwise,} \end{cases}$ (9)\n$\tilde{s}_t$ is defined as\n$\tilde{s}_t = \big(\sum_{s=1}^{t}\frac{Q_s}{2\alpha_s} + \lambda_2 I\big)^{-1} s_t$ (10)\nand $\sum_{s=1}^{t}\frac{Q_s}{\alpha_s}$ is a diagonal and positive definite matrix.\nThe proof of Theorem 1 is given in Appendix C. We slightly modify (8) by letting $\tilde{s}_t = s_t$. Our purpose is to let every entry of the group have the same $\ell_{21}$-regularization effect. Hence, we get Algorithm 1. Furthermore, we have the following theorem, which shows the relationship between Algorithm 1 and adaptive optimization methods. The proof is given in Appendix D. Theorem 2. If the regularization terms of Algorithm 1 vanish, Algorithm 1 is equivalent to Eq. (1)." }, { "heading": "2.2 CONCRETE EXAMPLES", "text": "Using Algorithm 1, we can easily derive the new optimizers based on ADAM (Kingma & Ba, 2015) and ADAGRAD (Duchi et al., 2011), which we call GROUP ADAM and GROUP ADAGRAD respectively.\nGROUP ADAM\nThe details of the algorithm are given in Appendix A. From Theorem 2, we know that when $\lambda_1, \lambda_2, \lambda_{21}$ are all zeros, Algorithm 2 is equivalent to ADAM (Kingma & Ba, 2015).\nAlgorithm 1 Generic framework of adaptive optimization methods with sparse group lasso\n1: Input: parameters $\lambda_1, \lambda_{21}, \lambda_2$, $x_1 \in \mathbb{R}^n$, step sizes $\{\alpha_t > 0\}_{t=1}^{T}$, sequence of functions $\{\phi_t, \psi_t\}_{t=1}^{T}$; initialize $z_0 = 0$, $V_0 = 0$, $\alpha_0 = 0$\n2: for t = 1 to T do\n3:   $g_t = \nabla f_t(x_t)$\n4:   $m_t = \phi_t(g_1, \ldots, g_t)$ and $V_t = \psi_t(g_1, \ldots, g_t)$\n5:   $\frac{Q_t}{\alpha_t} = \frac{\sqrt{V_t}}{\alpha_t} - \frac{\sqrt{V_{t-1}}}{\alpha_{t-1}}$\n6:   $z_t \leftarrow z_{t-1} + m_t - \frac{Q_t}{\alpha_t} x_t$\n7:   for $i \in \{1, \ldots, n\}$ do\n8:     $s_{t,i} = 0$ if $|z_{t,i}| \le \lambda_1$; $s_{t,i} = \mathrm{sign}(z_{t,i})\lambda_1 - z_{t,i}$ otherwise\n9:   end for\n10:  $x_{t+1} = \big(\frac{\sqrt{V_t}}{\alpha_t} + 2\lambda_2 I\big)^{-1}\max\big(1 - \frac{\sqrt{d_{x^g_t}}\,\lambda_{21}}{\|s_t\|_2},\, 0\big)\, s_t$\n11: end for\nGROUP ADAGRAD\nThe details of the algorithm are given in Appendix B. Similarly, from Theorem 2, when $\lambda_1, \lambda_2, \lambda_{21}$ are all zeros, Algorithm 3 is equivalent to ADAGRAD (Duchi et al., 2011). Furthermore, we can find that when $\lambda_{21} = 0$, Algorithm 3 is equivalent to FTRL (McMahan et al., 2013). Therefore, GROUP ADAGRAD can also be called GROUP FTRL, following the research of Ni et al. (2019).\nSimilarly, GROUP MOMENTUM, GROUP AMSGRAD, GROUP ADAHESSIAN, etc., can be derived from MOMENTUM (Polyak, 1964), AMSGRAD (Reddi et al., 2018), ADAHESSIAN (Yao et al., 2020), etc., with the same framework, and we will not list the details." }, { "heading": "3 CONVERGENCE AND REGRET ANALYSIS", "text": "Using the framework developed in Nesterov (2009); Xiao (2010); Duchi et al. (2011), we have the following theorem providing the bound of the regret. Theorem 3. Let the sequence $\{x_t\}$ be defined by the update (6) and\n$x_1 = \arg\min_{x \in Q} \frac{1}{2}\|x - c\|_2^2$, (11)\nwhere c is an arbitrary constant vector. Suppose $f_t(x)$ is convex for any $t \ge 1$ and there exists an optimal solution $x^*$ of $\sum_{t=1}^{T} f_t(x)$, i.e., $x^* = \arg\min_{x \in Q}\sum_{t=1}^{T} f_t(x)$, which satisfies the condition\n$\langle m_{t-1}, x_t - x^*\rangle \ge 0, \quad t \in [T]$, (12)\nwhere $m_t$ is the weighted average of the gradients $g_t = \nabla f_t(x_t)$ and $[T] = \{1, \ldots, T\}$ for simplicity. Without loss of generality, we assume\n$m_t = \gamma m_{t-1} + g_t$, (13)\nwhere $\gamma < 1$ and $m_0 = 0$. Then\n$R_T \le \Psi_T(x^*) + \sum_{t=1}^{T}\frac{1}{2\alpha_t}\|Q_t^{\frac{1}{2}}(x^* - x_t)\|_2^2 + \frac{1}{2}\sum_{t=1}^{T}\|m_t\|_{h_{t-1}^*}^2$, (14)\nwhere $\|\cdot\|_{h_t^*}$ is the dual norm of $\|\cdot\|_{h_t}$; $h_t$ is 1-strongly convex with respect to $\|\cdot\|_{\sqrt{V_t}/\alpha_t}$ for $t \in [T]$ and $h_0$ is 1-strongly convex with respect to $\|\cdot\|_2$.\nThe proof of Theorem 3 is given in Appendix E. 
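A NumPy sketch of one iteration of Algorithm 1 for a single group $x^g$ (lines 6-10), under the assumption that `sqrt_v_over_alpha` holds the diagonal of $\sqrt{V_t}/\alpha_t$ restricted to that group; it is an illustration of the update, not the authors' implementation.

```python
import numpy as np

def group_update(z, m, x, q_over_alpha, sqrt_v_over_alpha, lam1, lam21, lam2):
    """One Algorithm 1 step for one group; all array arguments have shape (d,)."""
    z = z + m - q_over_alpha * x                       # line 6: dual accumulation
    s = np.where(np.abs(z) <= lam1, 0.0,               # lines 7-9: soft threshold
                 np.sign(z) * lam1 - z)
    norm_s = np.linalg.norm(s)
    shrink = max(1.0 - np.sqrt(s.size) * lam21 / norm_s, 0.0) if norm_s > 0 else 0.0
    x_new = shrink * s / (sqrt_v_over_alpha + 2.0 * lam2)  # line 10, diagonal inverse
    return z, x_new
```

When $\lambda_{21}$ is large relative to $\|s_t\|_2$, the shrink factor reaches 0 and the entire group is zeroed out, which is how the framework removes whole embedding vectors rather than single weights.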
Since in most of adaptive optimizers, Vt is the weighted average of diag(g2t ), without loss of generality, we assume αt = α and\nVt = ηVt−1 + diag(g2t ), t ≥ 1, (15) where V0 = 0 and η ≤ 1. Hence, we have the following lemma whose proof is given in Appendix F.1.\nLemma 1. Suppose Vt is the weighted average of the square of the gradient which is defined by (15), αt = α, mt is defined by (13) and Vt satisfies the following arbitrary conditions:\n1. η = 1,\n2. η < 1, η ≥ γ and κVt Vt−1 for all t ≥ 1 where κ < 1.\nThen we have T∑ t=1 ‖mt‖2 ( √ Vt αt )−1 < 2α 1− ν d∑ i=1 ‖MT,i‖2, (16)\nwhere ν = max(γ, κ) and d is the dimension of xt.\nWe can always add δ2I to Vt at each step to ensure Vt 0. Therefore, ht(x) is 1-strongly convex with respect to ‖ · ‖√δ2I+Vt/αt . Let δ ≥ maxt∈[T ] ‖gt‖∞, for t > 1, we have\n‖mt‖2h∗t−1 = 〈 mt, αt(δ 2I + Vt−1)− 1 2mt 〉 ≤ 〈 mt, αt ( diag(g2t ) + ηVt−1 )− 12 mt〉 = 〈 mt, αtV − 12 t mt 〉 = ‖mt‖2\n( √ Vt αt )−1 .\n(17)\nFor t = 1, we have ‖m1‖2h∗0 = 〈 m1, α1(δ 2I + I)− 1 2m1 〉 ≤ 〈 m1, α1 ( diag− 1 2 (g21) ) m1 〉 = 〈 m1, α1V − 12 1 m1 〉 = ‖m1‖2\n( √ V1 α1 )−1 .\n(18)\nFrom (17), (18) and Lemma 1, we have\nLemma 2. Suppose Vt, mt, αt, ν, d are defined the same as Lemma 1, maxt∈[T ] ‖gt‖∞ ≤ δ, ‖ · ‖2h∗t = 〈 ·, αt(δ2I + Vt)− 1 2 · 〉 for t ≥ 1 and ‖ · ‖2h∗0 = 〈 ·, α1 ( (δ2 + 1)I )− 12 ·〉. Then T∑ t=1 ‖mt‖2h∗t−1 < 2α 1− ν d∑ i=1 ‖MT,i‖2. (19)\nTherefore, from Theorem 3 and Lemma 2, we have\nCorollary 1. Suppose Vt, mt, αt, h∗t , ν, d are defined the same as Lemma 2, there exist constants G, D1, D2 such that maxt∈[T ] ‖gt‖∞ ≤ G ≤ δ, ‖x∗‖∞ ≤ D1 and maxt∈[T ] ‖xt − x∗‖∞ ≤ D2. Then\nRT < dD1 ( λ1 + λ21( √ TG\n2α + λ2)\n1 2 + λ2D1 ) + dG ( D22 2α + α (1− ν)2 )√ T . (20)\nThe proof of Corollary 1 is given in F.2. Furthermore, from Corollary 1, we have\nCorollary 2. Suppose mt is defined as (13), αt = α and satisfies the condition (19). There exist constantsG,D1,D2 such that tG2I Vt, maxt∈[T ] ‖gt‖∞ ≤ G, ‖x∗‖∞ ≤ D1 and maxt∈[T ] ‖xt− x∗‖∞ ≤ D2. Then\nRT < dD1 ( λ1 + λ21( √ TG\n2α + λ2)\n1 2 + λ2D1 ) + dG ( D22 2α + α (1− ν)2 )√ T . (21)\nTherefore, we know that the regret of the update (6) is O( √ T ) and can achieve the optimal convergence rate O(1/ √ T ) under the conditions of Corollary 1 or Corollary 2." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 EXPERIMENT SETUP", "text": "We test the algorithms on three different large-scale real-world datasets with different neural network structures. These datasets are various display ads logs for the purpose of predicting ads CTR. The details are as follows.\na) The Avazu CTR dataset (Avazu, 2015) contains approximately 40M samples and 22 categorical features over 10 days. In order to handle categorical data, we use the one-hot-encoding based embedding technique (see, e.g., Wang et al. (2017), Section 2.1 or Naumov et al. (2019), Section 2.1.1) and get 9.4M features in total. For this dataset, the samples from the first 9 days (containing 8.7M one-hot features) are used for training, while the rest is for testing. Our DNN model follows the basic structure of most deep CTR models. Specifically, the model comprises one embedding layer, which maps each one-hot feature into 16-dimensional embeddings, and four fully connected layers (with output dimension of 64, 32, 16 and 1, respectively) in sequence.\nb) The iPinYou dataset2 (iPinYou, 2013) is another real-world dataset for ad click logs over 21 days. The dataset contains 16 categorical features3. 
After one-hot encoding, we get a dataset containing 19.5M instances with 1033.1K input dimensions. We keep the original train/test splitting scheme, where the training set contains 15.4M samples with 937.7K one-hot features. We use the Outer Product-based Neural Network (OPNN) (Qu et al., 2016) and follow the standard settings of Qu et al. (2016), i.e., one embedding layer with an embedding dimension of 10, one product layer, and three hidden layers of size 512, 256 and 128, respectively, with the dropout rate set to 0.5.

c) The third dataset is the Criteo Display Ads dataset (Criteo, 2014), which contains approximately 46M samples over 7 days. There are 13 integer features and 26 categorical features. After one-hot encoding of the categorical features, we obtain 33.8M features in total. We split the dataset into 7 partitions in chronological order and select the earliest 6 parts for training, which contain 29.6M features, and use the rest for testing, even though the dataset itself carries no timestamps. We use the Deep & Cross Network (DCN) (Wang et al., 2017) and choose the following settings4: one embedding layer with embedding dimension 8, two deep layers of size 64 each, and two cross layers.

For the convenience of discussion, we use MLP, OPNN and DCN to denote the aforementioned three datasets coupled with their corresponding models. Since the embedding layer holds most of the parameters of these networks when the feature dimension is very high, we add the regularization terms to the embedding layer only. Furthermore, each embedding vector is treated as a group; a visual comparison of the ℓ1, ℓ21 and mixed regularization effects is given in Fig. 2 of Scardapane et al. (2016).

We treat the training set as streaming data, hence we train for 1 epoch with a batch size of 512 and then validate. The experiments are conducted with 4-9 workers and 2-3 parameter servers, depending on the size of the dataset. We use the area under the receiver operating characteristic curve (AUC) as the evaluation criterion, since it is widely used in evaluating classification problems; moreover, prior work validates AUC as a good measurement in CTR estimation (Graepel et al., 2010). We explore 5 learning rates from 1e-5 to 1e-1 in increments of 10x and choose the one with the best AUC for each new optimizer in the case of no regularization terms (which is equivalent to the original optimizer by Theorem 2). All experiments are run 5 times, and statistical significance is assessed with a t-test. Without loss of generality, we choose two of the new optimizers to validate the performance: GROUP ADAM and GROUP ADAGRAD.

2We only use the data from season 2 and 3 because of the same data schema. 3See https://github.com/Atomu2014/Ads-RecSys-Datasets/ for details. 4Limited by the training resources available, we don't use the optimal hyperparameter settings of Wang et al. (2017)." }, { "heading": "4.2 ADAM VS. GROUP ADAM", "text": "First, we compare the performance of the two optimizers at the same sparsity level. We fix λ1 and λ2 to zero, choose different values of λ21 in Algorithm 2 (GROUP ADAM), and bring ADAM to the same sparsity via magnitude pruning, i.e., we sort the embedding vectors by norm from largest to smallest and keep the top-N embedding vectors, where N is determined by the target sparsity, once training finishes. Table 2 reports the average results of the two optimizers on the three datasets.
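For reference, the magnitude-pruning baseline just described admits a one-function sketch (illustrative only; `emb` is the trained embedding matrix and `keep_ratio` the fraction of rows to retain):

    import numpy as np

    def magnitude_prune(emb, keep_ratio):
        # Sort embedding rows by L2 norm, keep the top-N, zero out the rest.
        norms = np.linalg.norm(emb, axis=1)
        n_keep = int(round(keep_ratio * emb.shape[0]))
        keep = np.argsort(norms)[::-1][:n_keep]   # indices of largest norms
        pruned = np.zeros_like(emb)
        pruned[keep] = emb[keep]
        return pruned

The key difference is that this pruning is applied once after training, whereas the group-lasso optimizers drive whole embedding rows to zero during training.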
Note that GROUP ADAM significantly outperforms ADAM on the AUC metric at the same sparsity level for most experiments. Furthermore, as shown in Figure 1, the same ℓ21-regularization strength λ21 has different effects on sparsity and accuracy across datasets. The best choice of λ21 depends on the dataset as well as the application (for example, if the memory of the serving resource is limited, sparsity might be relatively more important). One can trade off accuracy for more sparsity by increasing the value of λ21.

Next, we compare the performance of ADAM without any post-processing procedure, i.e., no magnitude pruning, and GROUP ADAM with the appropriate regularization terms chosen in Table 3, on the AUC metric. In general, a good default setting for λ2 is 1e-5. The results are shown in Table 4. Note that compared with ADAM, GROUP ADAM with appropriate regularization terms can achieve significantly better or highly competitive performance while producing extremely high sparsity." }, { "heading": "4.3 ADAGRAD VS. GROUP ADAGRAD", "text": "We compare the performance of ADAGRAD without magnitude pruning and GROUP ADAGRAD with the appropriate regularization terms chosen in Table 5, on the AUC metric. The results are shown in Table 6. Again note that in comparison to ADAGRAD, GROUP ADAGRAD not only achieves significantly better or highly competitive AUC, but also effectively and efficiently reduces the dimensionality of the features.

Table 4: AUC for three datasets and sparsity (feature rate) in parentheses. The best value for each dataset is bolded. The p-value of the t-test is also listed.

Dataset ADAM GROUP ADAM P-Value

MLP 0.7458 (1.000) 0.7486 (0.018) 1.10e-3 (2.69e-11)

OPNN 0.7588 (0.827) 0.7617 (0.130) 0.289 (6.20e-11)

DCN 0.8021 (1.000) 0.8019 (0.030) 0.422 (1.44e-11)

Table 6: AUC for three datasets and sparsity (feature rate) in parentheses. The best value for each dataset is bolded. The p-value of the t-test is also listed.

Dataset ADAGRAD GROUP ADAGRAD P-Value

MLP 0.7453 (1.000) 0.7469 (0.063) 0.106 (1.51e-9)

OPNN 0.7556 (0.827) 0.7595 (0.016) 0.026 (< 2.2e-16)

DCN 0.7975 (1.000) 0.7978 (0.040) 0.198 (3.94e-11)" }, { "heading": "4.4 DISCUSSION", "text": "In this section we discuss the hyperparameters of the embedding dimension, the ℓ1-regularization and the ℓ21-regularization, to show how they affect the regularization effects.

Embedding Dimension Table 7 of Appendix G reports the average results for different embedding dimensions on MLP, where the optimizer is GROUP ADAM and the regularization terms are the same as those for MLP in Table 5. Note that the sparsity increases with the growth of the embedding dimension. The reason is that the square root of the embedding dimension is the multiplier of the ℓ21-regularization.

ℓ1 vs. ℓ21 From lines 8 and 10 of Algorithm 1, we know that if zt has identical elements, the ℓ1 and ℓ21 strengths, i.e., λ1 and λ21, have the same regularization effect. However, this situation almost never happens in practice. Without loss of generality, we set the optimizer, λ2 and embedding dimension to GROUP ADAM, 1e-5 and 16, respectively, and choose different values of λ1 and λ21. The results on MLP are shown in Table 8 of Appendix G. It is obvious that ℓ21-regularization is much more effective than ℓ1-regularization in producing sparsity. For example, when λ1 = 0 and λ21 = 5e-3, the feature sparsity is 0.136, while for λ1 = 5e-3 and λ21 = 0, the feature sparsity is 0.470.
Therefore, if just want to produce sparsity, we can only tune λ21 and use default settings for λ2 and λ1, i.e., λ2 = 1e-5 and λ1 = 0." }, { "heading": "5 CONCLUSION", "text": "In this paper, we propose a novel framework that adds the regularization terms to a family of adaptive optimizers for producing sparsity of DNN models. We apply this framework to create a new class of optimizers. We provide closed-form solutions and algorithms with slight modification. We built the relation between new and original optimizers, i.e., our new optimizers become equivalent with the corresponding original ones, once the regularization terms vanish. We theoretically prove the convergence rate of the regret and also conduct empirical evaluation on the proposed optimizers in comparison to the original optimizers with and without magnitude pruning. The results clearly demonstrate the advantages of our proposed optimizers in both getting significantly better performance and producing sparsity. Finally, it would be interesting in the future to investigate the convergence in non-convex settings and evaluate our optimizers on more applications from fields such as compute vision, natural language processing and etc." }, { "heading": "A GROUP ADAM", "text": "Algorithm 2 Group Adam 1: Input: parameters λ1, λ21, λ2, β1, β2, x1 ∈ Rn, step size α, initialize z0 = 0, m̂0 = 0, V̂0 = 0, V0 = 0\n2: for t = 1 to T do 3: gt = ∇ft(xt) 4: m̂t ← β1m̂t−1 + (1− β1)gt 5: mt = m̂t/(1− βt1) 6: V̂t ← β2V̂t−1 + (1− β2)diag(g2t ) 7: Vt = V̂t/(1− βt2)\n8: Qt = { √ Vt − √ Vt−1 + I t = 1√\nVt − √ Vt−1 t > 1\n9: zt ← zt−1 +mt − 1αQtxt 10: for i ∈ {1, . . . , n} do\n11: st,i = {\n0 if |zt,i| ≤ λ1 sign(zt,i)λ1 − zt,i otherwise.\n12: end for 13: xt+1 = ( √ Vt+ I α + 2λ2I)−1 max(1− √ d x g t λ21 ‖st‖2 , 0)st 14: end for" }, { "heading": "B GROUP ADAGRAD", "text": "Algorithm 3 Group Adagrad 1: Input: parameters λ1, λ21, λ2, x1 ∈ Rn, step size α, initialize z0 = 0, V0 = 0\n2: for t = 1 to T do 3: gt = ∇ft(xt) 4: mt = gt\n5: Vt = { Vt−1 + diag(g2t ) + I t = 1 Vt−1 + diag(g2t ) t > 1 6: Qt = √ Vt − √ Vt−1 7: zt ← zt−1 +mt − 1αQtxt 8: for i ∈ {1, . . . , n} do\n9: st,i = {\n0 if |zt,i| ≤ λ1 sign(zt,i)λ1 − zt,i otherwise.\n10: end for 11: xt+1 = ( √ Vt α + 2λ2I)−1 max(1− √ d x g t λ21 ‖st‖2 , 0)st 12: end for" }, { "heading": "C PROOF OF THEOREM 1", "text": "Proof.\nxt+1 = arg min x\nm1:t · x+ t∑\ns=1\n1\n2αs (x− xs)TQs(x− xs) + Ψt(x)\n= arg min x\nm1:t · x+ t∑\ns=1\n1\n2αs (‖Q\n1 2 s x‖22 − 2x · (Qsxs) + ‖Q 1 2 s xs‖22) + Ψt(x)\n= arg min x\n( m1:t −\nt∑ s=1 Qs αs xs\n) · x+\nt∑ s=1 1 2αs ‖Q 1 2 s x‖22 + Ψt(x).\n(22)\nDefine zt−1 = m1:t−1 − ∑t−1 s=1 Qs αs xs (t ≥ 2) and we can calculate zt as\nzt = zt−1 +mt − Qt αt xt, t ≥ 1. (23)\nBy substituting (23), (22) is simplified to be\nxt+1 = arg min x\nzt · x+ t∑\ns=1\nQs 2αs ‖x‖22 + Ψt(x). (24)\nBy substituting Ψt(x) (Eq. (5)) into (24), we get\nxt+1 = arg min x zt · x+ G∑ g=1 ( λ1‖xg‖1 + λ21 √ dxg‖( t∑ s=1 Qgs 2αs + λ2I) 1 2xg‖2 ) +\n‖( t∑\ns=1\nQs 2αs + λ2I) 1 2x‖22.\n(25)\nSince the objective of (25) is component-wise and element-wise, we can focus on the solution in one group, say g, and one entry, say i, in the g-th group. Let ∑t s=1 Qgs 2αs = diag(σgt ) where σ g t = (σgt,1, . . . , σ g t,dxg ). The objective of (25) on xgt+1,i is\nΩ(xgt+1,i) = z g t,ix g t+1,i + λ1|x g t+1,i|+ Φ(x g t+1,i), (26)\nwhere Φ(xgt+1,i) = λ21 √ dxg‖(σgt,i + λ2) 1 2xgt+1,i‖2 + ‖(σ g t,i + λ2) 1 2xgt+1,i‖22 is a non-negative function and Φ(xgt+1,i) = 0 iff x g t+1,i = 0 for all i ∈ {1, . . . 
, dxg}.\nWe discuss the optimal solution of (26) in three cases:\na) If zgt,i = 0, then x g t+1,i = 0.\nb) If zgt,i > 0, then x g t+1,i ≤ 0. Otherwise, if x g t+1,i > 0, we have Ω(−x g t+1,i) < Ω(x g t+1,i), which\ncontradicts the minimization value of Ω(x) on xgt+1,i.\nNext, if zgt,i ≤ λ1, then x g t+1,i = 0. Otherwise, if x g t+1,i < 0, we have Ω(x g t+1,i) = (z g t,i − λ1)x g t+1,i + Φ(x g,i t+1) > Ω(0), which also contradicts the minimization value of Ω(x) on x g t+1,i.\nThird, zgt,i > λ1 (∀ i = 1, . . . , dxg ). The objective of (26) for the g-th group, Ω(x g t+1), becomes\n(zgt − λ11dxg ) · x g t+1 + Φ(x g t+1).\nc) If zgt,i < 0, the analysis is similar to b). We have x g t+1,i ≥ 0. When −z g t,i ≤ λ1, x g t+1,i = 0.\nWhen −zgt,i > λ1 (∀ i = 1, . . . , dxg ), we have\nΩ(xgt+1) = (z g t + λ11dxg ) · x g t+1 + Φ(x g t+1).\nFrom a), b), c) above, we have\nxgt+1 = arg min x −sgt · x+ Φ(x), (27)\nwhere the i-th element of sgt is defined same as (9).\nDefine y = (diag(σgt ) + λ2I) 1 2x. (28)\nBy substituting (28) into (27), we get\nygt+1 = arg min y\n−s̃gt · y + λ21 √ dxg‖y‖2 + ‖y‖22, (29)\nwhere s̃gt = (diag(σ g t )+λ2I)−1s g t which is defined same as (10). This is unconstrained non-smooth optimization problem. Its optimality condition (see Rockafellar (1970), Section 27) states that ygt+1 is an optimal solution if and only if there exists ξ ∈ ∂‖ygt+1‖2 such that\n−s̃gt + λ21 √ dxgξ + 2y g t+1 = 0. (30)\nThe subdifferential of ‖y‖2 is ∂‖y‖2 = { {ζ ∈ Rdxg | − 1 ≤ ζ(i) ≤ 1, i = 1, . . . , dxg} if y = 0, y ‖y‖2 if y 6= 0.\nSimilarly to the analysis of `1-regularization, we discuss the solution of (30) in two different cases:\na) If ‖s̃gt ‖2 ≤ λ21 √ dxg , then y g t+1 = 0 and ξ = s̃gt λ21 √ dxg ∈ ∂‖0‖2 satisfy (30). We also show that\nthere is no solution other than ygt+1 = 0. Without loss of generality, we assume y g t+1,i 6= 0 for all i ∈ {1, . . . , dxg}, then ξ = ygt+1 ‖ygt+1‖2 , and\n−s̃gt + λ21 √ dxg\n‖ygt+1‖2 ygt+1 + 2y g t+1 = 0. (31)\nFrom (31), we can derive\n( λ21 √ dxg\n‖ygt+1‖2 + 2)‖ygt+1‖2 = ‖s̃ g t ‖2.\nFurthermore, we have\n‖ygt+1‖2 = 1\n2 (‖s̃gt ‖2 − λ21\n√ dxg ), (32)\nwhere ‖ygt+1‖2 > 0 and ‖s̃ g t ‖2 − λ21 √ dxg ≤ 0 contradict each other.\nb) If ‖s̃gt ‖2 > λ21 √ dxg , then from (31) and (32), we get\nygt+1 = 1 2 (1− λ21\n√ dxg\n‖s̃gt ‖2 )s̃gt . (33)\nWe replace ygt+1 of (33) by x g t+1 using (28), then we have\nxgt+1 = (diag(σ g t ) + λ2I)− 1 2 ygt+1\n= (2diag(σgt ) + 2λ2I)−1(1− λ21 √ dxg\n‖s̃gt ‖2 )sgt\n= ( t∑ s=1 Qs αs + 2λ2I)−1(1− λ21 √ dxg ‖s̃gt ‖2 )sgt .\n(34)\nCombine a) and b) above, we finish the proof." }, { "heading": "D PROOF OF THEOREM 2", "text": "Proof. We use the method of induction.\na) When t = 1, then Algorithm 1 becomes\nQ1 = α1( √ V1 α1 − √ V0 α0\n) = √ V1,\nz1 = z0 +m1 − Q1 α1 x1 = m1 − √ V1 α1 x1, s1 = −z1 = √ V1 α1 x1 −m1,\nx2 = ( √ V1 α1 )−1s1 = x1 − α1 m1√ V1 ,\nwhich equals to Eq. (1).\nb) Assume t = T , Eq. (35) are true.\nzT = mT − √ VT αT xT , xT+1 = xT − αT mT√ VT . (35)\nFor t = T + 1, we have\nzT+1 = zT +mT+1 − QT+1 αT+1 xT+1\n= mT − √ VT αT xT +mT+1 − QT+1 αT+1 xT+1 = mT − √ VT αT (xT+1 + αT mT√ VT ) +mT+1 − QT+1 αT+1 xT+1 = mT+1 − ( √ VT αT + QT+1 αT+1 )xT+1 = mT+1 − √ VT+1 αT+1 xT+1,\nxT+2 = (\n√ VT+1\nαT+1 )−1sT+1 = −(\n√ VT+1\nαT+1 )−1zT+1 = xT+1 − αT mT+1√ VT+1 .\nHence, we complete the proof." }, { "heading": "E PROOF OF THEOREM 3", "text": "Proof. 
Let\nht(x) =\n{ ∑t s=1 1 2αs ‖Q 1 2 s (x− xs)‖22 ∀ t ∈ [T ],\n1 2‖x− c‖ 2 2 t = 0.\nIt is easy to verify that for all t ∈ [T ], ht(x) is 1-strongly convex with respect to ‖ · ‖√Vt/αt which√ Vt αt = ∑t s=1 Qs αs , and h0(x) is 1-strongly convex with respect to ‖ · ‖2.\nFrom (7), we have\nRT = T∑ t=1 (ft(xt)− ft(x∗)) ≤ T∑ t=1 〈gt, xt − x∗〉\n= T∑ t=1 〈mt − γmt−1, xt − x∗〉 ≤ T∑ t=1 〈mt, xt − x∗〉\n= T∑ t=1 〈mt, xt〉+ ΨT (x∗) + hT (x∗) + ( T∑ t=1 〈−mt, x∗〉 −ΨT (x∗)− hT (x∗))\n≤ T∑ t=1 〈mt, xt〉+ ΨT (x∗) + hT (x∗) + sup x∈Q {〈−m1:T , x〉 −ΨT (x)− hT (x)} ,\n(36)\nwhere in the first and second inequality above, we use the convexity of ft(x) and the condition (12) respectively.\nWe define h∗t (u) to be the conjugate dual of Ψt(x) + ht(x):\nh∗t (u) = sup x∈Q {〈u, x〉 −Ψt(x)− ht(x)} , t ≥ 0,\nwhere Ψ0(x) = 0. Since ht(x) is 1-strongly convex with respect to the norm ‖ · ‖ht , the function h∗t has 1-Lipschitz continuous gradients with respect to ‖ · ‖h∗t (see, Nesterov (2005), Theorem 1): ‖∇h∗t (u1)−∇h∗t (u2)‖ht ≤ ‖u1 − u2‖h∗t , (37) and\n∇h∗t (u) = arg min x∈Q {− 〈u, x〉+ Ψt(x) + ht(x)} . (38)\nAs a trivial corollary of (37), we have the following inequality:\nh∗t (u+ δ) ≤ h∗t (u) + 〈∇h∗t (u), δ〉+ 1\n2 ‖δ‖2h∗t . (39)\nSince ht+1(x) ≥ ht(x) and Ψt+1(x) ≥ Ψt(x), from (38), (39), (6), we have h∗T (−m1:T ) ≤ h∗T−1(−m1:T )\n≤ h∗T−1(−m1:T−1)− 〈 ∇h∗T−1(−m1:T−1),mT 〉 + 1\n2 ‖mT ‖2h∗T−1\n≤ h∗T−2(−m1:T−1)− 〈xT ,mT 〉+ 1\n2 ‖mT ‖2h∗T−1\n≤ h∗0(0)− 〈∇h∗0(0),m1〉 − T∑ t=2 〈xt,mt〉+ 1 2 T∑ t=2 ‖mt‖2h∗t−1\n= − T∑ t=1 〈xt,mt〉+ 1 2 T∑ t=1 ‖mt‖2h∗t−1 .\n(40)\nwhere the last equality above follows from h∗0(0) = 0 and (11) which deduces x1 = ∇h∗0(0). By substituting (40), (36) becomes\nRT ≤ T∑ t=1 〈mt, xt〉+ ΨT (x∗) + hT (x∗) + h∗T (−m1:T )\n≤ ΨT (x∗) + hT (x∗) + 1\n2 T∑ t=1 ‖mt‖2h∗t−1 .\n(41)" }, { "heading": "F ADDITIONAL PROOFS", "text": "F.1 PROOF OF LEMMA 1\nProof. Let Vt = diag(σt) where σt is the vector of the diagonal elements of Vt. For i-th entry of σt, by substituting (13) into (15), we have\nσt,i = g 2 t,i + ησt−1,i = (mt,i − γmt−1,i)2 + ηg2t−1,i + η2σt−2,i\n= t∑ s=1 ηt−s(ms,i − γms−1,i)2 ≥ t∑ s=1 ηt−s(1− γ)(m2s,i − γm2s−1,i)\n= (1− γ) ( m2t,i + (η − γ) t−1∑ s=1 ηt−s−1m2s,i ) .\n(42)\nNext, we will discuss the value of η in two cases.\na) η = 1. From (42), we have\nσt,i ≥ (1− γ) ( m2t,i + (1− γ) t−1∑ s=1 m2s,i ) > (1− γ)2 t∑ s=1 m2s,i ≥ (1− ν)2 t∑ s=1 m2s,i. (43)\nRecalling the definition of Mt,i in Section 1.5, from (43), we have T∑ t=1 m2t,i√ σt,i < 1 1− ν T∑ t=1 m2t,i ‖Mt,i‖2 ≤ 2 1− ν ‖MT,i‖2,\nwhere the last inequality above follows from Appendix C of Duchi et al. (2011). Therefore, we get\nT∑ t=1 ‖mt‖2 ( √ Vt αt )−1 = α T∑ t=1 d∑ i=1 m2t,i√ σt,i < 2α 1− ν d∑ i=1 ‖MT,i‖2. (44)\nb) η < 1. We assume η ≥ γ and κVt Vt−1 where κ < 1, then we have t∑\ns=1\nκt−sσt,i ≥ t∑\ns=1\nσs,i ≥ (1− γ) t∑\ns=1\nm2s,i.\nHence, we get\nσt,i ≥ 1− κ 1− κt\n(1− γ) t∑\ns=1\nm2s,i > (1− κ)(1− γ) t∑\ns=1\nm2s,i ≥ (1− ν)2 t∑\ns=1\nm2s,i, (45)\nwhich deduces the same conclusion (44) of a).\nCombine a) and b), we complete the proof.\nF.2 PROOF OF COROLLARY 1\nProof. From the definition of mt (13), Vt (15), we have\n|mt,i| = | t∑\ns=1\nγt−sgs,i| ≤ 1− γt\n1− γ G <\nG 1− γ ≤ G 1− ν ,\n|σt,i| = | t∑\ns=1\nηt−sg2s,i| ≤ tG2.\nHence, we have\nΨT (x ∗) ≤ λ1dD1 + λ21dD1(\n√ TG\n2α + λ2)\n1 2 + λ2dD 2 1, (46)\nhT (x ∗) ≤ dD\n2 2G\n2α\n√ T , (47)\n1\n2 T∑ t=1 ‖mt‖2h∗t−1 < α 1− ν d∑ i=1 √ TG 1− ν = dαG (1− ν)2 √ T . (48)\nCombining (46), (47), (48), we complete the proof." }, { "heading": "G ADDITIONAL EMPIRICAL RESULTS", "text": "" } ]
2020
null
SP:eceef2daaa86f86534b3b33ca96c19f0b52e20b7
[ "In one-shot differentiable NAS, a supergraph is usually trained (via bilevel optimization as in DARTS, or other approximations to bilevel such as gumbel softmax, etc). After supergraph training, a final architecture is obtained by taking the operator at each edge which has the highest architecture weight magnitude. This step is usually termed as the 'finalization' step. (In DARTS the finalization step actually orders incoming edges by the max of the architecture weight magnitudes at each edge and selects the top two edges and the corresponding maximum architecture weight in them as the final operators.). This paper examines this quite ad hoc step very closely. It finds that the magnitude of architecture weights (alphas commonly in this niche literature) are misleading. It shows by careful ablation experiments that alpha magnitudes are very much not useful in selecting good operators. " ]
Differentiable Neural Architecture Search is one of the most popular Neural Architecture Search (NAS) methods for its search efficiency and simplicity, accomplished by jointly optimizing the model weight and architecture parameters in a weight-sharing supernet via gradient-based algorithms. At the end of the search phase, the operations with the largest architecture parameters will be selected to form the final architecture, with the implicit assumption that the values of architecture parameters reflect the operation strength. While much has been discussed about the supernet’s optimization, the architecture selection process has received little attention. We provide empirical and theoretical analysis to show that the magnitude of architecture parameters does not necessarily indicate how much the operation contributes to the supernet’s performance. We propose an alternative perturbation-based architecture selection that directly measures each operation’s influence on the supernet. We re-evaluate several differentiable NAS methods with the proposed architecture selection and find that it is able to extract significantly improved architectures from the underlying supernets consistently. Furthermore, we find that several failure modes of DARTS can be greatly alleviated with the proposed selection method, indicating that much of the poor generalization observed in DARTS can be attributed to the failure of magnitude-based architecture selection rather than entirely the optimization of its supernet.
[ { "affiliations": [], "name": "Ruochen Wang" }, { "affiliations": [], "name": "Minhao Cheng" }, { "affiliations": [], "name": "Xiangning Chen" }, { "affiliations": [], "name": "Xiaocheng Tang" }, { "affiliations": [], "name": "Cho-Jui Hsieh" } ]
[ { "authors": [ "Gabriel Bender", "Pieter-Jan Kindermans", "Barret Zoph", "Vijay Vasudevan", "Quoc Le" ], "title": "Understanding and simplifying one-shot architecture search", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Kaifeng Bi", "Changping Hu", "Lingxi Xie", "Xin Chen", "Longhui Wei", "Qi Tian" ], "title": "Stabilizing darts with amended gradient estimation on architectural parameters, 2019", "venue": null, "year": 2019 }, { "authors": [ "Andrew Brock", "Theo Lim", "J.M. Ritchie", "Nick Weston" ], "title": "SMASH: One-shot model architecture search through hypernetworks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Han Cai", "Ligeng Zhu", "Song Han" ], "title": "ProxylessNAS: Direct neural architecture search on target task and hardware", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Xiangning Chen", "Cho-Jui Hsieh" ], "title": "Stabilizing differentiable architecture search via perturbationbased regularization", "venue": "Proceedings of the 37th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Xiangning Chen", "Ruochen Wang", "Minhao Cheng", "Xiaocheng Tang", "Cho-Jui Hsieh" ], "title": "DrNAS: Dirichlet neural architecture search", "venue": "In International Conference on Learning Representations,", "year": 2021 }, { "authors": [ "Xin Chen", "Lingxi Xie", "Jun Wu", "Qi Tian" ], "title": "Progressive differentiable architecture search: Bridging the depth gap between search and evaluation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "Searching for a robust neural architecture in four gpu hours", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "Nas-bench-201: Extending the scope of reproducible neural architecture search", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Klaus Greff", "Rupesh K. Srivastava", "Jürgen Schmidhuber" ], "title": "Highway and residual networks learn unrolled iterative estimation", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Zichao Guo", "Xiangyu Zhang", "Haoyuan Mu", "Wen Heng", "Zechun Liu", "Yichen Wei", "Jian Sun" ], "title": "Single path one-shot neural architecture search with uniform sampling, 2019", "venue": null, "year": 2019 }, { "authors": [ "Dongyoon Han", "Jiwhan Kim", "Junmo Kim" ], "title": "Deep pyramidal residual networks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778,", "year": 2016 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens van der Maaten", "Kilian Q. Weinberger" ], "title": "Densely connected convolutional networks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul 2017. doi: 10.1109/cvpr.2017.243. 
URL http://dx.doi.org/10.1109/ CVPR.2017.243", "year": 2017 }, { "authors": [ "Guilin Li", "Xing Zhang", "Zitong Wang", "Zhenguo Li", "Tong Zhang" ], "title": "Stacnas: Towards stable and consistent differentiable neural architecture", "venue": null, "year": 2019 }, { "authors": [ "Guohao Li", "Guocheng Qian", "Itzel C. Delgadillo", "Matthias Muller", "Ali Thabet", "Bernard Ghanem" ], "title": "Sgas: Sequential greedy architecture search", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Liam Li", "Ameet Talwalkar" ], "title": "Random search and reproducibility for neural architecture search, 2019", "venue": null, "year": 2019 }, { "authors": [ "Hanwen Liang", "Shifeng Zhang", "Jiacheng Sun", "Xingqiu He", "Weiran Huang", "Kechen Zhuang", "Zhenguo Li" ], "title": "Darts+: Improved differentiable architecture search with early", "venue": null, "year": 2019 }, { "authors": [ "Chenxi Liu", "Barret Zoph", "Maxim Neumann", "Jonathon Shlens", "Wei Hua", "Li-Jia Li", "Li Fei-Fei", "Alan Yuille", "Jonathan Huang", "Kevin Murphy" ], "title": "Progressive neural architecture search", "venue": "Lecture Notes in Computer Science,", "year": 2018 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Oriol Vinyals", "Chrisantha Fernando", "Koray Kavukcuoglu" ], "title": "Hierarchical representations for efficient architecture", "venue": null, "year": 2017 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "DARTS: Differentiable architecture search", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Renqian Luo", "Fei Tian", "Tao Qin", "Enhong Chen", "Tie-Yan Liu" ], "title": "Neural architecture optimization", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Hieu Pham", "Melody Guan", "Barret Zoph", "Quoc Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameters sharing", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Esteban Real", "Sherry Moore", "Andrew Selle", "Saurabh Saxena", "Yutaka Leon Suematsu", "Jie Tan", "Quoc V. Le", "Alexey Kurakin" ], "title": "Large-scale evolution of image classifiers", "venue": "In Proceedings of the 34th International Conference on Machine Learning - Volume 70,", "year": 2017 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V. Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Kenneth O. 
Stanley", "Risto Miikkulainen" ], "title": "Evolving neural networks through augmenting topologies", "venue": "Evolutionary Computation,", "year": 2002 }, { "authors": [ "Andreas Veit", "Michael Wilber", "Serge Belongie" ], "title": "Residual networks behave like ensembles of relatively shallow networks", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Sirui Xie", "Hehui Zheng", "Chunxiao Liu", "Liang Lin" ], "title": "SNAS: stochastic neural architecture search", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yuhui Xu", "Lingxi Xie", "Xiaopeng Zhang", "Xin Chen", "Guo-Jun Qi", "Qi Tian", "Hongkai Xiong" ], "title": "PCDARTS: Partial channel connections for memory-efficient architecture search", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Quanming Yao", "Ju Xu", "Wei-Wei Tu", "Zhanxing Zhu" ], "title": "Efficient neural architecture search via proximal iterations", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Arber Zela", "Thomas Elsken", "Tonmoy Saikia", "Yassine Marrakchi", "Thomas Brox", "Frank Hutter" ], "title": "Understanding and robustifying differentiable architecture search", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Zhao Zhong", "Junjie Yan", "Wei Wu", "Jing Shao", "Cheng-Lin Liu" ], "title": "Practical block-wise neural network architecture generation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Hongpeng Zhou", "Minghao Yang", "Jun Wang", "Wei Pan" ], "title": "Bayesnas: A bayesian approach for neural architecture search", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Barret Zoph", "Quoc V. Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V. Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural Architecture Search (NAS) has been drawing increasing attention in both academia and industry for its potential to automatize the process of discovering high-performance architectures, which have long been handcrafted. Early works on NAS deploy Evolutionary Algorithm (Stanley & Miikkulainen, 2002; Real et al., 2017; Liu et al., 2017) and Reinforcement Learning (Zoph & Le, 2017; Pham et al., 2018; Zhong et al., 2018) to guide the architecture discovery process. Recently, several one-shot methods have been proposed that significantly improve the search efficiency (Brock et al., 2018; Guo et al., 2019; Bender et al., 2018).\nAs a particularly popular instance of one-shot methods, DARTS (Liu et al., 2019) enables the search process to be performed with a gradient-based optimizer in an end-to-end manner. It applies continuous relaxation that transforms the categorical choice of architectures into continuous architecture parameters α. The resulting supernet can be optimized via gradient-based methods, and the operations associated with the largest architecture parameters are selected to form the final architecture. Despite its simplicity, several works cast doubt on the effectiveness of DARTS. For example, a simple randomized search (Li & Talwalkar, 2019) outperforms the original DARTS; Zela et al. (2020) observes that DARTS degenerates to networks filled with parametric-free operations such as the skip connection or even random noise, leading to the poor performance of the selected architecture.\nWhile the majority of previous research attributes the failure of DARTS to its supernet optimization (Zela et al., 2020; Chen & Hsieh, 2020; Chen et al., 2021), little has been discussed about the validity of another important assumption: the value of α reflects the strength of the underlying operations. In this paper, we conduct an in-depth analysis of this problem. Surprisingly, we find that in many cases, α does not really indicate the operation importance in a supernet. Firstly, the operation associated\nwith larger α does not necessarily result in higher validation accuracy after discretization. Secondly, as an important example, we show mathematically that the domination of skip connection observed in DARTS (i.e. αskip becomes larger than other operations.) is in fact a reasonable outcome of the supernet’s optimization but becomes problematic when we rely on α to select the best operation.\nIf α is not a good indicator of operation strength, how should we select the final architecture from a pretrained supernet? Our analysis indicates that the strength of each operation should be evaluated based on its contribution to the supernet performance instead. To this end, we propose an alternative perturbation-based architecture selection method. Given a pretrained supernet, the best operation on an edge is selected and discretized based on how much it perturbs the supernet accuracy; The final architecture is derived edge by edge, with fine-tuning in between so that the supernet remains converged for every operation decision. We re-evaluate several differentiable NAS methods (DARTS (Liu et al., 2019), SDARTS (Chen & Hsieh, 2020), SGAS (Li et al., 2020)) and show that the proposed selection method is able to consistently extract significantly improved architectures from the supernets than magnitude-based counterparts. 
Furthermore, we find that the robustness issues of DARTS can be greatly alleviated by replacing the magnitude-based selection with the proposed perturbation-based selection method." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "Preliminaries of Differentiable Architecture Search (DARTS) We start by reviewing the formulation of DARTS. DARTS' search space consists of repetitions of cell-based microstructures. Every cell can be viewed as a DAG with N nodes and E edges, where each node represents a latent feature map xi, and each edge is associated with an operation o (e.g. skip connect, sep conv 3x3) from the search space O. Continuous relaxation is then applied to this search space. Concretely, every operation on an edge is activated during the search phase, with their outputs mixed by the architecture parameter α to form the final mixed output of that edge:

$\bar{m}(x_i) = \sum_{o \in \mathcal{O}} \frac{\exp(\alpha_o)}{\sum_{o'} \exp(\alpha_{o'})} \, o(x_i).$

This particular formulation allows the architecture search to be performed in a differentiable manner: DARTS jointly optimizes α and the model weights w with the following bilevel objective via alternating gradient updates:

$\min_{\alpha} \mathcal{L}_{val}(w^{*}, \alpha) \quad \text{s.t.} \quad w^{*} = \arg\min_{w} \mathcal{L}_{train}(w, \alpha).$ (1)

We refer to the continuously relaxed network used in the search phase as the supernet of DARTS. At the end of the search phase, the operation associated with the largest αo on each edge will be selected from the supernet to form the final architecture.

Failure mode analysis of DARTS Several works cast doubt on the robustness of DARTS. Zela et al. (2020) tests DARTS on four different search spaces and observes significantly degenerated performance. They empirically find that the selected architectures perform poorly when DARTS' supernet falls into high-curvature areas of the validation loss (captured by large dominant eigenvalues of the Hessian $\nabla^{2}_{\alpha,\alpha} \mathcal{L}_{val}(w, \alpha)$). While Zela et al. (2020) relates this problem to the failure of supernet training in DARTS, we examine it from the architecture selection aspect of DARTS, and show that much of DARTS' robustness issue can be alleviated by a better architecture selection method.

Progressive search space shrinking There is a line of research on NAS that focuses on reducing the search cost and aligning the model sizes of the search and evaluation phases via progressive search space shrinking (Liu et al., 2018; Li et al., 2019; Chen et al., 2021; Li et al., 2020). The general scheme of these methods is to prune out weak operations and edges sequentially during the search phase, based on the magnitude of α following DARTS. Our method is orthogonal to them in this respect, since we select operations based on how much each operation contributes to the supernet's performance rather than on the α value. Although we also discretize edges greedily and fine-tune the network in between, the purpose is to let the supernet recover from the loss of accuracy after discretization so as to accurately evaluate operation strength on the next edge, rather than to reduce the search cost." }, { "heading": "3 THE PITFALL OF MAGNITUDE-BASED ARCHITECTURE SELECTION IN DARTS", "text": "In this section, we put forward the opinion that the architecture parameter α does not necessarily represent the strength of the underlying operation in general, backed by both empirical and theoretical evidence.
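(For concreteness throughout this section, the mixed edge output m̄ defined in Section 2 can be sketched in a few lines of PyTorch-style Python; the module below is an illustration of the relaxation, not DARTS' released implementation.)

    import torch
    import torch.nn as nn

    class MixedOp(nn.Module):
        # One relaxed edge: a softmax(alpha)-weighted sum of all candidates.
        def __init__(self, ops):
            super().__init__()
            self.ops = nn.ModuleList(ops)                     # candidate operations
            self.alpha = nn.Parameter(torch.zeros(len(ops)))  # architecture parameters

        def forward(self, x):
            w = torch.softmax(self.alpha, dim=0)
            return sum(w_o * op(x) for w_o, op in zip(w, self.ops))

Magnitude-based selection keeps the operation with the largest entry of alpha on each edge; the analysis below examines whether that choice is justified.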
As an important example, we mathematically justify that the skip connection domination phenomenon observed in DARTS is reasonable by itself, and becomes problematic when combined with the magnitude-based architecture selection.\n3.1 α MAY NOT REPRESENT THE OPERATION STRENGTH\nFollowing DARTS, existing differentiable NAS methods use the value of architecture parameters α to select the final architecture from the supernet, with the implicit assumption that α represents the strength of the underlying operations. In this section, we study the validity of this assumption in detail.\nConsider one edge on a pretrained supernet; the strength of an operation on the edge can be naturally defined as the supernet accuracy after we discretize to this operation and fine-tune the remaining network until it converges again; we refer to this as ”discretization accuracy at convergence” for short. The operation that achieves the best discretization accuracy at convergence can be considered as the best operation for the given edge. Figure 1 shows the comparison of α (blue) and operation strength (orange) of randomly select edges on DARTS supernet. As we can see, the magnitude of α for each operation does not necessarily agree with their relative strength measured by discretization accuracy at convergence. Moreover, operations assigned with small αs are sometimes strong ones that lead to high discretization accuracy at convergence. To further verify the mismatch, we investigate the operation strength on search space S2, where DARTS fails dramatically due to excessive skip connections (Zela et al., 2020). S2 is a variant of DARTS search space that only contains two operations per edge (skip connect, sep conv 3x3). Figure 2 shows the selected operations based on α (left) and operation strength (right) on all edges on S2. From Figure 2a, we can see that αskip connect > αsep conv 3x3 on 12 of 14 edges. Consequently, the derived child architecture will lack representation ability and perform poorly due to too many skip connections. However, as shown\nin Figure 2b, the supernet benefits more from discretizing to sep conv 3x3 than skip connect on half of the edges." }, { "heading": "3.2 A CASE STUDY: SKIP CONNECTION", "text": "Several works point out that DARTS tends to assign large α to skip connections, resulting in shallow architectures with poor generability (Zela et al., 2020; Liang et al., 2019; Bi et al., 2019). This ”skip connection domination” issue is generally attributed to the failure of DARTS’ supernet optimization. In contrast, we draw inspiration from research on ResNet (He et al., 2016) and show that this phenomenon by itself is a reasonable outcome while DARTS refines its estimation of the optimal feature map, rendering αskip ineffective in the architecture selection.\nIn vanilla networks (e.g., VGG), each layer computes a new level of feature map from the output feature map of the predecessor layer; thus, reordering layers at test time would dramatically hurt the performance (Veit et al., 2016). Unlike vanilla networks, Greff et al. (2017) and Veit et al. (2016) discover that successive layers in ResNet with compatible channel sizes are in fact estimating the same optimal feature map so that the outputs of these layers\nstay relatively close to each other at convergence; As a result, ResNet’s test accuracy remains robust under layer reordering. Greff et al. 
(2017) refers to this unique way of feature map estimation in ResNet as the "unrolled estimation."

DARTS' supernet resembles ResNet, rather than vanilla networks like VGG, in both appearance and behavior. Appearance-wise, within a cell of DARTS' supernet, edges with a skip connection are in direct correspondence with the successive residual layers in ResNet. Behavior-wise, DARTS' supernet also exhibits a high degree of robustness under edge shuffling. As shown in Table 1, randomly reordering edges of a pretrained DARTS supernet at test time also has little effect on its performance. This evidence indicates that DARTS performs unrolled estimation like ResNet as well, i.e., edges within a cell share the same optimal feature map that they try to estimate. In the following proposition, we apply this finding and provide the optimal solution of α in the sense of minimizing the variance of the feature map estimation.

Proposition 1.1 Without loss of generality, consider one cell from a simplified search space that consists of two operations: (skip, conv). Let m∗ denote the optimal feature map, which is shared across all edges according to the unrolled estimation view (Greff et al., 2017). Let oe(xe) be the output of the convolution operation, and let xe be the skip connection (i.e., the input feature map of edge e). Assume m∗, oe(xe) and xe are normalized to the same scale. The current estimation of m∗ can then be written as:

$m_e(x_e) = \frac{\exp(\alpha_{conv})}{\exp(\alpha_{conv}) + \exp(\alpha_{skip})} \, o_e(x_e) + \frac{\exp(\alpha_{skip})}{\exp(\alpha_{conv}) + \exp(\alpha_{skip})} \, x_e,$ (2)

where αconv and αskip are the architecture parameters defined in DARTS. The optimal α∗conv and α∗skip minimizing var(me(xe) − m∗), the variance of the difference between the optimal feature map m∗ and its current estimation me(xe), are given by:

$\alpha^{*}_{conv} \propto \mathrm{var}(x_e - m^{*})$ (3)
$\alpha^{*}_{skip} \propto \mathrm{var}(o_e(x_e) - m^{*}).$ (4)

We refer the reader to Appendix A.4 for a detailed proof. From eq. (3) and eq. (4), we can see that the relative magnitudes of αskip and αconv come down to which one of xe or oe(xe) is closer to m∗ in variance:

• xe (the input of edge e) comes from the mixed output of the previous edge. Since the goal of every edge is to estimate m∗ (unrolled estimation), xe is also directly estimating m∗.

• oe(xe) is the output of a single convolution operation instead of the complete mixed output of edge e, so it will deviate from m∗ even at convergence.

Therefore, in a well-optimized supernet, xe will naturally be closer to m∗ than oe(xe), causing αskip to be greater than αconv.

1Proposition 1 unfolds the optimal α in principle and does not constrain the particular optimization method (i.e., bilevel, single-level, or blockwise update) used to achieve it. Moreover, this proposition can be readily extended to various other search spaces, since we can group all non-skip operations into a single oe(·).

Our analysis above indicates that the better the supernet, the larger the (softmaxed) gap (αskip − αconv) will become, since xe gets closer and closer to m∗ as the supernet is optimized. This result is evidenced in Figure 3, where mean(αskip − αconv) continues to grow as the supernet gets better. In this case, although αskip > αconv is reasonable by itself, it becomes an inductive bias to NAS if we were to select the final architecture based on α." }, { "heading": "4 PERTURBATION-BASED ARCHITECTURE SELECTION", "text": "Instead of relying on the α value to select the best operation, we propose to directly evaluate operation strength in terms of its contribution to the supernet's performance.
The operation selection criterion is laid out in section 4.1. In section 4.2, we describe the entire architecture selection process." }, { "heading": "4.1 EVALUATING THE STRENGTH OF EACH OPERATION", "text": "In section 3.1, we define the strength of each operation on a given edge as how much it contributes to the performance of the supernet, measured by discretization accuracy. To avoid inaccurate evaluation due to large disturbance of the supernet during discretization, we fine-tune the remaining supernet until it converges again, and then compute its validation accuracy (discretization accuracy at convergence). The fine-tuning process needs to be carried out for evaluating each operation on an edge, leading to substantial computation costs.\nTo alleviate the computational overhead, we consider a more practical measure of operation strength: for each operation on a given edge, we mask it out while keeping all other operations, and re-evaluate the supernet. The one that results in the largest drop in the supernet’s validation accuracy will be considered as the most important operation on that edge. This alternative criterion incurs much less perturbation to the supernet than discretization since it only deletes one operation from the supernet at a time. As a result, the supernet’s validation accuracy after deletion stays close to the unmodified supernet, and thus it alleviates the requirement of tuning the remaining supernet to convergence. Therefore, we implement this measurement for the operation selection in this work.\nAlgorithm 1: Perturbation-based Architecture Selection Input: A pretrained supernet S, Set of edges E from S, Set of nodes N from S Result: Set of selected operations {o∗e}e∈E while |E| > 0 do\nrandomly select an edge e ∈ E (and remove it from E); forall operation o on edge e do\nevaluate the validation accuracy of S when o is removed (ACC\\o); end select the best operation for e: o∗e ← arg minoACC\\o; discretize edge e to o∗e and tune the remaining supernet for a few epochs;\nend" }, { "heading": "4.2 THE COMPLETE ARCHITECTURE SELECTION PROCESS", "text": "Our method operates directly on top of DARTS’ pretrained supernet. Given a supernet, we randomly iterate over all of its edges. We evaluate each operation on an edge, and select the best one to be discretized based on the measurement described in section 4.1. After that, we tune the supernet for\na few epochs to recover the accuracy lost during discretization. The above steps are repeated until all edges are decided. Algorithm 1 summarizes the operation selection process. The cell topology is decided in a similar fashion. We refer the reader to Appendix A.3 for the full algorithm, including deciding the cell topology. This simple method is termed ”perturbation-based architecture selection (PT)” in the following sections." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "In this section, we demonstrate that the perturbation-based architecture selection method is able to consistently find better architectures than those selected based on the values of α. The evaluation is based on the search space of DARTS and NAS-Bench-201 (Dong & Yang, 2020), and we show that the perturbation-based architecture selection method can be applied to several variants of DARTS." }, { "heading": "5.1 RESULTS ON DARTS’ CNN SEARCH SPACE", "text": "We keep all the search and retrain settings identical to DARTS since our method only modifies the architecture selection part. 
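That selection part admits a compact implementation; the following Python sketch mirrors Algorithm 1, where the supernet interface (ops_on, mask, unmask, discretize) and the helpers val_acc and finetune are assumed placeholders rather than existing APIs:

    import random

    def select_operations(supernet, edges, val_acc, finetune, epochs=5):
        chosen = {}
        order = list(edges)
        random.shuffle(order)              # edges are visited in random order
        for e in order:
            drop = {}
            for o in supernet.ops_on(e):   # candidate operations on edge e
                supernet.mask(e, o)        # temporarily remove o only
                drop[o] = val_acc(supernet)
                supernet.unmask(e, o)
            # The operation whose removal hurts accuracy most is the best one.
            chosen[e] = min(drop, key=drop.get)
            supernet.discretize(e, chosen[e])
            finetune(supernet, epochs)     # recover before the next decision
        return chosen

Note that only one operation is masked at a time, so the supernet stays close to convergence between evaluations; topology selection proceeds analogously (Appendix A.3).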
After the search phase, we perform perturbation-based architecture selection following Algorithm 1 on the pretrained supernet. We tune the supernet for 5 epochs between two selections as it is enough for the supernet to recover from the drop of accuracy after discretization. We run the search and architecture selection phase with four random seeds and report both the best and average test errors of the obtained architectures.\nAs shown in Table 2, the proposed method (DARTS+PT) improves DARTS’ test error from 3.00% to 2.61%, with manageable search cost (0.8 GPU days). Note that by only changing the architecture selection method, DARTS performs significantly better than many other differentiable NAS methods that enjoy carefully designed optimization process of the supernet, such as GDAS (Dong & Yang, 2019) and SNAS (Xie et al., 2019). This empirical result suggests that architecture selection is crucial to DARTS: with the proper selection algorithm, DARTS remains a very competitive method.\nOur method is also able to improve the performance of other variants of DARTS. To show this, we evaluate our method on SDARTS(rs) and SGAS (Chen & Hsieh, 2020; Li et al., 2020). SDARTS(rs) is a variant of DARTS that regularizes the search phase by applying Gaussian perturbation to α. Unlike DARTS and SDARTS, SGAS performs progressive search space shrinking. Concretely, SGAS progressively discretizes its edges with the order from most to least important, based on a novel edge importance score. For a fair comparison, we keep its unique search space shrinking process unmodified and only replace its magnitude-based operation selection with ours. As we can see from Table 2, our method consistently achieves better average test errors than its magnitudebased counterpart. Concretely, the proposed method improves SDARTS’ test error from 2.67% to 2.54% and SGAS’ test error from 2.66% to 2.56%. Moreover, the best architecture discovered in our experiments achieves a test error of 2.44%, ranked top among other NAS methods." }, { "heading": "5.2 PERFORMANCE ON NAS-BENCH-201 SEARCH SPACE", "text": "To further verify the effectiveness of the proposed perturbation-based architecture selection, we conduct experiments on NAS-Bench-201. NAS-Bench-201 provides a unified cell-based search space similar to DARTS. Every architecture in the search space is trained under the same protocol on three\ndatasets (cifar10, cifar100, and imagenet16-120), and their performance can be obtained by querying the database. As in section 5.1, we take the pretrained supernet from DARTS and apply our method on top of it. All other settings are kept unmodified. Figure 4 shows the performance trajectory of DARTS+PT compared with DARTS. While the architectures found by magnitude-based selection degenerates over time, the perturbation-based method is able to extract better architectures from the same underlying supernets stably. The result implies that the DARTS’ degenerated performance comes from the failure of magnitude based architecture selection." }, { "heading": "6 ANALYSIS", "text": "" }, { "heading": "6.1 ISSUE WITH THE ROBUSTNESS OF DARTS", "text": "Zela et al. (2020) observes that DARTS tends to yield degenerate architectures with abysmal performance. 
We conjecture that this robustness issue of DARTS can be explained by the failure of magnitude-based architecture selection. To show this, we test DARTS' performance with the proposed perturbation-based selection on the four search spaces S1-S4 of Zela et al. (2020) (described in Appendix A.2). Notably, DARTS+PT is able to find meaningful architectures on S2 (skip connect, sep conv 3x3) and S4 (noise, sep conv 3x3), where DARTS failed dramatically. As shown in Figure 5, on S2, while magnitude-based selection degenerates to architectures filled with skip connections, DARTS+PT is able to find architectures with 4 convolutions; on S4, DARTS+PT consistently favors sep conv 3x3 on edges where α selects noise.

6.2 PROGRESSIVE TUNING

In addition to operation selection, we also tune the supernet after an edge is discretized so that the supernet can regain the lost accuracy. To measure the effectiveness of our operation selection criterion alone, we conduct an ablation study on the progressive tuning part. Concretely, we test a baseline that combines progressive tuning with magnitude-based operation selection instead of our selection criterion, which we code-name DARTS+PT-Mag. Figure 6 plots the change of validation accuracy of DARTS+PT and DARTS+PT-Mag during the operation selection phase. As we can see, DARTS+PT is able to identify better operations that lead to higher validation accuracy than the magnitude-based alternative, revealing the effectiveness of our operation selection criterion. Moreover, DARTS+PT-Mag is only able to obtain a test error of 2.85% on the DARTS space on cifar10, much worse than DARTS+PT (2.61%), indicating that the operation selection part plays a crucial role in our method.

6.3 FIXING α AS UNIFORM

Since the proposed method does not rely on α for architecture selection, a natural question is whether it is necessary to optimize a stand-alone α at all. We find that by fixing α = 0 (uniform weights for all the operations) while training the supernet and then applying perturbation-based architecture selection, the resulting method performs on par with DARTS+PT, and in some cases even better. For example, DARTS+PT (fix α) achieves better performance than DARTS+PT on NAS-Bench-201. On DARTS' search space and its variants S1-S4, DARTS+PT (fix α) performs similarly to DARTS+PT. The results can be found in Table 3 and Table 4. This surprising finding suggests that even the most naive approach, simply training a supernet without α, becomes a competitive method when combined with the proposed perturbation-based architecture selection." }, { "heading": "7 CONCLUSION AND DISCUSSION", "text": "This paper attempts to understand differentiable NAS methods from the architecture selection perspective. We re-examine the magnitude-based architecture selection process of DARTS and provide empirical and theoretical evidence on why it does not indicate the underlying operation strength. We introduce an alternative perturbation-based architecture selection method that directly measures the operation strength via its contribution to the supernet's performance. The proposed selection method is able to consistently extract improved architectures from supernets trained identically to the respective base methods on several spaces and datasets.

Our method brings more freedom to supernet training, as it does not rely on α to derive the final architecture. We hope the perturbation-based architecture selection can bring a new perspective to the NAS community and prompt a rethinking of the role of α in differentiable NAS."
}, { "heading": "ACKNOWLEDGEMENT", "text": "This work is supported in part by NSF under IIS-1901527, IIS-2008173, IIS-2048280 and by Army Research Laboratory under agreement number W911NF-20-2-0158." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 DESCRIPTION ABOUT OUR BASELINE MODELS", "text": "" }, { "heading": "A.1.1 DARTS", "text": "DARTS (Liu et al., 2019) is a pioneering work that introduces the general differentiable NAS framework, which we reviewed in section 2. In DARTS, the topology and operation are searched together. Concretely, at the end of the search, it selects one operation for every edge in the normal (reduction) cell based on the architecture parameter α. Then it selects two input edges for each node in the cell by comparing the largest α of every input edge. The final architecture consists of one operation on each of the eight edges in the normal (reduction) cell. The operation on an edge will be selected from a pool of seven candidates: skip connection, avg pool 3x3, max pool 3x3, sep conv 3x3, sep conv 5x5, dil conv 3x3, and dil conv 5x5. In addition to these operations, DARTS also maintains a ”none” op, which is used exclusively for determining the topology rather than treated as an operation (Liu et al., 2019). Since the main focus of our paper is operation assignment, we omit none op when applying the proposed selection method on DARTS." }, { "heading": "A.1.2 SDARTS", "text": "SDARTS (Chen & Hsieh, 2020) is a variant of DARTS aiming at regularizing the bilevel optimization process in DARTS via random Gaussian perturbation, inspired by the recent finding that regularizing DARTS’ supernet leads to improved performance. While the optimization of architecture parameter α in SDARTS is identical to DARTS, it distorts the architecture parameters α with random Gaussian noise while training the model weights w. This simple yet effective regularizer is able to consistently improve the robustness and performance of DARTS.\nA.1.3 SGAS\nSGAS (Li et al., 2020) represents a line of research on improving the search efficiency of differentiable NAS by progressively shrinking the search space. It first trains the model weights w alone for 10 epochs. After that, it selects one edge from the supernet and then selects the best operation on that edge based on α to be discretized. The edge selection is based on the ranking of the proposed edge importance score. The process stops after all eight edges of the final architecture are decided." }, { "heading": "A.2 MICROARCHITECTURE OF SPACE S1 - S4", "text": "Zela et al. (2020) introduces four variants of the DARTS’ space (S1, S2, S3, and S4) to study the robustness of DARTS. These spaces differ from DARTS’ original space only in the number and types of operations on each edge. Apart from that, everything else is the same.\n• S1 is a pre-optimized search space consisting of top2 operations selected by DARTS. As a result, each edge contains a different set of operations to be searched from.\n• S2 consists of two operations: skip connect and sep conv 3x3.\n• S3 consists of three operations: none, skip connect and sep conv 3x3.\n• S4 consists of two operations: noise and sep conv 3x3. The noise operation outputs a random Gaussian noise N (0, 1) regardless of the input. This operation generally hurts the performance of discretized architecture, and should be avoided by NAS algorithm." 
}, { "heading": "A.3 THE COMPLETE ALGORITHM OF PERTURBATION-BASED ARCHITECTURE SELECTION", "text": "Algorithm 2: Perturbation-based Architecture Selection Input: A pretrained Supernet S, Set of Edges E from S, Set of Nodes N from S Result: Set of selected operations {o∗e}e∈E , and top2 input edges for each node {(e(1)∗n , e(2)∗n }n∈N while |E| > 0 do // operation selection phase\nrandomly select an edge e ∈ E (and remove it from E); forall operation o on edge e do\nevaluate the validation accuracy of S when o is removed (ACC\\o); end select the best operation on e: o∗e ← arg minoACC\\o; discretize edge e to o∗e and train the remaining supernet until it converges again;\nend while |N | > 0 do // topology selection phase\nrandomly select a node n ∈ N (and remove it from N ); forall input edge e of node n do\nevaluate the validation accuracy of S when e is removed (ACC\\e); end set top2 edges on n (e(1)∗n , e (2)∗ n ) to be the ones with lowest and second lowest ACC\\e;\nprune out all other edges of n and train the remaining supernet until it converges again; end" }, { "heading": "A.4 PROOF OF PROPOSITION 1", "text": "Proof. Let θskip = Softmax(αskip) and θconv = Softmax(αconv). Then the mixed operation can be written as me(xe) = θconvoe(xe) + θskipxe. We formally formulate the objective to be:\nmin θskip,θconv\nV ar(me(xe)−m∗) (5)\ns.t. θskip + θconv = 1 (6)\nThis constraint optimization problem can be solved with Lagrangian multipliers:\nL(θskip, θconv, λ) = V ar(me(xe)−m∗)− λ(θskip + θconv − 1) (7) = V ar(θconvoe(xe) + θskipxe −m∗)− λ(θskip + θconv − 1) (8) = V ar(θconv(oe(xe)−m∗) + θskip(xe −m∗)) − λ(θskip + θconv − 1) (9)\n= V ar(θconv(oe(xe)−m∗)) + V ar(θskip(xe −m∗)) + 2Cov(θconv(oe(xe)−m∗), θskip(xe −m∗)) − λ(θskip + θconv − 1) (10)\n= θ2convV ar(oe(xe)−m∗) + θ2skipV ar(xe −m∗) + 2θconvθskipCov(oe(xe)−m∗, xe −m∗) − λ(θskip + θconv − 1) (11)\nSetting: ∂L\n∂λ = θconv + θskip − 1 = 0 (12)\n∂L\n∂θconv = 2θconvV ar(oe(xe)−m∗) + 2θskipCov(oe(xe)−m∗, xe −m∗)\n− λ = 0 (13) ∂L\n∂θskip = 2θconvCov(oe(xe)−m∗, xe −m∗) + 2θskipV ar(xe −m∗)\n− λ = 0 (14)\nSolving the above equations will give us:\n(15)\nθ∗conv = V ar(xe −m∗)− Cov(oe(xe)−m∗, xe −m∗)\nZ (16)\nθ∗skip = V ar(oe(xe)−m∗)− Cov(oe(xe)−m∗, xe −m∗)\nZ (17)\nWhere Z = V ar(oe(xe)−m∗)−Cov(oe(xe)−m∗, xe−m∗) + V ar(xe−m∗)−Cov(oe(xe)− m∗, xe −m∗). Aligning basis with DARTS, we get:\nα∗conv = log [ V ar(xe −m∗)− Cov(oe(xe)−m∗, xe −m∗) ] + C (18)\nα∗skip = log [ V ar(oe(xe)−m∗)− Cov(oe(xe)−m∗, xe −m∗) ] + C (19)\nThe only term that differentiates αskip from αconv is the first term inside the logarithm, therefore:\nα∗conv ∝ var(xe −m∗) (20) α∗skip ∝ var(oe(xe)−m∗) (21)\nA.5 MORE FIGURES ON α AND DISCRETIZATION ACCURACY AT CONVERGENCE\nWe provide extra figures similar to Figure 1 to take into account the randomness of supernet’s training. We first train 6 supernets with different seeds, and then randomly select 1 edge from each of them. We can see that the results are consistent with Figure 1. As shown in Figure 7, the magnitude of α for each operation does not necessarily agree with its relative discretization accuracy at convergence.\nA.6 PERFORMANCE TRAJECTORY OF DARTS+PT (FIX α) ON NAS-BENCH-201\nWe plot the performance trajectory of DARTS+PT (fixα) on NAS-Bench-201 similar to Figure 4. As shown in Figure 8, it consistently achieves strong performance without training α at all, indicating that the extra freedom of supernet training without α can be explored to develop improved search algorithm in the future." 
}, { "heading": "A.7 ABLATION STUDY ON THE NUMBER OF FINE-TUNING EPOCHS", "text": "As described in section 4.2, we perform fine-tuning between two edge decisions to recover supernet from the accuracy drop after discretization. The number of fine-tuning epochs is set to 5 for all experiments because empirically we find that it is enough for the supernet to converge again. In this section, we conduct an ablation study on the effect of the number of fine-tuning epochs. As shown in Figure 9, the gain from tuning the supernet longer than 5 epochs is marginal." }, { "heading": "A.8 ARCHITECTURE TRANSFERABILITY EVALUATION ON IMAGENET", "text": "We further evaluate the performance of the derived architecture on ImageNet. We strictly follow the training protocals as well as the hyperparameter settings of DARTS (Liu et al., 2019) for this experiment. As shown in Table 5, the proposed method improves the top1 performance of DARTS by 1.2%.\nA.9 SEARCHED ARCHITECTURES" } ]
2021
null
SP:173177f78449ef09647670389b0ffba1e35db0ba
[ "The paper's starting point is the question whether the episodic training is beneficial, or not, for FSL / Prototypical Networks. The work can be seen as a follow-up of the recent works showing that simple baselines can outperform rather sophisticated few-shot learning models. Towards answering this question, this paper points out that Prototypical Networks (PN) are related to Neighborhood Component Analysis (NCA), and NCA can be considered as an episodic training-free alternative of PN.", "This paper conducts a case study for the non-parametric few-shot classification methods (e.g. Prototypical Networks). It proposes to utilize the classic Neighbourhood Component Analysis (NCA) sampling instead of the original matching or prototypical style episode sampling. The authors conducted ablation experiments to investigate the properties of this new sampling and compare it with the basic Prototypical Networks (PNs) method. The final accuracy is comparable with the recent methods on three benchmark datasets. ", "This paper studies the question of whether episodes (using a split of support & query sets) are necessary for non-parameteric approaches for few-shot learning (examples include Prototypical Networks & Matching Networks). The authors propose a Neighborhood Component Analysis-based method for few-shot learning, where a mini-batch consists of examples from a subset of base classes, with no support or query split. The NCA loss then involves distances computed across all examples in the mini-batch rather than only using distances computed between support and query examples (as is done in Prototypical Networks & Matching Networks). The authors show that their proposed method is able to achieve better performance for 1-shot & 5-shot classification in miniImageNet, cifar-fs, and tieredImageNet benchmarks. They also speculate that the proposed method performs better because of its use of more distance computations in the loss compared to Prototypical Networks & Matching Networks and confirm this by conducting an experiments where they randomly drop distances in their NCA loss and showing this has a negative impact on performance.", "This paper studies the role of the popular episodic training paradigm, in the context of two metric learning-based episodic models: Prototypical Networks (PN) and Matching Networks (MN). They show that these popular methods underperform compared to the closely-related NCA model which is non-episodic, i.e. does not separate the examples sampled in each training batch into disjoint support and query sets. They argue that the superior performance of NCA is because, due to not performing that separation, the total number of pairwise comparisons that are used in the loss computation is larger than those used for PN/MN, making the gradients more informative. Indeed, in episodic models, each query example is only compared to support examples, but not to other query examples, resulting in significantly fewer comparisons. Experimentally, they show that for a fixed batch size (where the ‘batch size’ in the case of the episodes is given by the combined size of the support and query sets), NCA outperforms PN and MN, despite its simplicity in terms of its smaller number of hyperparameters. They also show that randomly discarding comparisons from NCA leads to similar performance to the analogous PN/MN models and perform a set of ablations to “bridge” the gap between PN and NCA, further strengthening their finding that support/query separation hurts performance. 
", "This paper presents a new perspective on the episodic learning of nonparametric few-shot learning methods. The main claim is that current popular nonparametric methods, such as Prototypical Networks (PNs) and Matching Networks (MNs), are not data-efficient because less gradient signal is propagated during training due to the artificial division of the data points to support and query sets. The authors instead propose to use the standard learning protocol with batches and an equivalent loss function based on the Neighbourhood Component Analysis (NCA). This loss function exploits all connections between data points in the batch. The authors then propose three techniques to perform a few-shot classification during evaluation based on k-NN, nearest centroid, and soft assignment. Through extensive experiments, the authors justify their perspective and show comparable results to other recent FSL methods (not necessarily nonparametric ones). " ]
Episodic learning is a popular practice among researchers and practitioners interested in few-shot learning. It consists of organising training in a series of learning problems (or episodes), each divided into a small training and validation subset to mimic the circumstances encountered during evaluation. But is this always necessary? In this paper, we investigate the usefulness of episodic learning in methods which use nonparametric approaches, such as nearest neighbours, at the level of the episode. For these methods, we not only show how the constraints imposed by episodic learning are not necessary, but that they in fact lead to a data-inefficient way of exploiting training batches. We conduct a wide range of ablative experiments with Matching and Prototypical Networks, two of the most popular methods that use nonparametric approaches at the level of the episode. Their “non-episodic” counterparts are considerably simpler, have less hyperparameters, and improve their performance in multiple few-shot classification datasets.
[ { "affiliations": [], "name": "Steinar Laenen" }, { "affiliations": [], "name": "Luca Bertinetto" } ]
[ { "authors": [ "K.R. Allen", "E. Shelhamer", "H. Shin", "J.B. Tenenbaum" ], "title": "Infinite mixture prototypes for few-shot learning", "venue": "International Conference on Machine Learning", "year": 2019 }, { "authors": [ "H. Altae-Tran", "B. Ramsundar", "A.S. Pappu", "V. Pande" ], "title": "Low data drug discovery with one-shot learning", "venue": "ACS central science", "year": 2017 }, { "authors": [ "Y. Bai", "M. Chen", "P. Zhou", "T. Zhao", "J. Lee", "S. Kakade", "H. Wang" ], "title": "and C", "venue": "Xiong. How important is the train-validation split in meta-learning? In International Conference on Machine Learning", "year": 2021 }, { "authors": [ "S. Bengio", "Y. Bengio", "J. Cloutier", "J. Gecsei" ], "title": "On the optimization of a synaptic learning rule", "venue": "Preprints Conf. Optimality in Artificial and Biological Neural Networks. Univ. of Texas", "year": 1992 }, { "authors": [ "L. Bertinetto", "J.F. Henriques", "P.H. Torr", "A. Vedaldi" ], "title": "Meta-learning with differentiable closed-form solvers", "venue": "International Conference on Learning Representations", "year": 2019 }, { "authors": [ "L. Bertinetto", "J.F. Henriques", "J. Valmadre", "P. Torr", "A. Vedaldi" ], "title": "Learning feed-forward one-shot learners", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "T. Cao", "M. Law", "S. Fidler" ], "title": "A theoretical analysis of the number of shots in few-shot learning", "venue": "International Conference on Learning Representations", "year": 2020 }, { "authors": [ "J. Chen", "X.-M. Wu", "Y. Li" ], "title": "Q", "venue": "Li, L.-M. Zhan, and F.-l. Chung. A closer look at the training strategy for modern meta-learning. Advances in Neural Information Processing Systems", "year": 2020 }, { "authors": [ "W.-Y. Chen", "Y.-C. Liu", "Z. Kira", "Y.-C.F. Wang", "J.-B. Huang" ], "title": "A closer look at few-shot classification", "venue": "International Conference on Learning Representations", "year": 2019 }, { "authors": [ "Y. Chen", "X. Wang", "Z. Liu", "H. Xu", "T. Darrell" ], "title": "A new meta-baseline for few-shot learning", "venue": "arXiv preprint arXiv:2003.04390", "year": 2020 }, { "authors": [ "G.S. Dhillon", "P. Chaudhari", "A. Ravichandran", "S. Soatto" ], "title": "A baseline for few-shot image classification", "venue": "International Conference on Learning Representations", "year": 2020 }, { "authors": [ "N. Fei", "Z. Lu", "T. Xiang", "S. Huang" ], "title": "Melr: Meta-learning via modeling episode-level relationships for few-shot learning", "venue": "International Conference on Learning Representations", "year": 2021 }, { "authors": [ "C. Finn" ], "title": "Stanford cs330: Multi-task and meta-learning", "venue": "2019 | lecture 4 - nonparametric meta-learners. https://www.youtube.com/watch?v=bc-6tzTyYcM&list= PLoROMvodv4rMC6zfYmnD7UG3LVvwaITY5&index=4", "year": 2019 }, { "authors": [ "C. Finn", "P. Abbeel", "S. Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "International Conference on Machine Learning", "year": 2017 }, { "authors": [ "N. Frosst", "N. Papernot", "G. Hinton" ], "title": "Analyzing and improving representations with the soft nearest neighbor loss", "venue": "International Conference on Machine Learning", "year": 2019 }, { "authors": [ "T. Furlanello", "Z. Lipton", "M. Tschannen", "L. Itti", "A. 
Anandkumar" ], "title": "Born again neural networks", "venue": "International Conference on Machine Learning", "year": 2018 }, { "authors": [ "G. García", "R. Del Amor", "A. Colomer", "R. Verdú-Monedero", "J. Morales-Sánchez", "V. Naranjo" ], "title": "Circumpapillary oct-focused hybrid learning for glaucoma grading using tailored prototypical neural networks", "venue": "Artificial Intelligence in Medicine, 118:102132", "year": 2021 }, { "authors": [ "S. Gidaris", "A. Bursuc", "N. Komodakis", "P. Pérez", "M. Cord" ], "title": "Boosting few-shot visual learning with self-supervision", "venue": "IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "J. Goldberger", "G.E. Hinton", "S.T. Roweis", "R.R. Salakhutdinov" ], "title": "Neighbourhood components analysis", "venue": "Advances in Neural Information Processing Systems", "year": 2005 }, { "authors": [ "M. Goldblum", "S. Reich", "L. Fowl", "R. Ni", "V. Cherepanova", "T. Goldstein" ], "title": "Unraveling meta-learning: Understanding feature representations for few-shot tasks", "venue": "International Conference on Machine Learning. PMLR", "year": 2020 }, { "authors": [ "A. Graves", "G. Wayne", "I. Danihelka" ], "title": "Neural turing machines", "venue": "arXiv preprint arXiv:1410.5401", "year": 2014 }, { "authors": [ "T. Hospedales", "A. Antoniou", "P. Micaelli", "A. Storkey" ], "title": "Meta-learning in neural networks: A survey", "venue": "arXiv preprint arXiv:2004.05439", "year": 2020 }, { "authors": [ "B.M. Lake", "R. Salakhutdinov", "J.B. Tenenbaum" ], "title": "Human-level concept learning through probabilistic program induction", "venue": "Science", "year": 2015 }, { "authors": [ "K. Lee", "S. Maji", "A. Ravichandran", "S. Soatto" ], "title": "Meta-learning with differentiable convex optimization", "venue": "IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "T. Munkhdalai", "H. Yu" ], "title": "Meta networks", "venue": "International Conference on Machine Learning", "year": 2017 }, { "authors": [ "T. Munkhdalai", "X. Yuan", "S. Mehri", "A. Trischler" ], "title": "Rapid adaptation with conditionally shifted neurons", "venue": "International Conference on Machine Learning", "year": 2018 }, { "authors": [ "B. Oreshkin", "P.R. López", "A. Lacoste" ], "title": "Tadam: Task dependent adaptive metric for improved few-shot learning", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "M. Patacchiola", "J. Turner", "E.J. Crowley" ], "title": "M", "venue": "O’Boyle, and A. J. Storkey. Bayesian meta-learning for the few-shot setting via deep kernels. Advances in Neural Information Processing Systems", "year": 2020 }, { "authors": [ "H. Qi", "M. Brown", "D.G. Lowe" ], "title": "Low-shot learning with imprinted weights", "venue": "IEEE Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "L. Qiao", "Y. Shi", "J. Li", "Y. Wang", "T. Huang", "Y. Tian" ], "title": "Transductive episodic-wise adaptive metric for few-shot learning", "venue": "IEEE International Conference on Computer Vision", "year": 2019 }, { "authors": [ "A. Raghu", "M. Raghu", "S. Bengio", "O. Vinyals" ], "title": "Rapid learning or feature reuse? towards understanding the effectiveness of maml", "venue": "International Conference on Learning Representations", "year": 2020 }, { "authors": [ "S. Ravi", "H. 
Larochelle" ], "title": "Optimization as a model for few-shot learning", "venue": "International Conference on Learning Representations", "year": 2017 }, { "authors": [ "A. Ravichandran", "R. Bhotika", "S. Soatto" ], "title": "Few-shot learning with embedded class models and shot-free meta training", "venue": "IEEE International Conference on Computer Vision", "year": 2019 }, { "authors": [ "M. Ren", "E. Triantafillou", "S. Ravi", "J. Snell", "K. Swersky", "J.B. Tenenbaum", "H. Larochelle", "R.S. Zemel" ], "title": "Meta-learning for semi-supervised few-shot classification", "venue": "International Conference on Learning Representations", "year": 2018 }, { "authors": [ "R. Salakhutdinov", "G. Hinton" ], "title": "Learning a nonlinear embedding by preserving class neighbourhood structure", "venue": "Artificial Intelligence and Statistics", "year": 2007 }, { "authors": [ "A. Salekin", "N. Russo" ], "title": "Understanding autism: the power of eeg harnessed by prototypical learning", "venue": "Proceedings of the Workshop on Medical Cyber Physical Systems and Internet of Medical Things, pages 12–16", "year": 2021 }, { "authors": [ "A. Santoro", "S. Bartunov", "M. Botvinick", "D. Wierstra", "T. Lillicrap" ], "title": "Meta-learning with memoryaugmented neural networks", "venue": "International Conference on Machine Learning", "year": 2016 }, { "authors": [ "J. Schmidhuber" ], "title": "Evolutionary principles in self-referential learning", "venue": "or on learning how to learn: the meta-meta-... hook. PhD thesis, Technische Universität München", "year": 1987 }, { "authors": [ "J. Schmidhuber" ], "title": "Learning to control fast-weight memories: An alternative to dynamic recurrent networks", "venue": "Neural Computation", "year": 1992 }, { "authors": [ "J. Snell", "K. Swersky", "R. Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "J. Snell", "R. Zemel" ], "title": "Bayesian few-shot classification with one-vs-each pólya-gamma augmented gaussian processes", "venue": "International Conference on Learning Representations", "year": 2021 }, { "authors": [ "Q. Sun", "Y. Liu", "T.-S. Chua", "B. Schiele" ], "title": "Meta-transfer learning for few-shot learning", "venue": "IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "S. Thrun" ], "title": "Is learning the n-th thing any easier than learning the first", "venue": "In Advances in Neural Information Processing Systems,", "year": 1996 }, { "authors": [ "Y. Tian", "Y. Wang", "D. Krishnan", "J.B. Tenenbaum" ], "title": "and P", "venue": "Isola. Rethinking few-shot image classification: a good embedding is all you need? European Conference on Computer Vision", "year": 2020 }, { "authors": [ "E. Triantafillou", "R.S. Zemel", "R. Urtasun" ], "title": "Few-shot learning through an information retrieval lens", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "P.E. Utgoff" ], "title": "Shift of bias for inductive concept learning", "venue": "Machine learning: An artificial intelligence approach", "year": 1986 }, { "authors": [ "R. Vilalta", "Y. Drissi" ], "title": "A perspective view and survey of meta-learning", "venue": "Artificial Intelligence Review", "year": 2002 }, { "authors": [ "O. Vinyals", "C. Blundell", "T. Lillicrap", "D. Wierstra" ], "title": "et al", "venue": "Matching networks for one shot learning. 
In Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Y. Wang", "W.-L. Chao", "K.Q. Weinberger" ], "title": "and L", "venue": "van der Maaten. Simpleshot: Revisiting nearest-neighbor classification for few-shot learning. arXiv preprint arXiv:1911.04623", "year": 2019 }, { "authors": [ "S.W. Yoon", "J. Seo", "J. Moon" ], "title": "Tapnet: Neural network augmented with task-adaptive projection for few-shot learning", "venue": "International Conference on Machine Learning", "year": 2019 }, { "authors": [ "J. Zhang", "C. Zhao", "B. Ni", "M. Xu", "X. Yang" ], "title": "Variational few-shot learning", "venue": "IEEE International Conference on Computer Vision", "year": 2019 } ]
[ { "heading": "1 Introduction", "text": "The problem of few-shot learning (FSL) – classifying examples from previously unseen classes given only a handful of training data – has considerably grown in popularity within the machine learning community in the last few years. The reason is likely twofold. First, being able to perform well on FSL problems is important for several applications, from learning new symbols [23] to drug discovery [2]. Second, since the aim of researchers interested in meta-learning is to design systems that can quickly learn novel concepts by generalising from previously encountered learning tasks, FSL benchmarks are often adopted as a practical way to empirically validate meta-learning algorithms.\nTo the best of our knowledge, there is not a widely recognised definition of meta-learning. In a recent survey, Hospedales et al. [22] informally describe it as “the process of improving a learning algorithm over multiple learning episodes”. In practical terms, following the compelling rationale that “test and train conditions should match” [48, 13], several seminal meta-learning papers (e.g. [48, 32, 14]) have emphasised the importance of organising training into episodes, i.e. learning problems with a limited amount of “training” (the support set) and “test” examples (the query set) to mimic the test-time scenario presented by FSL benchmarks.\nHowever, several recent works (e.g. [9, 49, 11, 44]) showed that simple baselines can outperform established FSL meta-learning methods by using embeddings pre-trained with the standard crossentropy loss, thus casting a doubt on the importance of episodes in FSL. Inspired by these results, we aim at understanding the practical usefulness of episodic learning in popular FSL methods relying on metric-based nonparametric classifiers such as Matching and Prototypical Networks [48, 40]. We chose this family of methods because they do not perform any adaptation at test time. This allows us\n∗Work done while research intern at FiveAI\n35th Conference on Neural Information Processing Systems (NeurIPS 2021).\nto test the efficacy of episodic training without having to significantly change the baseline algorithms, which could potentially introduce confounding factors.\nIn this work we perform a case study focussed on Matching Networks [48] and Prototypical Networks [40], and we show that within this family of methods episodic learning a) is detrimental for performance, b) is analogous to randomly discarding examples from a batch and c) introduces a set of superfluous hyperparameters that require careful tuning. Without episodic learning, these methods are closely related to the classic Neighbourhood Component Analysis (NCA) [19, 35] on deep embeddings and achieve, without bells and whistles, an accuracy that is competitive with recent methods on multiple FSL benchmarks: miniImageNet, CIFAR-FS and tieredImageNet.\nPyTorch code is available at https://github.com/fiveai/on-episodes-fsl." }, { "heading": "2 Background and method", "text": "This section is divided as follows: Sec. 2.1 introduces episodic learning and illustrates a data efficiency issue encountered with nonparametric few-shot learners based on episodes; Sec. 2.2 introduces the losses from Snell et al. [40], Vinyals et al. [48] and Goldberger et al. [19] which we use throughout our experiments; and Sec. 2.3 explains the three options we explored to perform FSL classification with previously-trained feature embeddings." 
}, { "heading": "2.1 Episodic learning", "text": "A common strategy to train FSL methods is to consider a distribution Ê over possible subsets of labels that is as close as possible to the one encountered during evaluation E 2 [48]. Each episodic batch BE = {S,Q} is obtained by first sampling a subset of labels L from Ê , and then sampling images constituting both support set S and query set Q from the set of images with labels in L, where S = {(s1, y1), . . . , (sn, yn)}, Q = {(q1, y1), . . . , (qm, ym)}, and Sk and Qk denote the sets of images with label y = k in the support set and query set respectively.\nFor most methods, this corresponds to training on a series of mini-batches in which each image belongs to either the support or the query set. Support and query sets are constructed such that they both contain all the classes of L, and a fixed number of images per class. Therefore, episodes are defined by three variables: the number of classes w = |L| (the “ways”), the number of examples per class in the support set n = |Sk| (the “shots”), and the number of examples per class in the query set m = |Qk|. During evaluation, the set {w, n,m} defines the problem setup. Instead, at training time {w, n,m} can be seen as a set of hyperparameters controlling the batch creation, and that (as we will see in Sec. 3.2) requires careful tuning.\n2Note that, in FSL, the sets of classes encountered during training and evaluation are disjoint.\nIn a Maximum Likelihood Estimation framework, training on these episodes can be written as\narg max θ E L∼Ê E S∼L Q∼L ∑ (qi,yi)∈Q logPθ (yi|qi, S, ρ) . (1) For the sake of brevity, with a slight abuse of notation we omit the function fθ (e.g. a deep neural network) which is used to obtain a representation for the images in S and Q, and whose parameters θ are optimised during the training process. Note that the optimisation of Eq. 1 depends on an optional set of parameters ρ. This is obtained by an “inner” optimisation procedure, whose scope is limited to the current episode [22]. The idea is that the “outer” optimisation loop, by attending to a distribution of episodes, will appropriately shift the inductive bias of the algorithm located in the inner loop, thus learning how to learn [47]. In recent years, many interesting proposals have been made about what form ρ should have, and how it should be computed. For instance, in MAML [14] ρ takes the form of an update of the global parameters θ, while Ravi and Larochelle [32] learn to optimise by considering ρ as set of the hyper-parameters of the optimiser’s update rule.\nOther methods, such as Matching and Prototypical Networks [48, 40], avoid learning a separate set of parameters ρ altogether, and utilise a nonparametric learner (such as nearest neighbour classifiers) at the inner level. We chose to focus our case study on these methods not only because they have been seminal for the community, but also for ease of experimental design. Having ρ = ∅ considerably reduces the design complexity of the algorithm, thus allowing precise ablations to understand the efficacy of episodic learning without considerably changing the nature of the original algorithms.\nConsiderations on data efficiency. The constraints imposed by episodic learning on the role each image has in a training batch has subtle but important implications, illustrated in Fig. 1 by highlighting the number of distances contributing to the loss. 
By dividing batches between support and query set (S and Q) during training, episodes have the side effect of disregarding many of the distances between labelled examples that would constitute useful training signal for nonparametric FSL methods. More specifically, for metric-based nonparametric methods, the number of training distances that are omitted in a batch because of the episodic strategy grows quadratically as O(w²(m² + n²)) (derivation shown in Appendix A). Table 1 breaks down this difference in terms of gradients from positive and negative distance pairs (which we simply refer to as positives and negatives throughout the rest of the paper). In a typical training batch with w = 20, m = 15 and n = 5 [40], ignoring the episodic constraints increases the number of both positives and negatives by more than 150%.\nIn the remainder of this paper, we conduct a case study to illustrate how this issue affects two of the most popular FSL algorithms relying on nonparametric approaches at the inner level: Prototypical Networks [40] and Matching Networks [48]." }, { "heading": "2.2 Loss functions", "text": "Prototypical Networks (PNs) [40] are one of the most popular and effective approaches in the few-shot learning literature. They are at the core of several recently proposed FSL methods (e.g. [27, 18, 1, 50, 7]), and they are used in a number of applied machine learning works (e.g. EEG scan analysis for autism [36] and glaucoma grading [17]).\nDuring training, episodes consisting of a support set S and a query set Q are sampled as described in Sec. 2.1. Then, a prototype for each class k is computed as the mean embedding of the samples from the support set belonging to that class: c_k = (1/|S_k|) Σ_{(s_i, y_k)∈S_k} f_θ(s_i), where f_θ is a deep neural network with parameters θ learned via Eq. 1.\nLet C = {(c_1, y_1), . . . , (c_k, y_k)} be the set of prototypes and corresponding labels. The loss can be written as follows:\nL_PNs = −(1/|Q|) Σ_{(q_i, y_i)∈Q} log [ exp(−‖f_θ(q_i) − c_{y_i}‖²) / Σ_{k′} exp(−‖f_θ(q_i) − c_{k′}‖²) ],\nwhere k′ is an index that goes over all classes.\nMatching Networks (MNs) [48] are closely related to PNs in the multi-shot case and equivalent in the 1-shot case. Rather than aggregating the embeddings of the same class into prototypes, this loss directly computes a softmax over individual embeddings of the support set, as:\nL_MNs = −(1/|Q|) Σ_{(q_i, y)∈Q} log [ Σ_{s_j∈S_y} exp(−‖f_θ(q_i) − f_θ(s_j)‖²) / Σ_{s_k∈S} exp(−‖f_θ(q_i) − f_θ(s_k)‖²) ].\nIn their work, Vinyals et al. [48] use the cosine rather than the Euclidean distance. However (as [40]), we observed that the Euclidean distance is a better choice for FSL problems, and thus we use it in all the losses of our experiments. Note that Vinyals et al. [48] also suggest a variant of L_MNs (MNs with “Full Context Embeddings”), where an LSTM (with an extra set of parameters) is used to condition the way the inputs are embedded on the current support set. In our experiments, we did not consider this variant as it falls in the category of adaptive episodic learning approaches (ρ ≠ ∅, see Sec. 2.1).
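For concreteness, the two losses above can be written in a few lines of PyTorch. This is a sketch rather than the paper's implementation: embeddings are assumed to be precomputed, labels are assumed to be integers 0..w−1, and squared Euclidean distances are used as in the equations above.

```python
import torch
import torch.nn.functional as F

def prototypical_loss(support_z, support_y, query_z, query_y):
    """L_PNs: cross-entropy over negative squared distances to class prototypes."""
    w = int(support_y.max()) + 1
    protos = torch.stack([support_z[support_y == k].mean(0) for k in range(w)])
    logits = -torch.cdist(query_z, protos) ** 2        # -||f(q_i) - c_k||^2
    return F.cross_entropy(logits, query_y)

def matching_loss(support_z, support_y, query_z, query_y):
    """L_MNs: softmax over distances to individual support embeddings."""
    logits = -torch.cdist(query_z, support_z) ** 2     # -||f(q_i) - f(s_j)||^2
    log_p = F.log_softmax(logits, dim=1)               # normalise over all of S
    same = query_y[:, None] == support_y[None, :]      # which s_j are in S_y
    masked = log_p.masked_fill(~same, float('-inf'))   # keep same-class terms only
    return -torch.logsumexp(masked, dim=1).mean()
```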
Neighbourhood Component Analysis (NCA). L_MNs and L_PNs sum over the likelihoods that a query image belongs to the same class as a certain sample (or prototype) from the support set by computing the softmax over the distances between the query and the support samples (or prototypes). This is closely related to the Neighbourhood Component Analysis approach by Goldberger et al. [19] (and expanded to the non-linear case by Salakhutdinov et al. [35] and Frosst et al. [15]), except for a few important differences which we discuss at the end of this section.\nLet i ∈ [1, b] be the indices of the images within a batch B. The NCA loss can be written as:\nL_NCA = −(1/|B|) Σ_{i∈{1,...,b}} log [ Σ_{j∈{1,...,b}, j≠i, y_i=y_j} exp(−‖z_i − z_j‖²) / Σ_{k∈{1,...,b}, k≠i} exp(−‖z_i − z_k‖²) ],\nwhere z_i = f_θ(x_i) is an image embedding and y_i its corresponding label. By minimising this loss, distances between embeddings from the same class will be minimised, while distances between embeddings from different classes will be maximised. Importantly, note how the concepts of support set and query set here do not exist. More simply, the images (and respective labels) constituting the batch B = {(x_1, y_1), . . . , (x_b, y_b)} are sampled uniformly. Given the similarity between these three losses, and considering that PNs and MNs do not perform episode-specific parameter adaptation, {w, m, n} can be simply interpreted as the set of hyperparameters controlling the sampling of mini-batches during training. More specifically, PNs, MNs and NCA differ in three aspects:\nI. First and foremost, due to the nature of episodic learning, PNs and MNs only consider pairwise distances between the query and the support set (Fig. 1 left); NCA instead uses all the distances within a batch and treats each example in the same way (Fig. 1 right).\nII. Only PNs rely on the creation of prototypes.\nIII. Because of how L, S and Q are sampled in episodic learning (Eq. 1), for PNs and MNs some images might be sampled more frequently than others (sampling “with replacement”). NCA instead visits every image of the dataset once for each epoch (sampling “without replacement”).\nTo investigate the effects of these three differences, in Sec. 3 we conduct a wide range of experiments." }, { "heading": "2.3 Few-shot classification during evaluation", "text": "Once f_θ has been trained, there are many possible ways to perform few-shot classification during evaluation. In this paper we consider three simple approaches that are particularly intuitive for embeddings learned via metric-based losses like the ones described in Sec. 2.2. Note that, in the 1-shot case, all the evaluation methods considered coincide.\nk-NN. To classify an image q_i ∈ Q, we first compute the Euclidean distance to each support point s_j ∈ S: d_ij = ‖f_θ(q_i) − f_θ(s_j)‖². Then, we simply assign y(q_i) to be the majority label of the k nearest neighbours. A downside here is that k is a hyper-parameter that has to be chosen, although a reasonable choice in the FSL setup is to set it equal to the number of “shots” n.\nNearest centroid. Similar to k-NN, we can perform classification by inheriting the label of the closest class centroid, i.e. y(q_i) = argmin_{j∈{1,...,k}} ‖f_θ(q_i) − c_j‖. This is the approach used by Prototypical Networks [40], SimpleShot [49], and both baselines of Chen et al. [10].\nSoft assignments. To classify an image q_i ∈ Q, we compute the values\np_ij = exp(−‖f_θ(q_i) − f_θ(s_j)‖²) / Σ_{s_k∈S} exp(−‖f_θ(q_i) − f_θ(s_k)‖²)\nfor all s_j ∈ S, where p_ij is the probability that i inherits its class from j. We then compute the likelihood for each class k, Σ_{s_j∈S_k} p_ij, and choose the class with the highest likelihood: y(q_i) = argmax_k Σ_{s_j∈S_k} p_ij. This is the approach for classification adopted by the original NCA paper [19] and by Matching Networks [48].\nWe experiment with all three alternatives and observe that the nearest centroid approach is the most effective (details available in Appendix D). For this reason, unless otherwise specified, we use it as the default in our experiments."
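A sketch of the NCA loss and of the nearest-centroid evaluation rule described above, again assuming precomputed embeddings and integer labels (illustrative, not the released code):

```python
import torch

def nca_loss(z, y):
    """L_NCA on a plain batch: all pairwise distances, no support/query roles."""
    d2 = torch.cdist(z, z) ** 2
    d2.fill_diagonal_(float('inf'))                   # exclude i == j and i == k
    log_p = (-d2).log_softmax(dim=1)                  # denominator: all k != i
    same = y[:, None] == y[None, :]
    num = log_p.masked_fill(~same, float('-inf'))     # numerator: j with y_j = y_i
    return -torch.logsumexp(num, dim=1).mean()

def nearest_centroid_predict(support_z, support_y, query_z):
    """Nearest-centroid rule of Sec. 2.3: inherit the closest centroid's label."""
    w = int(support_y.max()) + 1
    protos = torch.stack([support_z[support_y == k].mean(0) for k in range(w)])
    return torch.cdist(query_z, protos).argmin(dim=1)
```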
}, { "heading": "3 Experiments", "text": "In the following, Sec. 3.1 describes our experimental setup; Sec. 3.2 shows the important effect of the hyperparameters controlling the creation of episodes; in Sec. 3.3 we compare the episodic strategy to randomly discarding pairwise distances within a batch; in Sec. 3.4 we perform a set of ablations to better illustrate the relationship between PNs, MNs and NCA; finally, in Sec. 3.5 we compare our version of the NCA to several recent methods." }, { "heading": "3.1 Experimental setup", "text": "We conduct our experiments on miniImageNet [48], CIFAR-FS [5] and tieredImageNet [34], using the popular ResNet-12 variant first adopted by Lee et al. [24] as embedding function fθ 3 . A detailed description of benchmarks, architecture and choice of hyperparameters is deferred to Appendix F, while below we discuss the most important choices of the experimental setup.\nLike Wang et al. [49], for all our experiments (including those with Prototypical and Matching Networks) we centre and normalise the feature embeddings before performing classification, as it is considerably beneficial for performance. After training, we compute the mean feature vectors of all the images in the training set: x̄ = 1|Dtrain| ∑ x∈Dtrain x. Then, all feature vectors in the test set are updated as xi ← xi − x̄, and normalised by xi ← xi‖xi‖ .\nAs standard [22], performance is assessed on episodes of 5-way, 15-query and 1- or 5-shot. Each model is evaluated on 10,000 episodes sampled from the validation set during training, or from the test set during testing. To further reduce the variance, we trained each model three times with three different random seeds, for a total of 30,000 episodes per configuration, from which 95% confidence intervals are computed." }, { "heading": "3.2 Batch size and episodes", "text": "Despite Prototypical and Matching Networks being among the simplest FSL methods, the creation of episodes requires the use of several hyperparameters ({w,m, n}, Sec. 2.1) which can significantly affect performance. Snell et al. [40] state that the number of shots n between training and testing should match and that one should use a higher number of waysw during training. In their experiments, they train 1-shot models with w = 30, n = 1, m = 15 and 5-shot models with w = 20, n = 5, m = 15, with batch sizes of 480 and 400, respectively. Since the corresponding batch sizes of these configurations differ, making direct comparisons between them is difficult.\n3Note that, since we do not use a final linear layer for classification, our backbone is in fact a ResNet-11.\nInstead, to directly compare configurations across batch sizes, we define an episode by its number of shots n, the batch size b and the total number of images per class m+ n (the sum of elements across support and query set). For example, if we train a 5-shot model with m + n = 8 and b = 256, its corresponding training episodes will have n = 5, m = 8 − n = 3, and w = 256/(m + n) = 32. Using this notation, we train configurations of PNs and MNs covering several combinations of these hyperparameters, so that the resulting batch size corresponding to an episode is 128, 256 or 512. Then, we train three configurations of the NCA, where the sole hyperparameter is the batch size b.\nResults for CIFAR-FS can be found in Fig. 2, where we report results for NCA, PNs and MNs with m+n = 8, 16 or 32. Results for miniImageNet observe the same trend and are deferred to Appendix H. 
For consistency in our comparisons, we evaluate performance using a nearest centroid classifier when comparing against PNs, and soft assignments when comparing against MNs (see Sec. 2.3). Note that PNs and MNs results for 1-shot with m + n = 16 and m + n = 32 are not reported, as they fare significantly worse. The 1-shot m+ n = 16 is 4% worse in the best case compared to the lowest lines in Fig. 2, and the m+ n = 32 is 10% worse in the best case. This is likely because these two setups exploit the fewest number of pairs among all the setups, which leads to the least training signal being available. In Appendix E we discuss whether the difference in performance between the different episodic batch setups of Fig. 2 can be solely explained by the differences in the number of distance pairs used in the batch configurations. We indeed find that generally speaking the higher the number of pairs the better. However, one should also consider the positive/negative balance and the number of classes present within a batch.\nSeveral things can be observed from Fig. 2. First, NCA-trained embeddings perform better than all configurations, no matter the batch size. Second, PNs and MNs are very sensitive to different hyperparameter configurations. For instance, with batches of size 128, PNs trained with episodes of 5-shot and m+n=32 perform worse than a PNs trained with 5-shot episodes and m+n=16. Note that, as we will show in Table 2, the best episodic configurations for PNs and MNs found with this hyperparameter search is superior to the setting used in the original papers." }, { "heading": "3.3 Episodic batches vs. random sub-sampling", "text": "Despite the inferior performance with respect to the NCA, one might posit that, by training on episodes, PNs and MNs can somehow make better use of a smaller number of distances within a batch. This could be useful, for instance, in situations where it is important to train with very large batches. Given the increased conceptual complexity and the extra hyperparameters, the efficacy of episodic learning (in cases where a smaller number of distances should be considered) should be validated against the much simpler approach of random subsampling. We perform an experiment where we train NCA models by randomly discarding a fraction of the total number of distances used in the loss. Then, for comparison, we include different PNs and MNs models, after having computed to which percentage of discarded pairs (in a normal batch) their episodic batches correspond to.\nResults can be found in Fig. 3. As expected, we can see how subsampling a fraction of the total available number of pairs within a batch negatively affects performance. More importantly, we can notice that the points representing PNs and MNs models lie very close to the under-sampling version of the NCA. This suggests that the episodic strategy is roughly equivalent, empirically, to only exploiting a fraction of the distances available in a batch. Note how, moving along the x-axis of Fig. 3, variants of PNs and MNs exploiting a higher number of distances perform better." }, { "heading": "3.4 Ablation experiments", "text": "To better analyse why NCA performs better, in this section we consider the three key differences discussed at the end of Sec. 2.2 by performing a series of ablations on models trained on batches of size 128, 256 and 512. Results are summarised in Fig. 4. We refer the reader to Appendix B to obtain detailed steps describing how these ablations affect the losses of Sec. 
First, we compare two variants of the NCA: one in which the sampling of the training batches happens sequentially and without replacement, as is standard in supervised learning, and one where batches are sampled with replacement. This modification (rows 1 and 2 of Fig. 4) has a negligible effect, meaning that the replacement sampling introduced by episodic learning should not interfere with the other ablations. We then perform a series of ablations on episodic batches, i.e. batches sampled with the method described in Sec. 2.1. To obtain a reasonably-performing model for both the 1- and 5-shot settings, we use configurations with m + n = 8. This means that, for PNs and MNs, models are trained with 8 images per class, and either 16, 32 or 64 classes (batches of size 128, 256 and 512 respectively). The batch size for NCA is also set to either 128, 256, or 512, allowing direct comparison.\nThe ablations of Fig. 4 compare PNs to NCA. First, we train standard PNs models (rows 4 and 5 of Fig. 4). Next, we train a model where “prototypes” are not computed (row 6). This implies that, similar to what happens in MNs, distances are considered between individual points, but a separation between query and support set remains. This ablation allows us to investigate whether the loss in performance of PNs compared to NCA can be attributed to prototype computation during training (which turned out not to be the case). Then, we perform an ablation where we ignore the separation between support and query set, and compute the NCA on the union of the support and query set, while still computing prototypes for the points that would belong to the support set (row 7). Last, we perform an ablation where we consider all the previous points together: we sample with replacement, we ignore the separation between support and query set, and we do not compute prototypes (row 3). This amounts to the NCA loss, except that it is computed on batches with a fixed number of classes and a fixed number of images per class; a sketch of such a sampler follows below. Notice that in Fig. 4 there is only one row dedicated to 1-shot models. This is because we cannot generate prototypes from 1-shot models, so we cannot have a “no proto” ablation. Furthermore, for 1-shot models the “no S/Q” ablation is equivalent to the NCA with a fixed batch composition.\nFrom Fig. 4, we can see that disabling prototypes (row 6) negatively affects the performance of 5-shot models (row 5), albeit slightly. Since for PNs the amount of gradient signal is the same with (row 5, Fig. 4) or without (row 6, Fig. 4) the computation of prototypes, we believe that this could be motivated by the increased misalignment between the training and test setup present in the ablation of row 6. Nonetheless, enabling the computation between all pairs increases the performance (row 7) and, importantly, enabling all the ablations (row 3) completely recovers the performance lost by PNs. Note the meaningful gap in performance between rows 1 and 3 in Fig. 4 for batch size 128, which disappears for batch size 512. This is likely due to the number of positives available in an excessively small batch. Since our vanilla NCA creates batches by simply sampling images randomly from the dataset, there is a limit to how small a batch can be (which depends on the number of classes of the dataset). As an example, consider the extreme case of a batch of size 4. For the datasets considered, it is very likely that such a batch will contain no positive pairs for some classes.
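The “NCA with a fixed batch composition” ablation (row 3) only changes how batches are drawn; here is a sketch of such a sampler, where the function name and the `images_by_class` structure are assumptions:

```python
import random

def fixed_composition_batch(images_by_class, num_classes, per_class):
    """Draw a batch with a fixed number of classes and images per class,
    but with no support/query roles and no prototypes (row 3 of Fig. 4)."""
    classes = random.sample(list(images_by_class), num_classes)
    batch = []
    for k in classes:
        batch += [(img, k) for img in random.sample(images_by_class[k], per_class)]
    return batch   # e.g. 16 classes x 8 images per class for batch size 128
```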
Conversely, the NCA ablation with a fixed batch composition (i.e. with a fixed number of images per class) will have a higher number of positive pairs (at the cost of a reduced number of classes per batch). This can explain the difference, as positive pairs constitute a less frequent (and potentially more informative) training signal. In Appendix E we extend this discussion, commenting on the role of positive and negative distances. In Appendix H we also report the results of a second set of ablations comparing NCA and Matching Networks, which are analogous to the ones with Prototypical Networks we just described and lead to the same conclusions.\nThese experiments highlight that the separation of roles between the images belonging to support and query set, which is typical of episodic learning [48], is detrimental to the performance of metric-based nonparametric few-shot learners. Instead, using the NCA loss on standard mini-batches allows full exploitation of the training data and significantly improves performance. Moreover, the NCA has the advantage of simplifying the overall training procedure, as the hyperparameters for the creation of episodes {w, n, m} no longer need to be considered." }, { "heading": "3.5 Comparison with recent methods", "text": "We now evaluate our models on three popular FSL datasets to contextualise their performance with respect to the recent literature. When considering which methods to compare against, we chose those a) which have been recently published, b) which use a ResNet-12 architecture [24] (the most commonly used), and c) whose setup is not significantly more complicated than ours. For example, we only report results for the main approach of Tian et al. [44]. We omit their self-distillation [16] variant, as it can be applied to most methods and involves multiple stages of training.\nResults can be found in Table 2. Besides the results for the NCA loss, we also report PNs and MNs results with both the episodic setup from Snell et al. [40] and the best one (batch size 512, 5-shot, m + n = 16 for both PNs and MNs) found in the experiment of Fig. 2, which brings a considerable improvement over the original and other PNs implementations (see Appendix I for a comparison of our PNs implementation to other works). Note that our aim is not to improve the state of the art, but rather to shed light on the practice of episodic learning. Nonetheless, our vanilla NCA is competitive and sometimes even superior to recent methods, despite being extremely simple. It fares surprisingly well against methods that use meta-learning (and episodic learning), and also against the high-performing simple baselines based on pre-training with the cross-entropy loss. Moreover, because of the explicit inductive bias that it encodes in terms of the relative position in the embedding space of samples from the same class, the NCA loss is a useful tool to consider alongside cross-entropy-trained baselines." }, { "heading": "4 Related work", "text": "Pioneered by Utgoff [46], Schmidhuber [38, 39], Bengio et al. [4] and Thrun [43], the general concept of meta-learning is several decades old (for a survey see [47, 22]). However, in the last few years it has experienced a surge in popularity, becoming the most used paradigm for learning from very few examples. Several methods addressing the FSL problem by learning on episodes have been proposed. MANN [37] uses a Neural Turing Machine [21] to save and access the information useful for meta-learning; Bertinetto et al. [6] and Munkhdalai et al.
[25] propose a deep network in which a “teacher” branch is tasked with predicting the parameters of a “student” branch; Matching Networks [48] and Prototypical Networks [40] are two nonparametric methods in which the contributions of different examples in the support set are weighted by either an LSTM or a softmax over the cosine distances for Matching Networks, and a simple average for Prototypical Networks; Ravi and Larochelle [32] propose instead to use an LSTM to learn the hyperparameters of SGD, while MAML [14] learns to fine-tune an entire deep network by backpropagating through SGD. Despite these works widely differing in nature, they all stress the importance of organising training as a series of small learning problems (episodes) that are similar to those encountered during inference at test time.\nIn contrast with this trend, a handful of papers have recently shown that simple approaches that forego episodes and meta-learning can perform well on FSL benchmarks. These methods all have in common that they pre-train a feature extractor with the cross-entropy loss on the “meta-training classes” of the dataset. Then, at test time a classifier is adapted to the support set by weight imprinting [29, 11], fine-tuning [9], transductive fine-tuning [11] or logistic regression [44]. Wang et al. [49] suggest performing test-time classification by using the label of the closest centroid to the query image. Unlike these papers, which propose new methods, we are more focussed on shedding light on the possible causes behind the inefficiency of popular nonparametric few-shot learning algorithms such as Prototypical and Matching Networks.\nDespite maintaining a support and a query set, the work of Raghu et al. [31] is similar in spirit to ours, and modifies episodic learning in MAML, showing that performance is almost entirely preserved when only updating the network head during meta-training and meta-testing. In this paper, we focussed on FSL algorithms that are just as established, and uncovered inefficiencies that not only allow for notable conceptual simplifications, but also bring a significant boost in performance. Two related but different works are those of Goldblum et al. [20] and Fei et al. [12]. The former addresses PNs' poorly representative samples by training on episodic pairs with the same classes (but different instances) and using a regularizer enforcing consistency across them. The latter investigates meta-learning methods with parametric base learners, and shows interesting findings on the importance of having tightly clustered classes in feature space, which inspires a regularizer that improves non-meta-learning models. Bai et al. [3] also show that the episodic strategy in meta-learning is inefficient, by providing both theoretical and experimental arguments on methods solving a convex optimization problem at the level of the base learner. Similar to us, though via a different analysis, they show that the classic split is inefficient. Chen et al. [8] derive a generalisation bound for algorithms with a support/query separation. They do not provide any bounds for methods like NCA, which would be an interesting direction for future work. Triantafillou et al. [45] ignore the query/support separation in order to exploit all the available samples while working in a Structured SVM framework. Though the reasoning about batch exploitation is analogous to ours, the scope of the paper is very different.
Finally, two recent meta-learning approaches based on Gaussian Processes [28, 41] also merge the support and query sets during learning to take full advantage of the available data within each episode." }, { "heading": "5 Conclusion", "text": "Towards the aim of understanding the reasons behind the poor competitiveness of meta-learning methods with respect to simple baselines, in this paper we investigate the role of episodes in popular nonparametric few-shot learning methods. We found that their performance is highly sensitive to the set of hyperparameters used to sample these episodes. By replacing the Prototypical Networks and Matching Networks losses with the closely related (and non-episodic) Neighbourhood Component Analysis, we were able to ignore these hyperparameters, while improving the few-shot classification accuracy. We found that the performance discrepancy is in large part caused by the separation between support and query set within each episode, which negatively affects the number of pairwise distances contributing to the loss. Moreover, with nonparametric few-shot approaches, the episodic strategy is, empirically, almost equivalent to randomly discarding a portion of the distances available within a batch. Finally, we showed that our variant of the NCA achieves an accuracy on multiple popular FSL benchmarks that is competitive with recent methods, making it a simple and appealing baseline for future work.\nBroader impact. We believe that progress in few-shot learning is important, as it can significantly impact important problems such as drug discovery and medical imaging. We also recognise that the capability of leveraging very small datasets might constitute a threat if deployed for surveillance by authoritarian entities (e.g. by applying it to problems such as re-identification and face recognition).\nAcknowledgements and Disclosure of Funding\nWe would like to thank João F. Henriques, Nicholas Lord, John Redford, Stuart Golodetz, Sina Samangooei, and the anonymous reviewers for insightful discussions and helpful suggestions to improve the manuscript and our analysis in general. This work was supported by Five AI Limited." } ]
2021
On Episodes, Prototypical Networks, and Few-Shot Learning
SP:929103e44d7ee2bc3b8a4d0c9e523c082a17fb44
[ "This work presents a method, named SnAp, that takes Real-time Recurrent Learning derivations and proposes to approximate its computations with sparse approximations to make them more computational tractable. The method is an alternative to overcome the truncation in backpropagation through time (BPTT) over long term temporal structure. The method assumes a sparsity pattern on the parameters of the network that leads to the relationship on how the gradients could be updated. Finally, the method is evaluated against BPTT, UORO and RTRL on character-level language modeling and a copy task. " ]
Recurrent neural networks are usually trained with backpropagation through time, which requires storing a complete history of network states, and prohibits updating the weights ‘online’ (after every timestep). Real Time Recurrent Learning (RTRL) eliminates the need for history storage and allows for online weight updates, but does so at the expense of computational costs that are quartic in the state size. This renders RTRL training intractable for all but the smallest networks, even ones that are made highly sparse. We introduce the Sparse n-step Approximation (SnAp) to the RTRL influence matrix. SnAp only tracks the influence of a parameter on hidden units that are reached by the computation graph within n timesteps of the recurrent core. SnAp with n = 1 is no more expensive than backpropagation but allows training on arbitrarily long sequences. We find that it substantially outperforms other RTRL approximations with comparable costs such as Unbiased Online Recurrent Optimization. For highly sparse networks, SnAp with n = 2 remains tractable and can outperform backpropagation through time in terms of learning speed when updates are done online.
[ { "affiliations": [], "name": "Jacob Menick" }, { "affiliations": [], "name": "Erich Elsen" } ]
[ { "authors": [ "Dario Amodei", "Sundaram Ananthanarayanan", "Rishita Anubhai", "Jingliang Bai", "Eric Battenberg", "Carl Case", "Jared Casper", "Bryan Catanzaro", "Qiang Cheng", "Guoliang Chen" ], "title": "Deep speech 2: End-to-end speech recognition in english and mandarin", "venue": "In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48,", "year": 2016 }, { "authors": [ "Guillaume Bellec", "Franz Scherr", "Elias Hajek", "Darjan Salaj", "Robert Legenstein", "Wolfgang Maass" ], "title": "Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets", "venue": "arXiv.org e-Print archive,", "year": 2019 }, { "authors": [ "Frederik Benzing", "Marcelo Matheus Gauy", "Asier Mujika", "Anders Martinsson", "Angelika Steger" ], "title": "Optimal kronecker-sum approximation of real time recurrent learning", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "David Bradbury" ], "title": "A methodology for the development of recurrent networks for sequence processing", "venue": "URL http://oro.open.ac.uk/65146/", "year": 1997 }, { "authors": [ "James Bradbury", "Roy Frostig", "Peter Hawkins", "Matthew James Johnson", "Chris Leary", "Dougal Maclaurin", "Skye Wanderman-Milne" ], "title": "JAX: composable transformations of Python+NumPy programs, 2018", "venue": "URL http://github.com/google/jax", "year": 2018 }, { "authors": [ "Lukasz Kaiser", "Zhifeng Chen", "Yonghui Wu", "Macduff Hughes" ], "title": "The best of both worlds: Combining recent advances in neural machine translation", "venue": "In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2018 }, { "authors": [ "Kyunghyun Cho", "Bart van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using RNN encoder–decoder for statistical machine translation", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Tim Cooijmans", "James Martens" ], "title": "On the variance of unbiased online recurrent optimization", "venue": "CoRR, abs/1902.02405,", "year": 2019 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "Jaime Carbonell", "Quoc Le", "Ruslan Salakhutdinov" ], "title": "Transformer-XL: Attentive language models beyond a fixed-length context", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Jesse Engel" ], "title": "Optimizing rnns with differentiable graphs", "venue": "URL https://svail.github.io/diff_graphs/", "year": 2020 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Volodymir Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning" ], "title": "IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures", "venue": "In Proceedings of the International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Utku Evci", "Trevor Gale", "Jacob Menick", "Pablo Samuel Castro", "Erich Elsen" ], "title": "Rigging the Lottery: Making All Tickets Winners, 2019", "venue": null, "year": 2019 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In 
7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Andreas Griewank", "Andrea Walther" ], "title": "Algorithm 799: Revolve: An implementation of checkpointing for the reverse or adjoint mode of computational differentiation", "venue": "ACM Trans. Math. Softw.,", "year": 2000 }, { "authors": [ "Audrunas Gruslys", "Remi Munos", "Ivo Danihelka", "Marc Lanctot", "Alex Graves" ], "title": "Memory-efficient backpropagation through time", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Comput.,", "year": 1997 }, { "authors": [ "Herbert Jaeger" ], "title": "The “echo state” approach to analysing and training recurrent neural networks", "venue": "GMD-Report 148, German National Research Institute for Computer Science,", "year": 2001 }, { "authors": [ "Nal Kalchbrenner", "Erich Elsen", "Karen Simonyan", "Seb Noury", "Norman Casagrande", "Edward Lockhart", "Florian Stimberg", "Aäron van den Oord", "Sander Dieleman", "Koray Kavukcuoglu" ], "title": "Efficient Neural Audio Synthesis", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "Published as a conference paper at the 3rd International Conference on Learning Representations, San Diego. URL http://arxiv.org/abs/1412.6980", "year": 2015 }, { "authors": [ "Timothy P Lillicrap", "Daniel Cownden", "Douglas B Tweed", "Colin J Akerman" ], "title": "Random synaptic feedback weights support error backpropagation for deep learning", "venue": "Nature communications,", "year": 2016 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer sentinel mixture models", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Asier Mujika", "Florian Meier", "Angelika Steger" ], "title": "Approximating real-time recurrent learning with random kronecker factors", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "James M Murray" ], "title": "Local online learning in recurrent networks with random feedback", "venue": "eLife, 8:e43299, May 2019. ISSN 2050-084X. doi: 10.7554/eLife.43299. URL https://doi.org/10.7554/eLife.43299", "year": 2019 }, { "authors": [ "Sharan Narang", "Gregory F. Diamos", "Shubho Sengupta", "Erich Elsen" ], "title": "Exploring sparsity in recurrent neural networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Jack W Rae", "Jonathan J Hunt", "Tim Harley", "Ivo Danihelka", "Andrew Senior", "Greg Wayne", "Alex Graves", "Timothy P Lillicrap" ], "title": "Scaling memory-augmented neural networks with sparse reads and writes", "venue": "In Proceedings of the 30th International Conference on Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Nikko Ström" ], "title": "Sparse connection and pruning in large dynamic artificial neural networks", "venue": "In EUROSPEECH,", "year": 1997 }, { "authors": [ "Corentin Tallec", "Yann Ollivier" ], "title": "Unbiased online recurrent optimization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "R.J. Williams", "D. 
Zipser" ], "title": "A learning algorithm for continually running fully recurrent neural networks", "venue": "Neural Computation,", "year": 1989 }, { "authors": [ "Ronald J. Williams", "Jing Peng" ], "title": "An efficient gradient-based algorithm for on-line training of recurrent network trajectories", "venue": "Neural Computation,", "year": 1990 }, { "authors": [ "F. Zenke", "E.O. Neftci" ], "title": "Brain-inspired learning on neuromorphic substrates", "venue": "Proceedings of the IEEE,", "year": 2020 }, { "authors": [ "Michael Zhu", "Suyog Gupta" ], "title": "To Prune, or Not to Prune: Exploring the Efficacy of Pruning for Model Compression", "venue": "In International Conference on Learning Representations Workshop,", "year": 2018 } ]
[ { "heading": "1 Introduction", "text": "Recurrent neural networks (RNNs) have been successfully applied to a wide range of sequence learning tasks, including text-to-speech (Kalchbrenner et al., 2018), language modeling (Dai et al., 2019), automatic speech recognition (Amodei et al., 2016), translation (Chen et al., 2018) and reinforcement learning (Espeholt et al., 2018). RNNs have greatly benefited from advances in computational hardware, dataset sizes, and model architectures. However, the algorithm used to compute their gradients in almost all practical applications has not changed since the introduction of Back-Propagation Through Time (BPTT). The key limitation of BPTT is that the entire state history must be stored, meaning that the memory cost grows linearly with the sequence length. For sequences too long to fit in memory, as often occurs in domains such as language modelling or long reinforcement learning episodes, truncated BPTT (TBPTT) (Williams & Peng, 1990) can be used. Unfortunately the truncation length used by TBPTT also limits the duration over which temporal structure can be reliably learned.\nForward-mode differentiation, or Real-Time Recurrent Learning (RTRL) as it is called when applied to RNNs (Williams & Zipser, 1989), solves some of these problems. It doesn’t require storage of any past network states, can theoretically learn dependencies of any length and can be used to update parameters at any desired frequency, including every step (i.e. fully online). However, its fixed storage requirements are O(k · |θ|), where k is the state size and |θ| is the number of parameters θ in the core. Perhaps even more daunting, the computation it requires is O(k² · |θ|). This makes it impractical for even modestly sized networks. The advantages of RTRL have led to a search for more efficient approximations that retain its desirable properties, whilst reducing its computational and memory costs. One recent line of work introduces unbiased, but noisy approximations to the influence update. Unbiased Online Recurrent Optimization (UORO) (Tallec\n& Ollivier, 2018) is an approximation with the same cost as TBPTT – O(|θ|) – however its gradient estimate is severely noisy (Cooijmans & Martens, 2019) and its performance has in practice proved worse than TBPTT (Mujika et al., 2018). Less noisy approximations with better accuracy on a variety of problems include both Kronecker Factored RTRL (KF-RTRL) (Mujika et al., 2018) and Optimal Kronecker-Sum Approximation (OK) (Benzing et al., 2019). However, both increase the computational costs to O(k³).\nThe last few years have also seen a resurgence of interest in sparse neural networks – both their properties (Frankle & Carbin, 2019) and new methods for training them (Evci et al., 2019). A number of works have noted their theoretical and practical efficiency gains over dense networks (Zhu & Gupta, 2018; Narang et al., 2017; Elsen et al., 2019). Of particular interest is the finding that scaling the state size of an RNN while keeping the number of parameters constant leads to increased performance (Kalchbrenner et al., 2018).\nIn this work we introduce a new sparse approximation to the RTRL influence matrix. The approximation is biased but not stochastic. Rather than tracking the full influence matrix, we propose to track only the influence of a parameter on neurons that are affected by it within n steps of the RNN. The algorithm is strictly less biased but more expensive as n increases. 
The cost of the algorithm is controlled by n and the amount of sparsity in the Jacobian of the recurrent cell. We study the nature of this bias in Appendix C. Larger n can be coupled with concomitantly higher sparsity to keep the cost fixed. This enables us to achieve the benefits of RTRL with a computational cost per step comparable in theory to BPTT. The approximation approaches full RTRL as n increases. Our contributions are as follows:\n• We propose SnAp – a practical approximation to RTRL, which is applicable to both dense and sparse RNNs, and is based on the sparsification of the influence matrix.\n• We show that parameter sparsity in RNNs reduces the costs of RTRL in general and SnAp in particular.\n• We carry out experiments on both real-world and synthetic tasks, and demonstrate that the SnAp approximation: (1) works well for language modeling compared to the exact unapproximated gradient; (2) admits learning temporal dependencies on a synthetic copy task; and (3) can learn faster than BPTT when run fully online." }, { "heading": "2 Background", "text": "We consider recurrent networks whose dynamics are governed by $h_t = f_\theta(h_{t-1}, x_t)$, where $h_t \in \mathbb{R}^k$ is the state, $x_t \in \mathbb{R}^a$ is an input, and $\theta \in \mathbb{R}^p$ are the network parameters. It is assumed that at each step $t \in \{1, \dots, T\}$, the state is mapped to an output $y_t = g_\phi(h_t)$, and the network receives a loss $L_t(y_t, y^*_t)$. The system optimizes the total loss $L = \sum_t L_t$ with respect to parameters $\theta$ by following the gradient $\nabla_\theta L$. The standard way to compute this gradient is BPTT – running backpropagation on the computation graph “unrolled in time” over a number of steps $T$:\n$$\nabla_\theta L = \sum_{t=1}^{T} \frac{\partial L}{\partial h_t} \frac{\partial h_t}{\partial \theta_t} = \sum_{t=1}^{T} \left( \frac{\partial L}{\partial h_{t+1}} \frac{\partial h_{t+1}}{\partial h_t} + \frac{\partial L_t}{\partial h_t} \right) \frac{\partial h_t}{\partial \theta_t} \quad (1)$$\nThe recursive expansion $\frac{\partial L}{\partial h_t} = \frac{\partial L}{\partial h_{t+1}} \frac{\partial h_{t+1}}{\partial h_t} + \frac{\partial L_t}{\partial h_t}$ is the backpropagation rule. The slightly nonstandard notation $\theta_t$ refers to the copy of the parameters used at time $t$, but the weights are shared for all timesteps and the gradient adds over all copies." }, { "heading": "2.1 Real Time Recurrent Learning (RTRL)", "text": "Real Time Recurrent Learning (Williams & Zipser, 1989) computes the gradient as:\n$$\nabla_\theta L = \sum_{t=1}^{T} \frac{\partial L_t}{\partial h_t} \frac{\partial h_t}{\partial \theta} = \sum_{t=1}^{T} \frac{\partial L_t}{\partial h_t} \left( \frac{\partial h_t}{\partial \theta_t} + \frac{\partial h_t}{\partial h_{t-1}} \frac{\partial h_{t-1}}{\partial \theta} \right) \quad (2)$$\nThis can be viewed as an iterative algorithm, updating $\frac{\partial h_t}{\partial \theta}$ from the intermediate quantity $\frac{\partial h_{t-1}}{\partial \theta}$. To simplify equation 2 we introduce the following notation: $J_t := \frac{\partial h_t}{\partial \theta}$, $I_t := \frac{\partial h_t}{\partial \theta_t}$, and $D_t := \frac{\partial h_t}{\partial h_{t-1}}$. $J$ stands for “Jacobian”, $I$ for “immediate Jacobian”, and $D$ for “dynamics”. We sometimes refer to $J$ as the “influence matrix”. The recursion can be rewritten $J_t = I_t + D_t J_{t-1}$.\nCost analysis $J_t$ is a matrix in $\mathbb{R}^{k \times |\theta|}$, which can be on the order of gigabytes for even modestly sized RNNs. Furthermore, performing the operation $D_t J_{t-1}$ involves multiplying a $k \times k$ matrix by a $k \times |\theta|$ matrix each timestep. That requires $|\theta|$ times more computation than the forward pass of the RNN core. To make explicit just how expensive RTRL is – this is a factor of roughly one million for a vanilla RNN with 1000 hidden units.
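The recursion itself is mechanical enough to state in a few lines of JAX. The sketch below is our own illustration rather than the paper's implementation (the released snippet appears in Appendix D); the flat-parameter vanilla RNN rnn_step and all shapes are assumptions made for clarity.

import jax
import jax.numpy as jnp

def rnn_step(theta, h, x):
    # Hypothetical vanilla-RNN core: a flat parameter vector reshaped into a
    # k x (k + a) matrix applied to the concatenated [h; x], then tanh.
    k = h.shape[0]
    W = theta.reshape(k, k + x.shape[0])
    return jnp.tanh(W @ jnp.concatenate([h, x]))

def rtrl_update(theta, h, x, J):
    # I_t = dh_t/dtheta_t (immediate Jacobian); D_t = dh_t/dh_{t-1} (dynamics).
    I_t = jax.jacrev(rnn_step, argnums=0)(theta, h, x)  # shape (k, p)
    D_t = jax.jacrev(rnn_step, argnums=1)(theta, h, x)  # shape (k, k)
    h_next = rnn_step(theta, h, x)
    J_next = I_t + D_t @ J  # the expensive O(k^2 |theta|) multiplication
    return h_next, J_next

# Given delta = dL_t/dh_t at step t, that step's contribution to the
# parameter gradient is simply delta @ J_next.

The product D_t @ J in this sketch is exactly the k x k by k x |θ| multiplication whose cost motivates the sparse approximations developed below.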
" }, { "heading": "2.2 Truncated RTRL and stale Jacobians", "text": "In analogy to Truncated BPTT, one can consider performing a gradient update partway through a training sequence (at time t) but still passing forward a stale state and a stale influence Jacobian $J_t$ rather than resetting both to zero after the update. This enables more frequent weight updating at the cost of a staleness bias. The Jacobian $J_t$ becomes “stale” because it tracks the sensitivity of the state to old parameters. Experiments (section 5.2) show that this tradeoff can be favourable toward more frequent updates in terms of data efficiency. In fact, much of the RTRL literature assumes that the parameters are updated at every step t (“fully online”) and that the influence Jacobian is never reset, at least until the start of a new sequence. All truncated BPTT experiments in our paper pass forward a stale state if an update is done before the end of the sequence." }, { "heading": "2.3 Sparsity in RNNs", "text": "One of the early explorations of sparsity in the parameters of RNNs (i.e. many entries of $\theta$ are exactly zero) was Ström (1997), where one-shot pruning based on weight magnitude with subsequent retraining was employed in a speech recognition task. The current standard approach to inducing sparsity in RNNs (Zhu & Gupta, 2018) remains similar, except that magnitude based pruning happens slowly over the course of training so that no retraining is required.\nKalchbrenner et al. (2018) discovered a powerful property of sparse RNNs in the course of investigating them for text-to-speech – for a constant parameter and flop budget sparser RNNs have more capacity per parameter than dense ones. This property has so far only been shown to hold when the sparsity pattern is adapted during training (in this case, with pruning). Note that parameter parity is achieved by simultaneously increasing the RNN state size and the degree of sparsity. This suggests that training large sparse RNNs could yield powerful sequence models, but the memory required to store the history of (now much larger) states required for BPTT becomes prohibitive for long sequences. In this paper, we use a fixed sparsity pattern rather than pruning (see Appendix B), for simplicity. In particular, we pick uniformly at random which indices of weight matrices to force to zero and hold this sparsity pattern constant over the course of training." }, { "heading": "3 The Sparse n-Step Approximation (SnAp)", "text": "Our main contribution in this work is the development of an approximation to RTRL called the Sparse n-Step Approximation (SnAp) which reduces RTRL’s computational requirements substantially.\nSnAp imposes sparsity on $J$ even though it is in general dense. We choose the sparsity pattern to be the locations that are non-zero after n steps of the RNN (Figure 1). We also choose to use the same pattern for all steps, though this is not a requirement. This means that the sparsity pattern of $J_t$ is known and can be used to reduce the amount of computation in the product $D_t J_{t-1}$. See Figure 2 for a visualization of the process. The costs of the resulting methods are compared in Table 1. We note an alternative strategy would be to perform the full multiplication of $D_t J_{t-1}$ and then only keep the top-k values. This would reduce the bias of the approximation but increase its cost.\nMore formally, we adopt the following approximation for all t:\n$$(J_t)_{ij} \approx \begin{cases} (J_t)_{ij} & \text{if } (\theta_t)_j \text{ influences hidden unit } (h_{t+n})_i \\ 0 & \text{otherwise} \end{cases}$$
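To make this pattern concrete: since the one-step connectivity of the network is known, the n-step pattern can be derived by propagating it through n boolean products. The following is a sketch under our own assumptions (boolean connectivity patterns for D and I as defined in section 2.1), not code from the paper.

import jax.numpy as jnp

def snap_mask(D_pattern, I_pattern, n):
    # D_pattern[i, m]: unit m at t-1 feeds unit i at t (boolean, k x k).
    # I_pattern[m, j]: parameter j immediately touches unit m (boolean, k x p).
    # Returns the boolean k x p mask of influence entries kept by SnAp-n.
    reach = I_pattern  # influence after one step; n = 1 keeps exactly this
    for _ in range(n - 1):
        # One more step of propagation through the dynamics pattern, plus the
        # immediate influence that re-enters at every step.
        step = (D_pattern.astype(jnp.float32) @ reach.astype(jnp.float32)) > 0
        reach = step | I_pattern
    return reach

The approximation then stores and updates only the entries of $J_t$ selected by this mask, which is what allows the product $D_t J_{t-1}$ to be evaluated cheaply.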
" }, { "heading": "3.1 Sparse One-Step Approximation (SnAp-1)", "text": "Even for a fully dense RNN, each parameter will in the usual case only immediately influence the single hidden unit it is directly connected to. This means that the immediate Jacobian $I_t$ tends to be extremely sparse. For example, a Vanilla RNN will have only one nonzero element per column, which is a sparsity level of $\frac{k-1}{k}$. Storing only the nonzero elements of $I_t$ already saves a significant amount of memory without making any approximations; $I_t$ is the same shape as the daunting $J_t$ matrix whereas the nonzero values are the same size as $\theta$.\n$I_t$ can become more dense in architectures (such as GRU and LSTM) which involve the composition of parameterised layers within a single core step (see Appendix A for an in-depth discussion of the effect of gating architectures on Jacobian sparsity). In the Sparse One-Step Approximation, we only keep entries in $J_t$ if they are nonzero in $I_t$. After just two RNN steps, a given parameter has influenced every unit of the state through its intermediate influence on other units. Thus only SnAp with n = 1 is efficient for dense RNNs because n > 1 does not result in any sparsity in $J$; for dense networks SnAp-2 already reduces to full RTRL. (N.b.: SnAp-1 is also applicable to sparse networks.) Figure 1 depicts the sparse structure of the influence of a parameter for both sparse and fully dense cases.\nSnAp-1 is effectively diagonal, in the sense that the effect of parameter j on hidden unit i is maintained throughout time, but ignoring the indirect effect of parameter j on unit i via paths through other units i′. More formally, it is useful to define u(j) as the one component in the state $h_t$ connected directly to the parameter j (which has at the other end of the connection some other entry i′ within $h_{t-1}$ or $x_t$). Let i = u(j). The imposition of the one-step sparsity pattern means only the entry in row i will be kept for column j in $J_t$. Inspecting the update for this particular entry, we have\n$$(J_t)_{ij} = (I_t)_{ij} + \sum_{m=1}^{k} (D_t)_{im}(J_{t-1})_{mj} = (I_t)_{ij} + (D_t)_{ii}(J_{t-1})_{ij} \quad (3)$$\nThe equality follows from the assumption that $(J_{t-1})_{mj} = 0$ if $m \neq i$. Diagonal entries in $D_t$ are thus crucial for this approximation to be expressive, such as those arising from skip connections.
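Equation (3) makes SnAp-1 especially cheap: only the diagonal of $D_t$ and one influence value per parameter are needed. A minimal sketch (ours, not the paper's; u is assumed to be an integer array mapping each parameter index j to its directly-connected unit u(j)):

import jax.numpy as jnp

def snap1_update(i_vals, d_diag, j_vals, u):
    # j_vals[j] stores the single kept entry (J_t)_{u(j), j}; i_vals holds the
    # corresponding entries of I_t, and d_diag is the diagonal of D_t.
    return i_vals + d_diag[u] * j_vals  # equation (3), vectorised over j

def snap1_gradient(delta, j_vals, u):
    # dL_t/dtheta_j = (dL_t/dh_{u(j)}) * (J_t)_{u(j), j}.
    return delta[u] * j_vals

Both operations are O(|θ|) per step, matching the cost of ordinary backpropagation as claimed in the abstract.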
" }, { "heading": "3.2 Optimizations for full RTRL with sparse networks", "text": "When the RNN is sparse, the costs of even full (unapproximated) RTRL can be alleviated to a surprising extent; we save computation proportional to a factor of the sparsity squared. Assume a proportion s of the entries in both $\theta$ and $D_t$ are equal to zero and refer to this number as “the level of sparsity in the RNN”. For convenience, $d := 1 - s$. With a Vanilla RNN, this correspondence between parameter sparsity and dynamics sparsity holds exactly. For popular gating architectures such as GRU and LSTM the relationship is more complicated so we include empirical measurements of the computational cost in FLOPS (Table 2) in addition to the theoretical calculations here. More complex recurrent architectures involving attention (Rae et al., 2016) would require an independent mechanism for inducing sparsity in $D_t$; we leave this direction to future work and assume in the remainder of this derivation that sparsity in $\theta$ corresponds to sparsity in $D_t$.\nIf the sparsity level of $\theta$ is s, then so is the sparsity in $J$ because the columns corresponding to parameters which are clamped to zero have no effect on the gradient computation. We may extract the columns of $J$ containing nonzero parameters into a new dense matrix $\tilde{J}$ used in place of $J$ everywhere with no effect on the gradient computation. We make the same optimization for $I_t$ and use the dense matrix $\tilde{I}_t$ in its place, leaving us with the update rule (depicted in Figure 2):\n$$\tilde{J}_t = \tilde{I}_t + D_t \tilde{J}_{t-1} \quad (4)$$\nThese optimizations taken together reduce the storage requirements by a factor of $\frac{1}{d}$ (because $\tilde{J}$ is $d$ times the size of $J$) and the computational requirements by $\frac{1}{d^2}$, because $D_t$ in the sparse matrix multiplication $D_t \tilde{J}_{t-1}$ has density $d$, saving us an extra factor of $\frac{1}{d}$.\n3.3 Sparse n-Step Approximation (SnAp-n)\nEven when $D_t$ is sparse, the computation “graph” linking nodes (neurons) in the hidden state over time should still be connected, meaning that $\tilde{J}$ eventually becomes fully dense because after enough iterations every (non-zero) parameter will have influenced every hidden unit in the state. Thus sparse approximations are still available in this setting and indeed required to obtain an efficient algorithm. For sparse RNNs, SnAp simply imposes additional sparsity on $\tilde{J}_t$ rather than $J_t$. SnAp-n for n > 1 is both strictly less biased and strictly more expensive, but its costs can be reduced by increasing the degree s of sparsity in the RNN. SnAp-2 is comparable with UORO and SnAp-1 if the sparsity of the RNN is increased so that $d < k^{-2/3}$, e.g. 99% or higher sparsity for a 1000-unit Vanilla RNN. If this level of sparsity is surprising, the reader is encouraged to see our experiments in Appendix B." }, { "heading": "4 Related Work", "text": "SnAp-1 is actually similar to the original algorithm used to train LSTM (Hochreiter & Schmidhuber, 1997), which employed forward-mode differentiation to maintain the sensitivity to each parameter of a single cell unit, over all time. This exposition was expressed in terms coupled to the LSTM architecture whereas our formulation is general. SnAp-1 was also described in (Bellec et al., 2019) as eprop-1. The exposition in that paper goes into great depth regarding its biological plausibility and
This reduces thememory requirements of backpropagation by recomputing states rather than storing them. First introduced in Griewank&Walther (2000) and later applied specifically to RNNs in Gruslys et al. (2016), these methods are not compatible with the fully online setting where T may be arbitrarily large as even the optimally small amount of re-computation can be prohibitive. For reasonably sized T , however, rematerialization is a straightforward and effective way to reduce the memory requirements of TBPTT, especially if the forward pass can be computed quickly." }, { "heading": "5 Experiments", "text": "We include experimental results on the real world language-modelling task WikiText103 (Merity et al., 2017) and the synthetic ‘Copy’ task (Graves et al., 2016) of simply repeating an observed binary string. Whilst the first is important for demonstrating that our methods can be used for real, practical problems, language modelling doesn’t directly measure a model’s ability to learn structure that spans long time horizons. The Copy task, however, allows us to parameterize exactly the temporal distance over which structure is present in the data. In terms of, respectively, task complexity and RNN state size (up to 1024) these investigations are considerably more “large-scale” than much of the RTRL literature." }, { "heading": "5.1 WikiText103", "text": "All of our WikiText103 experiments tokenize at the character (byte) level and use SGD to optimize the log-likelihood of the data. We use the Adam optimizer (Kingma & Ba, 2014) with default hyperparameters β1 = 0.9, β2 = 0.999, and = 1e−8. We train on randomly cropped sequences of length 128 sampled uniformly with replacement and do not propagate state across the end-of-sequence boundary (i.e. no truncation). Results are reported on the standard validation set." }, { "heading": "5.1.1 Language Modelling with dense RNNs: SnAp-1", "text": "In this section, we refrain from performing a weight update until the end of a training sequence (see section 2.2) so that BPTT is the gold standard benchmark for performance, assuming the gradient is the optimal descent direction. The architecture is a Gated Recurrent Unit (GRU) network (Cho et al., 2014) with 128 recurrent units and a one-layer readout MLP mapping to 1024 hidden relu units before the final 256-unit softmax layer. The embedding matrix is not shared between the input and output. All weights are initialized from a truncated normal distribution with standard deviation equal to the inverse square root of the fan in. Learning curves in Figure 3 (Left) show that SnAp-1 outperforms RFLO and UORO, and that in this setting UORO fails to match the surprisingly strong baseline of not training the recurrent parameters at all and instead leaving them at their randomly initialized value. This random baseline is closely related to the Echo-State network (Jaeger, 2001), and the strong readout network is intended to help keep the comparison to this baseline fair." }, { "heading": "5.1.2 Language Modeling with Sparse RNNs: SnAp-1 and SnAp-2", "text": "Here we use the same architecture as in section 5.1.1, except that we introduce 75% sparsity into the weights of the GRU, in particular the weight matrices (more sparsity levels are considered in\nlater experiments). Biases are always kept fully dense. In order to induce sparsity, we generate a sparsity pattern uniformly at random and fix it throughout training. 
As would be expected because it is strictly less biased, Figure 3 (Right) shows that SnAp-2 outperforms SnAp-1 but only slightly. Furthermore, both closely match the (gold-standard) accuracy of a model trained with BPTT. Table 2 shows that SnAp-2 actually costs about 600x more FLOPs than BPTT/SnAp-1 at 75% sparsity, but higher sparsity substantially reduces FLOPs. It’s unclear exactly how the cost compares to UORO, which though O(|θ|) does have constant factors required for e.g. random number generation, and additional overheads when approximations use rank higher than one." }, { "heading": "5.2 Copy Task", "text": "Our experiments on the Copy task (Graves et al., 2016) aim to investigate the ability of the proposed sparse RTRL approximations to learn about temporal structure. In this synthetic task, a sequence of bits bt ∈ {0, 1} is presented one at a time, and then a special token is presented, denoting the end of the input pattern. Subsequently, the network receives a series of special tokens indicating that an output is desired, at which time it must output, one token at a time, the same binary string it received as input. Unlike language modelling, there is nothing going on in this problem except for (simple) temporal structure over a known temporal distance: the length of the input sequence.\nWe follow (Mujika et al., 2018) and adopt a curriculum-learning approach over the length L of sequences to be copied, starting with L = 1. When the average bits per character of a training minibatch drops below 0.15, we increment L by one. We sample the length of target sequences uniformly between [max(L− 5, 1), L] as in previous work. We measure performance versus ‘datatime’, i.e. we give each algorithm a time budget in units of the cumulative number of tokens seen throughout training. A consequence of this scheme is that full BPTT is no longer an upper bound on performance because, for example, updating once on a sequence of length 10 with the true gradient may yield slower learning than updating twice on two consecutive sequences of length 5, with truncation.\nIn these experiments we examine SnAp performance for multiple sparsity levels and recurrent architectures including Vanilla RNNs, GRU, and LSTM. Table 2 includes the architectural details. The sparsity pattern is again chosen uniformly at random. As a result, comparison between sparsity levels is discouraged. For each configuration we sweep over learning rates in {10−2.5, 10−3, 10−3.5, 10−4} and compare average performance over three seeds with the best chosen learning rate (all methods performed best with learning rate 10−3). The minibatch size was 16. We train with either full unrolls or truncation with T = 1. This means that the RTRL approximations update the network weights at every timestep and persist the RNN state along with a stale Jacobian (see section 2.2).\nFully online training One striking observation is that Truncated BPTT completely fails to learn temporal structure in the fully online (T = 1) regime. Interestingly, the SnAp methods perform better with more frequent updates. Compare solid versus dotted lines of the same color in Figure 4. Fully online SnAp-2 and SnAp-3 mostly outperform or match BPTT for training LSTM and GRU architectures despite the “staleness” noted in Section 2.2. 
We attribute this to the hypothesis\nadvanced in the RTRL literature that Jacobian staleness can be mitigated with small learning rates but leave a more thorough investigation of this phenomenon to future work.\nBias versus computational expense For SnAp there is a tradeoff between the biasedness of the approximation and the computational costs of the algorithm. We see that correspondingly, SnAp-1 is outperformed by SnAp-2, which is in turn outperformed by SnAp-3 in the Copy experiments. The RFLO baseline is even more biased than SnAp-1, but both methods have comparable costs. SnAp-1 significantly outperforms RFLO in all of our experiments. The nature of the bias introduced by SnAp is investigated in Appendix C.\nEmpirical FLOPs requirements Here we augment the asymptotic cost calculations from Table 1 with empirical measurements of the FLOPs, broken out by architecture and sparsity level in Table 2. Gating architectures require a high degree of parameter sparsity in order to keep a commensurate amount of of Jacobian sparsity due to the increase in density brought about by composing linear maps with different sparsity patterns (see Appendix A). For instance, the 75% sparse GRU considered in the experiments from Section 5.1.2 lead to SnAp-2 parameter Jacobian that is only 70.88% sparse. With SnAp-3 it becomes much less sparse – only 50%. This may partly explain why SnAp performs best compared to BPTT in the LSTM case (Figure 4), though it still significantly outperforms BPTT in the high sparsity regime when SnAp-2 becomes practical. Also, LSTM is twice as costly to train with RTRL-like algorithms because it has two components to its state, requiring the maintenance of twice as many jacobians and the performance of twice as many jacobian multiplications (Equations 3/5). For a 75% sparse LSTM, the SnAp-2 Jacobian is much denser at 38.5% sparsity and SnAp-3 has essentially reached full density (so it is as costly as RTRL).\nFigure 4 also shows that for Vanilla RNNs, increasing n improves performance, but SnAp does not outperform BPTT with this architecture. In summary, Increasing n improves performance but costs more FLOPs." }, { "heading": "6 Conclusion", "text": "We have shown how sparse operations can make a form of RTRL efficient, especially when replacing dense parameter Jacobians with approximate sparse ones. We introduced SnAp-1, an efficient RTRL approximation which outperforms comparably-expensive alternatives on a popular languagemodeling benchmark. We also developed higher orders of SnAp including SnAp-2 and SnAp-3, approximations tailor-made for sparse RNNs which can be efficient in the regime of high parameter sparsity, and showed that they can learn temporal structure considerably faster than even full BPTT.\nOur results suggest that training very large, sparse RNNs could be a promising path toward more powerful sequencemodels trained on arbitrarily long sequences. 
This may prove useful for modelling whole documents such as articles or even books, or reinforcement learning agents which learn over an entire lifetime rather than the brief episodes which are common today.\nA few obstacles stand in the way of scaling up our methods further:\n• The need for a high-performing sparse training strategy that does not require dense gradient information.\n• Sparsity support in both software and hardware that enables better realization of the theoretical efficiency gains of sparse operations.\nIt may also be fruitful to further develop our methods for hybrid models combining recurrence and attention (Dai et al., 2019; Rae et al., 2016) or even feedforward architectures with tied weights (Lan et al., 2019; Dehghani et al., 2018)." }, { "heading": "Appendix A: Jacobian Sparsity of GRUs and LSTMs", "text": "Unlike vanilla RNNs whose dynamics Jacobian $D_t$ has sparsity exactly equal to the sparsity of the weight matrix, GRUs and LSTMs have inter-cell interactions which increase the Jacobians’ density. In particular, the choice of GRU variant can have a very large impact on the increase in density. This is relevant to the “dynamics” Jacobian $D_t$ and the parameter Jacobians $I_t$ and $J_t$.\nConsider a standard formulation of LSTM.\n$$\begin{aligned} i_t &= \sigma(W_{ii} x_t + W_{hi} h_{t-1} + b_i) \\ f_t &= \sigma(W_{if} x_t + W_{hf} h_{t-1} + b_f) \\ o_t &= \sigma(W_{io} x_t + W_{ho} h_{t-1} + b_o) \\ g_t &= \phi(W_{ig} x_t + W_{hg} h_{t-1} + b_g) \\ c_t &= f_t \odot c_{t-1} + i_t \odot g_t \\ h_t &= o_t \odot \phi(c_t) \end{aligned} \quad (5)$$\nLooking at LSTM’s update equations, we can see that an individual parameter (W, b) will only directly affect one entry in each gate ($i_t$, $f_t$, $o_t$) and the candidate cell $g_t$. These in turn produce the next cell $c_t$ and next hidden state $h_t$ with element-wise operations ($\sigma$ is the sigmoid function applied element-wise and $\phi$ is usually the hyperbolic tangent). In this case Figure 1 is an accurate depiction of the propagation of influence of a parameter as the RNN is stepped.\nHowever, for a GRU there are multiple variants in which a parameter or hidden unit can influence many more units of the next state. The original variant (Cho et al., 2014) is as follows:\n$$\begin{aligned} z_t &= \sigma(W_{iz} x_t + W_{hz} h_{t-1} + b_z) \\ r_t &= \sigma(W_{ir} x_t + W_{hr} h_{t-1} + b_r) \\ a_t &= \phi(W_{ia} x_t + W_{ha}(r_t \odot h_{t-1}) + b_a) \\ h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot a_t \end{aligned} \quad (6)$$\nFor our purposes the main thing to note is that the parameters influencing $r_t$ further influence every unit of $a_t$ because of the matrix multiplication by $W_{ha}$. They therefore influence every unit of $h_t$ within one recurrent step, which means that the dynamics Jacobian $D_t$ is fully dense and the immediate parameter Jacobians $I_t$ for $W_{ir}$, $W_{hr}$, and $b_r$ are all fully dense as well.\nAn alternative formulation, which was popularized by Engel, and also used in the CuDNN library from NVIDIA, is given by:\n$$\begin{aligned} z_t &= \sigma(W_{iz} x_t + W_{hz} h_{t-1} + b_z) \\ r_t &= \sigma(W_{ir} x_t + W_{hr} h_{t-1} + b_r) \\ a_t &= \phi(W_{ia} x_t + r_t \odot W_{ha} h_{t-1} + b_a) \\ h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot a_t \end{aligned} \quad (7)$$\nThe second variant has moved the reset gate after the matrix multiplication, thus avoiding the composition of parameterized linear maps within a single RNN step. As the modeling performance of the two variants has been shown to be largely the same, but the second variant is faster and results in sparser $D_t$ and $I_t$, we adopt the second variant throughout this paper.
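The practical difference between equations (6) and (7) is easiest to see side by side. A brief sketch of the two candidate-activation computations (our own illustration; weight shapes are assumptions):

import jax.numpy as jnp

def candidate_v1(W_ia, W_ha, b_a, x_t, r_t, h_prev):
    # Equation (6): reset gate applied before the recurrent matmul, so the
    # parameters feeding r_t reach every unit of a_t within a single step.
    return jnp.tanh(W_ia @ x_t + W_ha @ (r_t * h_prev) + b_a)

def candidate_v2(W_ia, W_ha, b_a, x_t, r_t, h_prev):
    # Equation (7): reset gate applied after the matmul; each entry of r_t
    # touches only its own unit of a_t, keeping D_t and I_t sparse.
    return jnp.tanh(W_ia @ x_t + r_t * (W_ha @ h_prev) + b_a)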
" }, { "heading": "Appendix B: Sparsity Strategy", "text": "Our experiments do not use state-of-the-art strategies for inducing sparsity because there is no such strategy compatible with SnAp at the time of writing. The requirement of a dense gradient in Evci et al. (2019) and Zhu & Gupta (2018) prevents the use of the optimization in Equation 4, which is strictly necessary to fit the RTRL training computations on accelerators without running out of memory.\nTo further motivate the development of sparse training strategies that do not require dense gradients, we show that larger sparser networks trained with BPTT and magnitude pruning monotonically outperform their denser counterparts in language modelling, when holding the number of parameters constant. This provides more evidence for the scaling law observed in Kalchbrenner et al. (2018).\nThe experimental setup is identical to the previous section except that all networks are trained with full BPTT. To hold the number of parameters constant, we start with a fully dense 128-unit GRU. We make the weight matrices 75% sparse when the network has 256 units, 93.8% sparse when the network has 512 units, 98.4% when the network has 1024 units, and so on. The sparsest network considered has 4096 units and over 99.9% sparsity, and performed the best. Indeed it performed better than a dense network with 6.25x as many parameters (Figure 5). Pruning decisions are made on the basis of absolute value every 1000 steps, and the final sparsity is reached after 350,000 training steps." }, { "heading": "Appendix C: Analysis of the bias introduced by SnAp", "text": "Finally, we examine the empirical magnitudes of entries which are nonzero in the true, unapproximated influence matrix but set to zero by SnAp. For the benefit of visualization we train a small GRU network (8 units, 75% sparsity) on a non-curriculum variant of the Copy task with target sequences fixed in length to 16 timesteps. This enables us to measure and display the bias of SnAp. The influence matrix considered is the final value after processing an entire sequence. The network is optimized with full (untruncated) BPTT. We find (Table 4) that at the beginning of training the influence entries ignored by SnAp are small in magnitude compared to those kept, even after the influence has had many RNN iterations to fill in.\nThis analysis complements the experimental results concerning how useful the approximate gradients are for learning; instead it shows where — and by how much — the sparse approximation to the influence differs from the true accumulated influence. Interestingly, despite the strong task performance of SnAp, the magnitude of ignored entries in the influence matrix is not always small (see Figure 6). The accuracy, as measured by such magnitudes, trends downward over the course of training. We speculate that designing methods to temper the increased bias arising later in training may be beneficial but leave this to future work." }, { "heading": "Appendix D: Code Snippet for SnAp-1", "text": "We include below a code snippet showing how RTRL and SnAp can be implemented in Jax (Bradbury et al., 2018). 
While it is real and working Jax code, this is just a sketch for pedagogical purposes and does not take full advantage of the optimizations in section 3.2.\n\nPlease take note of the license at the top of the snippet.\n
# Copyright The Authors of \"Practical Real Time Recurrent Learning
# with a Sparse Approximation to the Jacobian\", 2020
# SPDX-License-Identifier: Apache-2.0
import jax
import jax.numpy as jnp


def get_fwd_and_update_influence_func(core_f, use_snap1_approx=False):
  \"\"\"Transform core_f into one which maintains the influence Jacobian w/ RTRL.\"\"\"

  def fwd_and_update_influence(prev_infl, params, state, inpt):
    # Run the forward pass on a batch of data.
    batched_model_fn = jax.vmap(lambda s, i: core_f(params, s, i))
    f_out, state_new = batched_model_fn(state, inpt)

    # Compute jacobians of state w.r.t. prev state and params.
    jac_fn = jax.jacrev(lambda p, s, i: core_f(p, s, i)[1], argnums=(0, 1))
    batched_jac_fn = jax.vmap(lambda s, i: jac_fn(params, s, i))
    p_jac, s_jac = batched_jac_fn(state, inpt)

    # Update the influence matrix according to the RTRL learning rule.
    new_infl = jax.tree_multimap(
        lambda j_i, infl_i: j_i + jnp.einsum('bHh,bh...->bH...', s_jac, infl_i),
        p_jac, prev_infl)

    # SnAp-1: Keep only the entries of the influence matrix which are nonzero
    # after a single core step. This is not an efficient implementation.
    if use_snap1_approx:
      onestep_infl_mask = jax.tree_map(
          lambda t: (jnp.abs(t) > 0.).astype(jnp.float32), p_jac)
      new_infl = jax.tree_multimap(
          lambda matrix, mask: matrix * mask, new_infl, onestep_infl_mask)

    return f_out, state_new, new_infl
  return fwd_and_update_influence


def compute_gradients(influence_nest, delta):
  grads = jax.tree_map(
      lambda influence_i: jnp.einsum('bH...,bH->...', influence_i, delta),
      influence_nest)
  return grads


def make_zero_infl(param_exemplar, state_exemplar):
  def make_infl_for_one_state(t):
    return jax.tree_map(
        lambda p: jnp.zeros(shape=list(t.shape) + list(p.shape)),
        param_exemplar)
  infl = jax.tree_map(make_infl_for_one_state, state_exemplar)
  return infl


def get_rtrl_grad_func(core_f, readout_f, loss_f, use_snap1_approx):
  \"\"\"Transform functions into one which computes the gradient via RTRL.\"\"\"
  fwd_and_update_influence = get_fwd_and_update_influence_func(
      core_f, use_snap1_approx=use_snap1_approx)

  def rtrl_grad_func(core_params, readout_params, state, data):
    def rtrl_scan_func(carry, x):
      \"\"\"Function which can be unrolled with jax.lax.scan.\"\"\"
      # Unpack state and input.
      old_state, infl_acc, core_grad_acc, readout_grad_acc, loss_acc = carry
      inpt, targt, msk = x

      # Update influence matrix.
      h_t, new_state, new_infl_acc = fwd_and_update_influence(
          infl_acc, core_params, old_state, inpt)

      # Compute output, loss, and backprop gradients for RNN state.
      def step_loss(ps, h, t, m):
        \"\"\"Compute the loss for one RNN step.\"\"\"
        y = readout_f(ps, h)
        return loss_f(y, t, m), y

      step_out_and_grad_func = jax.value_and_grad(
          step_loss, has_aux=True, argnums=(0, 1))
      step_out, step_grad = step_out_and_grad_func(
          readout_params, h_t, targt, msk)
      loss_t, y_out = step_out
      readout_grad_t, delta_t = step_grad

      # Update accumulated gradients.
      core_grad_t = compute_gradients(new_infl_acc, delta_t)
      new_core_grad_acc = jax.tree_multimap(
          jnp.add, core_grad_acc, core_grad_t)
      new_readout_grad_acc = jax.tree_multimap(
          jnp.add, readout_grad_acc, readout_grad_t)

      # Repack carried state and return output.
      new_carry = (new_state, new_infl_acc,
                   new_core_grad_acc, new_readout_grad_acc, loss_acc + loss_t)
      return new_carry, y_out

    zero_infl = make_zero_infl(core_params, state)
    zero_core_grad = jax.tree_map(jnp.zeros_like, core_params)
    zero_readout_grad = jax.tree_map(jnp.zeros_like, readout_params)
    final_carry, output_seq = jax.lax.scan(
        rtrl_scan_func,
        init=(state, zero_infl, zero_core_grad, zero_readout_grad, 0.0),
        xs=(data['input_seq'], data['target_seq'], data['mask_seq']))
    final_state, _, core_grads, readout_grads, loss = final_carry
    return (loss, (final_state, output_seq)), (core_grads, readout_grads)
  return rtrl_grad_func
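For completeness, a hypothetical invocation of the snippet follows (our own usage sketch, not part of the release): the toy core, readout, and loss below are assumptions, and a JAX version contemporary with the snippet (one that still provides jax.tree_multimap) is required.

import jax
import jax.numpy as jnp

def vanilla_core(params, h, x):
    h_new = jnp.tanh(x @ params['W_x'] + h @ params['W_h'] + params['b'])
    return h_new, h_new  # (output, new state)

def readout(params, h):
    return h @ params['W_out']

def loss(y, target, mask):
    # Masked squared error, summed over the batch.
    return jnp.sum(mask * jnp.sum((y - target) ** 2, axis=-1))

B, T, A, K, O = 4, 10, 3, 8, 2  # batch, time, input, state, output sizes
key = jax.random.PRNGKey(0)
core_params = {'W_x': 0.1 * jax.random.normal(key, (A, K)),
               'W_h': jnp.eye(K), 'b': jnp.zeros(K)}
readout_params = {'W_out': jnp.zeros((K, O))}
state = jnp.zeros((B, K))
data = {'input_seq': jax.random.normal(key, (T, B, A)),
        'target_seq': jnp.zeros((T, B, O)),
        'mask_seq': jnp.ones((T, B))}

grad_fn = get_rtrl_grad_func(vanilla_core, readout, loss, use_snap1_approx=True)
(total_loss, (final_state, outputs)), (core_grads, readout_grads) = grad_fn(
    core_params, readout_params, state, data)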
" } ]
2021
Practical Real Time Recurrent Learning with a Sparse Approximation to the Jacobian
SP:e5152f19fbd60b76c867e34096a7ba19b2ed6af4
[ "The authors propose a new technique for quantization-aware training of neural networks that is especially suited for graph neural networks. They do a good job of motivating the problem by demonstrating that the large variation of input degree in GNNs can lead to unique challenges for numerical precision, forcing a compromise between truncation error and rounding error. The proposed technique incorporates stochastic masking and quantization proportional to the input degree to allow higher input-degree nodes to operate at higher resolution on average. " ]
Graph neural networks (GNNs) have demonstrated strong performance on a wide variety of tasks due to their ability to model non-uniform structured data. Despite their promise, there exists little research exploring methods to make them more efficient at inference time. In this work, we explore the viability of training quantized GNNs, enabling the usage of low precision integer arithmetic during inference. For GNNs, seemingly unimportant choices in quantization implementation cause dramatic changes in performance. We identify the sources of error that uniquely arise when attempting to quantize GNNs, and propose an architecturally-agnostic and stable method, Degree-Quant, to improve performance over existing quantization-aware training baselines commonly used on other architectures, such as CNNs. We validate our method on six datasets and show, unlike previous attempts, that models generalize to unseen graphs. Models trained with Degree-Quant for INT8 quantization perform as well as FP32 models in most cases; for INT4 models, we obtain up to 26% gains over the baselines. Our work enables up to 4.7× speedups on CPU when using INT8 arithmetic.
[ { "affiliations": [], "name": "Shyam A. Tailor" }, { "affiliations": [], "name": "Javier Fernandez-Marques" } ]
[ { "authors": [ "R. Achanta", "A. Shaji", "K. Smith", "A. Lucchi", "P. Fua", "S. Süsstrunk" ], "title": "Slic superpixels compared to state-of-the-art superpixel methods", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2012 }, { "authors": [ "Milad Alizadeh", "Javier Fernández-Marqués", "Nicholas D. Lane", "Yarin Gal" ], "title": "An empirical study of binary neural networks' optimisation", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Milad Alizadeh", "Arash Behboodi", "Mart van Baalen", "Christos Louizos", "Tijmen Blankevoort", "Max Welling" ], "title": "Gradient ℓ1 regularization for quantization robustness", "venue": "arXiv preprint arXiv:2002.07520,", "year": 2020 }, { "authors": [ "Yoshua Bengio", "Nicholas Léonard", "Aaron Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": null, "year": 2013 }, { "authors": [ "Davis Blalock", "Jose Javier Gonzalez Ortiz", "Jonathan Frankle", "John Guttag" ], "title": "What is the state of neural network pruning", "venue": null, "year": 2020 }, { "authors": [ "Zachariah Carmichael", "Hamed F. Langroudi", "Char Khazanov", "Jeffrey Lillie", "John L. Gustafson", "Dhireesha Kudithipudi" ], "title": "Deep positron: A deep neural network using the posit number system, 2018", "venue": null, "year": 2018 }, { "authors": [ "Gabriele Corso", "Luca Cavalleri", "Dominique Beaini", "Pietro Liò", "Petar Veličković" ], "title": "Principal neighbourhood aggregation for graph nets, 2020", "venue": null, "year": 2020 }, { "authors": [ "Matthieu Courbariaux", "Yoshua Bengio", "Jean-Pierre David" ], "title": "Binaryconnect: Training deep neural networks with binary weights during propagations", "venue": null, "year": 2015 }, { "authors": [ "Yinpeng Dong", "Renkun Ni", "Jianguo Li", "Yurong Chen", "Jun Zhu", "Hang Su" ], "title": "Learning accurate low-bit deep neural networks with stochastic quantization, 2017", "venue": null, "year": 2017 }, { "authors": [ "David K Duvenaud", "Dougal Maclaurin", "Jorge Iparraguirre", "Rafael Bombarell", "Timothy Hirzel", "Alán Aspuru-Guzik", "Ryan P Adams" ], "title": "Convolutional networks on graphs for learning molecular fingerprints", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Vijay Prakash Dwivedi", "Chaitanya K. Joshi", "Thomas Laurent", "Yoshua Bengio", "Xavier Bresson" ], "title": "Benchmarking graph neural networks, 2020", "venue": null, "year": 2020 }, { "authors": [ "Steven K. Esser", "Jeffrey L. McKinstry", "Deepika Bablani", "Rathinakumar Appuswamy", "Dharmendra S. Modha" ], "title": "Learned step size quantization", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Angela Fan", "Pierre Stock", "Benjamin Graham", "Edouard Grave", "Remi Gribonval", "Herve Jegou", "Armand Joulin" ], "title": "Training with quantization noise for extreme model compression, 2020", "venue": null, "year": 2020 }, { "authors": [ "Boyuan Feng", "Yuke Wang", "Xu Li", "Shu Yang", "Xueqiao Peng", "Yufei Ding" ], "title": "Sgquant: Squeezing the last bit on graph neural networks with specialized quantization, 2020", "venue": null, "year": 2020 }, { "authors": [ "Matthias Fey", "Jan Eric Lenssen" ], "title": "Fast graph representation learning with pytorch geometric, 2019", "venue": null, "year": 2019 }, { "authors": [ "Justin Gilmer", "Samuel S. Schoenholz", "Patrick F. 
Riley", "Oriol Vinyals", "George E. Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "CoRR, abs/1704.01212,", "year": 2017 }, { "authors": [ "William L. Hamilton", "Rex Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs, 2017", "venue": null, "year": 2017 }, { "authors": [ "Song Han", "Huizi Mao", "William J. Dally" ], "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "venue": null, "year": 2015 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": null, "year": 2015 }, { "authors": [ "Benoit Jacob", "Skirmantas Kligys", "Bo Chen", "Menglong Zhu", "Matthew Tang", "Andrew Howard", "Hartwig Adam", "Dmitry Kalenichenko" ], "title": "Quantization and training of neural networks for efficient integer-arithmetic-only inference, 2017", "venue": null, "year": 2017 }, { "authors": [ "Zhihao Jia", "Sina Lin", "Mingyu Gao", "Matei Zaharia", "Alex Aiken" ], "title": "Improving the accuracy, scalability, and performance of graph neural networks with roc", "venue": "In Proceedings of Machine Learning and Systems", "year": 2020 }, { "authors": [ "Wengong Jin", "Regina Barzilay", "Tommi Jaakkola" ], "title": "Junction tree variational autoencoder for molecular graph generation, 2018", "venue": null, "year": 2018 }, { "authors": [ "Dhiraj Kalamkar", "Dheevatsa Mudigere", "Naveen Mellempudi", "Dipankar Das", "Kunal Banerjee", "Sasikanth Avancha", "Dharma Teja Vooturi", "Nataraj Jammalamadaka", "Jianyu Huang", "Hector Yuen", "Jiyan Yang", "Jongsoo Park", "Alexander Heinecke", "Evangelos Georganas", "Sudarshan Srinivasan", "Abhisek Kundu", "Misha Smelyanskiy", "Bharat Kaul", "Pradeep Dubey" ], "title": "A study of bfloat16 for deep learning training, 2019", "venue": null, "year": 2019 }, { "authors": [ "Thomas N. 
Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Raghuraman Krishnamoorthi" ], "title": "Quantizing deep convolutional networks for efficient inference: A whitepaper, 2018", "venue": null, "year": 2018 }, { "authors": [ "Liam Li", "Kevin Jamieson", "Afshin Rostamizadeh", "Ekaterina Gonina", "Jonathan Ben-tzur", "Moritz Hardt", "Benjamin Recht", "Ameet Talwalkar" ], "title": "A system for massively parallel hyperparameter tuning", "venue": "In Proceedings of Machine Learning and Systems", "year": 2020 }, { "authors": [ "Christos Louizos", "Karen Ullrich", "Max Welling" ], "title": "Bayesian compression for deep learning, 2017", "venue": null, "year": 2017 }, { "authors": [ "Paulius Micikevicius", "Sharan Narang", "Jonah Alben", "Gregory Diamos", "Erich Elsen", "David Garcia", "Boris Ginsburg", "Michael Houston", "Oleksii Kuchaiev", "Ganesh Venkatesh", "Hao Wu" ], "title": "Mixed precision training, 2017", "venue": null, "year": 2017 }, { "authors": [ "Anurag Mukkara", "Nathan Beckmann", "Maleen Abeydeera", "Xiaosong Ma", "Daniel Sanchez" ], "title": "Exploiting locality in graph analytics through hardware-accelerated traversal scheduling", "venue": "In Proceedings of the 51st Annual IEEE/ACM International Symposium on Microarchitecture,", "year": 2018 }, { "authors": [ "Gabriele Prato", "Ella Charlaix", "Mehdi Rezagholizadeh" ], "title": "Fully quantized transformer for machine translation, 2019", "venue": null, "year": 2019 }, { "authors": [ "Yu Rong", "Wenbing Huang", "Tingyang Xu", "Junzhou Huang" ], "title": "Dropedge: Towards deep graph convolutional networks on node classification", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Paul-Edouard Sarlin", "Daniel DeTone", "Tomasz Malisiewicz", "Andrew Rabinovich" ], "title": "Superglue: Learning feature matching with graph neural networks", "venue": "arXiv preprint arXiv:1911.11763,", "year": 2019 }, { "authors": [ "Tao Sheng", "Chen Feng", "Shaojie Zhuo", "Xiaopeng Zhang", "Liang Shen", "Mickey Aleksic" ], "title": "A quantization-friendly separable convolution for mobilenets", "venue": "doi: 10.1109/emc2.2018.00011. URL http://dx.doi.org/10.1109/emc2.2018.00011", "year": 2018 }, { "authors": [ "Moran Shkolnik", "Brian Chmiel", "Ron Banner", "Gil Shomron", "Yuri Nahshan", "Alex Bronstein", "Uri Weiser" ], "title": "Robust quantization: One model to rule them all", "venue": "arXiv preprint arXiv:2002.07686,", "year": 2020 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": "The journal of machine learning research,", "year": 2014 }, { "authors": [ "Rianne van den Berg", "Thomas N. 
Kipf", "Max Welling" ], "title": "Graph convolutional matrix completion, 2017", "venue": null, "year": 2017 }, { "authors": [ "Petar Velickovic", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In 6th International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Kuan Wang", "Zhijian Liu", "Yujun Lin", "Ji Lin", "Song Han" ], "title": "Haq: Hardware-aware automated quantization with mixed precision, 2018", "venue": null, "year": 2018 }, { "authors": [ "Hao Wu", "Patrick Judd", "Xiaojie Zhang", "Mikhail Isaev", "Paulius Micikevicius" ], "title": "Integer quantization for deep learning inference: Principles and empirical evaluation, 2020", "venue": null, "year": 2020 }, { "authors": [ "Zhenqin Wu", "Bharath Ramsundar", "Evan N. Feinberg", "Joseph Gomes", "Caleb Geniesse", "Aneesh S. Pappu", "Karl Leswing", "Vijay Pande" ], "title": "Moleculenet: A benchmark for molecular machine learning, 2017", "venue": null, "year": 2017 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Bencheng Yan", "Chaokun Wang", "Gaoyang Guo", "Yunkai Lou" ], "title": "Tinygnn: Learning efficient graph neural networks", "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD", "year": 2020 }, { "authors": [ "Hanqing Zeng", "Viktor Prasanna" ], "title": "GraphACT: Accelerating GCN training on CPU-FPGA heterogeneous platforms", "venue": "The 2020 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, Feb 2020. doi: 10.1145/3373087.3375312", "year": 2020 }, { "authors": [ "Hanqing Zeng", "Hongkuan Zhou", "Ajitesh Srivastava", "Rajgopal Kannan", "Viktor Prasanna" ], "title": "Graphsaint: Graph sampling based inductive learning", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "GNNs have received substantial attention in recent years due to their ability to model irregularly structured data. As a result, they are extensively used for applications as diverse as molecular interactions (Duvenaud et al., 2015; Wu et al., 2017), social networks (Hamilton et al., 2017), recommendation systems (van den Berg et al., 2017) or program understanding (Allamanis et al., 2018). Recent advancements have centered around building more sophisticated models including new types of layers (Kipf & Welling, 2017; Velickovic et al., 2018; Xu et al., 2019) and better aggregation functions (Corso et al., 2020). However, despite GNNs having few model parameters, the compute required for each application remains tightly coupled to the input graph size. A 2-layer Graph Convolutional Network (GCN) model with 32 hidden units would result in a model size of just 81KB but requires 19 GigaOPs to process the entire Reddit graph. We illustrate this growth in fig. 1.\nOne major challenge with graph architectures is therefore performing inference efficiently, which limits the applications they can be deployed for. For example, GNNs have been combined with CNNs for SLAM feature matching (Sarlin et al., 2019), however it is not trivial to deploy this technique on smartphones, or even smaller devices, whose neural network accelerators often do not implement floating point arithmetic, and instead favour more efficient integer arithmetic. Integer quantization is one way to lower the compute, memory and energy budget required to perform inference, without requiring modifications to the model architecture; this is also useful for model serving in data centers.\nAlthough quantization has been well studied for CNNs and language models (Jacob et al., 2017; Wang et al., 2018; Zafrir et al., 2019; Prato et al., 2019), there remains relatively little work addressing ∗Equal contribution. Correspondence to: Shyam Tailor <sat62@cam.ac.uk>\nGNN efficiency (Mukkara et al., 2018; Jia et al., 2020; Zeng & Prasanna, 2020; Yan et al., 2020). To the best of our knowledge, there is no work explicitly characterising the issues that arise when quantizing GNNs or showing latency benefits of using low-precision arithmetic. The recent work of Wang et al. (2020) explores only binarized embeddings of a single graph type (citation networks). In Feng et al. (2020) a heterogeneous quantization framework assigns different bits to embedding and attention coefficients in each layer while maintaining the weights at full precision (FP32). Due to the mismatch in operands’ bit-width the majority of the operations are performed at FP32 after data casting, making it impractical to use in general purpose hardware such as CPUs or GPUs. In addition they do not demonstrate how to train networks which generalize to unseen input graphs. Our framework relies upon uniform quantization applied to all elements in the network and uses bit-widths (8-bit and 4-bit) that are supported by off-the-shelf hardware such as CPUs and GPU for which efficient low-level operators for common operations found in GNNs exists.\nThis work considers the motivations and problems associated with quantization of graph architectures, and provides the following contributions:\n• The explanation of the sources of degradation in GNNs when using lower precision arithmetic. 
We show how the choice of straight-through estimator (STE) implementation, node degree, and method for tracking quantization statistics significantly impacts performance.\n• An architecture-agnostic method for quantization-aware training on graphs, Degree-Quant (DQ), which results in INT8 models often performing as well as their FP32 counterparts. At INT4, models trained with DQ typically outperform quantized baselines by over 20%. We show, unlike previous work, that models trained with DQ generalize to unseen graphs. We provide code at this URL: https://github.com/camlsys/degree-quant.\n• We show that quantized networks achieve up to 4.7× speedups on CPU with INT8 arithmetic, relative to full precision floating point, with 4-8× reductions in runtime memory usage." }, { "heading": "2 BACKGROUND", "text": "" }, { "heading": "2.1 MESSAGE PASSING NEURAL NETWORKS (MPNNS)", "text": "Many popular GNN architectures may be viewed as generalizations of CNN architectures to an irregular domain: at a high level, graph architectures attempt to build representations based on a node's neighborhood (see fig. 2). Unlike CNNs, however, this neighborhood does not have a fixed ordering or size. This work considers GNN architectures conforming to the MPNN paradigm (Gilmer et al., 2017). A graph G = (V, E) has node features $X \in \mathbb{R}^{N \times F}$, an incidence matrix $I \in \mathbb{N}^{2 \times E}$, and optionally D-dimensional edge features $E \in \mathbb{R}^{E \times D}$. The forward pass through an MPNN layer consists of message passing, aggregation and update phases: $h^{(i)}_{l+1} = \gamma\big(h^{(i)}_l, \bigwedge_{j \in \mathcal{N}(i)} [\phi(h^{(j)}_l, h^{(i)}_l, e_{ij})]\big)$. Messages from node u to node v are calculated using function $\phi$, and are aggregated using a permutation-invariant function $\bigwedge$. The features at v are subsequently updated using $\gamma$.\nWe focus on three architectures with corresponding update rules:\n1. Graph Convolution Network (GCN): $h^{(i)}_{l+1} = \sum_{j \in \mathcal{N}(i) \cup \{i\}} \frac{1}{\sqrt{d_i d_j}} W h^{(j)}_l$ (Kipf & Welling, 2017), where $d_i$ refers to the degree of node i.\n2. Graph Attention Network (GAT): $h^{(i)}_{l+1} = \alpha_{i,i} W h^{(i)}_l + \sum_{j \in \mathcal{N}(i)} \alpha_{i,j} W h^{(j)}_l$, where the $\alpha_{i,j}$ represent attention coefficients (Velickovic et al., 2018).\n3. Graph Isomorphism Network (GIN): $h^{(i)}_{l+1} = f_\Theta\big[(1 + \epsilon) h^{(i)}_l + \sum_{j \in \mathcal{N}(i)} h^{(j)}_l\big]$, where $f$ is a learnable function (e.g. an MLP) and $\epsilon$ is a learnable constant (Xu et al., 2019)." }, { "heading": "2.2 QUANTIZATION FOR NON-GRAPH NEURAL NETWORKS", "text": "Quantization allows for model size reduction and inference speedup without changing the model architecture. While there exist extensive studies of the impact of quantization at different bitwidths (Courbariaux et al., 2015; Han et al., 2015; Louizos et al., 2017) and data formats (Micikevicius et al., 2017; Carmichael et al., 2018; Kalamkar et al., 2019), it is 8-bit integer (INT8) quantization that has attracted the most attention. This is due to INT8 models reaching comparable accuracy levels to FP32 models (Krishnamoorthi, 2018; Jacob et al., 2017), offering 4× model compression, and resulting in inference speedups on off-the-shelf hardware, as 8-bit arithmetic is widely supported.\nQuantization-aware training (QAT) has become the de facto approach towards designing robust quantized models with low error (Wang et al., 2018; Zafrir et al., 2019; Wang et al., 2018). In their simplest forms, QAT schemes involve exposing the numerical errors introduced by quantization by simulating it on the forward pass (Jacob et al., 2017) and making use of the STE (Bengio et al., 2013) to compute the gradients—as if no quantization had been applied. 
For integer QAT, the quantization of a tensor x during the forward pass is often implemented as: $x_q = \min(q_{max}, \max(q_{min}, \lfloor x/s + z \rfloor))$, where $q_{min}$ and $q_{max}$ are the minimum and maximum representable values at a given bit-width and signedness, $s$ is the scaling factor making x span the $[q_{min}, q_{max}]$ range, and $z$ is the zero-point, which allows for the real value 0 to be representable in $x_q$. Both $s$ and $z$ are scalars obtained at training time. Then, the tensor is dequantized as: $\hat{x} = (x_q - z)s$, where the resulting tensor $\hat{x} \approx x$ for a high enough bit-width. This similarity degrades at lower bit-widths. Other variants of integer QAT are presented in Jacob et al. (2017) and Krishnamoorthi (2018).\nTo reach performance comparable to FP32 models, QAT schemes often rely on other techniques such as gradient clipping, to mask gradient updates based on the largest representable value at a given bit-width; stochastic, or noisy, QAT, which stochastically applies QAT to a portion of the weights at each training step (Fan et al., 2020; Dong et al., 2017); or the re-ordering of layers (Sheng et al., 2018; Alizadeh et al., 2019).
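To make the simulated quantization step above concrete, a minimal PyTorch sketch of fake quantization with an STE backward pass (an illustration of the standard recipe, not the authors' exact implementation; the floor matches the formula above, and s and z are assumed given):\nimport torch\n\ndef fake_quantize(x, s, z, qmin=-128, qmax=127):\n    # Forward: floor, clamp, and dequantize to simulate integer arithmetic.\n    xq = torch.clamp(torch.floor(x / s + z), qmin, qmax)\n    x_hat = (xq - z) * s\n    # Backward: straight-through estimator via the detach trick, so the\n    # gradient of the output w.r.t. x is the identity.\n    return x + (x_hat - x).detach()\n\nx = torch.randn(4, 8, requires_grad=True)\ny = fake_quantize(x, s=0.05, z=0.0)\ny.sum().backward()  # x.grad is all ones: quantization is invisible to gradients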
" }, { "heading": "3 QUANTIZATION FOR GNNS", "text": "In this section, we build an intuition for why GNNs would fail with low precision arithmetic by identifying the sources of error that will disproportionately affect the accuracy of a low precision model. Using this insight, we propose our technique for QAT with GNNs, Degree-Quant. Our analysis focuses on three models: GCN, GAT and GIN. This choice was made as we believe that these are among the most popular graph architectures, with strong performance on a variety of tasks (Dwivedi et al., 2020), while also being representative of different trends in the literature." }, { "heading": "3.1 SOURCES OF ERROR", "text": "QAT relies upon the STE to make an estimate of the gradient despite the non-differentiable rounding operation in the forward pass. If this approximation is inaccurate, however, then poor performance will be obtained. In GNN layers, we identify the aggregation phase, where nodes combine messages from a varying number of neighbors in a permutation-invariant fashion, as a source of substantial numerical error, especially at nodes with high in-degree. Outputs from aggregation have magnitudes that vary significantly depending on a node's in-degree: as it increases, the variance of aggregation values will increase.¹ Over the course of training, $q_{min}$ and $q_{max}$, the quantization range statistics, become severely distorted by infrequent outliers, reducing the resolution for the vast majority of values observed. This results in increased rounding error for nodes with smaller in-degrees. Controlling $q_{min}$ and $q_{max}$ hence becomes a trade-off balancing truncation error and rounding error.\nWe can derive how the mean and variance of the aggregation output values vary as node in-degree, n, increases for each of the three GNN layers. Suppose we model incoming message values for a single output dimension with random variables $X_i$, without making assumptions on their exact distribution or independence. Further, we use $Y_n$ as the random variable representing the value of the node output after the aggregation step. With GIN layers, we have $Y_n = (1 + \epsilon) X_0 + \sum_{i=1}^{n} X_i$. It is trivial to prove that $\mathbb{E}(Y_n) = O(n)$. The variance of the aggregation output is also $O(n)$ in the case that $\sum_{i ≠ j} \mathrm{Cov}(X_i, X_j) \ll \sum_i \mathrm{Var}(X_i)$. We note that if $\sum_{i ≠ j} \mathrm{Cov}(X_i, X_j)$ is large then it implies that the network has learned highly redundant features, and may be a sign of over-fitting. Similar arguments can be made for GCN and GAT layers; we would expect GCN aggregation values to grow like $O(\sqrt{n})$, and GAT aggregation values to remain constant ($O(1)$) due to the attention coefficients.\nWe empirically validate these predictions on GNNs trained on Cora; results are plotted in fig. 3. We see that the aggregation values do follow the trends predicted, and that for the values of in-degree in the plot (up to 168) the covariance terms can be neglected. As expected, the variance and mean of the aggregated output grow fastest for GIN, and are roughly constant for GAT as in-degree increases. From this empirical evidence, it would be expected that GIN layers are most affected by quantization.\nBy using GIN and GCN as examples, we can see how aggregation error causes error in weight updates. Suppose we consider a GIN layer incorporating one weight matrix in the update function, i.e. $h^{(i)}_{l+1} = f(W y^{(i)}_{GIN})$, where $f$ is an activation function, $y^{(i)}_{GIN} = (1 + \epsilon) h^{(i)}_l + \sum_{j \in \mathcal{N}(i)} h^{(j)}_l$, and $\mathcal{N}(i)$ denotes the in-neighbors of node i. Writing $y^{(i)}_{GCN} = \sum_{j \in \mathcal{N}(i)} \frac{1}{\sqrt{d_i d_j}} W h^{(j)}_l$, we see that the derivatives of the loss with respect to the weights for GCN and GIN are:\nGIN: $\frac{\partial L}{\partial W} = \sum_{i=1}^{|V|} \Big( \frac{\partial L}{\partial h^{(i)}_{l+1}} \circ f'(W y^{(i)}_{GIN}) \Big) y^{(i)\top}_{GIN}$\nGCN: $\frac{\partial L}{\partial W} = \sum_{i=1}^{|V|} \sum_{j \in \mathcal{N}(i)} \frac{1}{\sqrt{d_i d_j}} \Big( \frac{\partial L}{\partial h^{(i)}_{l+1}} \circ f'(y^{(i)}_{GCN}) \Big) h^{(j)\top}_l$\nThe larger the error in $y^{(i)}_{GIN}$—caused by aggregation error—the greater the error in the weight gradients for GIN, which results in poorly performing models being obtained. The same argument applies to GCN, with the $h^{(j)\top}_l$ and $y^{(i)}_{GCN}$ terms introducing aggregation error into the weight updates.\n¹The reader should note that we are not referring to the concept of estimator variance, which is the subject of sampling based approaches—we are exclusively discussing the variance of values immediately after aggregation.
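As a quick numerical illustration of these growth rates (our own toy simulation, not part of the paper's experiments; the paper's fig. 3 measures the trends on real GNN activations instead):\nimport torch\n\ntorch.manual_seed(0)\n# Aggregate n i.i.d. messages drawn from N(1, 1) and track the statistics of\n# the aggregated value as the in-degree n grows.\nfor n in [1, 4, 16, 64, 256]:\n    msgs = torch.randn(100000, n) + 1.0\n    gin = msgs.sum(dim=1)              # GIN-style sum: mean and var grow O(n)\n    gcn = msgs.sum(dim=1) / n ** 0.5   # GCN-style 1/sqrt(d) norm: mean O(sqrt(n))\n    gat = msgs.mean(dim=1)             # GAT-style (attention weights sum to 1): O(1)\n    print(f'n={n:3d}  GIN mean={gin.mean().item():6.1f} var={gin.var().item():6.1f}'\n          f'  GCN mean={gcn.mean().item():5.1f}  GAT mean={gat.mean().item():4.2f}')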
" }, { "heading": "3.2 OUR METHOD: DEGREE-QUANT", "text": "To address these sources of error we propose Degree-Quant (DQ), a method for QAT with GNNs. We consider both inaccurate weight updates and unrepresentative quantization ranges.\nAlgorithm 1 Degree-Quant (DQ). Functions accepting a protective mask m perform only the masked computations at full precision: intermediate tensors are not quantized. At test time protective masking is disabled. In fig. 11 (in the Appendix) we show with a diagram how a GCN layer makes use of DQ.\n1: procedure TRAINFORWARDPASS(G, p)\n2:   ▷ Calculate mask and quantized weights, Θ′, which all operations share\n3:   m ← BERNOULLI(p)\n4:   Θ′ ← QUANTIZE(Θ)\n5:   ▷ Messages with masked sources are at full precision (excluding weights)\n6:   M ← MESSAGECALCULATE(G, Θ′, m)\n7:   X ← QUANTIZE(AGGREGATE(M, Θ′, m), m)  ▷ No quantization for masked nodes\n8:   return UPDATE(X, Θ′, m)  ▷ Quantized weights always used\n9: end procedure\nStochastic Protection from Quantization to Improve Weight Update Accuracy. DQ aims to encourage more accurate weight updates by stochastically protecting nodes in the network from quantization. At each layer a protective node mask is generated; all masked nodes have the phases of message passing, aggregation and update performed at full precision. This includes messages sent by protected nodes to other nodes, as shown in fig. 4 (a detailed diagram is shown in fig. 11). It is also important to note that the weights used at all nodes are the same quantized weights; this is motivated by the fact that our method is used to encourage more accurate gradients to flow back to the weights through high in-degree nodes. At test time protection is disabled: all nodes operate at low precision.\nTo generate the mask, we pre-process each graph before training and create a vector of probabilities p with length equal to the number of nodes. At training time, mask m is generated by sampling using the Bernoulli distribution: m ∼ Bernoulli(p). In our scheme $p_i$ is higher if the in-degree of node i is large, as we find empirically that high in-degree nodes contribute most towards error in weight updates. We use a scheme with two hyperparameters, $p_{min}$ and $p_{max}$; nodes with the maximum in-degree are assigned $p_{max}$ as their masking probability, with all other nodes assigned a probability calculated by interpolating between $p_{min}$ and $p_{max}$ based on their in-degree ranking in the graph.\nPercentile Tracking of Quantization Ranges. Figure 3 demonstrates large fluctuations in the variance of the aggregation output as in-degree increases. Since these can disproportionately affect the ranges found by using min-max or momentum-based quantization, we propose using percentiles. While percentiles have been used for post-training quantization (Wu et al., 2020), we are the first (to the best of our knowledge) to propose making them a core part of QAT; we find this to be a key contributor to achieving consistent results with graphs. Using percentiles involves ordering the values in the tensor and clipping a fraction of the values at both ends of the distribution. The fraction to clip is a hyperparameter. We are more aggressive than existing literature on the quantity we discard: we clip the top and bottom 0.1%, rather than 0.01%, as we observe the fluctuations to be a larger issue with GNNs than with CNNs or DNNs. Quantization ranges are more representative of the vast majority of values in this scheme, resulting in less rounding error.
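A minimal sketch of a percentile-based range observer (our illustration; the class name and update policy are assumptions rather than the paper's released code):\nimport torch\n\nclass PercentileObserver:\n    # Tracks quantization ranges while discarding the top and bottom 0.1%\n    # of observed values, so infrequent outliers from high in-degree nodes\n    # do not distort qmin/qmax.\n    def __init__(self, fraction=0.001, n_bits=8):\n        self.fraction = fraction\n        self.qmin = -(2 ** (n_bits - 1))\n        self.qmax = 2 ** (n_bits - 1) - 1\n        self.xmin, self.xmax = None, None\n\n    def observe(self, x):\n        flat = x.detach().flatten().float()\n        lo = torch.quantile(flat, self.fraction)\n        hi = torch.quantile(flat, 1.0 - self.fraction)\n        self.xmin = lo if self.xmin is None else torch.minimum(self.xmin, lo)\n        self.xmax = hi if self.xmax is None else torch.maximum(self.xmax, hi)\n\n    def scale_zero_point(self):\n        # Affine mapping so that [xmin, xmax] spans [qmin, qmax].\n        s = (self.xmax - self.xmin) / (self.qmax - self.qmin)\n        z = self.qmin - self.xmin / s\n        return s, z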
We emphasize that a core contribution of DQ is that it is architecture-agnostic. Our method enables a wide variety of architectures to use low precision arithmetic at inference time. Our method is also orthogonal—and complementary—to other techniques for decreasing GNN computation requirements, such as sampling based methods which are used to reduce memory consumption (Zeng et al., 2020), or weight pruning (Blalock et al., 2020) approaches to achieve further model compression." }, { "heading": "4 EXPERIMENTS", "text": "In this section we first analyse how the choice of quantization implementation affects the performance of GNNs. We subsequently evaluate Degree-Quant against the strong baselines of FP32, INT8-QAT, and INT8-QAT with stochastic masking of weights (Fan et al., 2020). We refer to this last approach as noisy QAT, or nQAT. To make explicit that we are quantizing both weights and activations, we use the notation W8A8. We repeat the experiments at INT4. Our study evaluates performance on six datasets and includes both node-level and graph-level tasks. The datasets used were Cora, CiteSeer, ZINC, MNIST and CIFAR10 superpixels, and REDDIT-BINARY. Across all datasets, INT8 models trained with Degree-Quant manage to recover most of the accuracy lost as a result of quantization. In some instances, DQ-INT8 models outperform the extensively tuned FP32 baselines. For INT4, DQ outperforms all QAT baselines and results in double-digit improvements over QAT-INT4 in some settings. Details about each dataset and our experimental setup can be found in appendix A.1." }, { "heading": "4.1 IMPACT OF QUANTIZATION GRADIENT ESTIMATOR ON CONVERGENCE", "text": "The STE is a workaround for when the forward pass contains non-differentiable operations (e.g. rounding in QAT) that has been widely adopted in practice. While the choice of STE implementation generally results in marginal differences for CNNs—even for binary networks (Alizadeh et al., 2019)—it is unclear whether only marginal differences will also be observed for GNNs. Motivated by this, we study the impact of four off-the-shelf quantization procedures on the three architectures evaluated for each type of dataset; the implementation details of each one are described in appendix A.3. We perform this experiment to ensure that we have the strongest possible QAT baselines. Results are shown in table 1. We found the choice of quantization implementation to be highly dependent on the model architecture and type of problem to be solved: we see a much larger variance than is observed with CNNs; this is an important discovery for future work building on our study.\nWe observe a general trend of all INT4 experiments benefiting from momentum, as it helps smooth out the quantization statistics for the inherently noisy training stage at low bitwidths. This trend applies as well for the majority of INT8 experiments, while exhibiting little impact on MNIST. For INT8 Cora-GCN, large gradient norm values in the early stages of training (see fig. 5) mean that these models do not benefit from momentum, as quantization ranges fail to keep up with the rate of change in tensor values; higher momentum can help but also leads to instability. In contrast, GAT has stable initial training dynamics, and hence obtains better results with momentum. For the molecules dataset ZINC, we consistently obtained lower regression loss when using momentum. We note that GIN models often suffer from higher performance degradation (as was first noted in fig. 3), especially at W4A4. This is not the case, however, for image datasets using superpixels. We believe that datasets with Gaussian-like node degree distributions (see fig. 9) are more tolerant of the imprecision introduced by quantization, compared to datasets with tailed distributions. We leave more in-depth analysis of how graph topology affects quantization as future work.\nScheme        Arch.  Cora ↑      Citeseer ↑  MNIST ↑     CIFAR-10 ↑  ZINC ↓\n(Node and graph classification reported in accuracy %; graph regression in loss.)\nRef. (FP32)   GCN    81.4 ± 0.7  71.1 ± 0.7  90.0 ± 0.2  54.5 ± 0.1  0.469 ± 0.002\n              GAT    83.1 ± 0.4  72.5 ± 0.7  95.6 ± 0.1  65.4 ± 0.4  0.463 ± 0.002\n              GIN    77.6 ± 1.1  66.1 ± 0.9  93.9 ± 0.6  53.3 ± 3.7  0.414 ± 0.009\nOurs (FP32)   GCN    81.2 ± 0.6  71.4 ± 0.9  90.9 ± 0.4  58.4 ± 0.5  0.450 ± 0.008\n              GAT    83.2 ± 0.3  72.4 ± 0.8  95.8 ± 0.4  65.1 ± 0.8  0.455 ± 0.006\n              GIN    77.9 ± 1.1  65.8 ± 1.5  96.4 ± 0.4  57.4 ± 0.7  0.334 ± 0.024\nQAT (W8A8)    GCN    81.0 ± 0.7  71.3 ± 1.0  90.9 ± 0.2  56.4 ± 0.5  0.481 ± 0.029\n              GAT    81.9 ± 0.7  71.2 ± 1.0  95.8 ± 0.3  66.3 ± 0.4  0.460 ± 0.005\n              GIN    75.6 ± 1.2  63.0 ± 2.6  96.7 ± 0.2  52.4 ± 1.2  0.386 ± 0.025" }, { "heading": "4.2 OBTAINING QUANTIZATION BASELINES", "text": "Our FP32 results, which we obtain after extensive hyperparameter tuning, and those from the baselines are shown at the top of table 2. We observed large gains on MNIST, CIFAR10, and ZINC.\nFor our QAT-INT8 and QAT-INT4 baselines, we use the quantization configurations informed by our analysis in section 4.1. 
For Citeseer we use the best resulting setup analysed for Cora, and for CIFAR10 that from MNIST. Then, the hyperparameters for each experiment were fine-tuned individually, including the noise rate n ∈ [0.5, 0.95] for nQAT experiments. The QAT-INT8 and QAT-INT4 results in table 2, with the exception of MNIST (an easy to classify dataset), corroborate our hypothesis that GIN layers are less resilient to quantization. This was first observed in fig. 3. In the case of ZINC, while all models show noticeable degradation, GIN sees a more severe 16% increase in regression loss compared to our FP32 baseline. For QAT-W4A4, an accuracy drop of over 35% and 47% is observed for Cora and Citeseer respectively. The stochasticity induced by nQAT helped in recovering some of the accuracy lost as a result of quantization for citation networks (both INT8 and INT4) but had little impact on other datasets and harmed performance in some cases." }, { "heading": "4.3 COMPARISONS OF DEGREE-QUANT WITH EXISTING QUANTIZATION APPROACHES", "text": "Degree-Quant provides superior quantization for all GNN datasets and architectures. Our results with DQ are highlighted in gray in table 2 and table 3. Citation networks trained with DQ for W8A8 manage to recover most of the accuracy lost as a result of QAT and outperform most of the nQAT baselines. In some instances DQ-W8A8 models outperform the reference FP32 baselines. At 4 bits, DQ results in even larger gains compared to the W4A4 baselines. We see DQ being more effective for GIN layers, outperforming INT4 baselines for Cora (+24.9%), Citeseer (+26.2%) and REDDIT-BINARY (+23.0%) by large margins. Models trained with DQ at W4A4 for graph classification and graph regression also exhibit large performance gains (of over 10%) in most cases.
The decrease in latency on CPUs is due to improved cache performance for the sparse operations; GPUs, however, see less benefit due to their massively-parallel nature which relies on mechanisms other than caching to hide slow random memory accesses, which are unavoidable in this application.\nAblation Study: Benefits of Percentile Ranges. Figure 5 shows the value of percentiles during training. We see that when using absolute min/max the upper range grows to over double the range required for 99.9% of values, effectively halving the resolution of the quantized values. DQ is more stable, and we obtained strong results with an order of magnitude less tuning relative to the baselines.\nAblation Study: Source of Degradation at INT4. Figure 6 assesses how INT8 GAT (without DQ) degrades as single elements are converted to INT4, in order to understand the precipitous drop in\n2The largest graph commonly benchmarked on in the GNN literature\naccuracy in the INT4 baselines; further plots for GCN and GIN are included in the appendix. We observe that most elements cause only modest performance losses relative to a full INT8 model. DQ is most important to apply to elements which are constrained by numerical precision, such as the aggregation and message elements in GAT. Weight elements, however, are consistently unaffected.\nAblation Study: Effect of Stochastic Element in Degree-Quant. We observe that the stochastic protective masking in DQ alone often achieves most of the performance gain over the QAT baseline; results are given in table 9 in the appendix. The benefit of the percentile-based quantization ranges is stability, although it can yield some performance gains. The full DQ method provides consistently good results on all architectures and datasets, without requiring an extensive analysis as in 4.1." }, { "heading": "6 CONCLUSION", "text": "This work has presented Degree-Quant, an architecture-agnostic and stable method for training quantized GNN models that can be accelerated using off-the-shelf hardware. With 4-bit weights and activations we achieve 8× compression while surpassing strong baselines by margins regularly exceeding 20%. At 8-bits, models trained with DQ perform on par or better than the baselines while achieving up to 4.7× lower latency than FP32 models. Our work offers a comprehensive foundation for future work in this area and is a first step towards enabling GNNs to be deployed more widely, including to resource constrained devices such as smartphones." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This work was supported by Samsung AI and by the UK’s Engineering and Physical Sciences Research Council (EPSRC) with grants EP/M50659X/1 and EP/S001530/1 (the MOA project) and the European Research Council via the REDIAL project." }, { "heading": "A APPENDIX", "text": "Readers seeking advice on implementation will find appendix A.5 especially useful. We provide significant advice surrounding best practices on quantization for GNNs, along with techniques which we believe can boost our methods beyond the performance described in this paper, but for which we did not have time to fully evaluate." }, { "heading": "A.1 EXPERIMENTAL SETUP", "text": "As baselines we use the architectures and results reported by Fey & Lenssen (2019) for citation networks, Dwivedi et al. (2020) for MNIST, CIFAR-10 and ZINC and, Xu et al. (2019) for REDDITBINARY. We re-implemented the architectures and datasets used in these publications and replicated the results reported at FP32. 
Models using GIN layers learn a parameter ε. These models are often referred to as GIN-ε. The high-level description of these architectures is shown in table 5. The number of parameters for each architecture-dataset pair in this work is shown in table 6.\nOur infrastructure was implemented using PyTorch Geometric (PyG) (Fey & Lenssen, 2019). We generate candidate hyperparameters using random search, and prune trials using the asynchronous hyperband algorithm (Li et al., 2020). Hyperparameters searched over were learning rate, weight decay, and dropout (Srivastava et al., 2014) and drop-edge (Rong et al., 2020) probabilities. The search ranges were initialized centered at the values used in the reference implementations of the baselines. Degree-Quant requires searching for two additional hyperparameters, $p_{min}$ and $p_{max}$; these were tuned in a grid-search fashion. We report our results using the hyperparameters which achieved the best validation loss over 100 runs on the Cora and Citeseer datasets, 10 runs for MNIST, CIFAR-10 and ZINC, and 10-fold cross-validation for REDDIT-BINARY.\nWe generally used fewer hyperparameter runs for our DQ runs than we did for baselines—even ignoring the searches over the various STE configs. As our method is more stable, finding a reasonable set of parameters was easier than before. As is usual with quantization experiments, we found it useful to decrease the learning rate relative to the FP32 baseline.\nOur experiments ran on several machines in our SLURM cluster using Intel CPUs and NVIDIA GPUs. Each machine was running Ubuntu 18.04. The GPU models in our cluster were: V100, RTX 2080Ti and GTX 1080Ti.\nFor QAT experiments, all elements of each network are quantized: inputs to each layer, the weights, the messages sent between nodes, the inputs to the aggregation stage and its outputs, and the outputs of the update stage (which are the outputs of the GNN layer before activation). In this way, all intermediate tensors in GNNs are quantized, with the exception of the attention mechanism in GAT; we do not quantize after the softmax calculation, due to the numerical precision required at this stage. With the exception of Cora and Citeseer, the models evaluated in this work make use of Batch Normalization (Ioffe & Szegedy, 2015). For deployments of quantized models, Batch Normalization layers are often folded with the weights (Krishnamoorthi, 2018). This is to ensure the input to the next layer is within the expected $[q_{min}, q_{max}]$ ranges. In this work, for both QAT baselines and QAT+DQ, we left BN layers unfolded but ensured the inputs and outputs were quantized to the appropriate number of bits (i.e. INT8 or INT4) before being multiplied with the layer weights. We leave as future work proposing a BN folding mechanism applicable to GNNs and studying its impact for deployments of quantized GNNs.\nThe GIN models evaluated on REDDIT-BINARY used QAT for all layers with the exception of the input layer of the MLP in the first GIN layer. This compromise was needed to overcome the severe degradation introduced by quantization when operating on nodes with a single scalar as a feature." }, { "heading": "A.2 DATASETS", "text": "We show in Table 7 the statistics for each dataset either used or referred to in this work. For the Cora and Citeseer datasets, nodes correspond to documents and edges to citations between these. Node features are a bag-of-words representation of the document. The task is to classify each node in the graph (i.e. each document) correctly. 
The MNIST and CIFAR-10 datasets (commonly used for image classification) are transformed using SLIC (Achanta et al., 2012) into graphs where each node represents a cluster of perceptually similar pixels, or superpixels. The task is to classify each image using its superpixel graph representation. The ZINC dataset contains graphs representing molecules, where each node is an atom. The task is to regress a molecular property (constrained solubility (Jin et al., 2018)) given the graph representation of the molecule. Nodes in graphs of the REDDIT-BINARY dataset represent users of a Reddit thread, with edges drawn between a pair of nodes if they interacted. This dataset contains graphs of two types of communities: question-answer threads and discussion threads. The task is to determine if a given graph is from a question-answer thread or a discussion thread.\nWe use standard splits for MNIST, CIFAR-10 and ZINC. For citation datasets (Cora and Citeseer), we use the splits used by Kipf & Welling (2017). For REDDIT-BINARY we use 10-fold cross validation." }, { "heading": "A.3 QUANTIZATION IMPLEMENTATIONS", "text": "In section 4.1 we analyse different readily available quantization implementations and how they impact QAT results. First, vanilla STE, which is the reference STE (Bengio et al., 2013) that lets the gradients pass unchanged; and gradient clipping (GC), which clips the gradients based on the maximum representable value for a given quantization level. In other words, GC limits gradients if the tensor's magnitudes are outside the $[q_{min}, q_{max}]$ range.\n$$x_{min} = \begin{cases} \min(X) & \text{if } step = 0 \\ \min(x_{min}, X) & \text{otherwise} \end{cases} \quad (1)$$\n$$x_{min} = \begin{cases} \min(X) & \text{if } step = 0 \\ (1 - c)\,x_{min} + c\,\min(X) & \text{otherwise} \end{cases} \quad (2)$$\nThe quantization modules keep track of the input tensor's min and max values, $x_{min}$ and $x_{max}$, which are then used to compute $q_{min}$, $q_{max}$, zero-point and scale parameters. For both vanilla STE and GC, we study two popular ways of keeping track of these statistics: min/max, which tracks the min/max tensor values observed over the course of training; and momentum, which computes moving averages of those statistics during training. The update rules for $x_{min}$ for STE min/max and STE momentum are presented in eq. (1) and eq. (2) respectively, where X is the tensor to be quantized and c is the momentum hyperparameter, which in all our experiments is set to its default of 0.01. Equivalent rules apply when updating $x_{max}$ (omitted).\nFor stochastic QAT we followed the implementation described in Fan et al. (2020), where at each training step a binary mask sampled from a Bernoulli distribution is used to specify which elements of the weight tensor will be quantized and which will be left at full precision. We experimented with block sizes larger than one (a block size of one corresponds to a single scalar), but this often resulted in a severe drop in performance. All the reported results use a block size of one.
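The two tracking rules in eqs. (1) and (2) translate directly to code; a minimal sketch (our illustration, with function names of our choosing):\nimport torch\n\ndef update_minmax(x_min, X, step):\n    # Eq. (1): running minimum over every tensor observed so far.\n    return X.min() if step == 0 else torch.minimum(x_min, X.min())\n\ndef update_momentum(x_min, X, step, c=0.01):\n    # Eq. (2): exponential moving average of the per-batch minimum, with\n    # momentum c (0.01 in all of the paper's experiments).\n    return X.min() if step == 0 else (1 - c) * x_min + c * X.min()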
" }, { "heading": "A.4 DEGREE-QUANT AND GRAPH LEVEL SUMMARIZATION", "text": "The percentile operation in our quantization scheme remains important for summarizing the graph when doing graph-level tasks, such as graph regression (Zinc) or graph classification (MNIST, CIFAR10 and REDDIT-BINARY). Since the number of nodes in each input graph is not constant, this can cause the summarized representation produced from the final graph layer to have a more tailed distribution than would be seen with other types of architectures (e.g. CNN). Adding the percentile operation reduces the impact of these extreme tails in the fully connected graph-summarization layers, thereby increasing overall performance. The arguments regarding weight update accuracy also still apply, as the $\partial L / \partial h^{(i)}_{l+1}$ term in the equations for the GCN and GIN should be more accurate compared to when the activations are always quantized before the summarization. This phenomenon is also noted by Fan et al. (2020).\nA.5 IMPLEMENTATION ADVICE\nWe provide details that will be useful for others working in the area, including suggestions that should boost the performance of our results and accelerate training. We release code on GitHub; this code is a clean implementation of the paper, suitable for users in downstream works." }, { "heading": "A.5.1 QUANTIZATION SETUP", "text": "As our work studies the pitfalls of quantization for GNNs, we were more aggressive in our implementation than is absolutely necessary: everything (where reasonably possible) in our networks is quantized. In practice, this leaves low-hanging fruit for improvements in accuracy:\n• Not quantizing the final layer (as is common practice for CNNs and Transformers) helps with accuracy, especially at INT4. A similar practice at the first layer will also be useful.\n• Using higher precision for the "summarization" stages of the model, which contributes little towards the runtime in most cases.\n• Taking advantage of mixed precision: since the benefits of quantization are primarily in the message passing phase (discussed below), one technique to boost accuracy is to only make the messages low precision.\nWe advise choosing a more realistic (less aggressive) convention than used in this work. The first two items would be appropriate." }, { "heading": "A.5.2 RELATIVE VALUE OF PERCENTILES COMPARED TO PROTECTIVE MASKING", "text": "There are two components to our proposed technique: stochastic, topology-aware masking and percentile-based range observers for quantizers. We believe that percentiles provide more immediate value, especially at INT4. We find that they are useful purely from the perspective of stabilizing the optimization and reducing the sensitivity to hyperparameters.\nHowever, adding the masking does improve performance further. This is evident from table 9. In fact, performance may be degraded slightly when percentiles are also applied: this can be observed by comparing table 9 to the main results in the paper, table 2." }, { "heading": "A.5.3 PERCENTILES", "text": "The key downside with applying percentiles for range observers is that the operation can take significant time. Training with DQ is slower than before—however, since there is less sensitivity to hyperparameters, fewer runs end up being needed. We are confident that an effective way to speed up this operation is to use sampling. We expect 10% of the data should be adequate; however, we believe that even 1% of the data may be sufficient (dataset and model dependent). However, we have not evaluated this setup in the paper; it is provided in the code release for experimentation.
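A sketch of the sampling idea described above (illustrative only; the released code may implement it differently):\nimport torch\n\ndef sampled_percentile_range(x, fraction=0.001, sample_ratio=0.1):\n    # Estimate the clipped quantization range from a random subsample of the\n    # tensor instead of sorting every value, trading a little accuracy for a\n    # large speed-up on big tensors.\n    flat = x.detach().flatten().float()\n    n = max(1, int(flat.numel() * sample_ratio))\n    idx = torch.randint(0, flat.numel(), (n,))\n    sample = flat[idx]\n    lo = torch.quantile(sample, fraction)\n    hi = torch.quantile(sample, 1.0 - fraction)\n    return lo, hi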
A.5.4 IMPROVING ON PERCENTILES\nWe believe that it is possible to significantly boost the performance of GNN quantization by employing a learned step size approach. Although we used percentiles in this paper to illustrate the range-precision trade-off for GNNs, we expect that learning the ranges will lead to better results. This approach, pioneered by works such as Esser et al. (2020), has been highly effective on CNNs even down to 2-bit quantization.\nAnother approach would be to use robust quantization: the ideas in these works are to reduce the impact of changing quantization ranges, i.e. making the architecture more robust to quantization. Works in this area include Alizadeh et al. (2020) and Shkolnik et al. (2020).\nA.5.5 IMPROVING LATENCY\nThe slowest step of GNN inference is typically the sparse operations. It is therefore best to minimize the sizes of the messages between nodes, i.e. quantize the message phase most aggressively. This makes the biggest impact on CPUs, which are dependent on caches to obtain good performance.\nWe evaluated our code on CPU using Numpy and Scipy routines. For the GPU, we used implementations from PyTorch and PyTorch Geometric and lightly modified them to support INT8 where necessary. These results, while useful for illustrating the benefits of quantization, are by no means optimal: we did not devote significant time to improving latency. We believe better results can be obtained by taking advantage of techniques such as cache blocking or kernel fusion." }, { "heading": "A.5.6 PITFALLS", "text": "Training these models can be highly unstable: some experiments in the paper had standard deviations as large as 18%. We observed this to affect citation network experiments to the extent that they would not converge on GPUs: all these experiments had to be run on CPUs." }, { "heading": "A.6 DEGRADATION STUDIES", "text": "Figures 7 and 8 show the results of the ablation study conducted in section 5 for GCN and GIN. We observe that GCN is more tolerant to INT4 quantization than other architectures. GIN, however, requires accurate representations after the update stage, and, like GAT, heavily suffers from further quantization. The idea of performing different stages of inference at different precisions has been proposed, although it is uncommon (Wang et al., 2018).\nQuantization       Model  REDDIT-BIN ↑\nRef. (FP32)        GIN    92.2 ± 2.3\nOurs (FP32)        GIN    92.0 ± 1.5\nDQ-INT8 (0.0, 0.1) GIN    91.8 ± 2.3\nDQ-INT8 (0.1, 0.2) GIN    90.1 ± 2.5\nDQ-INT8 (0.2, 0.2) GIN    89.0 ± 3.0\nDQ-INT8 (0.2, 0.3) GIN    88.1 ± 3.0\nTable 8: Final test accuracies for FP32 and DQ-INT8 models whose validation loss curves are shown in fig. 10.\n[Figure: graph summarization and output stage for graph-level tasks, using DQ and using nQAT]" }, { "heading": "B CODE LISTINGS", "text": "Our code depends on PyTorch Geometric (Fey & Lenssen, 2019). These snippets should be compatible with Python 3.7 and PyTorch Geometric version 1.4.3. You can see the full code on GitHub: https://github.com/camlsys/degree-quant.\nB.1 MASK GENERATION\nimport torch\nfrom torch_geometric.data import Batch\nfrom torch_geometric.utils import degree\n\n\nclass ProbabilisticHighDegreeMask:\n    def __init__(self, low_prob, high_prob, per_graph=True):\n        self.low_prob = low_prob\n        self.high_prob = high_prob\n        self.per_graph = per_graph\n\n    def _process_graph(self, graph):\n        # Note that:\n        # 1. The probability of being masked increases as the indegree increases\n        # 2. All nodes with the same indegree have the same bernoulli p
        # 3. You can set this such that all nodes have some probability of\n        #    being masked\n        n = graph.num_nodes\n        indegree = degree(graph.edge_index[1], n, dtype=torch.long)\n        counts = torch.bincount(indegree)\n\n        step_size = (self.high_prob - self.low_prob) / n\n        indegree_ps = counts * step_size\n        indegree_ps = torch.cumsum(indegree_ps, dim=0)\n        indegree_ps += self.low_prob\n        graph.prob_mask = indegree_ps[indegree]\n\n        return graph\n\n    def __call__(self, data):\n        if self.per_graph and isinstance(data, Batch):\n            graphs = data.to_data_list()\n            processed = []\n            for g in graphs:\n                g = self._process_graph(g)\n                processed.append(g)\n            return Batch.from_data_list(processed)\n        else:\n            return self._process_graph(data)\n\n\ndef evaluate_prob_mask(data):\n    return torch.bernoulli(data.prob_mask).to(torch.bool)" }, { "heading": "B.2 MESSAGE PASSING WITH DEGREE-QUANT", "text": "Here we provide code to implement the layers as used by our proposal. These are heavily based on the classes provided by PyTorch Geometric, with only minor modifications to insert the quantization steps where necessary. The normal quantized versions are similar, except without any concept of high/low masking.\nimport torch\nimport torch.nn as nn\n\n\nclass MessagePassingMultiQuant(nn.Module):\n    \"\"\"This class is a lightweight modification of the default PyTorch\n    Geometric MessagePassing class\"\"\"\n\n    # irrelevant methods removed\n\n    def propagate(self, edge_index, mask, size=None, **kwargs):\n        # some lines skipped ...\n        msg = self.message(**msg_kwargs)\n        if self.training:\n            # This is for the masking of messages:\n            edge_mask = torch.index_select(mask, 0, edge_index[0])\n            out = torch.empty_like(msg)\n            out[edge_mask] = self.mp_quantizers[\"message_high\"](msg[edge_mask])\n            out[~edge_mask] = self.mp_quantizers[\"message_low\"](\n                msg[~edge_mask]\n            )\n        else:\n            out = self.mp_quantizers[\"message_low\"](msg)\n\n        aggr_kwargs = self.__distribute__(self.__aggr_params__, kwargs)\n        aggrs = self.aggregate(out, **aggr_kwargs)\n        if self.training:\n            out = torch.empty_like(aggrs)\n            out[mask] = self.mp_quantizers[\"aggregate_high\"](aggrs[mask])\n            out[~mask] = self.mp_quantizers[\"aggregate_low\"](aggrs[~mask])\n        else:\n            out = self.mp_quantizers[\"aggregate_low\"](aggrs)\n\n        update_kwargs = self.__distribute__(self.__update_params__, kwargs)\n        updates = self.update(out, **update_kwargs)\n        if self.training:\n            out = torch.empty_like(updates)\n            out[mask] = self.mp_quantizers[\"update_high\"](updates[mask])\n            out[~mask] = self.mp_quantizers[\"update_low\"](updates[~mask])\n        else:\n            out = self.mp_quantizers[\"update_low\"](updates)\n\n        return out\nB.2.1 GCN\nclass GCNConvMultiQuant(MessagePassingMultiQuant):\n    # Some methods missed...\n    def forward(self, x, edge_index, mask, edge_weight=None):\n        # quantizing input\n        if self.training:\n            x_q = torch.empty_like(x)\n            x_q[mask] = self.layer_quantizers[\"inputs_high\"](x[mask])\n            x_q[~mask] = self.layer_quantizers[\"inputs_low\"](x[~mask])\n        else:\n            x_q = self.layer_quantizers[\"inputs_low\"](x)\n\n        # quantizing layer weights\n        w_q = self.layer_quantizers[\"weights_low\"](self.weight)\n        if self.training:\n            x = torch.empty((x_q.shape[0], w_q.shape[1])).to(x_q.device)\n            x_tmp = torch.matmul(x_q, w_q)\n            x[mask] = self.layer_quantizers[\"features_high\"](x_tmp[mask])\n            x[~mask] = self.layer_quantizers[\"features_low\"](x_tmp[~mask])\n        else:\n            x = self.layer_quantizers[\"features_low\"](torch.matmul(x_q, w_q))\n\n        if self.normalize:\n            edge_index, norm = self.norm(\n                edge_index,\n                x.size(self.node_dim),\n                edge_weight,\n                self.improved,\n                x.dtype,\n            )\n        else:\n            norm = edge_weight\n\n        norm = self.layer_quantizers[\"norm\"](norm)\n        return self.propagate(edge_index, x=x, norm=norm, mask=mask)" } ]
2021
DEGREE-QUANT: QUANTIZATION-AWARE TRAINING FOR GRAPH NEURAL NETWORKS
SP:031cbb9fd369d00fa867901cf650c777d356d853
[ "Cluster-former is the latest proposal for enabling transformers to deal with long input sequences. Such sequences are particularly problematic for problems like question answering, QA, (or summarization), where the context can be arbitrarily long, and effectively open-ended when the setup includes a context retrieval component (e.g., as in OpenQA). Cluster-Former combines local information by encoding sequence chunks separately with a sliding window, then injects clustering layers, that use k-means to compute centroids to cluster hidden states and capture global information. The approach yields state-of-the-art, and top-of-leaderboard, results on Natural Questions (long answers). " ]
Transformer has become ubiquitous in the deep learning field. One of the key ingredients that destined its success is the self-attention mechanism, which allows fully-connected contextual encoding over input tokens. However, despite its effectiveness in modeling short sequences, self-attention suffers when handling inputs with extreme long-range dependencies, as its complexity grows quadratically w.r.t. the sequence length. Therefore, long sequences are often encoded by Transformer in chunks using a sliding window. In this paper, we propose Cluster-Former, a novel clustering-based sparse Transformer to perform attention across chunked sequences. The proposed framework is pivoted on two unique types of Transformer layer: Sliding-Window Layer and Cluster-Former Layer, which encode local sequence information and global context jointly and iteratively. This new design allows information integration beyond local windows, which is especially beneficial for question answering (QA) tasks that rely on long-range dependencies. Experiments show that Cluster-Former achieves state-of-the-art performance on several major QA benchmarks.
[]
[ { "authors": [ "Joshua Ainslie", "Santiago Ontanon", "Chris Alberti", "Philip Pham", "Anirudh Ravula", "Sumit Sanghai" ], "title": "Etc: Encoding long and structured data in transformers", "venue": "In Empirical Methods in Natural Language Processing (EMNLP),", "year": 2020 }, { "authors": [ "Chris Alberti", "Kenton Lee", "Michael Collins" ], "title": "A bert baseline for the natural questions", "venue": "arXiv preprint arXiv:1901.08634,", "year": 2019 }, { "authors": [ "Akari Asai", "Kazuma Hashimoto", "Hannaneh Hajishirzi", "Richard Socher", "Caiming Xiong" ], "title": "Learning to retrieve reasoning paths over wikipedia graph for question answering", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Iz Beltagy", "Matthew E Peters", "Arman Cohan" ], "title": "Longformer: The long-document transformer", "venue": "arXiv preprint arXiv:2004.05150,", "year": 2020 }, { "authors": [ "Danqi Chen", "Adam Fisch", "Jason Weston", "Antoine Bordes" ], "title": "Reading Wikipedia to answer opendomain questions", "venue": "In Association for Computational Linguistics (ACL),", "year": 2017 }, { "authors": [ "Rewon Child", "Scott Gray", "Alec Radford", "Ilya Sutskever" ], "title": "Generating long sequences with sparse transformers", "venue": "arXiv preprint arXiv:1904.10509,", "year": 2019 }, { "authors": [ "Eunsol Choi", "Daniel Hewlett", "Jakob Uszkoreit", "Illia Polosukhin", "Alexandre Lacoste", "Jonathan Berant" ], "title": "Coarse-to-fine question answering for long documents", "venue": "In Association for Computational Linguistics (ACL),", "year": 2017 }, { "authors": [ "Krzysztof Choromanski", "Valerii Likhosherstov", "David Dohan", "Xingyou Song", "Jared Davis", "Tamas Sarlos", "David Belanger", "Lucy Colwell", "Adrian Weller" ], "title": "Masked language modeling for proteins via linearly scalable long-context transformers", "venue": null, "year": 2006 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "In North American Chapter of the Association for Computational Linguistics (NAACL),", "year": 2019 }, { "authors": [ "Bhuwan Dhingra", "Kathryn Mazaitis", "William W Cohen" ], "title": "Quasar: Datasets for question answering by search and reading", "venue": "arXiv preprint arXiv:1707.03904,", "year": 2017 }, { "authors": [ "Yufei Ding", "Yue Zhao", "Xipeng Shen", "Madanlal Musuvathi", "Todd Mytkowicz" ], "title": "Yinyang k-means: A drop-in replacement of the classic k-means with consistent speedup", "venue": "In International conference on machine learning (ICML),", "year": 2015 }, { "authors": [ "Matthew Dunn", "Levent Sagun", "Mike Higgins", "V Ugur Guney", "Volkan Cirik", "Kyunghyun Cho" ], "title": "Searchqa: A new q&a dataset augmented with context from a search engine", "venue": "arXiv preprint arXiv:1704.05179,", "year": 2017 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Computation,", "year": 1997 }, { "authors": [ "Angelos Katharopoulos", "Apoorv Vyas", "Nikolaos Pappas", "François Fleuret" ], "title": "Transformers are rnns: Fast autoregressive transformers with linear attention", "venue": "arXiv preprint arXiv:2006.16236,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Nikita Kitaev", "Łukasz Kaiser", "Anselm Levskaya" ], "title": "Reformer: The efficient transformer", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Tom Kwiatkowski", "Jennimaria Palomaki", "Olivia Redfield", "Michael Collins", "Ankur Parikh", "Chris Alberti", "Danielle Epstein", "Illia Polosukhin", "Jacob Devlin", "Kenton Lee" ], "title": "Natural questions: a benchmark for question answering research. 
Transactions of the Association for Computational Linguistics (TACL), 2019", "venue": null, "year": 2019 }, { "authors": [ "Zhenzhong Lan", "Mingda Chen", "Sebastian Goodman", "Kevin Gimpel", "Piyush Sharma", "Radu Soricut" ], "title": "Albert: A lite bert for self-supervised learning of language representations", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Mike Lewis", "Yinhan Liu", "Naman Goyal", "Marjan Ghazvininejad", "Abdelrahman Mohamed", "Omer Levy", "Ves Stoyanov", "Luke Zettlemoyer" ], "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "venue": null, "year": 2019 }, { "authors": [ "Yankai Lin", "Haozhe Ji", "Zhiyuan Liu", "Maosong Sun" ], "title": "Denoising distantly supervised opendomain question answering", "venue": "In Association for Computational Linguistics (ACL),", "year": 2018 }, { "authors": [ "Dayiheng Liu", "Yeyun Gong", "Jie Fu", "Yu Yan", "Jiusheng Chen", "Daxin Jiang", "Jiancheng Lv", "Nan Duan" ], "title": "Rikinet: Reading wikipedia pages for natural question answering", "venue": "In Association for Computational Linguistics (ACL),", "year": 2020 }, { "authors": [ "Yinhan Liu", "Myle Ott", "Naman Goyal", "Jingfei Du", "Mandar Joshi", "Danqi Chen", "Omer Levy", "Mike Lewis", "Luke Zettlemoyer", "Veselin Stoyanov" ], "title": "Roberta: A robustly optimized bert pretraining approach", "venue": null, "year": 1907 }, { "authors": [ "Matt Mahoney" ], "title": "Large text compression benchmark", "venue": "URL: http://www. mattmahoney. net/text/text. html,", "year": 2011 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer sentinel mixture models", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Lin Pan", "Rishav Chakravarti", "Anthony Ferritto", "Michael Glass", "Alfio Gliozzo", "Salim Roukos", "Radu Florian", "Avirup Sil" ], "title": "Frustratingly easy natural question answering", "venue": null, "year": 1909 }, { "authors": [ "Jack W Rae", "Anna Potapenko", "Siddhant M Jayakumar", "Timothy P Lillicrap" ], "title": "Compressive transformers for long-range sequence modelling", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Aurko Roy", "Mohammad Saffar", "Ashish Vaswani", "David Grangier" ], "title": "Efficient content-based sparse attention with routing transformers. 
Transactions of the Association for Computational Linguistics (TACL), 2020", "venue": null, "year": 2020 }, { "authors": [ "Yi Tay", "Anh Tuan Luu", "Siu Cheung Hui", "Jian Su" ], "title": "Densely connected attention propagation for reading comprehension", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Yi Tay", "Dara Bahri", "Donald Metzler", "Da-Cheng Juan", "Zhe Zhao", "Che Zheng" ], "title": "Synthesizer: Rethinking self-attention in transformer models", "venue": "arXiv preprint arXiv:2005.00743,", "year": 2020 }, { "authors": [ "Yi Tay", "Dara Bahri", "Liu Yang", "Donald Metzler", "Da-Cheng Juan" ], "title": "Sparse sinkhorn attention", "venue": "arXiv preprint arXiv:2002.11296,", "year": 2020 }, { "authors": [ "Yi Tay", "Mostafa Dehghani", "Dara Bahri", "Donald Metzler" ], "title": "Efficient transformers: A survey", "venue": "arXiv preprint arXiv:2009.06732,", "year": 2020 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Shuohang Wang", "Mo Yu", "Xiaoxiao Guo", "Zhiguo Wang", "Tim Klinger", "Wei Zhang", "Shiyu Chang", "Gerry Tesauro", "Bowen Zhou", "Jing Jiang" ], "title": "Reinforced ranker-reader for open-domain question answering", "venue": "In AAAI Conference on Artificial Intelligence (AAAI),", "year": 2018 }, { "authors": [ "Sinong Wang", "Belinda Li", "Madian Khabsa", "Han Fang", "Hao Ma" ], "title": "Linformer: Self-attention with linear complexity", "venue": "arXiv preprint arXiv:2006.04768,", "year": 2020 }, { "authors": [ "Zhiguo Wang", "Patrick Ng", "Xiaofei Ma", "Ramesh Nallapati", "Bing Xiang" ], "title": "Multi-passage bert: A globally normalized bert model for open-domain question answering", "venue": "In Empirical Methods in Natural Language Processing (EMNLP),", "year": 2019 }, { "authors": [ "Manzil Zaheer", "Guru Guruganesh", "Avinava Dubey", "Joshua Ainslie", "Chris Alberti", "Santiago Ontanon", "Philip Pham", "Anirudh Ravula", "Qifan Wang", "Li Yang" ], "title": "Big bird: Transformers for longer sequences", "venue": null, "year": 2007 }, { "authors": [ "Junru Zhou", "Hai Zhao" ], "title": "Head-driven phrase structure grammar parsing on penn treebank", "venue": "In Association for Computational Linguistics (ACL),", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Long-range contextual understanding has proven critical in many natural language processing (NLP) tasks. For example, the relevant context for correctly answering an open-domain question can arch over thousands of words. Encoding long sequences via deep neural networks, however, has remained an expensive and challenging task due to high demand on training time and GPU memory. Traditional sequence modeling methods (Hochreiter & Schmidhuber, 1997) encode long sequences in a chronological order, which suffers high latency. In the place of sequential encoding, recent models such as Transformer (Vaswani et al., 2017) use simultaneous self-attention over the entire input instead, which has been successfully adopted in many NLP tasks such as textual entailment (Devlin et al., 2019), dependency parsing (Zhou & Zhao, 2019), and summarization (Lewis et al., 2019). A caveat with Transformer though is that building full connections over long sequences translates to quadratic growth on memory demand and computational complexity w.r.t. sequence length.\nOne way to efficiently encode long sequences is to first chunk a sequence into much shorter ones with a sliding window, then build connections between the shorter sequences (Figure 1(a)). For example, Child et al. (2019), Beltagy et al. (2020) and Zaheer et al. (2020) apply sparse attention to chunked sequences in hand-designed patterns in order to gather information from the chunks (Figure 1(b)). Choi et al. (2017) and Wang et al. (2019) first use a simpler model to filter chunked sequences, then process selected sequences with fully-connected self-attention. Rae et al. (2019) makes use of the shared memory of chunked sequences to build connections between them. However, these methods cannot encode long-range dependencies with as much flexibility or accuracy as fully-connected selfattention, due to their dependency on hand-designed patterns.\nRecently, several studies (Kitaev et al., 2020; Tay et al., 2020b) propose to further improve the sparse attention mechanism by hashing or sorting the hidden states into different buckets (Figure 1(c)). These works mainly explore tasks with relatively short sequences, such as sentence-level Machine Translation (MT), where the number of hashing vectors is relatively small (less than 16 in Kitaev et al. (2020)), allowing randomly initialized hashing vectors to hash hidden states into correct buckets. However, how to use hashing-based attention in the context of long sequences (e.g.,, up to thousands of words) is still an unexplored territory.\nOur proposed framework for efficient long sequence encoding, Cluster-Former, marries both sliding-window and hashing-based methods to achieve effective local and long-range dependency encoding. Cluster-Former consists of two types of encoding layer. The first one (noted as SlidingWindow Layer) focuses on extracting local information within a sliding window. It applies Transformer to the hidden states of each chunked sequence independently, as shown in Figure 1(a). The other one (noted as Cluster-Former Layer) learns to encode global information beyond the initial chunked sequences. Specifically, we first apply clustering to the input hidden states so that similar hidden states are assigned to the same cluster, as shown in Figure 1(d). The clustered and sorted input is then divided uniformly into chunks, each encoded by a Transformer layer. 
Note that to make model training more efficient, the cluster centroids are not computed online but updated periodically (every epoch or every few epochs). We accumulate the hidden states from the layer prior to the Cluster-Former layer in a memory bank, and apply the K-Means algorithm to form cluster centroids during each update cycle. Compared to the previously discussed sparse attention based on pre-selected positions (Figure 1(b)) or randomly-initialized hashing vectors (Figure 1(c)), experimental results show that our method can encode dependencies across chunked sequences more effectively.

Our contributions can be summarized as follows. (i) We propose Cluster-Former, a novel approach to capturing long-range dependencies more effectively than locality-sensitive hashing methods. (ii) We propose a new Transformer-based framework to process long sequences by combining Sliding-Window and Cluster-Former layers to extract both local and global contextual information. (iii) Our model achieves the best performance on the question answering datasets Natural Questions (long answer), SearchQA, and Quasar-T." }, { "heading": "2 RELATED WORK", "text": "Efficient Transformers With Transformer models growing larger and larger, how to handle longer sequences arises as a critical challenge. Many works have been proposed to improve the computational and memory efficiency of Transformers, including Sparse Transformer (Child et al., 2019), Routing Transformer (Roy et al., 2020), Reformer (Kitaev et al., 2020), Sinkhorn Transformer (Tay et al., 2020b), Longformer (Beltagy et al., 2020), ETC (Ainslie et al., 2020), Synthesizer (Tay et al., 2020a), Performer (Choromanski et al., 2020), Linformer (Wang et al., 2020), Linear Transformer (Katharopoulos et al., 2020), and BigBird (Zaheer et al., 2020). Tay et al. (2020c) provided an excellent literature survey on this emerging topic. Our method falls into the setting of learnable sparse-attention patterns, together with Routing Transformer, Reformer and Sinkhorn Transformer. Our method is closest to Routing Transformer (Roy et al., 2020), which also uses cluster centroids to learn patterns; however, we target quite different tasks (language modeling vs. question answering), which leads to significant differences in framework design. Moreover, our cluster centroids are updated in a very different way (online exponential moving averages of centroids vs. periodic centroid updates via K-Means).

Long Sequences in Question Answering For tasks such as open-domain question answering (Chen et al., 2017), a large volume of documents or paragraphs is usually retrieved to infer the answer, yielding extremely long context. Despite the fact that state-of-the-art NLP models are capable of extracting answers amid complex context, they still struggle with extremely long input sequences. Recent advances that advocate the use of large-scale pre-trained models (Lewis et al., 2019; Liu et al., 2019; Lan et al., 2020) for question answering make this problem more prominent, due to their tremendous memory consumption. To process long sequences, the most widely-used method is to first use a lightweight model to filter out redundant text, then encode the remaining sequences with a more sophisticated sliding-window-based model. Chen et al. (2017) integrated bi-gram features into Information Retrieval (IR) methods to retrieve related documents more accurately. Wang et al. (2018) trained a paragraph selector, using whether the entire system obtains the correct answer as the reward.
Lin et al. (2018) proposed to use a paragraph ranking model to curate the data required for training reading comprehension models. Wang et al. (2019) trained a ranker to merge paragraphs for multi-passage reasoning. Asai et al. (2020) trained a recurrent retriever to select paragraphs for multi-hop question answering. Besides the above methods, directly applying Efficient Transformers to process long sequences in question answering is another option. In this paper, we focus on this direction by directly training our Cluster-Former on the long context, without using a lightweight model for context filtering." }, { "heading": "3 PROPOSED APPROACH", "text": "The proposed framework for handling long sequences pivots on two types of Transformer layer: (i) the Sliding-Window Layer; and (ii) the Cluster-Former Layer. The former focuses on encoding local sequence information, while the latter encodes global context and is always built on top of the former. An overview of the two layers is illustrated in Figure 2." }, { "heading": "3.1 SLIDING-WINDOW LAYER", "text": "Although our focus is on capturing long-range dependencies for global context, local information also plays a critical role in knowledge propagation. Therefore, in the lower section of our network, we adopt the traditional sliding-window encoding mechanism. A sliding window segments a long sequence X into short, overlapping ones with window size $l$ and stride $m$, as illustrated in Figure 2(a). Note that in this paper we focus on question answering tasks, for which we concatenate the question $\mathbf{Q}$ with each sequence chunked from the document:

$$\mathbf{H}^0_k = \left[\mathbf{Q};\ \mathbf{X}[mk : mk + l]\right], \tag{1}$$

where $\mathbf{Q} \in \mathbb{R}^{q \times d}$ denotes the question embeddings of a given QA task, $q$ is the number of tokens in the question, $\mathbf{X} \in \mathbb{R}^{x \times d}$ holds the embeddings of the whole context, and $x$ is the number of tokens in the context. $k$ is the ID of the chunked sequence, $l$ is the window size, and $m$ is the stride of the sliding window. $[\mathrm{idx}_1 : \mathrm{idx}_2]$ selects the rows of a matrix between indices $\mathrm{idx}_1$ and $\mathrm{idx}_2$, and $[\cdot\,;\cdot]$ concatenates matrices along the rows. We first use a Transformer to encode each sequence in the sliding window:

$$\mathbf{H}^{n+1}_k = \mathrm{Transformer}(\mathbf{H}^n_k), \tag{2}$$

where $\mathbf{H}^{n+1}_k \in \mathbb{R}^{(q+l) \times d}$ is the output of the Transformer on the $k$-th sequence in the $n$-th layer. However, it is not yet the final output of the $n$-th layer. As we expect neighbouring sequences to share useful information in their hidden states as well, we always set $m < l$ to allow overlap between sequences. We use the mean of the Transformer hidden states at the tokens overlapping between windows as the final outputs. To merge the representations from the $(k-1)$-th sequence:

$$\mathbf{H}^{n+1}_k[q : q+l-m] \mathrel{+}= \mathbf{H}^{n+1}_{k-1}[q+m : \mathrm{end}], \qquad \mathbf{H}^{n+1}_k[q : q+l-m] \mathrel{/}= 2,$$

and to merge the representations from the $(k+1)$-th sequence:

$$\mathbf{H}^{n+1}_k[q+m : \mathrm{end}] \mathrel{+}= \mathbf{H}^{n+1}_{k+1}[q : q+l-m], \qquad \mathbf{H}^{n+1}_k[q+m : \mathrm{end}] \mathrel{/}= 2, \tag{3}$$

where $\mathrel{+}=$ adds matrices in place and $\mathrel{/}=$ divides a matrix by a scalar in place. The merged hidden states $\mathbf{H}^{n+1}_k \in \mathbb{R}^{(q+l) \times d}$ are the final outputs of the $n$-th layer. If the next layer is a Cluster-Former layer, the output hidden states $\mathbf{H}^{n+1}_k$ of this layer are saved into the memory bank for computing the cluster centroids."
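To make the chunk-and-merge procedure of Eqs. (1)-(3) concrete, here is a minimal PyTorch-style sketch. It is illustrative rather than the authors' implementation: `transformer_layer` stands in for any module mapping a (q+l) × d state to the same shape, every chunk is assumed to be full length, and overlaps are averaged against the pre-merge neighbour states for clarity.

```python
import torch

def sliding_window_layer(Q, X, transformer_layer, l=256, m=224):
    """Sketch of Eqs. (1)-(3): chunk X with window l and stride m, prepend the
    question Q to every chunk, encode each chunk, then average the hidden
    states of the l - m context tokens shared by neighbouring windows."""
    q = Q.size(0)
    K = (X.size(0) - l) // m + 1
    H = []
    for k in range(K):
        chunk = torch.cat([Q, X[m * k : m * k + l]], dim=0)  # Eq. (1)
        H.append(transformer_layer(chunk))                   # Eq. (2)
    out = [h.clone() for h in H]
    o = l - m  # number of overlapping context tokens between windows
    for k in range(K):                                       # Eq. (3)
        if k > 0:      # merge with the tail of chunk k-1
            out[k][q : q + o] = (H[k][q : q + o] + H[k - 1][q + m :]) / 2
        if k + 1 < K:  # merge with the head of chunk k+1
            out[k][q + m :] = (H[k][q + m :] + H[k + 1][q : q + o]) / 2
    return out
```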
}, { "heading": "3.2 CLUSTER-FORMER LAYER", "text": "We introduce a Cluster-Former layer to add global representational power to the Transformer beyond sliding windows. An in-depth visualization of the layer is given in Figure 2(b).

The input of the Cluster-Former layer comes from the hidden states of the prior layer (in our case a Sliding-Window layer). After merging the overlaps between sequence chunks, the input of this layer is defined as:

$$\bar{\mathbf{H}}^n = \left[\mathbf{H}^n_0[0 : q+m];\ \ldots;\ \mathbf{H}^n_k[0 : q+m]\right], \tag{4}$$

where $\bar{\mathbf{H}}^n \in \mathbb{R}^{(q\lceil x/m \rceil + x) \times d}$ holds the hidden states to cluster and $x$ is the number of tokens in the context.

As hidden states with larger cosine similarity are more likely to have higher attention weights, we build sparse self-attention only on the hidden states within the same cluster. In this work, we use K-Means as the clustering method for simplicity; more advanced clustering algorithms could potentially yield better performance. Since running K-Means on the fly in each training iteration is computationally expensive, we re-compute the cluster centroids at low frequency (every epoch or every few epochs).

In addition, to avoid dramatic changes in the cluster centroids due to limited hidden state inputs, we maintain a memory bank of the most recent hidden states. The entire procedure is depicted in Algorithm 1.

Algorithm 1: Cluster Centroids Update

    Initialize Memory = Queue()
    Centroids = GetCentroids(RandomVector)

    function Train(Inputs)
        for i = 1, 2, ..., IterationNum do
            States = Sliding-Transformer(Inputs[i])
            Memory.add(States)
            while len(Memory) > M do
                Memory.pop()
            end while
            if i % ClusterUpdateFrequency == 0 then
                Centroids = GetCentroids(Memory)
            end if
            Clusters = cluster States by Centroids
            States = Cluster-Former(Clusters)
        end for
    end function

    function GetCentroids(HiddenStates)
        Centroids = K-Means(HiddenStates)
        Outputs = List()
        Outputs[1] = Centroids[1]
        for i = 2, 3, ..., ClusterNum do
            Outputs[i] = the centroid from Centroids that is closest to Outputs[i-1] and not already in Outputs
        end for
        return Outputs
    end function

Once the cluster centroids are computed, we use them directly to cluster the hidden states:

$$\mathbf{v}^n = \operatorname{argmax}\left(\frac{\bar{\mathbf{H}}^n (\mathbf{C}^n)^\top}{\|\bar{\mathbf{H}}^n\|_2\, \|\mathbf{C}^n\|_2}\right), \tag{5}$$

where $\mathbf{C}^n \in \mathbb{R}^{p \times d}$ are the cluster centroids for layer $n$ and $p$ is the pre-defined number of clusters. The $\operatorname{argmax}(\cdot)$ operates on the last dimension and assigns each input hidden state to the cluster with which it has the largest cosine similarity. $\mathbf{v}^n \in \mathbb{R}^{q\lceil x/m \rceil + x}$ contains the assigned cluster IDs of all input hidden states. Since the number of hidden states in different clusters can vary substantially, padding all clusters to the maximum length for Transformer training would significantly increase the computational cost. To make the extraction of global context more efficient, we instead greedily order the cluster centroids by nearest neighbour (measured by cosine similarity), as shown in the function GetCentroids of Algorithm 1, so that hidden states with similar cluster IDs are also close to each other.
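The centroid maintenance of Algorithm 1 and the assignment of Eq. (5) can be sketched as follows. This is a hypothetical illustration: `kmeans` stands in for any K-Means routine (the experiments use Yinyang K-Means), the memory bank is assumed to store d-dimensional hidden state vectors, and M = 100,000 follows the implementation details in Section 4.2.

```python
import torch
from collections import deque

memory = deque(maxlen=100_000)  # memory bank of recent hidden states (M = 100,000)

def update_centroids(kmeans, num_clusters):
    centroids = kmeans(torch.stack(list(memory)), num_clusters)  # (p, d)
    # Greedy nearest-neighbour ordering (function GetCentroids) so that
    # consecutive cluster IDs correspond to nearby centroids.
    order, remaining = [0], set(range(1, num_clusters))
    while remaining:
        last = centroids[order[-1]]
        nxt = min(remaining, key=lambda j: float(
            1 - torch.cosine_similarity(last, centroids[j], dim=0)))
        order.append(nxt)
        remaining.discard(nxt)
    return centroids[order]

def assign_clusters(H, centroids):
    # Eq. (5): cosine similarity between hidden states and centroids,
    # followed by an argmax over clusters.
    sim = torch.nn.functional.normalize(H, dim=-1) @ \
          torch.nn.functional.normalize(centroids, dim=-1).T
    return sim.argmax(dim=-1)
```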
Then, we can directly sort the cluster IDs of the hidden states and uniformly chunk them (with the same window size and stride $m$):

$$\mathbf{u}^n = \operatorname{argsort}(\mathbf{v}^n), \qquad \mathbf{a}^n_k = \mathbf{u}^n[mk : m(k+1)], \qquad \mathbf{E}^n_k = \bar{\mathbf{H}}^n[\mathbf{a}^n_k], \tag{6}$$

where $\operatorname{argsort}(\cdot)$ returns the indices of the input values sorted in order (equal values are ordered by the position of the corresponding hidden states), and $\mathbf{a}^n_k \in \mathbb{R}^m$ contains the chunked indices of the hidden states. $\mathbf{E}^n_k \in \mathbb{R}^{m \times d}$ is the $k$-th chunk of clustered hidden states, on which we run a Transformer to build connections beyond the words in the initial sliding window:

$$\mathbf{E}^{n+1}_k = \mathrm{Transformer}(\mathbf{E}^n_k). \tag{7}$$

After updating the hidden states, we map them back to the order before clustering:

$$\bar{\mathbf{H}}^{n+1} = [\mathbf{E}^{n+1}_0;\ \mathbf{E}^{n+1}_1;\ \ldots;\ \mathbf{E}^{n+1}_K], \qquad \bar{\mathbf{a}}^n = [\mathbf{a}^n_0;\ \mathbf{a}^n_1;\ \ldots;\ \mathbf{a}^n_K], \tag{8}$$

$$\bar{\mathbf{H}}^{n+1}[\bar{\mathbf{a}}^n] = \mathrm{clone}(\bar{\mathbf{H}}^{n+1}), \tag{9}$$

where $\bar{\mathbf{H}}^{n+1}$ is the final output hidden state of this layer and has the same word order as the input $\bar{\mathbf{H}}^n$. In experiments, we stack these two types of layer interchangeably to capture both global and local context efficiently.
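Putting Eqs. (5)-(9) together, a Cluster-Former layer amounts to assign, sort, chunk, transform, and scatter back. The sketch below makes its assumptions explicit: the number of hidden states is a multiple of the stride m, and `transformer_layer` is again a stand-in module.

```python
import torch

def cluster_former_layer(H, centroids, transformer_layer, m=224):
    """Sketch of Eqs. (5)-(9) on H of shape (N, d), with N a multiple of m."""
    sim = torch.nn.functional.normalize(H, dim=-1) @ \
          torch.nn.functional.normalize(centroids, dim=-1).T
    v = sim.argmax(dim=-1)                    # Eq. (5): cluster IDs
    u = torch.sort(v, stable=True).indices    # Eq. (6): sort by cluster ID,
                                              # ties broken by position
    out = H.clone()
    for k in range(H.size(0) // m):
        a_k = u[m * k : m * (k + 1)]          # indices of the k-th chunk
        out[a_k] = transformer_layer(H[a_k])  # Eq. (7), scattered back to the
                                              # original order, Eqs. (8)-(9)
    return out
```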
4 EXPERIMENTS

4.1 DATASETS

We evaluate our proposed approach on multiple question answering benchmarks. The statistics of all the datasets are summarized in Table 1.

Quasar-T1 (Dhingra et al., 2017): The goal of this task is to answer open-domain questions from the Trivia Challenge. All passages harvested through information retrieval can be used to answer the questions. The task requires the model to generate phrase-level answers. The evaluation metrics on this dataset are Exact Match and the F1 score of bag-of-words matching. Our evaluation tool2 comes from the SQuAD dataset.

SearchQA3 (Dunn et al., 2017): The setting of this dataset is the same as Quasar-T, except that the questions are sourced from Jeopardy! instead.

1https://github.com/bdhingra/quasar 2https://rajpurkar.github.io/SQuAD-explorer/ 3https://github.com/nyu-dl/dl4ir-searchQA

Natural Questions4 (Kwiatkowski et al., 2019): This task aims to answer questions based on a given Wikipedia document, and has two settings. (i) Long answer: select a paragraph from the Wikipedia document that answers the question, if any. (ii) Short answer: extract an answer phrase from the document if the document contains the answer. As the given document may not contain the answer, we can either predict an answer or predict no answer. The evaluation metric on this dataset is the F1 score, where true positives are exactly correct answers, false positives are incorrect answer predictions, and false negatives are incorrect "no answer" predictions. As the test set is hidden, we split 5% of the training set for validation, and use the original validation set for testing. We use the official tool of the dataset to evaluate our models. We also submit our best model to the leaderboard." }, { "heading": "4.2 IMPLEMENTATION DETAILS", "text": "All the models are trained on 8 Nvidia V100 GPUs. For clustering, we adopt "Yinyang K-Means" (Ding et al., 2015)5, which takes less than 5 seconds for clustering in all of our experimental settings. We set the memory size for clustering to M = 100,000 in Algorithm 1. We use the cluster centroids that perform best on the validation set for the test set experiments. We initialize our models with RoBERTa-large (Liu et al., 2019). As the number of position embeddings of RoBERTa is limited to 512, we cannot assign a different position embedding to every token. Instead, we assign the same position embeddings to each chunked sequence. The majority of our model is made up of Sliding-Window Layers, as local information is essential for QA tasks. We adopt the proposed Cluster-Former Layer in layers 15 and 20 to further capture long-range information. We set the sliding window size l to 256 and the stride m to 224, and vary the number of clusters in {64, 256, 512} to analyze its impact on the final performance. We prepend a special token to the beginning of all given/retrieved paragraphs and directly concatenate all the paragraphs into the final context sequence. Due to memory constraints, we set the maximum length to 5000 during training and 10000 during inference. During dataset finetuning, we use Adam (Kingma & Ba, 2015) to optimize the model. We set warm-up updates to 2,220, maximal updates to 22,200, the learning rate to 5×10−5, and the batch size to 160. We tune the dropout rate over {0.1, 0.15, 0.2} for all methods, including the baselines, and report the best results. The model converges within one day for all the QA datasets.

For Quasar-T and SearchQA, we predict the start and end positions of the answer. For Natural Questions, we first identify whether the question has short/long answers or not based on the mean of the first hidden state over all the chunked sequences, $\frac{1}{K}\sum_{k=1}^{K} \mathbf{H}^N_k[0]$, where $K$ is the number of chunks and $N$ is the number of layers. If answerable, we rank all the candidates for long answer selection, and predict the start and end positions of short answers. Our model submitted to the Natural Questions leaderboard is an ensemble of 3 models with 512 clusters; only these models are first trained on SQuAD 2.0 and then finetuned on the Natural Questions dataset." }, { "heading": "4.3 BASELINE", "text": "We compare our models with several strong baselines, including:

R3 (Wang et al., 2018) proposes to use reinforcement learning to jointly train a passage ranker and a reader. DS-QA (Lin et al., 2018) proposes to first use paragraph selection to filter out noisy data and then train the model on the denoised data. Multi-passage BERT (Wang et al., 2019) proposes to filter the passages and then merge multiple useful passages into one sequence, which can be encoded by BERT. DrQA (Chen et al., 2017) makes use of an attention mechanism across the question and the document for answer phrase extraction. DecAtt and DocReader (Kwiatkowski et al., 2019) is a pipeline approach that first uses a simpler model to select long answers and then a reading comprehension model to extract short answers from the long answers. BERTjoint (Alberti et al., 2019) jointly trains short and long answer extraction in a single model rather than using a pipeline approach. BERTwwm+SQuAD2 (Pan et al., 2019) makes use of multi-task learning to further boost performance. RikiNet-RoBERTa (Liu et al., 2020) proposes a dynamic paragraph dual-attention reader and a multi-level cascaded answer predictor. BigBird-ETC (Zaheer et al., 2020) makes use of a sparse attention mechanism to encode long sequences.

4https://ai.google.com/research/NaturalQuestions 5https://github.com/src-d/kmcuda

We also re-implement several strong baselines that have not previously been applied to long-context question answering tasks:

• Sliding Window: This baseline is made up entirely of Sliding-Window Layers and can only attend to local information. To make a fair comparison of long-range information collection across methods, we replace several layers of this sliding-window baseline with Sparse Attention, Locality-Sensitive Hashing, or Cluster-Former.

• Sparse Attention (Child et al., 2019): This method replaces several layers in the previous baseline by training a Transformer layer across sequences on pre-selected positions.
We run this sparse Transformer on all the hidden states at the same position across sequences, so that the output of the sparse Transformer can merge information from different sequences.

• Locality-Sensitive Hashing (Kitaev et al., 2020): This method hashes hidden states into different buckets determined by randomly-initialized hashing vectors. A Transformer layer is then applied across buckets to build sparse attention over the whole sequence. Note that this method cannot be directly used for question answering without adding Sliding-Window layers, as our QA model is initialized from RoBERTa, which only has 512 position embeddings." }, { "heading": "4.4 EXPERIMENTAL RESULTS", "text": "State-of-the-Art Results on QA Tables 2 and 3 show that our proposed method outperforms several strong baselines, thanks to its ability to encode both local and global information. Cluster-Former with 512 clusters achieves new state-of-the-art results on Quasar-T, SearchQA and Natural Questions (long answer).

Effect of Cluster-Former We also test the ability of Cluster-Former to model long-range dependencies. Note that Sparse Attention (Child et al., 2019) and Locality-Sensitive Hashing (Kitaev et al., 2020) have never been tested on question answering tasks with long context. For a fair comparison, we set layers 15 and 20 to either Sparse Attention, Locality-Sensitive Hashing, or our Cluster-Former, and the remaining layers are Sliding-Window layers.

As shown, Sparse Attention performs worse than our Cluster-Former. The loss may come from the noise introduced by pre-selected positions, whose corresponding words may not be related. We set the number of hashing vectors in Locality-Sensitive Hashing (LSH) to 64, the same as the number of clusters in Cluster-Former. LSH outperforms the baseline slightly on QA and consistently underperforms our Cluster-Former (#C=64). Overall, our Cluster-Former performs the best.

Effect of the Number of Cluster Centroids We also test the effect of different numbers of cluster centroids (C) on model performance. We observe that the model with 512 clusters works significantly better than the model with 64 clusters on most of the tasks. However, for the Natural Questions long answer setting, the improvement is marginal. This is because we mainly rely on the hidden states of the special tokens "<s>" for long answer selection, and these tokens can be assigned to the same chunk easily even with a smaller number of clusters.

Selection of Cluster-Former Layers We also analyze which layers are best suited for the Cluster-Former layer. As shown in Table 4, we conduct a hyper-parameter search and find that performance is better with at least one Cluster-Former layer among the middle layers (8-16). The worst results come from using only one Cluster-Former layer at layer 22 or 23.

Language Modeling Although we focus on QA tasks in this paper, to demonstrate the versatility of Cluster-Former we conduct additional experiments on language modeling using the Wikitext-103 (Merity et al., 2017) and Enwik8 (Mahoney, 2011) benchmarks. Implementation details are provided in the Appendix. As shown in Table 5, Cluster-Former outperforms strong state-of-the-art baselines." }, { "heading": "4.5 QUALITATIVE ANALYSIS", "text": "We perform a qualitative analysis of how the hidden states are clustered by visualizing the corresponding words and positions of the hidden states in Table 6. From the first row, we can see that the special tokens "<s>" tend to belong to the same cluster.
Note that "<s>" is the start token of each long answer candidate, and its hidden state is used for the final long answer selection. Therefore, a Transformer on this cluster can compare across the candidates to make the final prediction.

We further observe that the same types of token are more likely to appear in the same cluster. For example, the words from the second row to the fourth row cover the topics of time, stopwords, and organization & geopolitical entities.

Finally, we randomly sample a cluster and list the positions of its clustered hidden states in the last row of the table. We find that states that are far apart, such as the 50-th and 6060-th states (over 6000 tokens apart), can be in one cluster, which demonstrates the ability of Cluster-Former to detect long-range dependencies. Further, we observe that states tend to cluster in phrases. For example, we see consecutive positions such as "49, 50, 51, 52, 53, 54, 55", which likely results from the sliding-window encoding." }, { "heading": "5 CONCLUSION", "text": "In this paper, we present Cluster-Former, a new method to encode global information for long sequences. We achieve new state of the art on three question answering datasets: Quasar-T, SearchQA, and Natural Questions. Further, we observe that a larger number of clusters in Cluster-Former can lead to better performance on question answering tasks. Cluster-Former is a generic approach, and we believe that it can benefit other NLP tasks that rely on long-range dependencies as well." } ]
2,020
null
SP:31e67caf860b47d871f17848355cd91b65830f59
[ "This paper proposes an unpaired image-to-image translation method which applies a pre-trained auto-encoder and a latent feature transformer (single block) to perform iterative image transformation. A progressive training and warm-up strategy is used to settle the numerical exponentiation effects caused by powers of layers. In the testing phase, the discriminator is also used to adjust the inference time." ]
We propose a simple architecture to address unpaired image-to-image translation tasks: style or class transfer, denoising, deblurring, deblocking, etc. We start from an image autoencoder architecture with fixed weights. For each task we learn a residual block operating in the latent space, which is applied iteratively until the target domain is reached. A specific training schedule is required to alleviate the exponentiation effect of the iterations. At test time, it offers several advantages: the number of weight parameters is limited and the strength of the transformation can be modulated simply with the number of iterations. This is useful, for instance, when the type or amount of noise to suppress is not known in advance. Experimentally, we show that the performance of our model is comparable to or better than that of CycleGAN and NICE-GAN with fewer parameters.
[]
[ { "authors": [ "Eirikur Agustsson", "Radu Timofte" ], "title": "Ntire 2017 challenge on single image super-resolution: Dataset and study", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops,", "year": 2017 }, { "authors": [ "Konstantinos Bousmalis", "George Trigeorgis", "Nathan Silberman", "Dilip Krishnan", "Dumitru Erhan" ], "title": "Domain separation networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Andrew Brock", "Jeff Donahue", "Karen Simonyan" ], "title": "Large scale gan training for high fidelity natural image synthesis", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Qifeng Chen", "Jia Xu", "Vladlen Koltun" ], "title": "Fast image processing with fully-convolutional networks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Runfa Chen", "Wenbing Huang", "Binghui Huang", "Fuchun Sun", "Bin Fang" ], "title": "Reusing discriminators for encoding: Towards unsupervised image-to-image translation", "venue": "arXiv preprint arXiv:2003.00273,", "year": 2020 }, { "authors": [ "Tian Qi Chen", "Yulia Rubanova", "Jesse Bettencourt", "David Duvenaud" ], "title": "Neural ordinary differential equations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yunjey Choi", "Youngjung Uh", "Jaejun Yoo", "Jung-Woo Ha" ], "title": "Stargan v2: Diverse image synthesis for multiple domains", "venue": "Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Alexis Conneau", "Guillaume Lample", "Marc’Aurelio Ranzato", "Ludovic Denoyer", "Hervé Jégou" ], "title": "Word translation without parallel data", "venue": "arXiv preprint arXiv:1710.04087,", "year": 2017 }, { "authors": [ "Huan Fu", "Mingming Gong", "Chaohui Wang", "Kayhan Batmanghelich", "Kun Zhang", "Dacheng Tao" ], "title": "Geometry-consistent generative adversarial networks for one-sided unsupervised domain mapping", "venue": "Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Shangqian Gao", "Cheng Deng", "Heng Huang" ], "title": "Cross domain model compression by structurally weight sharing", "venue": "Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ian J. Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron C. Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Song Han", "Huizi Mao", "William J. Dally" ], "title": "Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding", "venue": null, "year": 2015 }, { "authors": [ "Kurt Hornik", "Maxwell Stinchcombe", "Halbert White" ], "title": "Multilayer feedforward networks are universal approximators", "venue": "Neural networks,", "year": 1989 }, { "authors": [ "Gao Huang", "Yu Sun", "Zhuang Liu", "Daniel Sedra", "Kilian Q. 
Weinberger" ], "title": "Deep networks with stochastic depth", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Jia-Bin Huang", "Abhishek Singh", "Narendra Ahuja" ], "title": "Single image super-resolution from transformed self-exemplars", "venue": "In Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Stanislaw Jastrzebski", "Devansh Arpit", "Nicolas Ballas", "Vikas Verma", "Tong Che", "Yoshua Bengio" ], "title": "Residual connections encourage iterative inference", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Younahan Jeon", "Minsik Lee", "Jin Young Choi" ], "title": "Differentiable fixed-point iteration layer", "venue": "arXiv preprint arXiv:2002.02868,", "year": 2020 }, { "authors": [ "Tero Karras", "Timo Aila", "Samuli Laine", "Jaakko Lehtinen" ], "title": "Progressive growing of gans for improved quality, stability, and variation", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Tero Karras", "Samuli Laine", "Miika Aittala", "Janne Hellsten", "Jaakko Lehtinen", "Timo Aila" ], "title": "Analyzing and improving the image quality of stylegan", "venue": "arXiv preprint arXiv:1912.04958,", "year": 2019 }, { "authors": [ "Guillaume Lample", "Neil Zeghidour", "Nicolas Usunier", "Antoine Bordes", "Ludovic Denoyer" ], "title": "Fader networks: Manipulating images by sliding attributes", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jingyuan Li", "Ning Wang", "Lefei Zhang", "B. Du", "Dacheng Tao" ], "title": "Recurrent feature reasoning for image inpainting", "venue": "Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Wallace P. Lira", "Johannes Merz", "D. Ritchie", "D. Cohen-Or", "Hao Zhang" ], "title": "Ganhopper: Multi-hop gan for unsupervised image-to-image translation", "venue": "European Conference on Computer Vision,", "year": 2020 }, { "authors": [ "Ming-Yu Liu", "Oncel Tuzel" ], "title": "Coupled generative adversarial networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Ming-Yu Liu", "Xun Huang", "Arun Mallya", "Tero Karras", "Timo Aila", "Jaakko Lehtinen", "Jan Kautz" ], "title": "Few-shot unsupervised image-to-image translation", "venue": "In International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Anish Mittal", "Rajiv Soundararajan", "Alan C. Bovik" ], "title": "Making a “completely blind” image quality analyzer", "venue": "IEEE Signal Processing Letters,", "year": 2013 }, { "authors": [ "Guido F Montúfar" ], "title": "Universal approximation depth and errors of narrow belief networks with discrete units", "venue": "Neural computation,", "year": 2014 }, { "authors": [ "Taesung Park", "Ming-Yu Liu", "Ting-Chun Wang", "Jun-Yan Zhu" ], "title": "Semantic image synthesis with spatially-adaptive normalization", "venue": "In Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Deepak Pathak", "Philipp Krähenbühl", "Jeff Donahue", "Trevor Darrell", "Alexei A. 
Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": "Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Tamar Rott Shaham", "Tali Dekel", "Tomer Michaeli" ], "title": "Singan: Learning a generative model from a single natural image", "venue": "In International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Bart Thomee", "David A. Shamma", "Gerald Friedland", "Benjamin Elizalde", "Karl Ni", "Douglas Poland", "Damian Borth", "Lijia Li" ], "title": "Yfcc100m: the new data in multimedia research", "venue": null, "year": 2016 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Andreas Veit", "Serge J. Belongie" ], "title": "Convolutional networks with adaptive inference graphs", "venue": "In European Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Yuri Viazovetskyi", "Vladimir Ivashkin", "Evgeny Kashin" ], "title": "Stylegan2 distillation for feed-forward image manipulation, 2020", "venue": null, "year": 2020 }, { "authors": [ "Ting-Chun Wang", "Ming-Yu Liu", "Jun-Yan Zhu", "Andrew Tao", "Jan Kautz", "Bryan Catanzaro" ], "title": "Highresolution image synthesis and semantic manipulation with conditional gans", "venue": "In Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Zuxuan Wu", "Tushar Nagarajan", "Abhishek Kumar", "Steven Rennie", "Larry S. Davis", "Kristen Grauman", "Rogério Schmidt Feris" ], "title": "Blockdrop: Dynamic inference paths in residual networks", "venue": "Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Zili Yi", "Hao Zhang", "Ping Tan", "Minglun Gong" ], "title": "Dualgan: Unsupervised dual learning for image-to-image translation", "venue": "International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Egor Zakharov", "Aliaksandra Shysheya", "Egor Burkov", "Victor S. Lempitsky" ], "title": "Few-shot adversarial learning of realistic neural talking head models", "venue": null, "year": 1905 }, { "authors": [ "Xiaoshuai Zhang", "Yiping Lu", "Jiaying Liu", "Bin Dong" ], "title": "Dynamically unfolding recurrent restorer: A moving endpoint control method for image restoration", "venue": "arXiv preprint arXiv:1805.07709,", "year": 2019 }, { "authors": [ "Zhendong Zhang", "Cheolkon Jung" ], "title": "Recurrent convolutions: A model compression point of view. NIPS Workshops: Compact Deep Neural Network Representation with Industrial Applications, 2018", "venue": null, "year": 2018 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In International Conference on Computer Vision,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural networks define arbitrarily complex functions involved in discriminative or generative tasks by stacking layers, as supported by the universal approximation theorem (Hornik et al., 1989; Montúfar, 2014). More precisely, the theorem states that stacking a number of basic blocks can approximate any function with arbitrary precision, provided it has enough hidden units, with mild conditions on the non-linear basic blocks. Studies on non-linear complex holomorphic functions involved in escape-time fractals showed that iterating simple non-linear functions can also construct arbitrarily complex landscapes (Barnsley et al., 1988). These functions are complex in the sense that their iso-surfaces are made arbitrarily large by increasing the number of iterations. Yet there is no control on the actual shape of the resulting function. This is why generative fractals remain mathematical curiosities or at best tools to construct intriguing landscapes.\nIn this paper, our objective is to combine the expressive power of both constructions, and we study the optimization of a function that iterates a single building block in the latent space of an auto-encoder. We focus on image translation tasks, that can be trained from either paired or unpaired data. In the paired case, pairs of corresponding input and output images are provided during training. It offers a direct supervision, so the best results are usually obtained with these methods (Chen et al., 2017; Wang et al., 2018; Park et al., 2019). We focus on unpaired translation: only two corpora of images are provided, one for the input domain A and the other for the output domain B. Therefore we do not have access to any parallel data (Conneau et al., 2017), which is a realistic scenario in many applications, e.g., image restoration. We train a function fAB : A ! B, such that the output b⇤ = F (a) for a 2 A is indiscernible from images of B. Our transformation is performed by a single residual block that is composed a variable number of times. We obtain this compositional property thanks to a progressive learning scheme that ensures that the output is valid for a large range of iterations. As a result, we can modulate the strength of the transformation by varying the number of times the transformation is composed. This is of particular interest in image translation tasks such as denoising, where the noise level is unknown at training time, and style transfer, where the user may want to select the best rendering. This “Powers of layers” (PoL) mechanism is illustrated in Figure 1 in the category transfer context (horse to zebra).\nOur architecture is very simple and only the weights of the residual block differ depending on the task, which makes it suitable to address a large number of tasks with a limited number of parameters. This proposal is in sharp contrast with the trend of current state-of-the-art works to specialize the architecture and to increase its complexity and number of parameters (Fu et al., 2019; Viazovetskyi et al., 2020; Choi et al., 2020). Despite its simplicity, our proof of concept exhibits similar or better performance than a vanilla CycleGAN architecture, all things being equal otherwise, for the\noriginal set of image-to-image translation tasks proposed in their papers, as well as for denoising, deblurring and deblocking. 
With significantly fewer parameters and a versatile architecture, we report competitive results confirmed by objective and psycho-visual metrics, illustrated with visualizations. We will provide the implementation for the sake of reproducibility." }, { "heading": "2 RELATED WORK", "text": "Generative adversarial networks (GANs) (Goodfellow et al., 2014) are a framework where two networks, a generator and a discriminator, are learned together in a zero-sum game fashion. The generator learns to produce images that are more and more realistic w.r.t. the training dataset. The discriminator learns to separate real data from increasingly realistic generated images. GANs are used in many tasks such as domain adaptation (Bousmalis et al., 2016), style transfer (Karras et al., 2019b), inpainting (Pathak et al., 2016) and talking head generation (Zakharov et al., 2019).

Unpaired image-to-image translation considers the task of transforming an image from a domain A into an image in a domain B. The training set comprises a sample of images from domains A and B, but no pairs of corresponding images. A classical approach is to train two generators (A → B and B → A) and two discriminators, one for each domain. When there is a shared latent space between the domains, a possible choice is to use a variational autoencoder, as in CoGAN (Liu & Tuzel, 2016). CycleGAN (Zhu et al., 2017), DualGAN (Yi et al., 2017) and subsequent works (Liu et al., 2019; Fu et al., 2019; Choi et al., 2020) augment the adversarial loss induced by the discriminators with a cycle consistency constraint to preserve semantic information throughout the domain changes. All these variants have architectures roughly similar to CycleGAN: an encoder, a decoder and residual blocks operating on the latent space. They also incorporate elements of other networks such as StyleGAN (Karras et al., 2019a). In our work, we build upon a simplified form of the CycleGAN architecture that generalizes easily over tasks. We also apply our method to NICE-GAN (Chen et al., 2020). A concurrent work, GANHopper by Lira et al. (2020), proposes to iterate CycleGAN generators to perform the transformation. However, their method differs from ours in many respects: they iterate full generators rather than a single residual block, their encoder and decoder are not fixed, their number of iterations is fixed, and they have to use additional discriminators acting on intermediate transformation states. Other works (Zhang et al., 2019; Li et al., 2020) use recurrent networks to perform transformations, but in a paired context, which results in methods very different from ours and from GANHopper.

Transformation modulation is an interpolation between two image domains. It is a by-product of some approaches: for instance, a linear interpolation in latent space (Brock et al., 2018; Radford et al., 2015) morphs between two images. Nevertheless, one important limitation is that the starting and ending points must both be known, which is not the case in unpaired learning. Other approaches such as Fader networks (Lample et al., 2017) or StyleGAN2 (Viazovetskyi et al., 2020) act on scalar or boolean attributes that are disentangled in the latent space (e.g., age for face images, wearing glasses or not, etc.). Nevertheless, this results in complex models for which dataset size and image variability strongly impact the performance: they fail to modulate the transform with small datasets or with large variabilities.
A comparison of PoL with the Fader network is provided in Appendix C and shows that our approach is more effective.

Progressive learning and inference-time modulation are performed in multi-scale methods such as SinGAN (Rott Shaham et al., 2019) and ProgressiveGAN (Karras et al., 2017). Progressive learning obtains excellent results for high-resolution images, where classical approaches are more difficult to use. The training is performed in several steps, during which the sizes of both the images and the network are increased. The inference time of some architectures can be modulated by stopping the forward pass at some layer (Huang et al., 2016; Wu et al., 2017; Veit & Belongie, 2017). This differs from our approach, where the number of residual block compositions ("powers") can be chosen to shorten the inference. A by-product is a reduction of the number of network parameters.

Weight sharing is a way of looking at our method, because the same layer is applied several times within the same network. Recurrent Neural Networks (RNNs) are the most classical example of weight sharing in a recursive architecture. Besides RNNs, weight sharing is mainly used for model compression, sequential data and ordinary differential equations (ODEs) (Gao et al., 2019; Han et al., 2015; Chen et al., 2018). A few works (Jastrzebski et al., 2017; Zhang & Jung, 2018) apply weight sharing to unfold a ResNet and evaluate its performance in classification tasks. The optimization is inherently difficult, so they use independent batch normalization for each shared layer. With PoL we observe the same optimization issues, which we solve with a progressive training strategy; see Section 3.3. Recent work (Jeon et al., 2020) considers the composition of the same block through the parallel with the fixed-point theorem; nevertheless, its applications remain limited to rather simple problems compared to our unpaired image-to-image translation tasks." }, { "heading": "3 POWER OF LAYERS", "text": "We adopt the same context as CycleGAN and focus on unpaired image-to-image translation: the objective is to transform an image from domain A into an image from domain B. In our case the domains can be noise levels, painting styles, blur, JPEG artifacts, or simply object classes that appear in the image. The training is unpaired: we do not have pairs of corresponding images at training time. CycleGAN is simple, adaptable to different tasks, and allows a direct comparison in Sections 4 and 5.

We learn two generators and two discriminators. The generator $G_{AB}: \mathcal{I} \to \mathcal{I}$ transforms an element of A into an element of B, and $G_{BA}$ goes the other way round, $\mathcal{I}$ being the fixed-resolution image space. The discriminator $D_A: \mathcal{I} \to [0, 1]$ (resp. $D_B$) predicts whether an element belongs to domain A (resp. B). We use the same losses as commonly used in unpaired image-to-image translation:

$$\mathcal{L}_{\mathrm{Total}} = \lambda_{\mathrm{Adv}}\mathcal{L}_{\mathrm{Adv}} + \lambda_{\mathrm{Cyc}}\mathcal{L}_{\mathrm{Cyc}} + \lambda_{\mathrm{Id}}\mathcal{L}_{\mathrm{Id}}, \tag{1}$$

where

$$\mathcal{L}_{\mathrm{Adv}}(G_{AB}, D_B) = \mathbb{E}_{b\sim B}[\log D_B(b)] + \mathbb{E}_{a\sim A}[\log(1 - D_B(G_{AB}(a)))],$$
$$\mathcal{L}_{\mathrm{Cyc}}(G_{AB}, G_{BA}) = \mathbb{E}_{b\sim B}[\|G_{AB}(G_{BA}(b)) - b\|_1] + \mathbb{E}_{a\sim A}[\|G_{BA}(G_{AB}(a)) - a\|_1],$$
$$\mathcal{L}_{\mathrm{Id}}(G_{AB}, G_{BA}) = \mathbb{E}_{b\sim B}[\|G_{AB}(b) - b\|_2] + \mathbb{E}_{a\sim A}[\|G_{BA}(a) - a\|_2].$$

The adversarial loss $\mathcal{L}_{\mathrm{Adv}}(G_{AB}, D_B)$ verifies that the generated images are in the correct domain. The cycle consistency loss $\mathcal{L}_{\mathrm{Cyc}}(G_{AB}, G_{BA})$ ensures that a round-trip through the two generators reconstructs the initial image, and the identity loss $\mathcal{L}_{\mathrm{Id}}(G_{AB}, G_{BA})$ penalizes the generators for transforming images that are already in their target domain. We keep the same linear combination coefficients as in CycleGAN: $\lambda_{\mathrm{Adv}} = 1$, $\lambda_{\mathrm{Cyc}} = 10$, $\lambda_{\mathrm{Id}} = 5$."
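For one direction (A → B), the loss of Eq. (1) can be sketched as follows. This is a hedged illustration rather than the exact training code: `G_ab`, `G_ba` and `D_b` are the modules defined above, and the adversarial term is written in the non-saturating binary cross-entropy form commonly used in practice instead of the literal log(1 − D) of Eq. (1).

```python
import torch
import torch.nn.functional as F

def pol_generator_loss(a, b, G_ab, G_ba, D_b,
                       lam_adv=1.0, lam_cyc=10.0, lam_id=5.0):
    fake_b = G_ab(a)
    d_out = D_b(fake_b)
    # Adversarial term: push D_b to classify the generated image as domain B.
    l_adv = F.binary_cross_entropy(d_out, torch.ones_like(d_out))
    # Cycle-consistency term (one direction): A -> B -> A reconstructs a (L1).
    l_cyc = (G_ba(fake_b) - a).abs().mean()
    # Identity term: an image already in B should be left unchanged (L2).
    l_id = (G_ab(b) - b).pow(2).mean()
    return lam_adv * l_adv + lam_cyc * l_cyc + lam_id * l_id
```

The symmetric B → A terms are computed analogously and both directions are summed, as in Eq. (1).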
}, { "heading": "3.1 NETWORK ARCHITECTURE", "text": "We start from the CycleGAN architecture.The encoder and decoder consist of 2 layers and a residual block. The embedding space E of our model is 256⇥ 64⇥ 64: its spatial resolution is 1/4 the input image resolution of 256⇥ 256 and it has 256 channels. All translation operations take place in the fixed embedding space E . The encoder Enc : I ! E produces the embedding and consists of two convolutions. The decoder Dec : E ! I turns the embedding back to image space and consists of two transposed convolutions.\nPre-training of the auto-encoder. We train the encoder and decoder of our model on a reconstruction task using an `2 reconstruction loss in pixel space. We use the Adam optimizer. Our\ndata-augmentation consists of an image resizing, a random crop and a random horizontal flip. Both the encoder and decoder weights are fixed for all the other tasks, only the residual block is adapted (and the discriminator in case we use it for the stopping criterion).\nThere are two choices to train the auto-encoder: (1) train with 6M unlabeled images randomly drawn from the YFCC100M dataset (Thomee et al., 2016) during one epoch, which makes it independent of the translation training dataset; (2) train on the dataset of our unpaired image-to-image translation task. For simplicity, we choose the first option and use a single encoder and decoder for all the tasks presented in this article. For the NiceGAN experiments, the encoder and decoder are trained jointly with the discriminator, so the auxiliary dataset YFCC100M is not used.\nThe embedding transformer – single block. The transformation between domains is based on a residual block fAB, similar to the feed-forward network used in transformers (Vaswani et al., 2017). It writes:\nfAB(x) = x+ resAB(x), 8x 2 A. (2)\nThere is a dimensionality expansion factor K between the two convolutions in the residual block (see Figure 1). Adjusting K changes the model’s capacity. We adopt the now standard choice of the original transformer paper (K = 4). The full generator writes\nGAB(x) = Dec(fAB(Enc(x))), 8x 2 A. (3)\nThe other direction, with fBA and resBA, is defined accordingly.\nPowers of layers. We start from the architecture above and augment its representation capacity. There are two standard ways of doing this: (1) augmenting the capacity of the fAB block by increasing K; (2) increasing the depth of the network by chaining several instances of fAB, since the intermediate representations are compatible. In contrast to these fixed architectures, PoL iterates the fAB block n 1 times, which amounts to sharing the weights of a deeper network:\nGAB(x) = Dec(f n AB(Enc(x))), 8x 2 A. (4)" }, { "heading": "3.2 OPTIMIZATION IN A RESIDUALS BLOCKS WEIGHT SHARING CONTEXT", "text": "In the following, we drop the AB suffix from fAB, since powers of layers operates in the same way on fAB and fBA. Thus, f : E ! E is f(x) = x + res(x). The parameters of f are collected in a vector w. The embedding x 2 E is a 3D activation map, but for the sake of the mathematical derivation we linearize it to a vector. The partial derivatives of f are @f@x = @res @x + Id and @f @w = @res @w . We compose the f function n times as\n@fn\n@x (x) =\n0Y\ni=n 1\n@f @x (f i(x)) and\n@fn @w (x) =\n1Y\ni=n 1\n@f\n@x\nf i(x) @f @w (x). 
}, { "heading": "3.2 OPTIMIZATION IN A RESIDUAL BLOCK WEIGHT-SHARING CONTEXT", "text": "In the following, we drop the AB suffix from $f_{AB}$, since powers of layers operates in the same way on $f_{AB}$ and $f_{BA}$. Thus, $f: \mathcal{E} \to \mathcal{E}$ is $f(x) = x + \mathrm{res}(x)$. The parameters of $f$ are collected in a vector $w$. The embedding $x \in \mathcal{E}$ is a 3D activation map, but for the sake of the mathematical derivation we linearize it to a vector. The partial derivatives of $f$ are $\frac{\partial f}{\partial x} = \frac{\partial \mathrm{res}}{\partial x} + \mathrm{Id}$ and $\frac{\partial f}{\partial w} = \frac{\partial \mathrm{res}}{\partial w}$. Composing the function $f$ $n$ times gives

$$\frac{\partial f^n}{\partial x}(x) = \prod_{i=n-1}^{0} \frac{\partial f}{\partial x}\!\left(f^i(x)\right) \quad\text{and}\quad \frac{\partial f^n}{\partial w}(x) = \prod_{i=n-1}^{1} \frac{\partial f}{\partial x}\!\left(f^i(x)\right) \frac{\partial f}{\partial w}(x). \tag{5}$$

The stability of the SGD optimization depends on the magnitude and conditioning of the matrix

$$M_n = \prod_{i=n-1}^{1} \frac{\partial f}{\partial x}\!\left(f^i(x)\right) = \prod_{i=n-1}^{1} \left(\frac{\partial \mathrm{res}}{\partial x}\!\left(f^i(x)\right) + \mathrm{Id}\right), \tag{6}$$

which is sensitive to initialization during the first optimization epochs. Indeed, the length of the SGD steps on $w$ depends on the eigenvalues of $M_n$. When simplifying the basic residual block to a linear transformation $L \in \mathbb{R}^{d \times d}$ (i.e., ignoring the normalization and the ReLU non-linearity), we have $M_n = (L + \mathrm{Id})^{n-1}$. The eigenvalues of $M_n$ are $(\lambda_i + 1)^{n-1}$, where $\lambda_1, \ldots, \lambda_d$ are the eigenvalues of $L$. At initialization, the components of $L$ are sampled from a random uniform distribution. To reduce the magnitude of $\lambda_i$, one option is to make the entries of $L$ small. However, to decrease $(\lambda_i + 1)^{n-1}$ sufficiently, $\lambda_i$ must be so small that it introduces floating-point cancellations when the residual block is added back to the shortcut connection. This is why we prefer to adjust $n$, as detailed next." }, { "heading": "3.3 PROGRESSIVE TRAINING", "text": "We adopt a progressive learning schedule in a "warm-up" phase: we start the optimization with a single block and add one iteration at every epoch until we reach the required $n$ blocks. This is possible because the blocks operate in the same embedding space $\mathcal{E}$ and because their weights are shared, so all blocks are still in the same learning schedule. This approach avoids the numerical exponentiation effects at initialization mentioned in the previous paragraph. After the warm-up phase, the gradient descent starts to converge, so these numerical effects become unlikely. In addition, this approach allows the discriminator to improve progressively during the training. For example, in the case of the horse → zebra transformation, a slightly whitened horse fools the discriminator at the beginning of the training, but a texture with stronger stripes is required later on.

Table 1: Denoising: PSNR on Urban-100 (Huang et al., 2015). Comparison between Powers of layers (PoL) and independent (ind) blocks for different maximum numbers of compositions / residual blocks. The best value in each column is in bold. We could not fit more than 16 independent blocks in memory in our experiments. Appendix A reports standard deviations and more results.

          Gaussian noise (std=30)     Gaussian blur (σ=4)
  ntr        PoL        ind              PoL        ind
   1        23.3       23.3             18.6       18.6
   4        24.4       23.2             19.2       19.2
   8        23.9       22.3             19.0       19.3
  12        23.9       22.5             19.7       18.8
  16        23.9       22.5             19.0       18.1
  18        24.2        —               19.0        —
  30        23.5        —               19.0        —

Training for modulation. If the network is trained with a fixed number of compositions, the intermediate states do not correspond to modulations of the transformation that "look right" (see Appendix A). Therefore, in addition to the scheduled number of iterations during the first n epochs of warm-up, we also randomize the number of iterations in subsequent epochs. This forces the generator to also produce acceptable intermediate states, and enables modulating the transform.

Stopping criterion at inference time. Each image of domain A is more or less close to domain B. For example, when denoising, the noise level can vary, so the denoising strength should adapt to the input. Similarly for horse → zebra: a white horse is closer to a zebra than a brown horse. Therefore, at inference time, we can adjust the number of compositions as well. In particular, for each test image, we select the n that best deceives the discriminator, thus effectively adapting the processing to the relative distance to the target domain."
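The schedule itself is simple to express. The sketch below is a distillation of Section 3.3 under stated assumptions: warm-up grows the number of compositions by one per epoch, after which n is drawn at random; the range (20, 30) corresponds to the best setting reported in the analysis of Section 4.

```python
import random

def compositions_for_epoch(epoch, n_target=30, rand_range=(20, 30)):
    # Warm-up: start with a single composition and add one per epoch.
    if epoch < n_target:
        return epoch + 1
    # After warm-up: randomize n so intermediate states remain valid,
    # enabling modulation of the transformation at inference time.
    return random.randint(*rand_range)
```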
}, { "heading": "4 ANALYSIS", "text": "In this section we study the impact of the main training and design choices and on the performance of powers of layers. Appendices A and B provide complementary analysis for training- and inferencetime choices, respectively. For this preliminary analysis, we focus on denoising tasks for which the performance is easily measurable. We add three types of noise to images: Gaussian noise, Gaussian blur and JPEG artifacts. The noise intensity is quantified by the noise standard deviation, the blur radius and the JPEG quality factor, respectively. We generate transformed images and measure how well our method recovers the initial image. This provides a more reliable accuracy measure than a purely unpaired evaluation, that will be carried out in the experimental section.\nNote that, in the literature, these tasks are best addressed by providing (original, noisy) pairs if images. Our objective is to remain in a completely unpaired setting during the training phase. It corresponds to the case where parallel data is not available (like for the restoration of ancient movies), and also better reflects the situation where the noise strength is not known in advance. Therefore, the original image is solely employed to measure the performance.\nExperimental protocol. To train, we sample 800 domain A images from the high-resolution Div2K dataset (Agustsson & Timofte, 2017). In the baseline training procedure, the warm-up phase starts from a single block and increases the number of compositions at every epoch, until we reach epoch ntr. Then we keep the number of compositions fixed.\nWe test on the Urban-100 (Huang et al., 2015) dataset. Unless specified otherwise, we set the number of compositions to nte = ntr and measure the Peak Signal to Noise Ratio (PSNR) of our model on the dataset images, degraded with the same intensity as at training time. For the JPEG case we use the Naturalness Image Quality Evaluator metric (NIQE, lower=better) instead, because it is more sensitive to JPEG artifacts (Mittal et al., 2013). NIQE is a perceptual metric that does not take the original image into account (i.e. for JPEG we emphdo use an unpaired metric).\nBlock composition or independent successive blocks? Table 1 compares PoL’s composition of the same block versus using independent blocks with distinct weights. In spite of the much larger\nFi xe\nd n tr = 30 R an do m n tr 2 [[2 0, 30 ]]\ncapacity offered by independent blocks, the noise reduction operated by Power of layers is stronger. Our interpretation is that the model is easier to train.\nAnalysis of the progressive training strategy. Table 1 also evaluates the impact of the maximum number of compositions ntr. Having several compositions clearly helps. Since we choose the number of compositions nte at inference time (see next paragraph), it may be relevant to vary ntr at training time to minimize the discrepancy between the train- and test-time settings. For this, we tried different intervals to randomly draw the maximum number of compositions for each epoch, after the warm-up phase. If nte is fixed, the optimal choice is ntr = nte. However, if we use an adaptive nte, the best range is ntr 2 [[20, 30]], and the adaptive case with randomised training gives the best performance for denoising and debluring. Appendix A reports results obtained with different ntr ranges.\nStopping criterion. 
Stopping criterion. We consider two cases: either we use a fixed nte, or we use the discriminator to evaluate the transformation quality: it selects the value of nte maximizing the target discriminator error for a given image. Figure 2 shows that setting a fixed ntr causes the discriminator to select nte = ntr as the best iteration at inference time. By selecting the best nte for each image, we obtain on average a PSNR improvement of +1.36 dB for Gaussian noise of standard deviation 30, compared to fixing nte. In Appendix B, we compare this with the best possible stopping criterion: an oracle that selects nte directly on the PSNR. Our adaptive strategy significantly tightens the gap to this upper bound.

Comparison with CycleGAN. We use CycleGAN as a baseline. The differences between CycleGAN and powers of layers are: (1) we use a single encoder and decoder, trained in advance and common to all tasks; (2) CycleGAN has 9 residual blocks, whereas PoL iterates a single residual block an arbitrary number of times. The inference time of PoL depends on the number of compositions nte, but the number of parameters does not:

  PoL: encoder + decoder 1 × 1.7M; residual blocks 2 × K × 1.1M; discriminators 2 × 2.7M; total 15.9M.
  CycleGAN: generators (encoder, decoder and residual blocks) 2 × 11.4M; discriminators 2 × 2.7M; total 28.2M.

Figure 3 compares the performance obtained by the two methods on denoising tasks with varying noise intensities. PoL gives better results than CycleGAN in terms of objective metrics, and overall the images produced by our method look as realistic and/or accurate.

Training with few images. Figure 3 also compares our method with CycleGAN when training in a data-starved scenario. Whatever the number of training images, PoL outperforms CycleGAN, but the gap is particularly important with very few images. This is expected for two reasons. Firstly, our approach has fewer parameters to learn than CycleGAN. Secondly, it only requires learning the transformation, because the encoder and decoder are pre-learned as a vanilla auto-encoder, while CycleGAN needs to learn how to encode and decode images.

Parametrization: remarks. Beyond the settings inherited from CycleGAN, the main training parameters of Powers of layers are the maximum number of compositions ntr and the range from which they are randomly sampled. The number of compositions at inference time nte is also important, but the discriminator criterion can be used to set it automatically." }, { "heading": "5 EXPERIMENTS", "text": "We now run experiments on two image generation applications. We refer to Section 3.1 for the architecture and training protocol. In Appendix C we also give a comparison with the Fader network regarding the capacity to modulate a transformation, and more visual examples in Appendix D.

Unpaired image-to-image translation. We report results for 6 of the 8 unpaired image-to-image translation tasks introduced in the CycleGAN (Zhu et al., 2017) paper (the two remaining ones lead to the same conclusions), and we used the datasets from their website. In Figure 4 we compare the Fréchet Inception Distance (FID) of CycleGAN, NICE-GAN and our approach applied to these methods. The FID measures the similarity between two datasets of images; we use it to compare the target dataset with the transformed dataset. It is a noisy measure for which only large deviations are significant. Yet the results and visualizations show that our method obtains results comparable to those of CycleGAN and NICE-GAN, achieved with far fewer parameters.
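The adaptive stopping criterion described above (select, per image, the number of compositions whose output best deceives the target-domain discriminator) can be sketched as follows; `generator(x, n)` follows the PoL generator sketch given earlier, and all names are illustrative.

```python
import torch

@torch.no_grad()
def adaptive_n_te(x, generator, discriminator_b, n_max=30):
    # Try each number of compositions and keep the one whose output the
    # target-domain discriminator rates as most "domain B"-like.
    best_n, best_score = 1, float("-inf")
    for n in range(1, n_max + 1):
        score = discriminator_b(generator(x, n)).mean().item()
        if score > best_score:
            best_n, best_score = n, score
    return best_n
```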
High resolution experiments. PoL is a fully convolutional architecture, so it is technically possible to apply models trained at low resolution to high-resolution images. However, the results are not always convincing, as shown in Figure 5 (right), where the model trained on low-resolution images does not create stripes at the "right" scale on zebras. To circumvent this problem, CycleGAN trains on patches taken from high-resolution images. This works for transformations affecting the whole image (painting↔photo), but it is not applicable when only a part of the image is affected (horse→zebra). In contrast, our proposal can adapt the memory used by changing its number of compositions, so we can apply it to very large images without running out of memory. Figure 5 (left) depicts results obtained with our method trained on high-resolution images.
Adjusting nte at inference time. Each image is more or less distant from the target domain, so we explore adapting the transformation to each image rather than applying a fixed transformation. For example, depending on the amount of noise, we may want to adjust the strength of the denoising. By modulating nte we can adapt the transformation to each image. Figure 6 shows that the noisier the input image is, the more we should compose to best denoise with Powers-of-Layers.
In the same way, Figure 7 shows the progressive transformations obtained on different unpaired image-to-image translation tasks. It shows that progressive transformations are realistic for most tasks.
Combining transformations. The different blocks associated with different transformations operate in the same embedding space for different tasks. Hence we can compose transformations, each being realized by one residual block (see the sketch below). We train Transform #1 in the usual way, then freeze its residual block. Transform #2 is trained on the output of #1. Visual results are in Figure 8. The composition in the embedding space gives better results than decoding/encoding to image space mid-way." }, { "heading": "6 CONCLUSION", "text": "Powers of layers iterates a residual block to learn a complex transformation with no direct supervision. On various tasks, it gives similar performance to CycleGAN and NiceGAN with fewer parameters. The flexibility offered by the common embedding space allows modulating the transformation strength or composing several transformations. While in most examples the discriminator is only used for training, Powers of layers can also exploit it to adjust the transformation to the input image at inference time." } ]
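As a hypothetical illustration of the transformation chaining described in "Combining transformations", two residual blocks trained for different tasks can be composed in the shared embedding space before decoding once at the end. Function and argument names are assumptions for exposition.

```python
import torch

@torch.no_grad()
def apply_two_transforms(encoder, decoder, block1, block2, x, n1: int, n2: int):
    h = encoder(x)
    for _ in range(n1):          # transform #1, e.g. the first trained task
        h = h + block1(h)        # block1 is frozen after its own training
    for _ in range(n2):          # transform #2, trained on the output of #1
        h = h + block2(h)
    return decoder(h)            # decode only once, at the very end
```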
2,020
null
SP:26f9757d1a510fe264bda0570cea50301383c39a
[ "This paper describes an SMC algorithm to sample the posterior distribution of latent states $p_\\theta(z_{1:T}|x_{1:T})$ in a latent variable models $p_\\theta(x_{1:T}, z_{1:T})$. The authors consider a completely general setting (the authors assume Eq.(1) but clearly there is nothing to assume here, this the standard Bayes rule). It is well known that the vanilla SMC sampler is a good candidate for ELBO because it provides an unbiased estimator of the likelihood. But the authors prefer here to use a more sophisticated version of the SMC algorithm, which features a partial rejection algorithm, which amounts to eliminate proposed particles which are \"large enough\" likelihood." ]
Effective variational inference crucially depends on a flexible variational family of distributions. Recent work has explored sequential Monte-Carlo (SMC) methods to construct variational distributions, which can, in principle, approximate the target posterior arbitrarily well, which is especially appealing for models with inherent sequential structure. However, SMC, which represents the posterior using a weighted set of particles, often suffers from particle weight degeneracy, leading to a large variance of the resulting estimators. To address this issue, we present a novel approach that leverages the idea of partial rejection control (PRC) for developing a robust variational inference (VI) framework. In addition to developing a superior VI bound, we propose a novel marginal likelihood estimator constructed via a dice-enterprise: a generalization of the Bernoulli factory to construct unbiased estimators for SMC-PRC. The resulting variational lower bound can be optimized efficiently with respect to the variational parameters and generalizes several existing approaches in the VI literature into a single framework. We show theoretical properties of the lower bound and report experiments on various sequential models, such as the Gaussian state-space model and variational RNN, on which our approach outperforms existing methods.
[ { "affiliations": [], "name": "ROBUST VARIA" } ]
[ { "authors": [ "Christophe Andrieu", "Nando De Freitas", "Arnaud Doucet", "Michael I Jordan" ], "title": "An introduction to MCMC for machine learning", "venue": "Machine learning,", "year": 2003 }, { "authors": [ "Søren Asmussen", "Peter W Glynn", "Hermann Thorisson" ], "title": "Stationarity detection in the initial transient problem", "venue": "ACM Transactions on Modeling and Computer Simulation (TOMACS),", "year": 1992 }, { "authors": [ "Jean Bérard", "Pierre Del Moral", "Arnaud Doucet" ], "title": "A lognormal central limit theorem for particle approximations of normalizing constants", "venue": "Electronic Journal of Probability,", "year": 2014 }, { "authors": [ "David M Blei", "Alp Kucukelbir", "Jon D McAuliffe" ], "title": "Variational inference: A review for statisticians", "venue": "Journal of the American Statistical Association,", "year": 2017 }, { "authors": [ "Yuri Burda", "Roger Grosse", "Ruslan Salakhutdinov" ], "title": "Importance weighted autoencoders", "venue": "arXiv preprint arXiv:1509.00519,", "year": 2015 }, { "authors": [ "Frédéric Cérou", "Pierre Del Moral", "Arnaud Guyader" ], "title": "A nonasymptotic theorem for unnormalized Feynman-Kac particle models", "venue": "In Annales de l’IHP Probabilités et statistiques,", "year": 2011 }, { "authors": [ "Junyoung Chung", "Kyle Kastner", "Laurent Dinh", "Kratarth Goel", "Aaron C Courville", "Yoshua Bengio" ], "title": "A recurrent latent variable model for sequential data", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Arnaud Doucet", "Adam M Johansen" ], "title": "A tutorial on particle filtering and smoothing", "venue": "Fifteen years later. Handbook of nonlinear filtering,", "year": 2009 }, { "authors": [ "Shaddin Dughmi", "Jason D Hartline", "Robert Kleinberg", "Rad Niazadeh" ], "title": "Bernoulli factories and black-box reductions in mechanism design", "venue": "In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing,", "year": 2017 }, { "authors": [ "Flávio B Gonçalves", "Krzysztof Łatuszyński", "Gareth O Roberts" ], "title": "Barker’s algorithm for Bayesian inference with intractable likelihoods", "venue": "Brazilian Journal of Probability and Statistics,", "year": 2017 }, { "authors": [ "Flávio B Gonçalves", "Krzysztof G Łatuszyński", "Gareth O Roberts" ], "title": "Exact Monte Carlo likelihoodbased inference for jump-diffusion processes", "venue": "arXiv preprint arXiv:1707.00332,", "year": 2017 }, { "authors": [ "Aditya Grover", "Ramki Gummadi", "Miguel Lazaro-Gredilla", "Dale Schuurmans", "Stefano Ermon" ], "title": "Variational rejection sampling", "venue": "arXiv preprint arXiv:1804.01712,", "year": 2018 }, { "authors": [ "Ramki Gummadi" ], "title": "Resampled belief networks for variational inference", "venue": "In NIPS Workshop on Advances in Variational Inference,", "year": 2014 }, { "authors": [ "Matthew D Hoffman" ], "title": "Learning deep latent Gaussian models with Markov chain Monte Carlo", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Jan Kudlicka", "Lawrence M Murray", "Thomas B Schön", "Fredrik Lindsten" ], "title": "Particle filter with rejection control and unbiased estimator of the marginal likelihood", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Dieterich Lawson", "George Tucker", "Christian A Naesseth", "Chris J Maddison", "Ryan P Adams", "Yee Whye Teh" ], 
"title": "Twisted variational sequential Monte Carlo", "venue": "In Third workshop on Bayesian Deep Learning (NeurIPS),", "year": 2018 }, { "authors": [ "Tuan Anh Le", "Maximilian Igl", "Tom Rainforth", "Tom Jin", "Frank Wood" ], "title": "Auto-encoding sequential Monte Carlo", "venue": "arXiv preprint arXiv:1705.10306,", "year": 2017 }, { "authors": [ "Jun S Liu", "Rong Chen", "Wing Hung Wong" ], "title": "Rejection control and sequential importance sampling", "venue": "Journal of the American Statistical Association,", "year": 1998 }, { "authors": [ "Chris J Maddison", "John Lawson", "George Tucker", "Nicolas Heess", "Mohammad Norouzi", "Andriy Mnih", "Arnaud Doucet", "Yee Teh" ], "title": "Filtering variational objectives", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Giulio Morina", "Krzysztof Latuszynski", "Piotr Nayar", "Alex Wendland" ], "title": "From the Bernoulli factory to a dice enterprise via perfect sampling of Markov chains", "venue": null, "year": 1912 }, { "authors": [ "Christian A Naesseth", "Fredrik Lindsten", "Thomas B Schön" ], "title": "Nested sequential Monte Carlo methods", "venue": "arXiv preprint arXiv:1502.02536,", "year": 2015 }, { "authors": [ "Christian A Naesseth", "Scott W Linderman", "Rajesh Ranganath", "David M Blei" ], "title": "Variational sequential Monte Carlo", "venue": "arXiv preprint arXiv:1705.11140,", "year": 2017 }, { "authors": [ "Christian A Naesseth", "Fredrik Lindsten", "Thomas B Schön" ], "title": "Elements of sequential Monte Carlo", "venue": "arXiv preprint arXiv:1903.04797,", "year": 2019 }, { "authors": [ "Radford M Neal" ], "title": "MCMC using Hamiltonian dynamics. Handbook of markov chain monte carlo", "venue": null, "year": 2011 }, { "authors": [ "Gareth W Peters", "Yanan Fan", "Scott A Sisson" ], "title": "On sequential Monte Carlo, partial rejection control and approximate Bayesian computation", "venue": "Statistics and Computing,", "year": 2012 }, { "authors": [ "Christian Robert", "George Casella" ], "title": "Monte Carlo statistical methods", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Francisco JR Ruiz", "Michalis K Titsias" ], "title": "A contrastive divergence for combining variational inference and MCMC", "venue": "arXiv preprint arXiv:1905.04062,", "year": 2019 }, { "authors": [ "Tim Salimans", "Diederik Kingma", "Max Welling" ], "title": "Markov chain Monte Carlo and variational inference: Bridging the gap", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Sebastian M Schmon", "Arnaud Doucet", "George Deligiannidis" ], "title": "Bernoulli race particle filters", "venue": "arXiv preprint arXiv:1903.00939,", "year": 2019 }, { "authors": [ "Dootika Vats", "Flávio Gonçalves", "Krzysztof Łatuszyński", "Gareth O Roberts" ], "title": "Efficient Bernoulli factory MCMC for intractable posteriors", "venue": "arXiv preprint arXiv:2004.07471,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Exact inference in latent variable models is usually intractable. Markov Chain Monte-Carlo (MCMC) (Andrieu et al., 2003) and variational inference (VI) methods (Blei et al., 2017), are commonly employed in such models to make inference tractable. While MCMC has been the traditional method of choice, often with provable guarantees, optimization-based VI methods have also enjoyed considerable recent interest due to their excellent scalability on large-scale datasets. VI is based on maximizing a lower bound constructed through a marginal likelihood estimator. For latent variable models with sequential structure, sequential Monte-Carlo (SMC) (Doucet & Johansen, 2009) returns a much lower variance estimator of the log marginal likelihood than importance sampling (Bérard et al., 2014; Cérou et al., 2011). In this work, we focus our attention on designing a low variance, unbiased, and computationally efficient estimator of the marginal likelihood.\nThe performance of SMC based methods is strongly dependent on the choice of the proposal distribution. Inadequate proposal distributions propose values in low probability areas under the target, leading to particle depletion (Doucet & Johansen, 2009). An effective solution is to use rejection control (Liu et al., 1998; Peters et al., 2012) which is based on an approximate rejection sampling step within SMC to reject samples with low importance weights.\nIn this work, we leverage the idea of partial rejection control (PRC) within the framework of SMC based VI for sequential latent variable models. To this end, we construct a novel lower bound, VSMC-PRC, and propose an efficient optimization strategy for selecting the variational parameters. Compared to other recent SMC based VI approaches (Naesseth et al., 2017; Maddison et al., 2017; Le et al., 2017), our approach consists of an inbuilt accept-reject mechanism within SMC to prevent particle depletion. The use of accept-reject within SMC makes the particle weight intractable, therefore, we use a generalization of the Bernoulli factory (Asmussen et al., 1992) to construct unbiased estimators of the marginal likelihood for SMC-PRC.\nAlthough the idea of combining VI with an inbuilt accept-reject mechanism is not new (Salimans et al., 2015; Ruiz & Titsias, 2019; Grover et al., 2018; Gummadi, 2014), a key distinction of our approach is to incorporate an accept-reject mechanism along with a resampling framework. In contrast to standard sampling algorithms that may reject the entire stream of particles, we use a partial accept-reject on the most recent update, increasing the sampling efficiency. Further, the variational framework of SMC-PRC is interesting in itself as it combines accept-reject with particle filter methods. Therefore, our proposed bound VSMC-PRC generalizes several existing approaches for example: Variational Rejection Sampling (VRS) (Grover et al., 2018), FIVO (Maddison et al., 2014), IWAE (Burda et al., 2015), and standard variational Bayes (Blei et al., 2017).\nAnother key distinction is that, while existing approaches using Bernoulli factory are limited to niche one-dimensional toy examples, our proposed approach is scalable. To the best of our knowledge, there is no prior work that has used Bernoulli factories for such a general case like variational recurrent neural networks (VRNN); therefore, we believe this aspect to be a significant contribution as well. 
The rest of the paper is organized as follows: In Section 2, we provide a brief review on SMC, partial rejection control, and dice enterprise. In Section 3, we introduce our VSMC-PRC bound and provide new theoretical insights into the Monte-Carlo estimator and design efficient ways to optimize it. Finally, we discuss related work and present experiments on the Gaussian state-space model (SSM) and VRNN." }, { "heading": "2 BACKGROUND", "text": "We denote a sequence of $T$ real-valued observations as $x_{1:T} = (x_1, x_2, \ldots, x_T)$, and assume that there is an associated sequence of latent variables $z_{1:T} = (z_1, z_2, \ldots, z_T)$. We are interested in inferring the posterior distribution of the latent variables, i.e., $p(z_{1:T}|x_{1:T})$. The task is, in general, intractable. For the rest of the paper we use some common notation from the SMC and VI literature, where $z_t^i$ is the $i$-th particle at time $t$; $A_{t-1}^i$ is the ancestor variable for the $i$-th particle at time $t$; and $\theta$ and $\phi$ are the model and variational parameters, respectively." }, { "heading": "2.1 SEQUENTIAL MONTE CARLO WITH PARTIAL REJECTION CONTROL", "text": "An SMC sampler approximates a sequence of densities $\{p_\theta(z_{1:t}|x_{1:t})\}_{t=1}^{T}$ through a set of $N$ weighted samples generated from a proposal distribution. Let the proposal density be

$$q_\phi(z_{1:T}|x_{1:T}) = \prod_{t=1}^{T} q_\phi(z_t|x_{1:t}, z_{1:t-1}). \quad (1)$$

Consider time $t-1$ at which we have uniformly weighted samples $\{N^{-1}, z_{1:t-1}^i, A_{t-1}^i\}_{i=1}^{N}$ estimating $p_\theta(z_{1:t-1}|x_{1:t-1})$. We want to estimate $p_\theta(z_{1:t}|x_{1:t})$ such that particles with a low importance weight are automatically rejected. PRC achieves this by using an approximate rejection sampling step (Liu et al., 1998; Peters et al., 2012). The overall procedure is as follows:

1. Generate $z_t^i \sim q_\phi(z_t|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})$ where $i = 1, 2, \ldots, N$.

2. Accept $z_t^i$ with probability

$$a_{\theta,\phi}(z_t^i|z_{1:t-1}^{A_{t-1}^i}, x_{1:t}) = \left( 1 + \frac{M(i, t-1)\, q_\phi(z_t^i|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})}{p_\theta(x_t, z_t^i|x_{1:t-1}, z_{1:t-1}^{A_{t-1}^i})} \right)^{-1}, \quad (2)$$

where $M(i, t-1)$ is a hyperparameter controlling the acceptance rate (see Proposition 3 and Section 3.3 for more details). Note that PRC applies accept-reject only on $z_t^i$, not on the entire trajectory.

3. If $z_t^i$ is rejected go to step 1.

4. The new incremental importance weight of the accepted sample is

$$\alpha_t(z_{1:t}^i) = c_t^i\, Z(z_{1:t-1}^{A_{t-1}^i}, x_{1:t}), \quad (3)$$

where $c_t^i$ is

$$c_t^i = \frac{p_\theta(x_t, z_t^i|x_{1:t-1}, z_{1:t-1}^{A_{t-1}^i})}{q_\phi(z_t^i|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})\, a_{\theta,\phi}(z_t^i|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})}, \quad (4)$$

and the intractable normalization constant $Z(\cdot)$ is (for simplicity of notation, we ignore the dependence of $Z(\cdot)$ on $M(i, t-1)$)

$$Z(z_{1:t-1}^{A_{t-1}^i}, x_{1:t}) = \int a_{\theta,\phi}(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})\, q_\phi(z_t|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})\, dz_t. \quad (5)$$

5. Compute the Monte-Carlo estimator of the unnormalized weight

$$\tilde{w}_t^i = \frac{p_\theta(x_t, z_t^i|x_{1:t-1}, z_{1:t-1}^{A_{t-1}^i})\; \frac{1}{K}\sum_{k=1}^{K} a_{\theta,\phi}(\delta_t^{i,k}|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})}{q_\phi(z_t^i|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})\, a_{\theta,\phi}(z_t^i|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})}, \quad (6)$$

where $\delta_t^{i,k} \sim q_\phi(z_t|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})$ and $k = 1, 2, \ldots, K$. Note that $\tilde{w}_t^i$ is essential for constructing an unbiased estimator of $p_\theta(x_{1:T})$.

6. Generate ancestor variables $A_t^i$ through the dice-enterprise and set new weights $w_t^i = N^{-1}$ for $i = 1, 2, \ldots, N$:

$$A_t^i \sim \text{Categorical}\left( \frac{\alpha_t(z_{1:t}^1)}{\sum_{j=1}^{N} \alpha_t(z_{1:t}^j)}, \frac{\alpha_t(z_{1:t}^2)}{\sum_{j=1}^{N} \alpha_t(z_{1:t}^j)}, \ldots, \frac{\alpha_t(z_{1:t}^N)}{\sum_{j=1}^{N} \alpha_t(z_{1:t}^j)} \right). \quad (7)$$" },
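The PRC step above (Eqs. 2-6) can be illustrated with a short numpy sketch for a one-dimensional toy case. Here `log_p` and `log_q` are user-supplied log-densities standing in for $\log p_\theta(x_t, z_t|\cdot)$ and $\log q_\phi(z_t|\cdot)$, and `M` plays the role of $M(i, t-1)$; the structure, not the model, is the point.

```python
import numpy as np

def prc_step(sample_q, log_q, log_p, M, K=3, rng=None):
    rng = rng or np.random.default_rng()
    # Rejection loop (steps 1-3): resample until the candidate is accepted.
    while True:
        z = sample_q(rng)
        a = 1.0 / (1.0 + M * np.exp(log_q(z) - log_p(z)))   # Eq. 2
        if rng.uniform() < a:
            break
    # K extra proposals give a Monte-Carlo estimate of Z (Eq. 5).
    deltas = [sample_q(rng) for _ in range(K)]
    Z_hat = np.mean([1.0 / (1.0 + M * np.exp(log_q(d) - log_p(d))) for d in deltas])
    c = np.exp(log_p(z) - log_q(z)) / a                      # Eq. 4
    w_tilde = np.exp(log_p(z) - log_q(z)) * Z_hat / a        # Eq. 6
    return z, c, w_tilde
```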
{ "heading": "2.2 DICE ENTERPRISE", "text": "Simulation of ancestor variables in Eq. 7 is non-trivial due to intractable normalization constants in the incremental importance weight (see Eq. 3). Vanilla Monte-Carlo estimation of $\alpha_t(\cdot)$ yields biased samples of ancestor variables from Eq. 7. To address this issue, we leverage a generalization of the Bernoulli factory, called dice-enterprise (Morina et al., 2019). Note that multinoulli extensions of the Bernoulli factory (Dughmi et al., 2017) have also been used for resampling within intractable SMC before (Schmon et al., 2019); a key distinction of our approach is to design a scalable Bernoulli factory methodology especially useful for VI applications.
Suppose we can simulate Bernoulli($p_t^i$) outcomes where $p_t^i$ is intractable. The Bernoulli factory problem is to simulate an event with probability $f(p_t^i)$, where $f(\cdot)$ is some desired function. In our case, the intractable coin probability $p_t^i$ is the intractable normalization constant,

$$p_t^i = Z(z_{1:t-1}^{A_{t-1}^i}, x_{1:t}) = \int a_{\theta,\phi}(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})\, q_\phi(z_t|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})\, dz_t. \quad (8)$$

Since $p_t^i \in [0, 1]$ and we can easily simulate this coin, we obtain the dice-enterprise algorithm below.

1. Required: constants $\{c_t^i\}_{i=1}^{N}$ (see Eq. 4).
2. Sample $C \sim \text{Categorical}\left( \frac{c_t^1}{\sum_{j=1}^{N} c_t^j}, \frac{c_t^2}{\sum_{j=1}^{N} c_t^j}, \ldots, \frac{c_t^N}{\sum_{j=1}^{N} c_t^j} \right)$
3. If $C = i$, generate $U_i \sim U[0, 1]$ and $z_t \sim q_\phi(z_t|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})$
   - If $U_i < a_{\theta,\phi}(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})$ output $i$
   - Else go to step 2

The dice-enterprise produces unbiased ancestor variables. Note that we can easily control the efficiency of the proposed dice-enterprise through the hyper-parameter $M$ (similar to Eq. 2), in contrast to existing Bernoulli factory algorithms (Schmon et al., 2019). For details on efficiency and correctness, please refer to Section 3.1 and Section 3.3.
Our proposed VSMC-PRC bound is constructed through a marginal likelihood estimator obtained by combining the SMC sampler with a PRC step and dice-enterprise. The variance of estimators obtained through the SMC-PRC particle filter is usually low (Peters et al., 2012). Therefore, we expect VSMC-PRC to be a tighter bound compared to the standard SMC based bounds used in recent works (Maddison et al., 2017; Naesseth et al., 2017; Le et al., 2017). Algorithm 1 summarizes the generative process to simulate the VSMC-PRC bound. Please see Figure 1 to visualize the generative process for VSMC-PRC.

Algorithm 1 Estimating the VSMC-PRC lower bound
1: Required: $N$, $K$, and $M$
2: for $t \in \{1, 2, \ldots, T\}$ do
3:   for $i \in \{1, 2, \ldots, N\}$ do
4:     $z_t^i, c_t^i, \tilde{w}_t^i \sim \text{PRC}(q, p, M(i, t-1))$
5:     $z_{1:t}^i = (z_{1:t-1}^{A_{t-1}^i}, z_t^i)$
6:   end for
7:   for $i \in \{1, 2, \ldots, N\}$ do
8:     $A_t^i = \text{DICE-ENT}(\{c_t^i, z_{1:t}^i\}_{i=1}^{N})$
9:   end for
10: end for
11: return $\log \prod_{t=1}^{T} \left( \frac{1}{N} \sum_{i=1}^{N} \tilde{w}_t^i \right)$
12:
13: PRC$(q, p, M(i, t-1))$
14: while sample not accepted do
15:   Generate $z_t^i \sim q_\phi(z_t|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})$
16:   Accept $z_t^i$ with probability $a_{\theta,\phi}(z_t^i|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})$
17: end while
18: Sample $\{\delta_t^{i,k}\}_{k=1}^{K} \sim q_\phi(z_t|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})$
19: Calculate $\tilde{w}_t^i$ from Eq. 6
20: Calculate $c_t^i$ from Eq. 4
21: return $(z_t^i, c_t^i, \tilde{w}_t^i)$
22:
23: DICE-ENT$(\{c_t^i, z_{1:t}^i\}_{i=1}^{N})$
24: Sample $C \sim \text{Multinoulli}\left( \frac{c_t^i}{\sum_{j=1}^{N} c_t^j} \right)_{i=1}^{N}$
25: if $C == i$ then
26:   Sample $U_i \sim U[0, 1]$
27:   $z_t^i \sim q_\phi(z_t|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})$
28: end if
29: if $U_i < a_{\theta,\phi}(z_t^i|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})$ then
30:   return $i$
31: else
32:   return DICE-ENT$(\{c_t^i, z_{1:t}^i\}_{i=1}^{N})$
33: end if" },
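A compact numpy sketch of the dice-enterprise loop above follows. It draws an ancestor index with probability proportional to $c_t^i\, Z_i$ even though each $Z_i$ is intractable, using only simulable accept coins; `samplers[i]` and `accept_prob[i]` are assumed user-supplied stand-ins for $q_\phi$ and $a_{\theta,\phi}$ of arm $i$.

```python
import numpy as np

def dice_enterprise(c, samplers, accept_prob, rng):
    c = np.asarray(c, dtype=float)
    probs = c / c.sum()
    while True:                       # geometric number of rounds (Prop. 1)
        i = rng.choice(len(c), p=probs)
        z = samplers[i](rng)
        if rng.uniform() < accept_prob[i](z):
            return i                  # unbiased: P(output = i) ∝ c_t^i * Z_i
```

The per-round success probability is $\sum_i c_t^i Z_i / \sum_m c_t^m$, which is exactly the geometric rate stated in Proposition 1 below.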
{ "heading": "3 PARTIAL REJECTION CONTROL BASED VI FOR SEQUENTIAL LATENT VARIABLE MODELS", "text": "We now show how to leverage PRC to develop a robust VI framework for sequential latent variable models. Our framework is based on the VSMC-PRC bound presented below. The complete sampling distribution of Algorithm 1 is as follows:

$$Q_{\text{VSMC-PRC}}\left( z_{1:T}^{1:N}, A_{1:T-1}^{1:N}, \delta_{1:T}^{1:N,1:K} \right) = \left( \prod_{k=1}^{K} \prod_{i=1}^{N} q_\phi(\delta_1^{i,k}|x_1) \prod_{t=2}^{T} \prod_{i=1}^{N} \prod_{k=1}^{K} q_\phi(\delta_t^{i,k}|x_{1:t}, z_{1:t-1}^{A_{t-1}^i}) \right) \times \left( \prod_{i=1}^{N} \frac{q_\phi(z_1^i|x_1)\, a_{\theta,\phi}(z_1^i|x_1)}{Z(x_1)} \prod_{t=1}^{T-1} \prod_{i=1}^{N} \text{Discrete}(A_t^i|\alpha_t)\, \frac{q_\phi(z_{t+1}^i|x_{1:t+1}, z_{1:t}^{A_t^i})\, a_{\theta,\phi}(z_{t+1}^i|z_{1:t}^{A_t^i}, x_{1:t+1})}{Z(z_{1:t}^{A_t^i}, x_{1:t+1})} \right). \quad (9)$$

The normalization constants $Z(\cdot)$ in Eq. 9 are intractable and have to be estimated while calculating the weights. Therefore, we introduce an extra parameter $K$, denoting the number of Monte-Carlo samples used to estimate $Z(\cdot)$. The Monte-Carlo estimator of the VSMC-PRC bound is

$$\hat{L}_{\text{VSMC-PRC}}(\theta, \phi; x_{1:T}, K) = \sum_{t=1}^{T} \log\left( \frac{1}{N} \sum_{i=1}^{N} \tilde{w}_t^i \right). \quad (10)$$

We maximize the VSMC-PRC bound with respect to model parameters $\theta$ and variational parameters $\phi$. This requires estimating the gradient, the details of which are provided in Section 3.2." }, { "heading": "3.1 THEORETICAL PROPERTIES", "text": "We now present properties of the Monte-Carlo estimator $\hat{L}_{\text{VSMC-PRC}}$. The key variables that affect this bound are $N$ (number of samples), the hyper-parameter $M$, and the number of Monte-Carlo samples used to compute the normalization constant $Z(\cdot)$, i.e., $K$. As discussed by Bérard et al. (2014); Naesseth et al. (2017), as $N$ increases, we expect the VSMC-PRC bound to get tighter. Hence, we will focus our attention on $M$ and $K$. All the proofs can be found in the appendix.

Proposition 1. The dice-enterprise produces unbiased ancestor variables. Further, let $\Lambda_t$ be the number of iterations required for generating one ancestor variable; then $\Lambda_t \sim \text{Geom}\left( E[\Lambda_t]^{-1} \right)$ where

$$E[\Lambda_t] = \frac{\sum_{i=1}^{N} c_t^i}{\sum_{i=1}^{N} c_t^i\, Z(z_{1:t-1}^{A_{t-1}^i}, x_{1:t})}.$$

As evident from Proposition 1, the computational efficiency of the dice-enterprise clearly relies on the normalization constant $Z(\cdot)$. Note that the value of $Z(\cdot)$ can be interpreted as the average acceptance rate of PRC, which depends on the hyper-parameter $M(i, t-1)$. If the average acceptance rate of PRC for all particles is $\gamma$, then we can express the expected number of iterations as $E[\Lambda_t^i] = \gamma^{-1}$. Therefore, the computational efficiency of the dice-enterprise is similar to the PRC step and depends crucially on the hyper-parameter $M$.

Proposition 2. For all $K$, $\exp(\hat{L}_{\text{VSMC-PRC}})$ is unbiased, i.e., $E\left[ \exp(\hat{L}_{\text{VSMC-PRC}}) \right] = p_\theta(x_{1:T})$. Further, $E[\hat{L}_{\text{VSMC-PRC}}]$ is non-decreasing in $K$.

The use of a Monte-Carlo estimator in place of the true value of $Z(\cdot)$ creates an inefficiency, as depicted by Proposition 2. The bound monotonically increases as we increase $K$ despite the use of the resampling operation. It is important to note that Algorithm 1 produces an unbiased estimator of the marginal likelihood for all values of $K$.

Proposition 3. Let the sampling distribution of the $i$-th particle (generated via PRC) at time $t$ be $r_{\theta,\phi}(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})$; then

$$\text{KL}\left( r_{\theta,\phi}(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t}) \,\|\, p_\theta(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t}) \right) \le \text{KL}\left( q_\phi(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t}) \,\|\, p_\theta(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t}) \right).$$

Proposition 3 implies that the use of the accept-reject mechanism within SMC refines the sampling distribution. Instead of accepting all samples, the PRC step ensures that only high-quality samples are accepted, leading to a tighter bound than VSMC in general (not always). We show in the appendix that when $M(i, t-1) \to \infty$, the PRC step reduces to pure rejection sampling (Robert & Casella, 2013). On the other hand, $M(i, t-1) \to 0$ implies that all samples are accepted from the proposal. Recall, $M(i, t-1)$ is a hyperparameter that can be tuned to control the acceptance rate. For more details on tuning $M$, see Section 3.3." },
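Proposition 2 can be sanity-checked numerically on a one-dimensional conjugate toy model where the marginal likelihood is known in closed form: $z \sim \mathcal{N}(0,1)$, $x|z \sim \mathcal{N}(z,1)$, so $p(x) = \mathcal{N}(x; 0, 2)$. With $T = N = 1$ the PRC estimator should match $p(x)$ in expectation for any $M$ and $K$; the proposal, constants, and sample count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x, M, K = 0.7, 2.0, 3

def log_p(z):   # log p(x, z) = log N(z; 0, 1) + log N(x; z, 1)
    return -0.5 * (z**2 + (x - z)**2) - np.log(2 * np.pi)

def log_q(z):   # proposal N(0, 1.5^2)
    return -0.5 * (z / 1.5)**2 - np.log(1.5 * np.sqrt(2 * np.pi))

def accept(z):  # Eq. 2 acceptance probability
    return 1.0 / (1.0 + M * np.exp(log_q(z) - log_p(z)))

est = []
for _ in range(100000):
    while True:                                  # PRC rejection loop
        z = rng.normal(0, 1.5)
        if rng.uniform() < accept(z):
            break
    Z_hat = np.mean([accept(rng.normal(0, 1.5)) for _ in range(K)])
    est.append(np.exp(log_p(z) - log_q(z)) * Z_hat / accept(z))   # Eq. 6

true_ml = np.exp(-0.25 * x**2) / np.sqrt(4 * np.pi)   # density of N(x; 0, 2)
print(np.mean(est), true_ml)                          # the two should roughly agree
```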
{ "heading": "3.2 GRADIENT ESTIMATION", "text": "For tuning the variational parameters, we use stochastic optimization. Algorithm 1 produces the marginal likelihood estimator by sequentially sampling the particles, ancestor variables, and particles for the normalization constant $(z_{1:T}^{1:N}, A_{1:T-1}^{1:N}, \delta_{1:T}^{1:N,1:K})$.
When the variational distribution $q_\phi(\cdot)$ is reparameterizable, we can make the sampling of $\delta_t^{i,k}$ independent of the model and variational parameters. However, the generated particles $z_t^i$ are not reparametrizable due to the PRC step. Finally, the ancestor variables are discrete and, therefore, cannot be reparameterized. The complete gradient can be divided into three core components (assuming $q_\phi(\cdot)$ is reparametrizable):

$$\nabla_{\theta,\phi} E[\hat{L}_{\text{VSMC-PRC}}] = E_{Q_{\text{VSMC-PRC}}}\left[ \nabla_{\theta,\phi} \hat{L}_{\text{VSMC-PRC}}(\theta, \phi; x_{1:T}, K) \right] + g_{\text{PRC}} + g_{\text{RSAMP}} \quad (11)$$
$$\approx E_{Q_{\text{VSMC-PRC}}}\left[ \nabla_{\theta,\phi} \hat{L}_{\text{VSMC-PRC}}(\theta, \phi; x_{1:T}, K) \right]. \quad (12)$$

Note that $g_{\text{PRC}}$ and $g_{\text{RSAMP}}$ denote the score gradients of the PRC and resampling steps, respectively. Due to high variance, we have ignored these terms for the optimization. We have derived the full gradient and explored the gradient variance issues in the appendix. Please see Figure 2 (left) comparing the convergence of biased gradients vs. unbiased gradients on a toy Gaussian SSM." }, { "heading": "3.3 LEARNING THE M MATRIX", "text": "We use $M$ as a hyperparameter for the PRC step, which controls the acceptance rate of the sampler. The basic scheme of tuning $M$ is as follows:

- Define a new random variable $F(z_{t+1}|z_{1:t}^{A_t^i}, x_{1:t+1}) = \log\left( \frac{q_\phi(z_{t+1}|x_{1:t+1}, z_{1:t}^{A_t^i})}{p_\theta(x_{t+1}, z_{t+1}|x_{1:t}, z_{1:t}^{A_t^i})} \right)$.
- Draw $z_{t+1}^j \sim q_\phi(z_{t+1}|x_{1:t+1}, z_{1:t}^{A_t^i})$ for $j = 1, 2, \ldots, J$.
- Evaluate the $\gamma \in [0, 1]$ quantile value of $\{F(z_{t+1}^j|z_{1:t}^{A_t^i}, x_{1:t+1})\}_{j=1}^{J}$. In general, for this case the acceptance rate would be around $\gamma$ for all particles (a code sketch follows this subsection):

$$\log M(i, t) = -Q_{F(z_{t+1}|z_{1:t}^{A_t^i}, x_{1:t+1})}(\gamma). \quad (13)$$

- If the $M$ matrix is very large, then use a common $\{M(\cdot, t)\}_{t=1}^{T}$ for every time-step. In general, for this configuration, the acceptance rate would be greater than or equal to $\gamma$ for all particles:

$$\log M(\cdot, t) = \min\left\{ -Q_{F(z_{t+1}|z_{1:t}^{A_t^i}, x_{1:t+1})}(\gamma) \right\}_{i=1}^{N}. \quad (14)$$

Through the user parameter $\gamma$, we can directly control the acceptance rate. Therefore, both the dice-enterprise and PRC would take around (less than) $\gamma^{-1}$ iterations to produce a sample for an $M$ value learned from Eq. 13 (see Eq. 14). For implementation details please refer to the experiments.
Note that a similar scheme was also employed in Grover et al. (2018). We update $\{\{M(i, t-1)\}_{i=1}^{N}\}_{t=1}^{T}$ dynamically once every $F$ epochs to save time. To learn more on setting the hyper-parameter $M$, see Liu et al. (1998); Peters et al. (2012)." },
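Here is a minimal numpy sketch of the quantile-based tuning of $\log M$ (Eq. 13) referenced in the list above: the $\gamma$-quantile of the log-ratio $F = \log q - \log p$ is computed from $J$ proposal draws, so that roughly a fraction $\gamma$ of proposed particles gets accepted. The density callables are user-supplied assumptions; the shared Eq. 14 variant is obtained by taking the minimum of this quantity over particles.

```python
import numpy as np

def tune_log_M(sample_q, log_q, log_p, gamma=0.8, J=1000, rng=None):
    rng = rng or np.random.default_rng()
    zs = [sample_q(rng) for _ in range(J)]
    F = np.array([log_q(z) - log_p(z) for z in zs])   # the random variable F
    return -np.quantile(F, gamma)                      # log M(i, t), Eq. 13
```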
{ "heading": "4 RELATED WORK AND SPECIAL CASES", "text": "There is significant recent interest in developing more expressive variational posteriors for latent variable models. There are two basic schemes for constructing tighter bounds on the log marginal likelihood: sampling-based methods (MCMC, rejection sampling) (Salimans et al., 2015; Ruiz & Titsias, 2019; Hoffman, 2017; Grover et al., 2018) or multiple samples from VI distributions to increase the flexibility (IS, SMC) (Burda et al., 2015; Maddison et al., 2017; Lawson et al., 2018; Naesseth et al., 2015). In this work, we present a unified framework for combining these two approaches, utilizing the best of both worlds. Although applying sampling-based methods on VI is useful, the density ratio between the true posterior and the improved density is often intractable. Therefore, we cannot take advantage of variance-reducing schemes like resampling, which is crucial for sequential models. We solve this issue through the dice-enterprise: an extension of the Bernoulli factory.
Recently, the Bernoulli factory has attracted great interest in the area of Bayesian inference (Gonçalves et al., 2017a;b; Vats et al., 2020). Although the Bernoulli factory is theoretically valuable, its applicability is severely limited due to a high rejection rate. In this paper, we have presented an approach that combines SMC with the dice-enterprise for efficient implementation. A closely related work from the SMC literature is Schmon et al. (2019), which also utilizes Bernoulli factories to implement unbiased resampling. However, their method is not scalable and is designed particularly for partially observed diffusions. Another relevant work for unbiased estimation of the marginal likelihood is that of Kudlicka et al. (2020). In contrast to our approach, this method samples one additional particle and keeps track of the number of steps required by PRC for every time-step to obtain their unbiased estimator. The weights are tractable for Kudlicka et al. (2020) as they do not take into account the effect of the normalization constant $Z(\cdot)$. On the other hand, we consider the effect of $Z(\cdot)$ on the particle's weight, making the resampling operation infeasible. To fix this intractability, we use the dice-enterprise. Comparisons of our marginal likelihood estimator versus that of Kudlicka et al. (2020) would make for interesting future work.
To provide more clarity, we will consider some special cases of the VSMC-PRC bound and relate it to existing work. Note that for $N = 1$ our method reduces to a special case of Gummadi (2014), which uses a constraint function $C_t$ for every time-step and restarts the particle trajectory from $\Delta_t$ (if $C_t$ is violated). Therefore, if we use the setting $C_t(z_{1:t}) = a(z_t|z_{1:t-1}, x_{1:t})$ and $\Delta_t = t-1$, our method reduces to a specific case of Gummadi (2014). For the special case of $N = 1$ and $T = 1$, our method reduces to VRS (Grover et al., 2018). For $N, T > 1$, if we remove the PRC step, our bound reduces to FIVO (Maddison et al., 2017). Finally, if we remove both the PRC step and resampling, then our method effectively reduces to IWAE (Burda et al., 2015). Please refer to Figure 1 for more details." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we evaluate our proposed algorithm on synthetic as well as real-world datasets and compare it with relevant baselines. For the synthetic data experiment, we implement our method on a Gaussian SSM and compare our approach with VSMC (Naesseth et al., 2017). For the real data experiment, we train a VRNN (Chung et al., 2015) on the polyphonic music dataset." }, { "heading": "5.1 GAUSSIAN STATE SPACE MODEL", "text": "In this experiment, we study the linear Gaussian state space model. Consider the model

$$z_t = A z_{t-1} + e_z, \qquad x_t = C z_t + e_x,$$

where $e_z, e_x \sim \mathcal{N}(0, I)$ and $z_0 = 0$. We are interested in learning a good proposal for the above model. The latent variable is denoted by $z_t$ and the observed data by $x_t$. Let the dimension of $z_t$ be $d_z$ and the dimension of $x_t$ be $d_x$. The matrix $A$ has the elements $(A)_{i,j} = \alpha^{|i-j|+1}$, for $\alpha = 0.42$. We explore different settings of $d_z$, $d_x$, and the matrix $C$. A sparse version of the $C$ matrix measures the first $d_x$ components of $z_t$; a dense version of $C$ has normally distributed entries, i.e., $C_{i,j} \sim \mathcal{N}(0, 1)$. We consider four different configurations for the experiment.
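For reference, the data-generating process of this linear Gaussian SSM is a few lines of numpy. The dimensions below are illustrative choices, not the paper's exact configurations; the sparse $C$ variant observing the first $d_x$ coordinates is shown.

```python
import numpy as np

rng = np.random.default_rng(0)
dz, dx, T, alpha = 5, 3, 10, 0.42

# (A)_{i,j} = alpha^{|i-j|+1}
A = np.array([[alpha ** (abs(i - j) + 1) for j in range(dz)] for i in range(dz)])
C = np.eye(dx, dz)                       # sparse C: observe first dx components

z = np.zeros(dz)
zs, xs = [], []
for t in range(T):
    z = A @ z + rng.standard_normal(dz)  # z_t = A z_{t-1} + e_z
    x = C @ z + rng.standard_normal(dx)  # x_t = C z_t + e_x
    zs.append(z); xs.append(x)
```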
For more details please refer to Figure 2.
The variational distribution is a multivariate Gaussian with unknown mean vector $\mu = \{\mu_d\}_{d=1}^{d_z}$ and diagonal covariance matrix parameterized by $\{\log \sigma_d^2\}_{d=1}^{d_z}$. We set $N = 4$ and $T = 10$ for all the cases:

$$q(z_t|z_{t-1}) \sim \mathcal{N}\left( z_t \,|\, A z_{t-1} + \mu,\, \text{diag}(\sigma^2) \right).$$

The $\{\{M(i, t-1)\}_{i=1}^{N}\}_{t=1}^{T}$ matrix (see Eq. 13) for approximate rejection sampling is updated once every 10 epochs with acceptance rate $\gamma \in \{0.8, 0.4\}$. For estimating the intractable normalization constants, we generate $K = 3$ samples. Figure 2 (left) compares the convergence of the biased gradient vs. unbiased gradients. Note that we get a much tighter bound compared to VSMC (Naesseth et al., 2017)." }, { "heading": "5.2 VARIATIONAL RNN", "text": "VRNN (Chung et al., 2015) comprises three core components: the observation $x_t$, the stochastic latent state $z_t$, and a deterministic hidden state $h_t(z_{t-1}, x_{t-1}, h_{t-1})$, which is modeled through an RNN. For the experiments, we use a single-layer LSTM for modeling the hidden state. The conditional distributions $p_t(z_t|\cdot)$ and $q_t(z_t|\cdot)$ are assumed to be factorized Gaussians, parametrized by a single-layer neural net. The output distribution $g_t(x_t|\cdot)$ depends on the dataset. For a fair comparison, we use the same model setting as employed in FIVO (Maddison et al., 2017). We evaluate our model on four polyphonic music datasets: Nottingham, JSB chorales, Musedata, and Piano-midi.de.
Each observation $x_t$ is represented as a binary vector of 88 dimensions. Therefore, we model the observation distribution $g_t(x_t|\cdot)$ by a set of 88 factorized Bernoulli variables. We split all four datasets into the standard train, validation, and test sets. For tuning the learning rate, we use the validation set. For a fair comparison, we use the same learning rate and iterations for all the models. Let the dimension of the hidden state (learned by the single-layer LSTM) be $d_h$ and the dimension of the latent variable be $d_z$. We choose the setting $d_z = d_h = 64$ for all the datasets except JSB. For modeling JSB, we use $d_z = d_h = 32$. For VSMC-PRC we have considered $N \in \{4, 6\}$. Further, for each $N$, we consider four settings $(K, \gamma) \in \{(1, 0.9), (1, 0.8), (3, 0.9), (3, 0.8)\}$. The $M$ hyper-parameter for the PRC step is learned from Eq. 14 due to its large size. We update the $M$ value once every 50 epochs. Note that in this scenario, the acceptance rate for all particles would be greater than or equal to $\gamma$. For more details on the experiments, please refer to the appendix.
As discussed in Section 3.1, the PRC step and dice-enterprise have time complexity $O(N/\gamma)$ for producing $N$ samples (assuming average acceptance rate $\gamma$). Therefore, we consider $\lceil N \gamma^{-1} \rceil$ particles for IWAE and FIVO to ensure effectively the same number of particles, where $N \in \{4, 6\}$ and $\gamma = 0.8$. Note, however, that the acceptance rate is $\ge \gamma$, so this adjustment actually favors the other approaches more. For FIVO, we perform resampling when the ESS falls below $N/2$. Table 1 summarizes the results, which show whether rejecting samples provides any benefit; as the results show, our approach, even with the aforementioned adjustment, outperforms the other approaches in terms of test log-likelihoods, while still having a similar computational cost.
In Sec. 3.1, we discussed the effect of $K$ and the PRC rejection rate on the VSMC-PRC bound. We expect a performance improvement when $K$ and the rejection rate are increased. Although the results for VSMC-PRC's different configurations are almost the same, we still get the best average ranking for $(K = 3, \gamma = 0.8)$.
Overall, in most cases, the VSMC-PRC bound performs better than FIVO (Maddison et al., 2017) and IWAE (Burda et al., 2015) for a variety of configurations.
In VSMC-PRC, the improvement in the bound value comes at the cost of estimating the normalization constant $Z(\cdot)$, i.e., at the cost of $K$. On further inspection, we can clearly see that increasing $K$ does not provide any substantial benefit despite the increase in computational cost. Therefore, to maintain the computational trade-off, $(K = 1, \gamma > 0.8)$ seems to be a reasonable choice for VI practitioners.
Table 1 indicates that rejecting samples with low importance weight is better than keeping a large number of particles (at least for a reasonably high acceptance rate $\gamma$). The proposed bound uses more particles (PRC step and dice-enterprise) than existing approaches like FIVO and IWAE due to intractability. Future work aims at designing a scalable implementation of the VSMC-PRC bound that consumes fewer particles." }, { "heading": "6 CONCLUSION", "text": "We introduced VSMC-PRC, a novel bound that combines SMC and partial rejection sampling with VI in a synergistic manner. This results in a robust VI procedure for sequential latent variable models. Instead of using standard sampling algorithms, we have employed a partial sampling scheme suitable for high-dimensional sequences. Our experimental results clearly demonstrate that VSMC-PRC outperforms existing bounds like IWAE (Burda et al., 2015) and standard particle filter bounds (Maddison et al., 2017; Naesseth et al., 2017; Le et al., 2017). Future work will explore partial versions of powerful sampling algorithms like Hamiltonian Monte Carlo (Neal et al., 2011) instead of rejection sampling." }, { "heading": "A PROOF OF THEORETICAL RESULTS", "text": "Proof of Proposition 1: The dice-enterprise produces unbiased ancestor variables. Let us evaluate the probability that the dice-enterprise outputs $i$ as the ancestor index. Assume that the algorithm terminates after $r$ steps, where $r \in \{1, 2, \ldots, \infty\}$; then the probability of output $i$ is

$$\Pr(\text{output} = i) = \sum_{r=1}^{\infty} \Pr(\text{output} = i \mid \text{after } r \text{ steps}) = \frac{c_t^i\, Z(z_{1:t-1}^{A_{t-1}^i}, x_{1:t})}{\sum_{m=1}^{N} c_t^m} \sum_{r=1}^{\infty} \left( \frac{\sum_{j=1}^{N} c_t^j \left( 1 - Z(z_{1:t-1}^{A_{t-1}^j}, x_{1:t}) \right)}{\sum_{m=1}^{N} c_t^m} \right)^{r-1} = \frac{c_t^i\, Z(z_{1:t-1}^{A_{t-1}^i}, x_{1:t})}{\sum_{j=1}^{N} c_t^j\, Z(z_{1:t-1}^{A_{t-1}^j}, x_{1:t})}.$$

It is easy to see that $\Lambda_t$ is geometrically distributed with success probability

$$\Pr(\text{getting an output in a loop}) = \frac{\sum_{i=1}^{N} c_t^i\, Z(z_{1:t-1}^{A_{t-1}^i}, x_{1:t})}{\sum_{m=1}^{N} c_t^m}.$$

Proof of Proposition 2: Before explaining the proof, we first introduce $\hat{Z}$, the Monte-Carlo estimator of the unknown normalization constant $Z(\cdot)$. Since we are using $K$ samples,

$$\hat{Z}(x_{1:t}, z_{1:t-1}^{A_{t-1}^i}; K) = \frac{1}{K} \sum_{k=1}^{K} a_{\theta,\phi}(\delta_t^{i,k}|x_{1:t}, z_{1:t-1}^{A_{t-1}^i}). \quad (15)$$

Algorithm 1 produces an unbiased estimator of the marginal likelihood.
We first integrate out $\delta_{1:T}^{1:N,1:K}$ from the marginal likelihood estimator:

$$E_{Q_{\text{VSMC-PRC}}}\left[ \exp(\hat{L}_{\text{VSMC-PRC}}) \right] = \sum_{A_{1:T-1}^{1:N}} \int \prod_{t=1}^{T} \frac{1}{N} \sum_{i=1}^{N} \frac{p_\theta(x_t, z_t^i|x_{1:t-1}, z_{1:t-1}^{A_{t-1}^i})\, \hat{Z}(x_{1:t}, z_{1:t-1}^{A_{t-1}^i}; K)}{q_\phi(z_t^i|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})\, a_{\theta,\phi}(z_t^i|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})}\; Q_{\text{VSMC-PRC}}\left( z_{1:T}^{1:N}, A_{1:T-1}^{1:N}, \delta_{1:T}^{1:N,1:K} \right) dz_{1:T}^{1:N}\, d\delta_{1:T}^{1:N,1:K}.$$

Integrating out each $\delta_t^{i,k}$ under $\prod_{k=1}^{K} q_\phi(\delta_t^{i,k}|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})$ replaces $\hat{Z}(\cdot; K)$ by $Z(\cdot)$, and since $q_\phi(z_t^i|\cdot)\, a_{\theta,\phi}(z_t^i|\cdot) / Z(\cdot) = r_{\theta,\phi}(z_t^i|\cdot)$, we obtain

$$= \sum_{A_{1:T-1}^{1:N}} \int \prod_{t=1}^{T} \frac{1}{N} \sum_{i=1}^{N} \frac{p_\theta(x_t, z_t^i|x_{1:t-1}, z_{1:t-1}^{A_{t-1}^i})}{r_{\theta,\phi}(z_t^i|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})} \left( \prod_{i=1}^{N} r_{\theta,\phi}(z_1^i|x_1) \prod_{t=1}^{T-1} \prod_{i=1}^{N} \text{Discrete}(A_t^i|\alpha_t)\, r_{\theta,\phi}(z_{t+1}^i|x_{1:t+1}, z_{1:t}^{A_t^i}) \right) dz_{1:T}^{1:N} = E\left[ \prod_{t=1}^{T} \frac{1}{N} \sum_{i=1}^{N} w_t^i \right]. \quad (16)$$

Note that $r_{\theta,\phi}(z_t^i|\cdot)$ is the sampling density of PRC. It is easy to see that Eq. 16 is a standard SMC estimator of the marginal likelihood $p_\theta(x_{1:T})$. The proof is quite standard in the SMC literature and can be found in Naesseth et al. (2017; 2019). The key factor that makes our bound unbiased is the ability to produce unbiased ancestor samples despite the presence of the intractable normalization constant. Using Jensen's inequality, we can easily show that the VSMC-PRC bound is smaller than the log marginal likelihood:

$$E\left[ \hat{L}_{\text{VSMC-PRC}}; K \right] = \int \sum_{t=1}^{T} \log \frac{1}{N} \sum_{i=1}^{N} \frac{p_\theta(x_t, z_t^i|x_{1:t-1}, z_{1:t-1}^{A_{t-1}^i})\, \hat{Z}(x_{1:t}, z_{1:t-1}^{A_{t-1}^i}; K)}{q_\phi(z_t^i|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})\, a_{\theta,\phi}(z_t^i|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})}\; Q_{\text{VSMC-PRC}}\left( z_{1:T}^{1:N}, A_{1:T-1}^{1:N}, \delta_{1:T}^{1:N,1:K} \right) dz_{1:T}^{1:N}\, d\delta_{1:T}^{1:N,1:K}\, dA_{1:T-1}^{1:N} \le \log p_\theta(x_{1:T}). \quad (17)$$

We now show that $E[\hat{L}_{\text{VSMC-PRC}}; K]$ is non-decreasing in $K$. Define a collection of subsets $\{\{I_{i,t}\}_{i=1}^{N}\}_{t=1}^{T} \subset \{1, 2, \ldots, K\}$ having elements $\{i_1, i_2, \ldots, i_m\}$ randomly drawn from the set $\{1, 2, \ldots, K\}$, of length $m \le K$. We can easily show the following expectation:

$$\hat{Z}(x_{1:t}, z_{1:t-1}^{A_{t-1}^i}; K) = E_{I_{i,t} = \{i_1, i_2, \ldots, i_m\}}\left[ \hat{Z}(x_{1:t}, z_{1:t-1}^{A_{t-1}^i}; m) \right] = \frac{1}{K} \sum_{k=1}^{K} a_{\theta,\phi}(\delta_t^{i,k}|x_{1:t}, z_{1:t-1}^{A_{t-1}^i}). \quad (18)$$

Substituting this expectation into Eq. 17 and using Jensen's inequality, i.e., $E[\log X] \le \log E[X]$, completes the proof:

$$E\left[ \hat{L}_{\text{VSMC-PRC}}; K \right] = E\left[ \sum_{t=1}^{T} \log E_{\{\{I_{i,t}\}\}}\left[ \frac{1}{N} \sum_{i=1}^{N} \frac{p_\theta(x_t, z_t^i|\cdot)\, \hat{Z}(\cdot; m)}{q_\phi(z_t^i|\cdot)\, a_{\theta,\phi}(z_t^i|\cdot)} \right] \right] \ge E\left[ E_{\{\{I_{i,t}\}\}}\left[ \sum_{t=1}^{T} \log \frac{1}{N} \sum_{i=1}^{N} \frac{p_\theta(x_t, z_t^i|\cdot)\, \hat{Z}(\cdot; m)}{q_\phi(z_t^i|\cdot)\, a_{\theta,\phi}(z_t^i|\cdot)} \right] \right] \ge E\left[ \hat{L}_{\text{VSMC-PRC}}; m \right].$$

Now we examine what happens in the limit $K \to \infty$.
Using the dominated convergence theorem, we can write the estimator as follows:

$$\lim_{K \to +\infty} E[\hat{L}_{\text{VSMC-PRC}}] = \lim_{K \to +\infty} E\left[ \sum_{t=1}^{T} \log \frac{1}{N} \sum_{i=1}^{N} \frac{p_\theta(x_t, z_t^i|\cdot)\, \hat{Z}(\cdot; K)}{q_\phi(z_t^i|\cdot)\, a_{\theta,\phi}(z_t^i|\cdot)} \right] = E\left[ \sum_{t=1}^{T} \log \frac{1}{N} \sum_{i=1}^{N} \frac{p_\theta(x_t, z_t^i|\cdot)\, \lim_{K \to +\infty} \hat{Z}(\cdot; K)}{q_\phi(z_t^i|\cdot)\, a_{\theta,\phi}(z_t^i|\cdot)} \right] = E\left[ \sum_{t=1}^{T} \log \frac{1}{N} \sum_{i=1}^{N} \frac{p_\theta(x_t, z_t^i|\cdot)\, Z(\cdot)}{q_\phi(z_t^i|\cdot)\, a_{\theta,\phi}(z_t^i|\cdot)} \right] \le \log p_\theta(x_{1:T}),$$

where all conditioning is as in Eq. 17.

Proof of Proposition 3: We show that the PRC step refines the learned distribution:

$$\mathrm{KL}\left( r(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t}) \,\|\, p(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t}) \right) = \int \log \frac{r(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})}{p(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})}\, r(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})\, dz_t = \int \log \frac{q(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})\, a(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})}{p(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})\, Z(x_{1:t}, z_{1:t-1}^{A_{t-1}^i})}\, r(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})\, dz_t$$
$$\le \int \log \frac{q(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})\, a(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})}{p(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})\, Z(x_{1:t}, z_{1:t-1}^{A_{t-1}^i})}\, q(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})\, dz_t \le \mathrm{KL}\left( q \,\|\, p \right) + \int \log \frac{a(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})}{Z(x_{1:t}, z_{1:t-1}^{A_{t-1}^i})}\, q(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})\, dz_t \le \mathrm{KL}\left( q \,\|\, p \right).$$

For the first inequality, we use the property of negatively correlated random variables: the two random variables $X = \log\left( \frac{r(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})}{p(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})} \right)$ and $Y = a(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})$ are negatively correlated, and for negatively correlated variables the identity $E[XY] \le E[X]E[Y]$ holds. Further, we use Jensen's inequality to show that $E[\log a(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})] \le \log Z(x_{1:t}, z_{1:t-1}^{A_{t-1}^i})$.

Case 1: $M(i, t-1) \to 0$ implies $r(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t}) \to q(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})$. In this situation all samples are accepted. Hence, we can express the sampling distribution as

$$r(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t}) = q(z_t^i|x_{1:t}, z_{1:t-1}^{A_{t-1}^i}). \quad (19)$$

Case 2: $M(i, t-1) \to \infty$ implies $r(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t}) \to p(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t})$. In this case, the acceptance probability reduces to that of standard rejection sampling, so the sampling distribution becomes equal to the true posterior:

$$r(z_t|z_{1:t-1}^{A_{t-1}^i}, x_{1:t}) = \frac{q(z_t^i|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})\, \frac{p(z_t^i, x_t|x_{1:t-1}, z_{1:t-1}^{A_{t-1}^i})}{M(i, t-1)\, q(z_t^i|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})}}{\int q(z_t^i|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})\, \frac{p(z_t^i, x_t|x_{1:t-1}, z_{1:t-1}^{A_{t-1}^i})}{M(i, t-1)\, q(z_t^i|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})}\, dz_t^i} = p(z_t^i|x_{1:t}, z_{1:t-1}^{A_{t-1}^i}).$$" }, { "heading": "B GRADIENT ESTIMATION", "text": "In this section, we derive the unbiased gradients of the Monte-Carlo estimate $E[\hat{L}_{\text{VSMC-PRC}}]$. Note that we can express the complete gradient through three core components (assuming $q(\cdot)$ is reparametrizable):

$$\nabla_{\theta,\phi} E[\hat{L}_{\text{VSMC-PRC}}] = g_{\text{rep}} + g_{\text{PRC}} + g_{\text{RSAMP}},$$
$$g_{\text{rep}} = E_{Q_{\text{VSMC-PRC}}}\left[ \nabla_{\theta,\phi} \hat{L}_{\text{VSMC-PRC}}(\theta, \phi; x_{1:T}, K) \right],$$
$$g_{\text{PRC}} = E_{Q_{\text{VSMC-PRC}}}\left[ \hat{L}_{\text{VSMC-PRC}}(\theta, \phi; x_{1:T}, K)\, \nabla_{\theta,\phi} \sum_{i=1}^{N} \sum_{t=1}^{T} \log \frac{a(z_t^i|x_{1:t}, z_{1:t-1}^{A_{t-1}^i})}{Z(x_{1:t}, z_{1:t-1}^{A_{t-1}^i})} \right],$$
$$g_{\text{RSAMP}} = E_{Q_{\text{VSMC-PRC}}}\left[ \hat{L}_{\text{VSMC-PRC}}(\theta, \phi; x_{1:T}, K)\, \nabla_{\theta,\phi} \sum_{i=1}^{N} \sum_{t=1}^{T-1} A_t^i \log \alpha_t \right].$$

Note that for both $g_{\text{PRC}}$ and $g_{\text{RSAMP}}$, unbiased score gradient estimates are not available due to intractability. Therefore, we used Monte-Carlo samples to estimate the log density for Figure 3." }, { "heading": "C EXPERIMENTAL SETUP", "text": "For the real data experiment, we train a VRNN (Chung et al., 2015) on the polyphonic music dataset. Polyphonic music comprises four datasets: Nottingham, JSB chorales, Musedata, and Piano-midi.de.
Each dataset was divided into standard train, validation, and test sets.
The validation data was used to tune the learning rate: we picked the learning rate from the following set: $\{3 \times 10^{-4}, 1 \times 10^{-4}, 3 \times 10^{-5}, 1 \times 10^{-5}\}$; instead of optimizing for each method, we picked the learning rate at which the FIVO (Maddison et al., 2017) validation performance is best. Once the learning rate is decided, we ran every method for the same number of iterations to ensure uniformity. We use a single-layer LSTM for modeling the hidden state, having dimension $d_h$. For a length-$T$ sequence, the variational distribution and joint data likelihood are defined as follows:

$$r(z_{1:T}|x_{1:T}) = \frac{\prod_{t=1}^{T} q_t(z_t|h_t(z_{t-1}, x_{t-1}, h_{t-1}), x_t)\, a_t(z_t|h_t(z_{t-1}, x_{t-1}, h_{t-1}), x_t)}{\prod_{t=1}^{T} Z_t(h_t(z_{t-1}, x_{t-1}, h_{t-1}), x_t)}.$$

Note that $a_t(z_t|\cdot)$ is the acceptance probability for the PRC step and $Z_t(\cdot)$ is the intractable normalization constant. Similarly, we can write down the joint data likelihood as

$$p(z_{1:T}, x_{1:T}) = \prod_{t=1}^{T} p_t(z_t|h_t(z_{t-1}, x_{t-1}, h_{t-1}), x_t)\, g_t(x_t|h_t(z_{t-1}, x_{t-1}, h_{t-1}), z_t).$$

The conditional distributions $p_t(z_t|\cdot)$ and $q_t(z_t|\cdot)$ are assumed to be factorized Gaussians, where the dimension of the latent variable $z_t$ is $d_z$. Note that the conditional densities are parametrized by fully connected neural networks with a single layer of size $d_h$. The output distribution $g_t(x_t|\cdot)$ is modeled by a set of 88 iid Bernoulli variables. Please see Table 2 for more implementation details. The unknown weights and biases are initialized using Xavier initialization. For setting up the optimization we used a batch size of 4 with the Adam optimizer. The unknown hyperparameter $M$ is updated once every 50 epochs through Eq. 14 to save time." } ]
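A minimal PyTorch sketch of one VRNN step as described in the setup above: an LSTM cell carries the deterministic state $h_t$, single-layer nets produce the factorized-Gaussian prior $p_t$ and posterior $q_t$, and an 88-dimensional Bernoulli head models each piano-roll frame. The default sizes follow the paper ($d_z = d_h = 64$); everything else (names, wiring) is an illustrative simplification, and the PRC accept-reject around $q_t$ is omitted.

```python
import torch
import torch.nn as nn

class VRNNStep(nn.Module):
    def __init__(self, d_x=88, d_z=64, d_h=64):
        super().__init__()
        self.rnn = nn.LSTMCell(d_x + d_z, d_h)      # h_t(z_{t-1}, x_{t-1}, h_{t-1})
        self.prior = nn.Linear(d_h, 2 * d_z)        # -> (mu_p, log sigma_p^2)
        self.posterior = nn.Linear(d_h + d_x, 2 * d_z)
        self.emission = nn.Linear(d_h + d_z, d_x)   # Bernoulli logits for g_t

    def forward(self, x_t, z_prev, x_prev, state):
        h, c = self.rnn(torch.cat([x_prev, z_prev], -1), state)
        mu_p, logvar_p = self.prior(h).chunk(2, -1)
        mu_q, logvar_q = self.posterior(torch.cat([h, x_t], -1)).chunk(2, -1)
        z_t = mu_q + torch.exp(0.5 * logvar_q) * torch.randn_like(mu_q)
        logits = self.emission(torch.cat([h, z_t], -1))
        return z_t, logits, (mu_p, logvar_p, mu_q, logvar_q), (h, c)
```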
2,020
null
SP:798bd74ff3a9e5b08eba7c3d90b5c85494cb48a8
[ "This paper presents a text-to-speech synthesis system, called Flowtron which uses a normalizing flow to generate a sequence of mel-spectrogram frames. The difference between the proposed Flowtron and the previously prosed flow-based methods is, the authors argue as the main contributions, its ability to produce more diverse and expressive speech samples of specific speech attributes by sampling from the latent space. Evaluation is done using the two public datasets, and a number of experiments are performed to show that the proposed method not only achieves a good MOS score in general but also generates speech samples with variation, including the style transfer." ]
In this paper we propose Flowtron: an autoregressive flow-based generative network for text-to-speech synthesis with style transfer and speech variation. Flowtron borrows insights from Autoregressive Flows and revamps Tacotron 2 in order to provide high-quality and expressive mel-spectrogram synthesis. Flowtron is optimized by maximizing the likelihood of the training data, which makes training simple and stable. Flowtron learns an invertible mapping of data to a latent space that can be used to modulate many aspects of speech synthesis (timbre, expressivity, accent). Our mean opinion scores (MOS) show that Flowtron matches state-of-the-art TTS models in terms of speech quality. We provide results on speech variation, interpolation over time between samples and style transfer between seen and unseen speakers. Code and pre-trained models are publicly available at https://github.com/NVIDIA/flowtron.
[ { "affiliations": [], "name": "Rafael Valle" }, { "affiliations": [], "name": "Kevin J. Shih" }, { "affiliations": [], "name": "Ryan Prenger" } ]
[ { "authors": [ "REFERENCES Kei Akuzawa", "Yusuke Iwasawa", "Yutaka Matsuo" ], "title": "Expressive speech synthesis via modeling expressions with variational autoencoder", "venue": "arXiv preprint arXiv:1804.02135,", "year": 2018 }, { "authors": [ "Sercan Arik", "Mike Chrzanowski", "Adam Coates", "Gregory Diamos", "Andrew Gibiansky", "Yongguo Kang", "Xian Li", "John Miller", "Andrew Ng", "Jonathan Raiman" ], "title": "Deep voice: Real-time neural text-to-speech", "venue": "arXiv preprint arXiv:1702.07825,", "year": 2017 }, { "authors": [ "Sercan Arik", "Gregory Diamos", "Andrew Gibiansky", "John Miller", "Kainan Peng", "Wei Ping", "Jonathan Raiman", "Yanqi Zhou" ], "title": "Deep voice 2: Multi-speaker neural text-to-speech", "venue": "arXiv preprint arXiv:1705.08947,", "year": 2017 }, { "authors": [ "Mikołaj Bińkowski", "Jeff Donahue", "Sander Dieleman", "Aidan Clark", "Erich Elsen", "Norman Casagrande", "Luis C Cobo", "Karen Simonyan" ], "title": "High fidelity speech synthesis with adversarial networks", "venue": null, "year": 1909 }, { "authors": [ "Alain De Cheveigné", "Hideki Kawahara" ], "title": "Yin, a fundamental frequency estimator for speech and music", "venue": "The Journal of the Acoustical Society of America,", "year": 1917 }, { "authors": [ "Laurent Dinh", "David Krueger", "Yoshua Bengio" ], "title": "Nice: Non-linear independent components estimation", "venue": "arXiv preprint arXiv:1410.8516,", "year": 2014 }, { "authors": [ "Laurent Dinh", "Jascha Sohl-Dickstein", "Samy Bengio" ], "title": "Density estimation using real nvp", "venue": "arXiv preprint arXiv:1605.08803,", "year": 2016 }, { "authors": [ "Andrew Gambardella", "Atılım Güneş Baydin", "Philip HS Torr" ], "title": "Transflow learning: Repurposing flow models without retraining", "venue": "arXiv preprint arXiv:1911.13270,", "year": 2019 }, { "authors": [ "Raza Habib", "Soroosh Mariooryad", "Matt Shannon", "Eric Battenberg", "RJ Skerry-Ryan", "Daisy Stanton", "David Kao", "Tom Bagby" ], "title": "Semi-supervised generative modeling for controllable speech synthesis", "venue": null, "year": 1910 }, { "authors": [ "Wei-Ning Hsu", "Yu Zhang", "Ron J Weiss", "Heiga Zen", "Yonghui Wu", "Yuxuan Wang", "Yuan Cao", "Ye Jia", "Zhifeng Chen", "Jonathan Shen" ], "title": "Hierarchical generative modeling for controllable speech synthesis", "venue": "arXiv preprint arXiv:1810.07217,", "year": 2018 }, { "authors": [ "Chin-Wei Huang", "David Krueger", "Alexandre Lacoste", "Aaron Courville" ], "title": "Neural autoregressive flows", "venue": "arXiv preprint arXiv:1804.00779,", "year": 2018 }, { "authors": [ "Pavel Izmailov", "Polina Kirichenko", "Marc Finzi", "Andrew Gordon Wilson" ], "title": "Semi-supervised learning with normalizing flows", "venue": "arXiv preprint arXiv:1912.13025,", "year": 2019 }, { "authors": [ "Jaehyeon Kim", "Sungwon Kim", "Jungil Kong", "Sungroh Yoon" ], "title": "Glow-tts: A generative flow for text-to-speech via monotonic alignment search", "venue": "arXiv preprint arXiv:2005.11129,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Prafulla Dhariwal" ], "title": "Glow: Generative flow with invertible 1x1 convolutions", "venue": "arXiv preprint arXiv:1807.03039,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Tim Salimans", "Rafal Jozefowicz", "Xi Chen", "Ilya Sutskever", "Max Welling" ], "title": "Improved variational 
inference with inverse autoregressive flow", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Steven R Livingstone", "Frank A Russo" ], "title": "The ryerson audio-visual database of emotional speech and song (ravdess): A dynamic, multimodal set of facial and vocal expressions in north american english", "venue": "PloS one,", "year": 2018 }, { "authors": [ "Mario Lucic", "Karol Kurach", "Marcin Michalski", "Sylvain Gelly", "Olivier Bousquet" ], "title": "Are gans created equal? a large-scale study", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "C. Miao", "S. Liang", "M. Chen", "J. Ma", "S. Wang", "J. Xiao" ], "title": "Flow-tts: A non-autoregressive network for text to speech based on flow", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Wei Ping", "Kainan Peng", "Andrew Gibiansky", "Sercan Arik", "Ajay Kannan", "Sharan Narang", "Jonathan Raiman", "John Miller" ], "title": "Deep voice 3: 2000-speaker neural text-to-speech", "venue": "arXiv preprint arXiv:1710.07654,", "year": 2017 }, { "authors": [ "Ryan Prenger", "Rafael Valle", "Bryan Catanzaro" ], "title": "Waveglow: A flow-based generative network for speech synthesis", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2019 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "arXiv preprint arXiv:1511.06434,", "year": 2015 }, { "authors": [ "Jonathan Shen", "Ruoming Pang", "Ron J Weiss", "Mike Schuster", "Navdeep Jaitly", "Zongheng Yang", "Zhifeng Chen", "Yu Zhang", "Yuxuan Wang", "RJ Skerry-Ryan" ], "title": "Natural tts synthesis by conditioning wavenet on mel spectrogram predictions", "venue": "arXiv preprint arXiv:1712.05884,", "year": 2017 }, { "authors": [ "Robert H Shumway", "David S Stoffer" ], "title": "Time series analysis and its applications: with R", "venue": null, "year": 2017 }, { "authors": [ "RJ Skerry-Ryan", "Eric Battenberg", "Ying Xiao", "Yuxuan Wang", "Daisy Stanton", "Joel Shor", "Ron J Weiss", "Rob Clark", "Rif A Saurous" ], "title": "Towards end-to-end prosody transfer for expressive speech synthesis with tacotron", "venue": "arXiv preprint arXiv:1803.09047,", "year": 2018 }, { "authors": [ "Guangzhi Sun", "Yu Zhang", "Ron J Weiss", "Yuan Cao", "Heiga Zen", "Yonghui Wu" ], "title": "Fully-hierarchical fine-grained prosody modeling for interpretable speech synthesis", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),", "year": 2020 }, { "authors": [ "Noriko Umeda", "E Matsui", "Torazo Suzuki", "Hiroshi Omura" ], "title": "Synthesis of fairy tales using an analog vocal tract", "venue": "In Proceedings of 6th International Congress on Acoustics, pp. B159–162,", "year": 1968 }, { "authors": [ "Rafael Valle", "Jason Li", "Ryan Prenger", "Bryan Catanzaro" ], "title": "Mellotron github repo, 2019a. 
URL https://github.com/NVIDIA/mellotron", "venue": null, "year": 2019 }, { "authors": [ "Rafael Valle", "Jason Li", "Ryan Prenger", "Bryan Catanzaro" ], "title": "Mellotron: Multispeaker expressive voice synthesis by conditioning on rhythm, pitch and global style tokens", "venue": "arXiv preprint arXiv:1910.11997,", "year": 2019 }, { "authors": [ "Oriol Vinyals", "Łukasz Kaiser", "Terry Koo", "Slav Petrov", "Ilya Sutskever", "Geoffrey Hinton" ], "title": "Grammar as a foreign language", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Yuxuan Wang", "RJ Skerry-Ryan", "Daisy Stanton", "Yonghui Wu", "Ron J Weiss", "Navdeep Jaitly", "Zongheng Yang", "Ying Xiao", "Zhifeng Chen", "Samy Bengio" ], "title": "Tacotron: A fully end-to-end text-to-speech synthesis model", "venue": "arXiv preprint arXiv:1703.10135,", "year": 2017 }, { "authors": [ "Robert L Weide" ], "title": "The CMU pronouncing dictionary", "venue": "URL: http://www.speech.cs.cmu.edu/cgibin/cmudict,", "year": 1998 }, { "authors": [ "Heiga Zen", "Viet Dang", "Rob Clark", "Yu Zhang", "Ron J Weiss", "Ye Jia", "Zhifeng Chen", "Yonghui Wu" ], "title": "Libritts: A corpus derived from librispeech for text-to-speech", "venue": null, "year": 2019 }, { "authors": [ "Wang" ], "title": "We anneal the learning rate once the generalization error starts to plateau and stop training once the generalization error stops significantly decreasing or starts increasing. Flowtron models with 2 steps of flow were trained on the LSH dataset for approximately 1000 epochs, then fine-tuned on LibriTTS for 500 epochs. Tacotron 2 and Tacotron 2 GST are trained for approximately 500 epochs", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Current speech synthesis methods do not give the user enough control over how speech actually sounds. Automatically converting text to audio that successfully communicates the text was achieved a long time ago (Umeda et al., 1968; Badham et al., 1983). However, communicating only the text information leaves out the acoustic properties of the voice that convey much of the meaning and human expressiveness. In spite of this, the typical speech synthesis problem is formulated as a text to speech (TTS) problem in which the user inputs only text since the 1960s. This work proposes a normalizing flow model (Kingma & Dhariwal, 2018; Huang et al., 2018) that learns an unsupervised mapping from non-textual information to manipulable latent Gaussian distributions.\nTaming the non-textual information in speech is difficult because the non-textual is unlabeled. A voice actor may speak the same text with different emphasis or emotion based on context, but it is unclear how to label a particular reading. Without labels for the non-textual information, recent approaches (Shen et al., 2017; Arik et al., 2017a;b; Ping et al., 2017) have formulated speech synthesis as a TTS problem wherein the non-textual information is implicitly learned. Despite their success in recreating non-textual information in the training set, the user has limited insight and control over it.\nIt is possible to formulate an unsupervised learning problem in such a way that the user can exploit the unlabeled characteristics of a data set. One way is to formulate the problem such that the data is assumed to have a representation in some latent space, and have the model learn that representation. This latent space can then be investigated and manipulated to give the user more control over the generative model’s output. Such approaches have been popular in image generation, allowing users to interpolate smoothly between images and to identify portions of the latent space that correlate with various features (Radford et al., 2015; Kingma & Dhariwal, 2018; Izmailov et al., 2019).\nRecent deep learning approaches to expressive speech synthesis have combined text and learned latent embeddings for non-textual information (Wang et al., 2018; Skerry-Ryan et al., 2018; Hsu et al., 2018; Habib et al., 2019; Sun et al., 2020). These approaches impose an undesirable paradox: they require making assumptions before hand about the dimensionality of the embeddings when the correct dimensionality can only be determined after the model is trained. Even then, these embeddings are not guaranteed to contain all the non-textual information it takes to reconstruct speech, often\nresulting in models with dummy or uninterpretable latent dimensions and not enough capacity, as the appendices in Wang et al. (2018); Skerry-Ryan et al. (2018); Hsu et al. (2018) confirm.\nFurthermore, most models are not able to manipulate speech characteristics over time due to fixedlength embeddings. Their assumption is that variable-length embeddings are not robust to text and speaker perturbations (Skerry-Ryan et al., 2018), which we show not to be the case. 
Finally, although VAEs and GANs (Sun et al., 2020; Habib et al., 2019; Hsu et al., 2018; Bińkowski et al., 2019; Akuzawa et al., 2018) provide a latent embedding that can be manipulated, they may be difficult to train, are limited to approximate latent variable prediction, and rely on an implicit generative model or ELBO estimate to perform MLE in the latent space (Kingma & Dhariwal, 2018; Lucic et al., 2018; Kingma et al., 2016).
In this paper we propose Flowtron: an autoregressive flow-based generative network for mel-spectrogram synthesis with style transfer over time and speech variation. Flowtron learns an invertible function that maps a distribution over mel-spectrograms to a latent z-space parameterized by a spherical Gaussian. Figure 1 shows that acoustic characteristics like timbre and F0 correlate with portions of the z-space of Flowtron models trained without speaker embeddings.
With our formalization, we can generate samples containing specific speech characteristics manifested in mel-space by finding and sampling the corresponding region in z-space (Gambardella et al., 2019). Our formulation also allows us to impose a structure on the z-space and to parametrize it with a Gaussian mixture, similar to Hsu et al. (2018). In our simplest setup, we generate samples with a zero-mean spherical Gaussian prior and control the amount of variation by adjusting its variance.
Compared to VAEs and GANs and their disadvantages enumerated in Kingma & Dhariwal (2018), manipulating a latent prior in Flowtron incurs no cost in speech quality and no additional optimization challenges. Flowtron is able to generalize and produce sharp mel-spectrograms, even at high \sigma^2 values, by simply maximizing the likelihood of the data, while not requiring any additional Prenet or Postnet layer (Wang et al., 2017) nor the compound loss functions required by most SOTA models (Shen et al., 2017; Ping et al., 2017; Skerry-Ryan et al., 2018; Wang et al., 2018; Bińkowski et al., 2019).
In summary, Flowtron is optimized by maximizing the exact likelihood of the training data, which makes training simple and stable. Using normalizing flows, it learns an invertible mapping from data to latent space that can be manipulated to modulate many aspects of speech synthesis. Concurrent with this work are Glow-TTS (Kim et al., 2020) and Flow-TTS (Miao et al., 2020), both of which incorporate normalizing flows into the TTS task. Our work differs from these two in that Flowtron is an autoregressive architecture where we explore the use of flow to modulate speech and style variation. In contrast, Glow-TTS and Flow-TTS are parallel architectures that focus on inference speed. Our mean opinion scores (MOS) show that Flowtron matches SOTA TTS models in terms of speech quality. Further, we provide results on speech variation, interpolation between samples and interpolation between styles over time, and style transfer between seen and unseen speakers with equal or different sentences. We hope this work, the first to show evidence that normalizing flows can be used for expressive text-to-speech synthesis and style transfer, will further stimulate developments in normalizing flows." }, { "heading": "2 FLOWTRON", "text": "Flowtron is an autoregressive flow that generates a sequence of mel-spectrogram frames. A normalizing flow generates samples by first sampling a latent variable from a known distribution p(z), and applying a series of invertible transformations to produce a sample from the target distribution p(x).
These invertible transformations f are known as steps of flow:
x = f_1 \circ f_2 \circ \cdots \circ f_k(z) (1)
Because each transformation is invertible, we can directly evaluate the exact log-likelihood of the target distribution p(x) using the change of variables:
\log p_\theta(x) = \log p_\theta(z) + \sum_{i=1}^{k} \log |\det(J(f_i^{-1}(x)))| (2)
z = f_k^{-1} \circ f_{k-1}^{-1} \circ \cdots \circ f_1^{-1}(x) (3)
where J is the Jacobian of the inverse transform f_i^{-1}(x). By cleverly choosing the latent distribution p(z) and the invertible transformations, the exact log-likelihood becomes simple and tractable." }, { "heading": "2.1 LATENT DISTRIBUTIONS", "text": "We consider two simple distributions for the latent variable z: a zero-mean spherical Gaussian and a mixture of spherical Gaussians with fixed or learnable parameters.
z \sim \mathcal{N}(z; 0, I) \quad \text{or} \quad z \sim \sum_k \hat{\phi}_k \mathcal{N}(z; \hat{\mu}_k, \hat{\Sigma}_k) (4)
The zero-mean spherical Gaussian has a simple log-likelihood. The mixture of spherical Gaussians has inherent clusters that might correspond to interesting aspects of the audio information." }, { "heading": "2.2 INVERTIBLE TRANSFORMATIONS", "text": "Normalizing flows are typically constructed using coupling layers (Dinh et al., 2014; 2016; Kingma & Dhariwal, 2018). In our case, we use an autoregressive affine coupling layer (Dinh et al., 2016). The latent variable z has the same number of dimensions and frames as the resulting mel-spectrogram sample. The previous frames z_{1:t-1} produce scale and bias terms, s_t and b_t respectively, that affine-transform the succeeding time step z_t:
(\log s_t, b_t) = NN(z_{1:t-1}, \text{text}, \text{speaker}) (5)
f(z_t) = (z_t - b_t) / s_t (6)
f^{-1}(z_t) = s_t \odot z_t + b_t (7)
Here, NN() can be any autoregressive causal transformation (Shumway & Stoffer, 2017). The affine coupling layer is a reversible transformation, even though NN() itself need not be invertible. We use a 0-vector to obtain the scaling and bias terms that affine-transform z_1. This 0-vector constant also guarantees that the first z is always known.
With an affine coupling layer, only the s_t term changes the volume of the mapping and adds a change-of-variables term to the loss. This term also penalizes the model for non-invertible affine mappings.
\log |\det(J(f^{-1}_{\text{coupling}}(x)))| = \log |s| (8)
To evaluate the likelihood, we take the mel-spectrograms and pass them through the inverse steps of flow conditioned on the text and optional speaker ids, adding the corresponding \log|s| penalties, and evaluate the result based on the Gaussian likelihoods.
With this setup, it is also possible to reverse the ordering of the mel-spectrogram frames in time without loss of generality. We reverse the order of frames on even steps of flow, defining a step of flow as a full pass over the input sequence. This allows the model to learn dependencies both forward and backward in time while remaining causal and invertible." }, { "heading": "2.3 MODEL ARCHITECTURE", "text": "Our text encoder modifies the text encoder in Tacotron 2 by replacing batch-norm with instance-norm. Our decoder and NN architecture, depicted in Figure 2, removes the Prenet and Postnet layers from Tacotron previously thought to be essential (Shen et al., 2017). Please compare Figure 2 describing our architecture with Figure 8 in A.4.4 describing Tacotron’s architecture. We also provide model summary views in A.6.
We use the content-based tanh attention described in Vinyals et al. (2015), which can easily be modified to also be location-sensitive. We use the Mel Encoder described in Hsu et al.
(2018) to predict the parameters of the Gaussian Mixture. Following (Valle et al., 2019b), we use speaker embeddings channel-wise concatenated with the encoder outputs at every token. We use a single shared embedding for models not conditioned on speaker id.
The step of flow closest to the latent variable z has a gating mechanism that prunes extra frames from the z-values provided to the model during inference. The length of z-values remains fixed on the next steps of flow." }, { "heading": "2.4 INFERENCE", "text": "Inference, given a trained model, is simply a matter of sampling z values from a spherical Gaussian, or Gaussian Mixture, and running them through the network in the forward direction f, e.g. Eq. 1. The parameters of the Gaussian mixture are either fixed or predicted by Flowtron. Training was conducted with \sigma^2 = 1, but we explore the effects of different values for \sigma^2 in Section 3.3. In general, we found that sampling z from a Gaussian with a lower standard deviation than used during training resulted in better sounding mel-spectrograms, as similarly concluded in Kingma & Dhariwal (2018) and (Parmar et al., 2018). Our inference results use \sigma^2 = 0.5 while sampling the prior, and the posterior variance while sampling the posterior." }, { "heading": "2.5 POSTERIOR INFERENCE", "text": "Figure 1 shows that several speech characteristics present in mel-spectrograms are clustered into regions of the z-space. Knowing this, we can treat the latent distribution as a prior q(z) = \mathcal{N}(0, I) and obtain a posterior over the latent space of the flow model q(z|\zeta_{1:m}) conditioned on the evidence \zeta_{1:m}, which are m data observations x_i mapped to the latent space using \zeta_i = f^{-1}(x_i). We can use a Gaussian likelihood function with covariance matrix \Sigma to compute the posterior above analytically, q(z|\zeta_{1:m}) = \mathcal{N}(\mu_p, \Sigma_p). Following the approach in Gambardella et al. (2019), defining \bar{\zeta} as the mean of \zeta_i and using \lambda as a hyperparameter, we define the parameters of the posterior below. Please see A.2, Algorithm 1 and Gambardella et al. (2019) for implementation details and a full derivation.
\mu_p = \frac{(m/\lambda)\,\bar{\zeta}}{m/\lambda + 1} \qquad \Sigma_p = \frac{1}{m/\lambda + 1} I (9)" }, { "heading": "3 EXPERIMENTS", "text": "This section describes our training setup and provides quantitative and qualitative results. Our quantitative results show that Flowtron has mean opinion scores that are comparable to the state of the art. Our qualitative results demonstrate many features that are either impossible or inefficient to achieve using Tacotron, Tacotron 2 GST and Tacotron GM-VAE. These features include variation control in speech, interpolation between samples, and style transfer over time.
We decode all mel-spectrograms into waveforms with a WaveGlow (Prenger et al., 2019) model available on github (Valle et al., 2019a). This suggests that WaveGlow can be used as a universal decoder. In addition to our illustrated and quantitative results, we ask that the readers listen to Flowtron samples in our supplementary materials corresponding to our qualitative experiments." }, { "heading": "3.1 TRAINING SETUP", "text": "We train Flowtron, Tacotron 2 and Tacotron 2 GST models using a dataset (LSH) that combines the LJSpeech dataset (Ito et al., 2017) with two proprietary single speaker datasets with 20 and 10 hours each (Sally and Helen). We also train a Flowtron model on the train-clean-100 subset of LibriTTS (Zen et al., 2019) with 123 speakers and 25 minutes on average per speaker.
Speakers with less than 5 minutes of data and files that are longer than 10 seconds are filtered out. For each dataset, we use at least 180 samples for the validation set, and the remainder for the training set.
The models are trained on uniformly sampled normalized text and ARPAbet encodings obtained from the CMU Pronouncing Dictionary (Weide, 1998). We do not perform any data augmentation. We adapt public Tacotron 2 and Tacotron 2 GST repos to include speaker embeddings as described in Section 2. We use the same mel-spectrogram representation used in WaveGlow (Prenger et al., 2019). We train Flowtron with a pre-trained text encoder, progressively adding steps of flow once the last step of flow has learned to attend to text. Flowtron models used in our experiments have 2 steps of flow. We refer readers to A.3 and A.4 for details on our training setup and ablation studies." }, { "heading": "3.2 MEAN OPINION SCORE (MOS) COMPARISON", "text": "We use the LJS voice as a reference and compare MOS between real samples, samples from Flowtron with 2 steps of flow, and samples from Tacotron 2. Following guidelines in (Prenger et al., 2019), we crowd-sourced MOS tests on Amazon Mechanical Turk using 30 volume-normalized utterances disjoint from the training set for evaluation, and randomly chose the utterances for each subject. The scores provided in (Prenger et al., 2019) are used for real samples.
The mean opinion scores are shown in Table 1 with 95% confidence intervals computed over approximately 250 scores per source. The results roughly match our subjective qualitative assessment. The larger advantage of Flowtron is in the control over the amount of speech variation and the manipulation of the latent space." }, { "heading": "3.3 SAMPLING THE PRIOR", "text": "The simplest approach to generating samples with Flowtron is to sample from a prior distribution z \sim \mathcal{N}(0, \sigma^2) and adjust \sigma^2 to control the amount of variation. Whereas \sigma^2 = 0 completely removes variation and produces outputs based on the model bias, increasing \sigma^2 will increase the amount of variation in speech." }, { "heading": "3.3.1 SPEECH VARIATION", "text": "We illustrate the relationship between \sigma^2 and control over variability by synthesizing Flowtron samples with \sigma^2 \in \{0.0, 0.5, 1.0\}. All samples are generated conditioned on the speaker Sally and the text “How much variation is there?\". Despite the variability added by increasing \sigma^2, all Flowtron-synthesized samples produce high quality speech.
Figure 3 shows that contrary to commonly held wisdom (Shen et al., 2017; Arik et al., 2017a;b; Ping et al., 2017; Skerry-Ryan et al., 2018; Wang et al., 2018; Bińkowski et al., 2019), Flowtron generates sharp harmonics and well-resolved formants without a compound loss or Prenet or Postnet layers.
Now we show that adjusting \sigma^2 is a simple and valuable approach that provides more variation, and control thereof, than Tacotron, without sacrificing speech quality and despite having a similar but simpler architecture. For this, we synthesize 10 samples with Tacotron 2 using different values for the Prenet dropout probability p \in \{0.45, 0.5, 0.55\}, scaling outputs accordingly. Samples computed with values of p \notin [0.3, 0.8] are not included because they sound unintelligible. Figure 4 provides plots of F0 contours extracted with the YIN algorithm (De Cheveigné & Kawahara, 2002), with minimum F0, maximum F0, and harmonicity threshold equal to 80 Hz, 400 Hz and 0.3 respectively. Our results are similar to the previous sample duration analysis.
As expected, \sigma^2 = 0 provides no variation in the F0 contour^1, while increasing \sigma^2 increases the variation in F0 contours.
^1 Variations at \sigma^2 = 0 are due to different z values for WaveGlow.
Our results in Figure 4 also show that Flowtron samples are considerably less monotonous than the samples produced with Tacotron 2, at no cost and with a similar but simpler architecture. Whereas increasing \sigma^2 considerably increases variation in F0, modifying p barely produces any variation. This is valuable because expressive speech is associated with non-monotonic F0 contours. In A.1 we show similar results with respect to sentence duration." }, { "heading": "3.3.2 INTERPOLATION BETWEEN SAMPLES", "text": "With Flowtron, we can perform interpolation in z-space to achieve interpolation in mel-spectrogram space. This experiment evaluates Flowtron models with and without speaker embeddings. For the experiment with speaker embeddings, we choose the speaker Sally and the phrase “It is well known that deep generative models have a rich latent space.\". We generate mel-spectrograms by sampling z \sim \mathcal{N}(0, 0.8) twice and interpolating between them over 100 timesteps. For the experiment without speaker embeddings we interpolate between Sally and Helen using the phrase “We are testing this model.\".
First, we perform inference by sampling z \sim \mathcal{N}(0, 0.5) until we find z values, z_h and z_s, that produce mel-spectrograms with Helen’s and Sally’s voice respectively. We then generate samples by performing inference while linearly interpolating between z_h and z_s. Our same-speaker interpolation samples show that Flowtron is able to interpolate between multiple samples while producing correct alignment maps. Our different-speaker interpolation samples show that Flowtron is able to gradually and smoothly morph one voice into another." }, { "heading": "3.4 SAMPLING THE POSTERIOR (STYLE TRANSFER)", "text": "We generate samples with Flowtron by sampling a posterior distribution conditioned on evidence containing the speech characteristics of interest, as described in Section 2.5 and Gambardella et al. (2019). Tacotron 2 GST (Wang et al., 2018) has an equivalent posterior sampling approach. During inference, the model is conditioned on a weighted sum of global style tokens (posterior) queried through an embedding of existing audio samples (evidence). We evaluate Tacotron 2 GST using a single sample to query a style token, or multiple samples to compute an average style token. For complete results, please refer to the audio samples in the supplemental material corresponding to the following sections." }, { "heading": "3.4.1 SEEN SPEAKER", "text": "In this section we run two style transfer experiments: the first one (Expressive) uses samples with high variance in pitch, which we use as a proxy for comparing expressivity in speech; the second (High Pitch) uses samples with high average pitch. In these experiments, we compare the Pitch Mean and Pitch Standard Deviation of the Reference samples providing the style, a Flowtron Baseline, and the results after style transfer using Flowtron Posterior and Tacotron 2 GST.
Our experiments show that by sampling from the posterior, or interpolating between the posterior and the Gaussian prior over time, Flowtron makes a monotonic speaker gradually sound more expressive. Architectures similar to Tacotron 2 GST with fixed-latent embeddings are not able to perform gradual changes in style over time.
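To make the posterior-sampling procedure of Section 2.5 concrete, the following is a minimal PyTorch sketch of Eq. 9. The flowtron_inverse and flowtron_forward calls in the usage comments are hypothetical stand-ins for the trained model's z = f^{-1}(mel, text, speaker) and forward passes; the actual Flowtron API may differ.
import torch

def posterior_params(zeta, lam):
    # zeta: [m, ...] latent codes of the m evidence samples; lam: the lambda of Eq. 9.
    m = zeta.shape[0]
    zeta_bar = zeta.mean(dim=0)                    # average over the evidence batch
    mu_p = (m / lam) * zeta_bar / (m / lam + 1.0)  # posterior mean (Eq. 9)
    var_p = 1.0 / (m / lam + 1.0)                  # isotropic posterior variance (Eq. 9)
    return mu_p, var_p

def sample_posterior(zeta, lam):
    mu_p, var_p = posterior_params(zeta, lam)
    return mu_p + var_p ** 0.5 * torch.randn_like(mu_p)

# Usage (hypothetical model calls):
# zeta = torch.stack([flowtron_inverse(mel, text, spk) for mel, text, spk in evidence])
# z_p = sample_posterior(zeta, lam=0.666)   # small lambda: closer to the evidence style
# mel_out = flowtron_forward(z_p, target_text, target_speaker)
Small lam pulls the posterior mean toward the evidence mean and shrinks the variance, while large lam recovers the prior, matching the interpolation behavior explored in Section 3.5.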
Table 2 provides pitch summary statistics computed over 5 phrases and 10 takes each, and shows that Flowtron is overall closer to the reference providing the style than Tacotron 2 GST. Our supplemental materials also show that Tacotron 2 GST sentences are repetitive and contain vocal-fry-like distortions." }, { "heading": "3.4.2 SEEN SPEAKER WITH UNSEEN STYLE", "text": "We compare samples generated with Flowtron and Tacotron 2 GST to evaluate their ability to emulate a speaking style unseen during training for a speaker seen during training. While Sally’s data used during training consists of news article readings, the evaluation samples contain Sally’s interpretation of the somber and vampiresque novel, Born of Darkness (BOD).
Our samples show that while Tacotron 2 GST fails to emulate the somber timbre in Born of Darkness, Flowtron succeeds in transferring not only the somber timbre, but also the low F0 and the long pauses associated with the narrative style." }, { "heading": "3.4.3 UNSEEN SPEAKER", "text": "In this experiment we compare Flowtron and Tacotron 2 GST samples to evaluate their ability to emulate the speaking style of a speaker not seen during training. The style comes from speaker ID 24 and her “surprised\" samples in RAVDESS (Livingstone & Russo, 2018), a dataset with emotion labels. Table 2 shows that while the samples generated with Tacotron 2 GST are not able to emulate the high-pitched style from RAVDESS, Flowtron is able to make Sally sound high-pitched as in the “surprised\" style." }, { "heading": "3.5 INTERPOLATION BETWEEN STYLES (PRIOR AND POSTERIOR)", "text": "In this experiment we illustrate how to control the speaking style at inference time by adjusting the parameter \lambda in Equation 9 to interpolate between a baseline style (prior) and a target style (posterior). We use a model trained on LibriTTS and a single sample from Sally’s (unseen speaker) Born of Darkness dataset as evidence providing the target style. We synthesize posterior samples generated with Flowtron with \lambda \in \{0.1, 0.666, 1.0, 2.0\}. Figure 5 reflects the interpolation in style as an interpolation in spectral profiles. Our supplemental materials aurally reflect a similar interpolation in other non-textual characteristics." }, { "heading": "3.6 SAMPLING THE GAUSSIAN MIXTURE", "text": "In this last section we provide samples from Flowtron Gaussian Mixture (GM) and visualizations. We replicate the experiments in Tacotron GM-VAE (Hsu et al., 2018) to visualize how speakers are assigned to mixture components, and provide samples in which we modulate speech characteristics by translating one of the dimensions of an individual mixture component.
For these experiments, Flowtron GM-LibriTTS is trained on LibriTTS without speaker embeddings and a Gaussian mixture with 8 components with predicted means, covariances and component assignment probabilities; Flowtron GM-LSH is trained on LSH with speaker embeddings and a Gaussian mixture with 8 components, fixed means and covariances, and predicted component assignment probabilities." }, { "heading": "3.6.1 VISUALIZING ASSIGNMENTS", "text": "We evaluate model interpretability on a subset of LibriTTS with 123 speakers and 1410 utterances, 180 of which come from the validation set. Following Hsu et al. (2018), each utterance is assigned to the component with the highest posterior probability \arg\max_k p(\hat{\phi}_k | x).
We obtain posterior probabilities per utterance by using the Mel Encoder described in Section 2.3 and averaging the predicted component assignment probabilities over time. Figure 6 suggests that information in each component of Flowtron GM-LibriTTS is gender dependent.
We quantify the association between gender and mixture components with the metric described in Hsu et al. (2018). The assignment consistency with respect to gender is defined as \frac{1}{M} \sum_{i=1}^{N} \sum_{j=1}^{N_i} \mathbb{1}[y_{ij} = \hat{y}_i], where M is the number of utterances, y_{ij} is the component assignment of utterance j from speaker i, and \hat{y}_i is the mode of \{y_{ij}\}_{j=1}^{N_i}. The assignment consistency in Flowtron GM-LibriTTS is 82.4%, suggesting that the components group utterances by speaker and group speakers by gender. We provide visualizations in Figure 6." }, { "heading": "3.6.2 TRANSLATING DIMENSIONS", "text": "We use the model Flowtron GM-LSH and focus on translating one of the dimensions of a single mixture component by adding an offset. The samples in our supplementary material show that we are able to modulate specific speech characteristics like pitch and word duration. Although the samples generated by translating one of the dimensions associated with pitch height have different pitch contours, they have the same duration. Similarly, our samples show that translating the dimension associated with the length of the first word does not modulate the pitch of the first word. We provide visualizations in Figure 9 in A.5." }, { "heading": "4 CONCLUSION", "text": "We propose a new text-to-mel-spectrogram synthesis model based on autoregressive flows that is optimized by maximizing the likelihood and allows for speech variation and style transfer. Our results show that samples generated with Flowtron achieve mean opinion scores similar to SOTA TTS models. We demonstrate that our model learns a latent space that stores non-textual information, supervised using only MLE. Flowtron is able to produce high quality speech with high variability by adjusting \sigma^2.
Our results show that the latent space over non-textual features can be investigated and manipulated to give the user more control over the generative model’s output. We provide many examples that showcase this, including transferring the style from speakers seen and unseen during training to another speaker using sentences with similar or different text, and making a monotonic speaker sound more expressive. For future work, we are interested in using normalizing flows for few-shot speech synthesis, speech compression and in semi-supervised settings to exploit datasets with limited labels." }, { "heading": "A APPENDIX", "text": "A.1 SPEECH VARIATION
Figure 7 provides plots of sample durations in seconds. Our results show that larger values of \sigma^2 produce samples with more variation in duration, whereas \sigma^2 = 0 is fully deterministic. These results demonstrate that our latent space is able to model duration, which is a critical non-textual component of expressiveness in speech.
A.2 POSTERIOR INFERENCE
We generate posterior samples with Flowtron by sampling a posterior distribution conditioned on evidence containing speech characteristics of interest, as described in (Gambardella et al., 2019). We collect the evidence by performing a forward pass with Flowtron using a speaker embedding (s \sim \mathcal{N}(0, I)), the observed mel-spectrogram, and the text from a set of samples with the speech characteristics of interest.
We use a specific speaker embedding when we want to factor out information about a specific speaker from \zeta.
Next, we compute \bar{\zeta} by averaging \zeta_{i,k} over batch (i) or over batch and time (i, k), and use Equation 9 to compute the parameters of the posterior. When averaging over batch, we repeat the z-values over the time dimension until they reach the desired length. We find in our experiments that averaging over batch is more effective for transferring the style than averaging over batch and time. In all experiments, we select the best performing samples given \lambda values between m * 0.1 and m * 4, where m is the number of samples in the evidence. While small \lambda values move the mean of the posterior closer to the evidence and decrease its variance, large \lambda values move the mean of the posterior closer to the prior and increase the variance.
Once the parameters of the posterior distribution are computed, we can sample the posterior distribution and perform inference with the desired text and speaker. Algorithm 1 provides a description of posterior inference with Flowtron.
A.3 TRAINING DETAILS
We use the ADAM (Kingma & Ba, 2014) optimizer with default parameters, a 1e-4 learning rate and 1e-6 weight decay for Flowtron, and a 1e-3 learning rate and 1e-5 weight decay for the other models, following Wang et al. (2017). We anneal the learning rate once the generalization error starts to plateau, and stop training once the generalization error stops significantly decreasing or starts increasing. Flowtron models with 2 steps of flow were trained on the LSH dataset for approximately 1000 epochs, then fine-tuned on LibriTTS for 500 epochs. Tacotron 2 and Tacotron 2 GST are trained for approximately 500 epochs. Each model is trained on a single NVIDIA DGX-1 with 8 GPUs.
Algorithm 1: Flowtron posterior inference
Input: Trained Flowtron model f, evidence audio samples x_{1:m}
Output: Posterior sample
1. For each (mel_{i,k}, text_i, speaker_i) \in x_{1:m}: \zeta_{i,k} \leftarrow f^{-1}(mel_{i,k}, text_i, speaker_i)
2. If averaging over batch:
3.   Repeat each \zeta_k over the time dimension until the target length is achieved
4.   \bar{\zeta}_k \leftarrow average of \zeta_{i,k} over batch i
5. Else:
6.   \bar{\zeta} \leftarrow average of \zeta_{i,k} over batch i and time k
7. End if
8. (\mu_p, \Sigma_p) \leftarrow posterior parameters computed with Equation 9
9. Initialize Z_p \sim \mathcal{N}(\mu_p, \Sigma_p)
10. Sample z_p from Z_p
11. Perform inference with Flowtron using z_p, the desired text and speaker
A.4 ABLATION STUDIES
A.4.1 COMPOSING FLOWS
We evaluated Flowtron models with 2, 3 and 6 steps of flow and found that more steps of flow yield better likelihood but no significant qualitative improvement, while increasing inference time significantly. Hence, we chose to report results on Flowtron models with 2 steps of flow.
A.4.2 BIDIRECTIONAL PROCESSING
We compared bidirectional processing (reversing the ordering of the mel-spectrogram frames in time on even-numbered steps of flow) with unidirectional processing, and found that bidirectional processing provides better likelihood and audio quality. Hence, we use bidirectional processing in all our Flowtron models.
A.4.3 ADDITIVE VS AFFINE TRANSFORMATIONS
The Tacotron 2 baseline without the Postnet layer can be interpreted as an additive single-step autoregressive normalizing flow (ASSANF). By comparing Flowtron with Tacotron 2, we’re comparing with a model that is better than an ASSANF, as Tacotron 2 sans Postnet does not have sharp harmonics. Hence, we prefer affine over additive transformations.
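To make the additive-versus-affine distinction concrete, below is a minimal PyTorch sketch of a single affine coupling step following Eqs. (5)-(8); the function names and the assumption that (log s_t, b_t) are produced elsewhere by a causal network are illustrative, not the exact Flowtron implementation.
import torch

def affine_coupling_transform(z, log_s, b):
    # The coupling transform of Eq. (6): (z_t - b_t) / s_t. The summed log|s|
    # terms give the change-of-variables penalty of Eq. (8).
    out = (z - b) * torch.exp(-log_s)
    log_det = log_s.sum()
    return out, log_det

def affine_coupling_inverse(z, log_s, b):
    # The inverse of Eq. (7): s_t * z_t + b_t.
    return torch.exp(log_s) * z + b

# (log_s, b) would come from any autoregressive causal network,
# e.g. (log_s, b) = NN(z[:, :t], text, speaker) as in Eq. (5).
# An additive coupling -- the ASSANF reading of Tacotron 2 without its Postnet --
# is the special case log_s = 0, which contributes no volume change to the loss.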
A.4.4 COMPARISON WITH TACOTRON 2
(Figure 8: the Tacotron 2 architecture, provided for comparison with the Flowtron architecture in Figure 2.)
A.5 TRANSLATING DIMENSIONS
A.6 FLOWTRON AND TACOTRON SUMMARY VIEW
Flowtron( (speaker_embedding): Embedding(3, 128) (embedding): Embedding(185, 512) (flows): ModuleList( (0): AR_Step(
(conv): Conv1d(1024, 160, kernel_size=(1,), stride=(1,)) (lstm): LSTM(1664, 1024, num_layers=2) (attention_lstm): LSTM(80, 1024) (attention_layer): Attention( (softmax): Softmax(dim=2) (query): LinearNorm(
(linear_layer): Linear(in_features=1024, out_features=640, bias=False) ) (key): LinearNorm(
(linear_layer): Linear(in_features=640, out_features=640, bias=False) ) (value): LinearNorm(
(linear_layer): Linear(in_features=640, out_features=640, bias=False) ) (v): LinearNorm(
(linear_layer): Linear(in_features=640, out_features=1, bias=False) )
) (dense_layer): DenseLayer( (layers): ModuleList(
(0): LinearNorm( (linear_layer): Linear(in_features=1024, out_features=1024, bias=True) ) (1): LinearNorm(
(linear_layer): Linear(in_features=1024, out_features=1024, bias=True) )
) )
) (1): AR_Back_Step(
(ar_step): AR_Step( (conv): Conv1d(1024, 160, kernel_size=(1,), stride=(1,)) (lstm): LSTM(1664, 1024, num_layers=2) (attention_lstm): LSTM(80, 1024) (attention_layer): Attention(
(softmax): Softmax(dim=2) (query): LinearNorm(
(linear_layer): Linear(in_features=1024, out_features=640, bias=False) ) (key): LinearNorm(
(linear_layer): Linear(in_features=640, out_features=640, bias=False) ) (value): LinearNorm(
(linear_layer): Linear(in_features=640, out_features=640, bias=False) ) (v): LinearNorm(
(linear_layer): Linear(in_features=640, out_features=1, bias=False) )
) (dense_layer): DenseLayer(
(layers): ModuleList( (0): LinearNorm( (linear_layer): Linear(in_features=1024, out_features=1024, bias=True)
) (1): LinearNorm( (linear_layer): Linear(in_features=1024, out_features=1024, bias=True)
) )
) (gate_layer): LinearNorm(
(linear_layer): Linear(in_features=1664, out_features=1, bias=True) )
) )
) (encoder): Encoder( (convolutions): ModuleList(
(0): Sequential( (0): ConvNorm(
(conv): Conv1d(512, 512, kernel_size=(5,), stride=(1,), padding=(2,)) ) (1): InstanceNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
)
(1): Sequential( (0): ConvNorm(
(conv): Conv1d(512, 512, kernel_size=(5,), stride=(1,), padding=(2,)) ) (1): InstanceNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
) (2): Sequential( (0): ConvNorm(
(conv): Conv1d(512, 512, kernel_size=(5,), stride=(1,), padding=(2,)) ) (1): InstanceNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=False)
) ) (lstm): LSTM(512, 256, batch_first=True, bidirectional=True)
) )
Tacotron2( (embedding): Embedding(185, 512) (encoder): Encoder( (convolutions): ModuleList(
(0): Sequential( (0): ConvNorm(
(conv): Conv1d(512, 512, kernel_size=(5,), stride=(1,), padding=(2,)) ) (1): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
) (1): Sequential( (0): ConvNorm(
(conv): Conv1d(512, 512, kernel_size=(5,), stride=(1,), padding=(2,)) ) (1): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True,
track_running_stats=True)\n) (2): Sequential( (0): ConvNorm(\n(conv): Conv1d(512, 512, kernel_size=(5,), stride=(1,), padding=(2,)) ) (1): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n) ) (lstm): LSTM(512, 256, batch_first=True, bidirectional=True)\n) (decoder): Decoder( (prenet): Prenet(\n(layers): ModuleList( (0): LinearNorm(\n(linear_layer): Linear(in_features=80, out_features=256, bias=False) ) (1): LinearNorm(\n(linear_layer): Linear(in_features=256, out_features=256, bias=False) )\n) ) (attention_rnn): LSTMCell(896, 1024) (attention_layer): Attention(\n(query_layer): LinearNorm( (linear_layer): Linear(in_features=1024, out_features=128, bias=False) ) (memory_layer): LinearNorm( (linear_layer): Linear(in_features=640, out_features=128, bias=False) ) (v): LinearNorm( (linear_layer): Linear(in_features=128, out_features=1, bias=False) ) (location_layer): LocationLayer( (location_conv): ConvNorm(\n(conv): Conv1d(2, 32, kernel_size=(31,), stride=(1,), padding=(15,), bias=False) ) (location_dense): LinearNorm(\n(linear_layer): Linear(in_features=32, out_features=128, bias=False) )\n) ) (decoder_rnn): LSTMCell(1664, 1024, bias=1) (linear_projection): LinearNorm(\n(linear_layer): Linear(in_features=1664, out_features=80, bias=True) ) (gate_layer): LinearNorm(\n(linear_layer): Linear(in_features=1664, out_features=1, bias=True) )\n) (postnet): Postnet( (convolutions): ModuleList(\n(0): Sequential( (0): ConvNorm(\n(conv): Conv1d(80, 512, kernel_size=(5,), stride=(1,), padding=(2,)) ) (1): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=wTrue, track_running_stats=True)\n) (1): Sequential( (0): ConvNorm(\n(conv): Conv1d(512, 512, kernel_size=(5,), stride=(1,), padding=(2,)) ) (1): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n) (2): Sequential( (0): ConvNorm(\n(conv): Conv1d(512, 512, kernel_size=(5,), stride=(1,), padding=(2,)) ) (1): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n) (3): Sequential( (0): ConvNorm(\n(conv): Conv1d(512, 512, kernel_size=(5,), stride=(1,), padding=(2,)) ) (1): BatchNorm1d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n) (4): Sequential( (0): ConvNorm(\n(conv): Conv1d(512, 80, kernel_size=(5,), stride=(1,), padding=(2,)) ) (1): BatchNorm1d(80, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n) )\n) (speaker_embedding): Embedding(3, 128)\n)" } ]
2021
Flowtron: an Autoregressive Flow-based Generative Network for Text-to-Speech Synthesis
SP:a97b9cd6237040dc602bed2c66af26143847c37f
[ "- The paper argues that neural models perform significantly worse than GBDT models on some learning to rank benchmarks. It first conducts a set of experiments to show that GBDT outperforms some neural rankers. Then, it presents a few tweaks related to feature transformation and data augmentation to improve the performance of neural models. The resulting neural models perform on par with the state-of-the-art GBDT models." ]
Despite the success of neural models on many major machine learning problems, their effectiveness on traditional Learning-to-Rank (LTR) problems is still not widely acknowledged. We first validate this concern by showing that most recent neural LTR models are, by a large margin, inferior to the best publicly available Gradient Boosted Decision Trees (GBDT) in terms of their reported ranking accuracy on benchmark datasets. This has unfortunately been overlooked in recent neural LTR papers. We then investigate why existing neural LTR models under-perform and identify several of their weaknesses. Furthermore, we propose a unified framework comprising counter-strategies that ameliorate the existing weaknesses of neural models. Our models are the first to perform equally well compared with the best tree-based baseline, while outperforming recently published neural LTR models by a large margin. Our results can also serve as a benchmark to facilitate future improvement of neural LTR models.
[ { "affiliations": [], "name": "Zhen Qin" }, { "affiliations": [], "name": "Le Yan" }, { "affiliations": [], "name": "Honglei Zhuang" }, { "affiliations": [], "name": "Yi Tay" }, { "affiliations": [], "name": "Rama Kumar Pasumarthi" }, { "affiliations": [], "name": "Xuanhui Wang" }, { "affiliations": [], "name": "Michael Bendersky" }, { "affiliations": [], "name": "Marc Najork" } ]
[ { "authors": [ "Qingyao Ai", "Keping Bi", "Jiafeng Guo", "W Bruce Croft" ], "title": "Learning a deep listwise context model for ranking refinement", "venue": "In Proceedings of the 41st International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2018 }, { "authors": [ "Qingyao Ai", "Xuanhui Wang", "Sebastian Bruch", "Nadav Golbandi", "Michael Bendersky", "Marc Najork" ], "title": "Learning groupwise multivariate scoring functions using deep neural networks", "venue": "In Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval,", "year": 2019 }, { "authors": [ "Alex Beutel", "Paul Covington", "Sagar Jain", "Can Xu", "Jia Li", "Vince Gatto", "Ed H Chi" ], "title": "Latent cross: Making use of context in recurrent recommender systems", "venue": "In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining,", "year": 2018 }, { "authors": [ "Sebastian Bruch", "Xuanhui Wang", "Michael Bendersky", "Marc Najork" ], "title": "An analysis of the softmax cross entropy loss for learning-to-rank with binary relevance", "venue": "In Proceedings of the 2019 ACM SIGIR International Conference on Theory of Information Retrieval,", "year": 2019 }, { "authors": [ "Sebastian Bruch", "Masrour Zoghi", "Michael Bendersky", "Marc Najork" ], "title": "Revisiting approximate metric optimization in the age of deep neural networks", "venue": "In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2019 }, { "authors": [ "Sebastian Bruch", "Shuguang Han", "Michael Bendersky", "Marc Najork" ], "title": "A stochastic treatment of learning to rank scoring functions", "venue": "In Proceedings of the 13th International Conference on Web Search and Data Mining,", "year": 2020 }, { "authors": [ "Chris Burges", "Tal Shaked", "Erin Renshaw", "Ari Lazier", "Matt Deeds", "Nicole Hamilton", "Greg Hullender" ], "title": "Learning to rank using gradient descent", "venue": "In Proceedings of the 22nd International Conference on Machine Learning,", "year": 2005 }, { "authors": [ "Christopher J. Burges", "Robert Ragno", "Quoc V. Le" ], "title": "Learning to rank with nonsmooth cost functions", "venue": "Advances in Neural Information Processing Systems", "year": 2007 }, { "authors": [ "Christopher JC Burges" ], "title": "From RankNet to LambdaRank to LambdaMART: An overview", "venue": null, "year": 2010 }, { "authors": [ "Zhe Cao", "Tao Qin", "Tie-Yan Liu", "Ming-Feng Tsai", "Hang Li" ], "title": "Learning to rank: from pairwise approach to listwise approach", "venue": "In Proceedings of the 24th international conference on Machine learning,", "year": 2007 }, { "authors": [ "Olivier Chapelle", "Yi Chang" ], "title": "Yahoo! learning to rank challenge overview", "venue": "In Proceedings of the Learning to Rank Challenge,", "year": 2011 }, { "authors": [ "Ekin D. Cubuk", "Barret Zoph", "Dandelion Mane", "Vijay Vasudevan", "Quoc V. Le" ], "title": "Autoaugment: Learning augmentation strategies from data", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Domenico Dato", "Claudio Lucchese", "Franco Maria Nardini", "Salvatore Orlando", "Raffaele Perego", "Nicola Tonellotto", "Rossano Venturini" ], "title": "Fast ranking with additive ensembles of oblivious and non-oblivious regression trees", "venue": "ACM Trans. Inf. 
Syst.,", "year": 2016 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2019 }, { "authors": [ "Aditya Grover", "Eric Wang", "Aaron Zweig", "Stefano Ermon" ], "title": "Stochastic optimization of sorting networks via continuous relaxations", "venue": "arXiv preprint arXiv:1903.08850,", "year": 2019 }, { "authors": [ "Shuguang Han", "Xuanhui Wang", "Mike Bendersky", "Marc Najork" ], "title": "Learning-to-rank with bert in tf-ranking", "venue": "arXiv preprint arXiv:2004.08476,", "year": 2020 }, { "authors": [ "Awni Hannun", "Carl Case", "Jared Casper", "Bryan Catanzaro", "Greg Diamos", "Erich Elsen", "Ryan Prenger", "Sanjeev Satheesh", "Shubho Sengupta", "Adam Coates", "Andrew Y. Ng" ], "title": "Deep speech: Scaling up end-to-end speech recognition", "venue": "arXiv preprint arXiv:1412.5567,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "arXiv preprint arXiv:1512.03385,", "year": 2015 }, { "authors": [ "Ziniu Hu", "Yang Wang", "Qu Peng", "Hang Li" ], "title": "Unbiased lambdamart: An unbiased pairwise learning-to-rank algorithm", "venue": "In The Web Conference,", "year": 2019 }, { "authors": [ "Thorsten Joachims" ], "title": "Optimizing search engines using clickthrough data", "venue": "In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp", "year": 2002 }, { "authors": [ "Thorsten Joachims" ], "title": "Training linear svms in linear time", "venue": "In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining,", "year": 2006 }, { "authors": [ "Thorsten Joachims", "Adith Swaminathan", "Tobias Schnabel" ], "title": "Unbiased learning-to-rank with biased feedback", "venue": "In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining,", "year": 2017 }, { "authors": [ "Guolin Ke", "Qi Meng", "Thomas Finley", "Taifeng Wang", "Wei Chen", "Weidong Ma", "Qiwei Ye", "TieYan Liu" ], "title": "Lightgbm: A highly efficient gradient boosting decision tree", "venue": "In Proceedings of the 31st International Conference on Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Guang-He Lee", "Tommi S. Jaakkola" ], "title": "Oblique decision trees from derivatives of relu networks", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Pan Li", "Zhen Qin", "Xuanhui Wang", "Donald Metzler" ], "title": "Combining decision trees and neural networks for learning-to-rank in personal search", "venue": "In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Zachary C Lipton", "Jacob Steinhardt" ], "title": "Troubling trends in machine learning scholarship", "venue": "arXiv preprint arXiv:1807.03341,", "year": 2018 }, { "authors": [ "Tie-Yan Liu" ], "title": "Learning to rank for information retrieval", "venue": "Found. Trends Inf. 
Retr.,", "year": 2009 }, { "authors": [ "Bhaskar Mitra", "Nick Craswell" ], "title": "An introduction to neural information retrieval", "venue": "Foundations and Trends in Information Retrieval,", "year": 2018 }, { "authors": [ "Rodrigo Nogueira", "Wei Yang", "Kyunghyun Cho", "Jimmy Lin" ], "title": "Multi-stage document ranking with bert", "venue": "arXiv preprint arXiv:1910.14424,", "year": 2019 }, { "authors": [ "Liang Pang", "Jun Xu", "Qingyao Ai", "Yanyan Lan", "Xueqi Cheng", "Jirong Wen" ], "title": "Setrank: Learning a permutation-invariant ranking model for information retrieval", "venue": "In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2020 }, { "authors": [ "Rama Kumar Pasumarthi", "Sebastian Bruch", "Xuanhui Wang", "Cheng Li", "Michael Bendersky", "Marc Najork", "Jan Pfeifer", "Nadav Golbandi", "Rohan Anil", "Stephan Wolf" ], "title": "TF-Ranking: Scalable tensorflow library for learning-to-rank", "venue": "In Proceedings of the 25th ACM SIGKDD Interna-tional Conference on Knowledge Discovery & Data Mining,", "year": 2019 }, { "authors": [ "Rama Kumar Pasumarthi", "Honglei Zhuang", "Xuanhui Wang", "Michael Bendersky", "Marc Najork" ], "title": "Permutation equivariant document interaction network for neural learning to rank", "venue": "In Proceedings of the 2020 ACM SIGIR on International Conference on Theory of Information Retrieval,", "year": 2020 }, { "authors": [ "Luis Perez", "Jason Wang" ], "title": "The effectiveness of data augmentation in image classification using deep learning", "venue": "arXiv preprint arXiv:1712.04621,", "year": 2017 }, { "authors": [ "Liudmila Prokhorenkova", "Gleb Gusev", "Aleksandr Vorobev", "Anna Veronika Dorogush", "Andrey Gulin" ], "title": "Catboost: unbiased boosting with categorical features", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Tao Qin", "Tie-Yan Liu" ], "title": "Introducing LETOR 4.0", "venue": "datasets. CoRR,", "year": 2013 }, { "authors": [ "Tao Qin", "Tie-Yan Liu", "Hang Li" ], "title": "A general approximation framework for direct optimization of information retrieval measures", "venue": "Information retrieval,", "year": 2010 }, { "authors": [ "Zhen Qin", "Suming J. Chen", "Donald Metzler", "Yongwoo Noh", "Jingzheng Qin", "Xuanhui Wang" ], "title": "Attribute-based propensity for unbiased learning in recommender systems: Algorithm and case studies", "venue": "In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2020 }, { "authors": [ "Zhen Qin", "Zhongliang Li", "Michael Bendersky", "Donald Metzler" ], "title": "Matching cross network for learning to rank in personal search", "venue": "In The Web Conference,", "year": 2020 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein", "Alexander C. Berg", "Li Fei-Fei" ], "title": "ImageNet Large Scale Visual Recognition Challenge", "venue": "International Journal of Computer Vision (IJCV),", "year": 2015 }, { "authors": [ "Mohammad Saberian", "Pablo Delgado", "Yves Raimond" ], "title": "Gradient boosted decision tree neural network", "venue": "arXiv preprint arXiv:1910.09340,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. 
Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "arXiv preprint arXiv:1706.03762,", "year": 2017 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R. Bowman" ], "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "venue": "arXiv preprint arXiv:1804.07461,", "year": 2018 }, { "authors": [ "Jun Wang", "Lantao Yu", "Weinan Zhang", "Yu Gong", "Yinghui Xu", "Benyou Wang", "Peng Zhang", "Dell Zhang" ], "title": "Irgan: A minimax game for unifying generative and discriminative information retrieval models", "venue": "In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2017 }, { "authors": [ "Ruoxi Wang", "Bin Fu", "Gang Fu", "Mingliang Wang" ], "title": "Deep & cross network for ad click predictions", "venue": "In Proceedings of the ADKDD’17,", "year": 2017 }, { "authors": [ "Xuanhui Wang", "Cheng Li", "Nadav Golbandi", "Michael Bendersky", "Marc Najork" ], "title": "The lambdaloss framework for ranking metric optimization", "venue": "In Proceedings of the 27th ACM International Conference on Information and Knowledge Management,", "year": 2018 }, { "authors": [ "Qiang Wu", "Christopher J.C. Burges", "Krysta M. Svore", "Jianfeng Gao" ], "title": "Adapting boosting for information retrieval measures", "venue": "Information Retrieval Journal,", "year": 2010 }, { "authors": [ "Wei Yang", "Kuang Lu", "Peilin Yang", "Jimmy Lin" ], "title": "Critically examining the ”neural hype”: Weak baselines and the additivity of effectiveness gains from neural ranking models", "venue": "In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2019 }, { "authors": [ "Qian Yu", "Wai Lam" ], "title": "Data augmentation based on adversarial autoencoder handling imbalance for learning to rank", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Xiaofeng Zhu", "Diego Klabjan" ], "title": "Listwise learning to rank by exploring unique ratings", "venue": "In Proceedings of the 13th International Conference on Web Search and Data Mining,", "year": 2020 }, { "authors": [ "Honglei Zhuang", "Xuanhui Wang", "Michael Bendersky", "Marc Najork" ], "title": "Feature transformation for neural ranking models", "venue": "In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2020 }, { "authors": [ "Honglei Zhuang", "Zhen Qin", "Xuanhui Wang", "Mike Bendersky", "Xinyu Qian", "Po Hu", "Chary Chen" ], "title": "Cross-positional attention for debiasing clicks", "venue": "In The Web Conference,", "year": 2021 } ]
[ { "heading": null, "text": "Despite the success of neural models on many major machine learning problems, their effectiveness on traditional Learning-to-Rank (LTR) problems is still not widely acknowledged. We first validate this concern by showing that most recent neural LTR models are, by a large margin, inferior to the best publicly available Gradient Boosted Decision Trees (GBDT) in terms of their reported ranking accuracy on benchmark datasets. This unfortunately was somehow overlooked in recent neural LTR papers. We then investigate why existing neural LTR models under-perform and identify several of their weaknesses. Furthermore, we propose a unified framework comprising of counter strategies to ameliorate the existing weaknesses of neural models. Our models are the first to be able to perform equally well, comparing with the best tree-based baseline, while outperforming recently published neural LTR models by a large margin. Our results can also serve as a benchmark to facilitate future improvement of neural LTR models." }, { "heading": "1 INTRODUCTION", "text": "Neural approaches have been dominating in many major machine learning domains, such as computer vision (He et al., 2015), natural language processing (Devlin et al., 2019), and speech recognition (Hannun et al., 2014). However, the effectiveness of neural approaches in traditional Learningto-Rank (LTR), the long-established inter-disciplinary research area at the intersection of machine learning and information retrieval (Liu, 2009), is not widely acknowledged (Yang et al., 2019), especially on benchmark datasets that have only numerical features.\nHistorically, a series of LTR models were developed by researchers at Microsoft, starting with RankNet (Burges et al., 2005) and LambdaRank (Burges et al., 2007), both based on neural networks, and culminating in LambdaMART (Wu et al., 2010), which is based on Gradient Boosted Decision Trees (GBDT); Burges (2010) provides an overview of this evolution. There are two publicly available implementations of LambdaMART: one provided by the RankLib1 library that is part of the Lemur Project (henceforth referred to as λMARTRankLib); and the LightGBM2 implementation provided by Microsoft (Ke et al., 2017) (henceforth referred to as λMARTGBM). As we will show in Section 3, λMARTGBM substantially outperforms λMARTRankLib.\nThere is strong and continuing interest in neural ranking models, with numerous papers published in the last few years alone. Most of these papers treat RankNet and LambdaRank as weak baselines (Pang et al., 2020; Bruch et al., 2019b) and LambdaMART as the “state-of-the-art” (Bruch et al., 2019b; Li et al., 2019; Zhu & Klabjan, 2020; Hu et al., 2019). However, when examining these papers, we note that they either acknowledge their under-performance to λMARTGBM or claim state-of-the-art performance by comparing to a weaker λMARTRankLib implementation. The inconsistency of performance evaluation on benchmark datasets in this field has made it difficult to measure progress (Lipton & Steinhardt, 2018). It therefore remains an open question whether neural LTR models are as effective as they claim to be, and how to improve them if that is not the case.\n1https://sourceforge.net/p/lemur/wiki/RankLib/ 2https://github.com/microsoft/LightGBM\nIn this paper, we first conduct a benchmark to show that λMARTGBM outperforms recently published neural models, as well as the λMARTRankLib, by a large margin. 
While the neural paradigm is still appealing in a myriad of ways, such as being composable, flexible, and able to benefit from a plethora of new advances (Vaswani et al., 2017; Devlin et al., 2019), research progress in neural ranking models could be hindered by their inferior performance relative to tree models. It thus becomes critical to understand the pitfalls of building neural rankers and to boost their performance on benchmark datasets.
Specifically, we investigate why neural LTR approaches under-perform on standard LTR datasets and identify three major weaknesses that are typically ignored by recent work. First, neural models are not as adept at performing effective feature transformations and scaling, which is one major benefit of using tree-based methods (Saberian et al., 2019). For ranking data, which is typically long-tailed, this can be a prohibitive property. Second, standard feed-forward networks are ineffective in generating higher-order features, as noted by recent papers (Wang et al., 2017b; Beutel et al., 2018). More effective network architectures for neural LTR models are needed. Third, recent neural LTR work on benchmark datasets does not employ high-capacity networks, a key success factor of many neural models (Devlin et al., 2019), possibly due to the small scale of training data that causes overfitting. On the other hand, there are several potential benefits of neural approaches over LambdaMART for LTR, such as their flexibility to model listwise data and the existence of many techniques to mitigate data sparsity. To that end, we propose a new framework that ameliorates the weaknesses of existing neural LTR approaches and improves almost all major network components.
In the proposed framework, we make several technical contributions: (1) We demonstrate empirical evidence that a simple log1p transformation of the input features is very helpful. (2) We use data augmentation (DA) to make the most out of high-capacity neural models; surprisingly, this is the first work in the LTR literature to do so. We show that adding simple Gaussian noise helps, but only when the model capacity is appropriately augmented (which probably explains why there is no prior work on such a simple idea). (3) We use self-attention (SA) to model the listwise ranking data as context, and propose to use latent cross (LC) to effectively generate the interaction of each item with its listwise context.
We conduct experiments on three widely used public LTR datasets. Our neural models are trained with listwise ranking losses. On all datasets, our framework outperforms recent neural LTR methods by a large margin. When comparing with the strong LambdaMART implementation, λMARTGBM, we are able to achieve equally good results, if not better. Our work can also serve as a benchmark for neural ranking models, which we believe can lay a fertile ground for future neural LTR research, as rigorous benchmarks on datasets such as ImageNet (Russakovsky et al., 2015) and GLUE (Wang et al., 2018a) do in their respective fields." }, { "heading": "2 BACKGROUND", "text": "We provide some background on LTR, including its formulation and common metrics. We review LambdaMART and highlight its two popular implementations, whose divergence is the cause of the inconsistency of evaluations in the recent literature.
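Before turning to the LTR formulation, the following minimal NumPy sketch previews contributions (1) and (2) from the introduction, the log1p input transform and Gaussian-noise data augmentation. The symmetric handling of negative feature values and the noise scale are illustrative assumptions, not the exact recipe evaluated in the paper.
import numpy as np

def log1p_transform(x):
    # Contribution (1): compress long-tailed feature magnitudes with log1p;
    # the sign term is an assumption to cover negative feature values.
    return np.sign(x) * np.log1p(np.abs(x))

def gaussian_augment(x, sigma=0.1, rng=None):
    # Contribution (2): add zero-mean Gaussian noise to the input features
    # at training time only.
    rng = np.random.default_rng() if rng is None else rng
    return x + rng.normal(0.0, sigma, size=x.shape)

# Illustration on a tiny 3-document, 3-feature matrix with long-tailed values:
x = np.array([[3.0, -120.0, 0.0], [7.5, 4.2, 1e6], [0.1, -0.5, 42.0]])
x_train = gaussian_augment(log1p_transform(x), sigma=0.1)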
}, { "heading": "2.1 LEARNING TO RANK", "text": "LTR methods are supervised techniques and the training data can be represented as a set Ψ = {(x, y) ∈ χn × Rn)}, where x is a list of n items xi ∈ χ and y is a list of n relevance labels yi ∈ R for 1 ≤ i ≤ n. We use χ as the universe of all items. In traditional LTR problems, each xi corresponds to a query-item pair and is represented as a feature vector in Rk where k is the number of feature dimensions. With slightly abuse of notation, we also use xi as the feature vector and say x ∈ Rn×k. The objective is to learn a function that produces an ordering of items in x so that the utility of the ordered list is maximized.\nMost LTR algorithms formulate the problem as learning a ranking function to score and sort the items in a list. As such, the goal of LTR boils down to finding a parameterized ranking function\ns(·; Θ) : χn → Rn, where Θ denotes the set of parameters, to minimize the empirical loss:\nL(s) = 1 |Ψ| ∑ (x, y)∈Ψ l(y, s(x)), (1)\nwhere l(·) is the loss function on a single list. LTR algorithms differ primarily in how they parameterize s and how they define l.\nThere are many existing ranking metrics such as NDCG and MAP used in LTR problems. A common property of these metrics is that they are rank-dependent and place more emphasis on the top ranked items. For example, the commonly adopted NDCG metric is defined as\nNDCG(πs, y) = DCG(πs, y) DCG(π∗, y) , (2)\nwhere πs is a ranked list induced by the ranking function s on x, π∗ is the ideal list (where x is sorted by y), and DCG is defined as:\nDCG(π, y) = n∑ i=1 2yi − 1 log2(1 + π(i)) = n∑ i=1 Gi Di\n(3)\nIn practice, the truncated version that only considers the top-k ranked items, denoted as NDCG@k, is often used." }, { "heading": "2.2 LAMBDAMART", "text": "LTR models have evolved from linear models (Joachims, 2002), to nerual networks (Burges et al., 2005), and then to decision trees (Burges, 2010) in the past two decades. LambdaMART, proposed about ten years ago (Wu et al., 2010; Burges, 2010), is still treated as the “state-of-the-art” for LTR problems in recent papers (Bruch et al., 2019b; Zhu & Klabjan, 2020). It is based on Gradient Boosted Decision Trees (GBDT). During each boosting step, the loss is dynamically adjusted based on the ranking metric in consideration. For example, ∆NDCG is defined as the absolute difference between the NDCG values when two documents i and j swap their positions in the ranked list sorted by the obtained ranking functions so far.\n∆NDCG(i, j) = |Gi −Gj | · ∣∣∣ 1 Di − 1 Dj ∣∣∣. (4) Then LambdaMART uses a pairwise logistic loss and adapts the loss by re-weighting each item pair in each iteration, with s(x)|i being the score for item i and α being a hyperparameter:\nl(y, s(x)) = ∑ yi>yj ∆NDCG(i, j) log2(1 + e −α(s(x)|i−s(x)|j)) (5)\nThere are two popular public implementations of LambdaMART, namely λMARTGBM and λMARTRankLib. λMARTGBM is more recent than λMARTRankLib and has more advanced features by leveraging novel data sampling and feature bundling techniques (Ke et al., 2017). However, recent neural LTR papers either use the weaker implementation of λMARTRankLib (Pang et al., 2020; Wang et al., 2017a; Ai et al., 2018; 2019), or acknowledge the inferior performance of neural models when compared with λMARTGBM (Bruch et al., 2019b). Such an inconsistency makes it hard to determine whether neural models are indeed more effective than the tree-based models." 
}, { "heading": "3 BENCHMARKING EXISTING METHODS", "text": "To resolve the inconsistency, we perform a benchmark on three popular LTR benchmark datasets to show that: 1) there is a large gap between the two implementations of tree-based LambdaMART λMARTGBM and λMARTRankLib; 2) Recent neural LTR methods are generally significantly worse than the stronger implementation. Then we discuss several weaknesses of recent neural LTR approaches, and point out promising directions, which lay the foundation of our proposed framework." }, { "heading": "3.1 DATASETS", "text": "The three data sets we used in our experiments are public benchmark datasets widely adopted by the research community. They are the LETOR dataset from Microsoft (Qin & Liu, 2013), Set1 from the YAHOO LTR challenge (Chapelle & Chang, 2011), and Istella (Dato et al., 2016). We call them Web30K, Yahoo, and Istella respectively. All of them are data sets for web search ranking and the largest data sets publicly available for LTR algorithms. The relevance labels of documents for each query are rated by human in the form of multilevel graded relevance. See Qin & Liu (2013) for an example list of features, such as the number of URL clicks, or the BM25 scores of the different page sections. An overview of these three datasets is shown in Table 1." }, { "heading": "3.2 COMPARISON", "text": "We compare a comprehensive list of methods in Table 2. λMARTGBM (Ke et al., 2017) and λMARTRankLib are the two LambdaMART implementations. RankSVM (Joachims, 2006) is a classic pairwise learning-to-rank model built on SVM. GSF (Ai et al., 2019) is a neural model using groupwise scoring function and fully connected layers. ApproxNDCG (Bruch et al., 2019b) is a neural model with fully connected layers and a differeiable loss that approximates NDCG (Qin et al., 2010). DLCM (Ai et al., 2018) is an RNN based neural model that use list context information to rerank a list of documents based on λMARTRankLib as in the original paper. SetRank (Pang et al., 2020) is a neural model using self-attention to encode the entire list and perform a joint scoring. SetRankre (Pang et al., 2020) is SetRank plus ordinal embeddings based on the initial document ranking generated by λMARTRankLib as in the original paper.\nWe choose to compare these methods because they are either popular or recent. The neural models are already leveraging advanced neural techniques such as using neural methods to model the entire ranking list, which is difficult for tree-based models to achieve. We reproduced results for λMARTRankLib, λMARTGBM, RankSVM, GSF, and ApproxNDCG with extensive hyperparameter tuning with more details in Appendix A. Results for the DLCM and SetRank methods are from their respective papers where the authors did their own tuning. Note that the test set is fixed for all datasets, thus the numbers are comparable.\nFrom Table 2, we can see the following. 1) λMARTGBM is a more appropriate “state-of-the-art” LambdaMART baseline, as it significantly outperforms λMARTRankLib. 2) Recent neural LTR methods, though sometimes outperform λMARTRankLib, are inferior to λMARTGBM by a large margin,\nsometimes by as much as 15%, comparatively. These results show the inconsistency of existing methods and validate the concerns on the current practice of neural LTR models3." 
}, { "heading": "4 NEURAL LTR MODELS", "text": "A natural question is: why do neural models under-perform on LTR benchmark datasets compared with LambdaMART, despite their success in many machine learning research areas? We first identify a few weaknesses of the neural LTR models and then propose our methods to address them." }, { "heading": "4.1 WEAKNESSES", "text": "By reviewing recent papers and the strength of tree-based models, we give the following hypotheses:\nFeature transformation. Neural networks are sensitive to input feature scales and transformations (Saberian et al., 2019). LTR datasets consist of features of diverse scales with long-tail distributions, such as the number of clicks of an item. Tree-based models are known to partition the feature space effectively, which is beneficial for datasets (such as LTR datasets) with only numeric features. Some recent work already shows the benefits of better input feature transformations than Gaussian normalization (Saberian et al., 2019; Zhuang et al., 2020). Unfortunately, neither the pioneering neural LTR papers (Burges et al., 2005; 2007) nor the most recent ones discuss the impact of feature transformation.\nNetwork architecture. Unless the focus is the neural architecture, neural LTR papers typically use a standard feed-forward network that consists of a stack of fully connected layers. However, fully connected layers are known to be ineffective in generating higher-order feature interactions. The problem has been widely studied in areas such as ads prediction (Wang et al., 2017b) and recommender systems (Beutel et al., 2018), but has not received enough attention for LTR.\nData sparsity. Recent neural LTR models are small and do not employ high-capacity networks (Bruch et al., 2019b; Pang et al., 2020), possibly due to the overfitting issue. While large datasets are key factors to many recent successes of neural models in other domains (He et al., 2015; Devlin et al., 2019), the publicly available LTR datasets are comparatively small. Popular techniques such as data augmentation to mitigate overfitting in high-capacity networks are commonly used in other areas (Perez & Wang, 2017). But it is less intuitive on how to do data augmentation for LTR datasets, compared with, e.g., rotating a cat image in computer vision." }, { "heading": "4.2 IMPROVEMENTS", "text": "We introduce our proposed neural LTR framework that tries to address the above mentioned concerns. Figure 1 summarizes our DASALC framework, which stands for Data Augmented SelfAttentive Latent Cross ranking network." }, { "heading": "4.2.1 EXPLICIT FEATURE TRANSFORMATION AND DATA AUGMENTATION", "text": "Features in LTR datasets are diverse and can be of different scales. Out of the three datasets we consider, only the Yahoo dataset has been normalized (we leave it not-transformed). It is well known that neural networks are sensitive to input data scale, and we apply a simple “log1p” transformation to every element of x and empirically find it works well for the Web30K and Istella datasets:\nx = loge(1 + |x|) sign(x). (6)\nwhere is the element-wise multiplication operator. We use a very simple data augmentation technique on LTR datasets. We add a random Gaussian noise independently to every element of input vector x:\nx = x +N (0, σ2I) (7) 3We use the Fold1 in Web30K to be consistent with the setup of Yahoo and Istella. Some of the reported results on Web30K were based on 5-fold cross-validation (CV). 
We verified on λMARTRankLib that the difference between Fold1 and CV is small and does not affect our conclusion.
The random noise is added after the log1p transformation in an online fashion during training (i.e., different perturbations are added to the same data point seen in different batches). A single scalar $\sigma$ for every feature is reasonable because the feature distributions are normalized by log1p. Data augmentation is also added after input Batch Normalization (BN) when applicable. Note that the random noise is added independently to every element, so the (later) BN will not cancel it away. We find that such a simple data augmentation technique works well in our framework but, as shown in the experiments, only when the capacity of the network is properly augmented as described in the next section.
For notational simplicity, we combine the log1p feature transformation and data augmentation into a single function $f : \mathbb{R}^{n \times k} \to \mathbb{R}^{n \times k}$:
$$f = \log_e(1 + |x|) \odot \mathrm{sign}(x) + \mathcal{N}(0, \sigma^2 I). \quad (8)$$" }, { "heading": "4.2.2 LEVERAGING LISTWISE CONTEXT", "text": "For the LTR problem, the list of documents can be leveraged in neural models; this is the key basis for enhancing the network architecture for LTR. We leverage the multi-head self-attention (MHSA) mechanism (Vaswani et al., 2017) to encode ranking list information. More specifically, we generate a contextual embedding $a_i$ for each item $i$, considering the similarity between document $i$ and every document in the list. For the multi-head self-attention mechanism, we take the input $f \in \mathbb{R}^{n \times k}$ and project it into a query (in the attention-mechanism sense) matrix $Q = fW_Q$, a key matrix $K = fW_K$, and a value matrix $V = fW_V$ with trainable projection matrices $W_Q$, $W_K$, and $W_V \in \mathbb{R}^{k \times z}$, where $z$ is the attention head size. A self-attention (SA) head then computes the weighted sum of the transformed values $V$ as
$$\mathrm{SA}(f) = \mathrm{Softmax}(S(f))V, \quad (9)$$
where the similarity matrix between $Q$ and $K$ is defined as $S(f) = \frac{QK^T}{\sqrt{z}}$. For each layer, the results from the $H$ heads are concatenated to form the output of multi-head self-attention:
$$\mathrm{MHSA}(f) = \mathrm{concat}_{h \in [H]}[\mathrm{SA}_h(f)]W_{out} + b_{out}, \quad (10)$$
where $W_{out} \in \mathbb{R}^{Hz \times z}$ and $b_{out} \in \mathbb{R}^{n \times z}$ are trainable parameters. We apply $L \ge 1$ layers of multi-head self-attention followed by a layer normalization (Ba et al., 2016), similarly to Vaswani et al. (2017).
Treating $a_i$ as the listwise contextual embedding for item $i$, we further leverage the simple latent cross idea (Beutel et al., 2018) to effectively generate feature interactions:
$$h_i^{cross} = (1 + a_i) \odot h_{out}(x_i), \quad (11)$$
where $\odot$ is the element-wise multiplication operator ($a_i$ goes through a linear projection when the dimensions do not match, omitted in the equation), and $h_{out}(x_i)$ is the output of the final hidden layer of the regular feed-forward network.
Learning to rank can be seen as learning to induce an order over a set of items. One desirable property for ranking approaches that use listwise context is permutation equivariance: applying a permutation to the input items leads to an equivalent permutation of the output scores. DASALC satisfies this permutation equivariance property. Proposition 1. Let $\pi$ be a permutation of the indices $[1, ..., n]$ and $x \in \mathbb{R}^{n \times k}$ be the input item representation. DASALC is permutation equivariant for the scores generated over the input items, i.e., $s_{DASALC}(\pi(x)) = \pi(s_{DASALC}(x))$. See the proof in Appendix C." }, { "heading": "4.3 REMARKS", "text": "We compared several popular pointwise, pairwise, and listwise ranking losses.
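Before turning to the loss comparison, the pipeline of Eqs. (6)-(11) can be made concrete. Below is a minimal single-layer NumPy sketch of the DASALC scoring path with randomly initialized weights; it is an illustration under our own simplifications (batch normalization, layer normalization, multi-layer stacking, and the dimension-matching projection of $a_i$ are omitted), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, z, H = 10, 136, 64, 2   # list size, feature dim, head size, number of heads

def f_transform(x, sigma=1.0):
    # Eqs. (6)-(8): log1p transform plus element-wise Gaussian noise.
    x_bar = np.log1p(np.abs(x)) * np.sign(x)
    return x_bar + rng.normal(0.0, sigma, size=x.shape)

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

# One multi-head self-attention layer (Eqs. (9)-(10)); L layers would stack this.
Wq = [rng.normal(0, 0.05, (k, z)) for _ in range(H)]
Wk = [rng.normal(0, 0.05, (k, z)) for _ in range(H)]
Wv = [rng.normal(0, 0.05, (k, z)) for _ in range(H)]
W_out = rng.normal(0, 0.05, (H * z, z))
b_out = np.zeros(z)

def mhsa(feat):
    heads = []
    for h in range(H):
        Q, K, V = feat @ Wq[h], feat @ Wk[h], feat @ Wv[h]
        S = Q @ K.T / np.sqrt(z)                  # similarity matrix S(f)
        heads.append(softmax(S, axis=-1) @ V)     # SA(f) = Softmax(S(f)) V
    return np.concatenate(heads, axis=-1) @ W_out + b_out

# Per-item feed-forward tower h_out and latent cross (Eq. (11)).
W1 = rng.normal(0, 0.05, (k, z))
W_fc = rng.normal(0, 0.05, (z, 1))

def dasalc_scores(x):
    feat = f_transform(x)
    a = mhsa(feat)                           # listwise contextual embeddings a_i
    h_out = np.maximum(feat @ W1, 0.0)       # per-item hidden representation
    h_cross = (1.0 + a) * h_out              # latent cross: element-wise product
    return (np.maximum(h_cross, 0.0) @ W_fc).ravel()

x = rng.normal(0, 3, (n, k))
print(dasalc_scores(x))
```

In a trained model these weights would, of course, be learned jointly under a ranking loss such as the softmax cross-entropy loss discussed next.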
We report all results based on the softmax cross-entropy loss $l(y, s(x)) = -\sum_{i=1}^{n} y_i \log_e \frac{e^{s_i}}{\sum_j e^{s_j}}$, since it is simple and empirically robust in general, as demonstrated in Appendix B.2.
We provide a general framework that can enhance neural LTR models in many components. For each component, we purposefully use simple or well-known techniques for enhancement, because the scope of the current research is to identify the possible reasons why neural LTR under-performs when compared with the best traditional tree-based methods. Clearly, each component could use more advanced techniques, such as learning a more flexible data transformation (Zhuang et al., 2020) or using a data augmentation policy (Cubuk et al., 2019), which we leave as future work." }, { "heading": "5 EXPERIMENTS", "text": "We conduct experiments on the three LTR datasets (introduced in Sec 3.1) with our proposed framework and compare with the methods in Sec 3. For all our experiments using neural network approaches, we implemented the models using the TF-Ranking (Pasumarthi et al., 2019) library.
We use two variants of our proposed approach. DASALC is a model trained in our proposed framework. DASALC-ens is an ensemble of DASALC. Recognizing that LambdaMART is itself an ensemble method based on boosting, we leverage the randomness of neural model training and simply use the average score of 3-5 models (tuned on the validation set) from different runs as the final score in DASALC-ens.
Main result. The results are summarized in Table 3. We focus on the comparison with λMARTGBM and also include SetRank to highlight the difference from recent neural LTR models. Readers can refer to Table 2 for more results. We tune hyperparameters on the validation sets, with more details in Appendix A. We make the following observations: (1) DASALC can sometimes achieve comparable or better results than λMARTGBM, and outperforms recent neural LTR methods by a large margin. (2) DASALC-ens, though simple, achieves neutral or significantly better results than λMARTGBM on all datasets and metrics. (3) The results on the Yahoo dataset are weaker than on the other two datasets. One thing to note is that the Yahoo dataset was already normalized upon release. Given the importance of input feature transformation noted above, the provided normalization may not be ideal for neural models; we therefore encourage releasing LTR datasets with raw feature values.
Ablation study. We provide some ablation study results in Table 4 to highlight the effectiveness of each component in our framework. Each component is added cumulatively from left to right in the table. We can see that each component helps and that the best performance is achieved when all components are combined. A more detailed ablation study is provided in Appendix B. Appendix B.1 gives more results on the effect of the log1p transformation. Appendix B.2 compares different loss functions and shows that listwise ranking losses perform better. Appendix B.3 shows the benefit of effective listwise context modeling. Appendix B.4 shows the effect of data augmentation in different model architectures." }, { "heading": "6 RELATED WORK", "text": "We focus on traditional LTR problems, where only numeric features and human ratings are available.
Some works (Mitra & Craswell, 2018; Nogueira et al., 2019; Han et al., 2020) on document matching and ranking leverage neural components such as word2vec and BERT when raw text is available, where the major benefit comes from semantic modeling of highly sparse input and tree-based methods become less relevant due to its limitation in handling sparse features.\nThe pioneering neural LTR models are RankNet (Burges et al., 2005) and LambdaRank (Burges et al., 2007). They use feed-forward networks on dense features as their scoring functions and became less favored than tree-based LambdaMART (Burges, 2010). Recent neural LTR models have explored new model architectures (Pang et al., 2020; Qin et al., 2020b), differetiable losses (Bruch et al., 2019b), and leveraging more auxiliary information (Ai et al., 2018). However, there is less work that specifically understands and addresses weaknesses for neural LTR, and a benchmark with strong tree-based baseline is missing. In this work, we show that relatively simple components that aim to address weaknesses of neural models can outperform recent methods significantly.\nThe idea of generating new data for LTR has been explored in few work recently, but their focus is to train more discriminative ranking models, not to mitigate the data sparsity problem for high-capacity neural models. For example, Yu & Lam (2019) uses a separate Autoencoder model to generate data and then feed them into tree-based models. This work can be treated as orthogonal to our data augmentation technique.\nSeveral LTR papers have leveraged neural sequence modeling based on LSTM (Ai et al., 2018) or self-attention (Pang et al., 2020; Pasumarthi et al., 2020), which is not easy for tree-based approaches to model. We also leverage listwise context via self-attention to show neural LTR models are easily extendable. The combination of self-attention based listwise context and latent cross in our work to specifically mitigate the ineffectiveness of neural model to generate higher-order feature interactions has not been explored in the literature.\nOur work is mostly orthogonal to another line of LTR research, namely unbiased learning to rank from implicit feedback data, such as clicks (Joachims et al., 2017; Hu et al., 2019; Qin et al., 2020a; Zhuang et al., 2021). There are also papers that try to reproduce tree models using neural architectures for tabular data (Saberian et al., 2019; Lee & Jaakkola, 2020). Our motivation is different in that our goal is to identify and mitigate weaknesses of neural approaches in general." }, { "heading": "7 CONCLUSION AND DISCUSSION", "text": "In this paper, we first showed the inconsistency of performance comparison between neural rankers and GBDT models, and verified the inferior performance of neural models. We then identified the weaknesses when building neural rankers in multiple components and proposed methods to address them. Our proposed framework performs competitively well with the strong tree-based baselines. We believe our general framework and the rigorous benchmarking provides critical contribution to facilitate future neural LTR research. In particular, neural models are powerful in modeling complex relations (e.g, attention mechanism (Vaswani et al., 2017)) and raw text features (e.g., BERT (Devlin et al., 2019)). 
Also, active research on neural networks in other domains continuously advances neural techniques (e.g., optimizers (Kingma & Ba, 2014)). All of these can be studied in the LTR setting, and our work paves the way to avoid pitfalls when leveraging these techniques." }, { "heading": "A HYPERPARAMETER TUNING", "text": "For λMARTGBM, we do a grid search over the number of trees ∈ {300, 500, 1000}, number of leaves ∈ {200, 500, 1000}, and learning rate ∈ {0.01, 0.05, 0.1, 0.5}. For our neural models, the main hyperparameters are hidden layer size ∈ {256, 512, 1024, 2048, 3072, 4096} and number of layers ∈ {3, 4, 5, 6} for the regular DNN, data augmentation noise ∈ [0, 5.0] using binary search with step 0.1, number of attention layers ∈ {3, 4, 5, 6}, and number of attention heads ∈ {2, 3, 4, 5}. The same parameter sweep is enabled on the baselines we tried, when applicable. One noticeable difference between our work and existing work is that we tried large hidden layer sizes up to 4096 and found that large models work better in general when data augmentation is enabled. We are in the process of releasing the code and trained models in an open-source software package." }, { "heading": "B ABLATION STUDIES AND ANALYSIS", "text": "B.1 EFFECT OF LOG1P INPUT TRANSFORMATION
We first show that the simple log1p transform can improve performance on the Web30K and Istella datasets (the Yahoo dataset has already been normalized). Results in Table 5 are based on regular DNN models using the softmax cross-entropy loss. The trends are similar for other configurations. We also noted that the results are in general slightly better than with Gaussian normalization, due to the long-tail nature of LTR dataset features; we omit those results here.
We can see that such a simple transformation can bring meaningful gains. In all following sections, we use the log1p transformation by default.
B.2 RANKING LOSSES
Much recent progress in neural LTR concerns ranking losses, especially listwise ranking losses (Bruch et al., 2019b;a; 2020; Grover et al., 2019). For example, it is attractive to devise differentiable versions of ranking losses for end-to-end learning. Here we benchmark different ranking losses on regular DNN models across datasets to show that: (1) listwise ranking losses are superior choices to the pointwise or pairwise losses normally used for non-neural LTR models; (2) the performances of state-of-the-art listwise ranking losses are comparable; (3) the softmax cross-entropy loss is a simple but robust choice.
We consider the following ranking losses:
• SigmoidCrossEntropy: a widely used pointwise loss: $l(y, s(x)) = \sum_{i=1}^{n} -y_i s_i + \log_e(1 + e^{s_i})$.
• RankNet (Burges et al., 2005): a popular pairwise loss: $l(y, s(x)) = \sum_{y_i > y_j} \log_e(1 + e^{s_j - s_i})$.
• LambdaRank (Burges et al., 2007; Wang et al., 2018b): the pairwise loss with the $\Delta\mathrm{NDCG}$ weight, which is a direct implementation of the LambdaMART loss in Eq. (5).
• Softmax (Cao et al., 2007; Bruch et al., 2019a): a popular listwise loss: $l(y, s(x)) = -\sum_{i=1}^{n} y_i \log_e \frac{e^{s_i}}{\sum_j e^{s_j}}$.
• ApproxNDCG (Qin et al., 2010; Bruch et al., 2019b): a listwise loss that is a differentiable approximation of the NDCG metric: $l(y, s(x)) = -\frac{1}{\mathrm{DCG}(\pi^*, y)} \sum_{i=1}^{n} \frac{2^{y_i} - 1}{\log_2(1 + \hat{\pi}_s(i))}$, where $\hat{\pi}_s(i) = \frac{1}{2} + \sum_j \mathrm{sigmoid}\left(\frac{s_j - s_i}{T}\right)$, with $T$ a smoothing parameter.
• GumbelApproxNDCG (Bruch et al., 2019b; 2020): a listwise loss with a stochastic treatment of ApproxNDCG: the scores $s$ in the above NDCG loss function are substituted by $s_i + g_i$, with Gumbel noise $g_i = -\log_e(-\log_e U_i)$ for $U_i$ uniformly sampled in $[0, 1]$.
• NeuralSortNDCG (Grover et al., 2019): a listwise loss that approximates the NDCG metric with the NeuralSort trick: $l(y, s(x)) = -\frac{1}{\mathrm{DCG}(\pi^*, y)} \sum_{i,r=1}^{n} \frac{(2^{y_i} - 1) P^s_{ir}}{\log_2(1 + r)}$, where $P^s_{ir}$ is an approximate permutation matrix obtained by the NeuralSort trick: $P^s_{ir} = \mathrm{softmax}\left[\left((n + 1 - 2i)s_r - \sum_j |s_r - s_j|\right)/T\right]$, with $T$ a smoothing parameter.
• GumbelNeuralSortNDCG: a listwise loss with a stochastic treatment of NeuralSortNDCG, obtained by replacing each score $s_i$ in the NeuralSort permutation matrix by $s_i + g_i$, where $g_i$ is again sampled from the Gumbel distribution. This is new in the literature but not the major focus of this work.
The results are summarized in Table 6. For the different ranking losses, we perform a grid search over different optimizers with different learning rates: for the Adam optimizer, we scan learning rates ∈ {10−4, 10−3, 10−2}; for the Adagrad optimizer, we scan learning rates ∈ {0.01, 0.1, 0.5}. When the smoothing parameter $T$ is applicable, we also scan it ∈ {0.1, 1, 10}. We report the results based on the best NDCG@5 for each loss.
As stated above, we find that: (1) The performance of models trained with listwise losses is significantly better than that of models trained with pointwise or pairwise losses. (2) Different listwise losses are generally comparable, and we found that the softmax cross-entropy loss performs consistently well over different models and different datasets. It is thus used in our main results and following sections. (3) LambdaRank does not work well for neural models. On the other hand, previous work (Bruch et al., 2019a) shows that tree-based models with the softmax loss are not as good as LambdaMART, demonstrating that tree-based models and neural LTR models behave differently under different loss functions. This encourages future work to design neural-LTR-specific ranking losses.
B.3 EFFECT OF LISTWISE CONTEXT
We study the effect of leveraging listwise context with self-attention, with and without latent cross (when latent cross is not used, the item and context features are simply concatenated (Pasumarthi et al., 2020)), on the Web30K and Istella datasets. Results are shown in Table 7. We can see that using a neural approach to model listwise context, which is difficult for tree-based models to do, is quite beneficial. Latent cross, though simple, helps leverage listwise context more effectively.
B.4 EFFECT OF DATA AUGMENTATION
One of the technical findings in this work is that using simple Gaussian noise as data augmentation can help neural LTR models. Below we add Gaussian noise with different strengths (σ) to both the DNN model and the DASALC framework, with results shown in Table 8. We can see that the performance of the DNN starts to drop as soon as we add noise. However, for DASALC, data augmentation helps and the performance looks robust at different levels of noise, peaking around σ = 1.5. The optimal σ needs tuning for each dataset, but the general trends are similar across datasets. We leave the study of the exact mechanism by which data augmentation works in DASALC and the application of more sophisticated data augmentation techniques as future work.
We also tried adding noise to λMARTGBM and observed results similar to the DNN. The results on the Yahoo dataset are shown in Table 9; we can see that adding noise leads to worse accuracy.
B.5 PERFORMANCE ON CATBOOST
We mainly compared with λMARTRankLib and λMARTGBM in the main text since they are the most popular baselines used in recent papers.
There are other GBDT implementations that can also be used for the LTR task. Catboost (Prokhorenkova et al., 2018) is a recently popular GBDT implementation for various tasks, and we also evaluate its performance on the three LTR datasets. Note that Catboost is not specific to ranking and, to the best of our knowledge, does not have a standard LambdaMART implementation. We try both the QueryRMSE loss and the YetiRank loss, which are the best performing losses on most of Catboost's existing benchmarks. The results are reported in Table 10.
We can see that Catboost can produce very decent results, clearly outperforming λMARTRankLib, but its comparison with λMARTGBM is mixed. We encourage researchers to also consider different implementations such as Catboost in future LTR work.
B.6 LAMBDAMART ENSEMBLE
We showed that a simple ensemble of neural rankers can bring meaningful gains by leveraging the stochastic nature of neural network learning. LambdaMART itself is an ensemble algorithm based on boosting, but it is still interesting to see the effect of ensembling multiple LambdaMART models. We conducted additional experiments on this front using λMARTGBM and have two major observations: 1) Running LambdaMART multiple times with the same configuration generates very similar results, and ensembling in this setting does not help, whereas neural rankers can benefit from such a simple setting. 2) In Table 11 we show the result of ensembling LambdaMART models with different configurations (e.g., different numbers of trees, numbers of leaves, and learning rates) on the Istella dataset; we ensemble five LambdaMART models chosen on the validation set. The results on other datasets are similar.
We can see that the improvement from ensembling LambdaMART is smaller than that for neural rankers (see Table 3). Our hypothesis is that model ensembles tend to be more effective for neural rankers due to their stronger stochastic nature, and exploring advanced model ensemble methods with neural rankers is an interesting future direction." }, { "heading": "C PERMUTATION EQUIVARIANCE ANALYSIS", "text": "For a general scoring function $s(x) : \mathbb{R}^{n \times k} \to \mathbb{R}^n$ and a permutation $\pi$ over the indices $[1, ..., n]$, we say $s$ is permutation equivariant iff
$$s(\pi(x)) = \pi(s(x)). \quad (12)$$
The scoring function of the proposed approach, DASALC, can be written as a combination of the feature transformation and data augmentation function $f$, the output of multi-headed self-attention $a := \mathrm{MHSA}_L(f)$, and the output of the final layer of the regular network $h_{out}(x)$:
$$s_{DASALC}(x) = W_{FC}^T \mathrm{ReLU}((1 + a(x)) \odot h_{out}(x)). \quad (13)$$
Note that per-item transformations, which we refer to as univariate transformations, are trivially permutation equivariant. Also, the composition of two permutation equivariant functions is itself permutation equivariant, as the permutation operator and permutation equivariant functions commute. Hence the linear projection, the ReLU activation, and $f$ (as a function of $x$) are permutation equivariant. Multi-headed self-attention has been shown to be permutation equivariant (Pang et al., 2020). Hence, applying the permutation $\pi$ to the proposed scoring function, we see that it satisfies the permutation equivariance property:
$$\pi(s_{DASALC}(x)) = \pi(W_{FC}^T \mathrm{ReLU}((1 + a(x)) \odot h_{out}(x))) = W_{FC}^T \mathrm{ReLU}(\pi((1 + a(x)) \odot h_{out}(x))) = W_{FC}^T \mathrm{ReLU}((1 + \pi(a(x))) \odot \pi(h_{out}(x))) = W_{FC}^T \mathrm{ReLU}((1 + a(\pi(x))) \odot h_{out}(\pi(x))) = s_{DASALC}(\pi(x)).$$" } ]
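As an empirical companion to the proof in Appendix C, permutation equivariance (Eq. 12) can also be checked numerically on a randomly initialized network of this form. The sketch below uses a simplified, deterministic version of the DASALC scorer (a single attention head, toy dimensions of our own choosing, and data augmentation noise disabled); it is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, z = 8, 12, 16  # toy list size, feature dim, hidden dim (our choices)
Wq, Wk, Wv = (rng.normal(0, 0.1, (k, z)) for _ in range(3))
W1, W_fc = rng.normal(0, 0.1, (k, z)), rng.normal(0, 0.1, (z, 1))

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def s_dasalc(x):
    # Deterministic simplification of Eq. (13): one self-attention head as a(x),
    # a univariate tower h_out, latent cross, then a linear readout.
    a = softmax((x @ Wq) @ (x @ Wk).T / np.sqrt(z)) @ (x @ Wv)
    h_out = np.maximum(x @ W1, 0.0)
    return (np.maximum((1.0 + a) * h_out, 0.0) @ W_fc).ravel()

x = rng.normal(size=(n, k))
perm = rng.permutation(n)
# Permutation equivariance (Eq. 12): s(pi(x)) == pi(s(x)).
assert np.allclose(s_dasalc(x[perm]), s_dasalc(x)[perm])
print("permutation equivariance holds on this example")
```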
2021
ARE NEURAL RANKERS STILL OUTPERFORMED BY GRADIENT BOOSTED DECISION TREES?
SP:e56c1cfe3a5303c1176c9778ef1ea75855d7e20f
[ "the paper propose to tackle visual reasoning problem in videos. The proposed solution is to combine MONET (Burgess et al., 2019) with self-attention mechanism (Vaswani et al., 2017) to first encode images into object-centric encodings and aggregate the encodings using self-attention to make the final prediction. The method is shown to outperform neural-symbolic reasoning approaches such as MAC (Hudson & Manning, 2018) and NS-DR (Yi et al 2020.) on image QA and R3D (Girdhar & Ramanan, 2020) on CARTER, which is a video reasoning task / benchmark." ]
Transformer-based language models have proved capable of rudimentary symbolic reasoning, underlining the effectiveness of applying self-attention computations to sets of discrete entities. In this work, we apply this lesson to videos of physical interaction between objects. We show that self-attention-based models operating on discrete, learned, object-centric representations perform well on spatio-temporal reasoning tasks which were expressly designed to trouble traditional neural network models and to require higher-level cognitive processes such as causal reasoning and understanding of intuitive physics and narrative structure. We achieve state of the art results on two datasets, CLEVRER and CATER, significantly outperforming leading hybrid neuro-symbolic models. Moreover, we find that techniques from language modelling, such as BERT-style semi-supervised predictive losses, allow our model to surpass neuro-symbolic approaches while using 40% less labelled data. Our results corroborate the idea that neural networks can reason about the causal, dynamic structure of visual data and attain understanding of intuitive physics, which counters the popular claim that they are only effective at perceptual pattern-recognition and not reasoning per se.
[]
[ { "authors": [ "Jacob Andreas", "Marcus Rohrbach", "Trevor Darrell", "Dan Klein" ], "title": "Learning to compose neural networks for question answering", "venue": "CoRR, abs/1601.01705,", "year": 2016 }, { "authors": [ "Anonymous. Hopper" ], "title": "Multi-hop transformer for spatiotemporal reasoning", "venue": "In Submitted to International Conference on Learning Representations,", "year": 2021 }, { "authors": [ "Tom B Brown", "Benjamin Mann", "Nick Ryder", "Melanie Subbiah", "Jared Kaplan", "Prafulla Dhariwal", "Arvind Neelakantan", "Pranav Shyam", "Girish Sastry", "Amanda Askell" ], "title": "Language models are few-shot learners", "venue": null, "year": 2005 }, { "authors": [ "Chris P. Burgess", "Loic Matthey", "Nick Watters", "Rishabh Kabra", "Irina Higgins", "Matt Botvinick" ], "title": "MONet: Unsupervised scene decomposition and representation", "venue": null, "year": 1901 }, { "authors": [ "João Carreira", "Andrew Zisserman" ], "title": "Quo vadis, action recognition? A new model and the kinetics", "venue": "dataset. CoRR,", "year": 2017 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Zhe Chen" ], "title": "Object-based attention: A tutorial review", "venue": "Attention, Perception, & Psychophysics,", "year": 2012 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "William W Cohen", "Jaime Carbonell", "Quoc V Le", "Ruslan Salakhutdinov" ], "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "venue": null, "year": 1901 }, { "authors": [ "Mikyas T. Desta", "Larry Chen", "Tomasz Kornuta" ], "title": "Object-based reasoning in VQA", "venue": "CoRR, abs/1801.09718,", "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Yilun Du", "Kevin Smith", "Tomer Ulman", "Joshua Tenenbaum", "Jiajun Wu" ], "title": "Unsupervised discovery of 3d physical objects from video", "venue": "arXiv preprint arXiv:2007.12348,", "year": 2020 }, { "authors": [ "Rohit Girdhar", "Deva Ramanan" ], "title": "CATER: A diagnostic dataset for Compositional Actions and TEmporal Reasoning", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Klaus Greff", "Raphaël Lopez Kaufman", "Rishabh Kabra", "Nick Watters", "Christopher Burgess", "Daniel Zoran", "Loïc Matthey", "Matthew Botvinick", "Alexander Lerchner" ], "title": "Multi-object representation learning with iterative variational inference", "venue": "URL http://arxiv", "year": 1903 }, { "authors": [ "Christopher Hahn", "Frederik Schmitt", "Jens U. Kreber", "Markus N. Rabe", "Bernd Finkbeiner" ], "title": "Transformers generalize to the semantics of logics", "venue": "arXiv preprint arXiv:2003.04218,", "year": 2020 }, { "authors": [ "Tengda Han", "Weidi Xie", "Andrew Zisserman" ], "title": "Video representation learning by dense predictive coding", "venue": "In Workshop on Large Scale Holistic Video Understanding,", "year": 2019 }, { "authors": [ "Drew A Hudson", "Christopher D Manning" ], "title": "Compositional attention networks for machine reasoning", "venue": null, "year": 2018 }, { "authors": [ "Justin Johnson", "Bharath Hariharan", "Laurens van der Maaten", "Li Fei-Fei", "C. 
Lawrence Zitnick", "Ross B. Girshick" ], "title": "CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning", "venue": "CoRR, abs/1612.06890,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Ryan Kiros", "Yukun Zhu", "Russ R Salakhutdinov", "Richard Zemel", "Raquel Urtasun", "Antonio Torralba", "Sanja Fidler" ], "title": "Skip-thought vectors. In Advances in neural information processing", "venue": null, "year": 2015 }, { "authors": [ "Guillaume Lample", "François Charton" ], "title": "Deep learning for symbolic mathematics", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Liunian Harold Li", "Mark Yatskar", "Da Yin", "Cho-Jui Hsieh", "Kai-Wei Chang" ], "title": "Visualbert: A simple and performant baseline for vision and language", "venue": null, "year": 1908 }, { "authors": [ "Zhixuan Lin", "Yi-Fu Wu", "Skand Vishwanath Peri", "Weihao Sun", "Gautam Singh", "Fei Deng", "Jindong Jiang", "Sungjin Ahn" ], "title": "SPACE: Unsupervised object-oriented scene representation via spatial attention and decomposition", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Jiasen Lu", "Dhruv Batra", "Devi Parikh", "Stefan Lee" ], "title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks", "venue": "arXiv preprint arXiv:1908.02265,", "year": 2019 }, { "authors": [ "Gary Marcus" ], "title": "The next decade in ai: Four steps towards robust artificial intelligence", "venue": "arXiv preprint arXiv:2002.06177,", "year": 2020 }, { "authors": [ "Tomas Mikolov", "Kai Chen", "Greg Corrado", "Jeffrey Dean" ], "title": "Efficient estimation of word representations in vector space", "venue": "arXiv preprint arXiv:1301.3781,", "year": 2013 }, { "authors": [ "Melanie Mitchell" ], "title": "Can GPT-3 make analogies? https://medium.com/@melaniemitchell", "venue": "me/can-gpt-3-make-analogies-16436605c446,", "year": 2020 }, { "authors": [ "David Raposo", "Adam Santoro", "David G.T. Barrett", "Razvan Pascanu", "Timothy P. Lillicrap", "Peter W. Battaglia" ], "title": "Discovering objects and their relations from entangled scene representations", "venue": "CoRR, abs/1702.05068,", "year": 2017 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross B. Girshick", "Jian Sun" ], "title": "Faster R-CNN: towards real-time object detection with region proposal", "venue": "networks. CoRR,", "year": 2015 }, { "authors": [ "Pieter R. Roelfsema", "Victor A. Lamme", "Henk Spekreijse" ], "title": "Object-based attention in the primary visual cortex of the macaque", "venue": "monkey. 
Nature,", "year": 1998 }, { "authors": [ "Aviv Shamsian", "Ofri Kleinfeld", "Amir Globerson", "Gal Chechik" ], "title": "Learning object permanence from video", "venue": "arXiv preprint arXiv:2003.10469,", "year": 2020 }, { "authors": [ "Weijie Su", "Xizhou Zhu", "Yue Cao", "Bin Li", "Lewei Lu", "Furu Wei", "Jifeng Dai" ], "title": "Vl-bert: Pre-training of generic visual-linguistic representations", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Chen Sun", "Fabien Baradel", "Kevin Murphy", "Cordelia Schmid" ], "title": "Contrastive bidirectional transformer for temporal representation learning", "venue": "CoRR, abs/1906.05743,", "year": 2019 }, { "authors": [ "Chen Sun", "Austin Myers", "Carl Vondrick", "Kevin Murphy", "Cordelia Schmid" ], "title": "Videobert: A joint model for video and language representation learning", "venue": "CoRR, abs/1904.01766,", "year": 2019 }, { "authors": [ "Hao Tan", "Mohit Bansal" ], "title": "Lxmert: Learning cross-modality encoder representations from transformers", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing,", "year": 2019 }, { "authors": [ "Sjoerd van Steenkiste", "Klaus Greff", "Jürgen Schmidhuber" ], "title": "A perspective on objects and systematic generalization in model-based RL", "venue": "CoRR, abs/1906.01035,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Xiaolong Wang", "Ross Girshick", "Abhinav Gupta", "Kaiming He" ], "title": "Non-local neural networks", "venue": null, "year": 2018 }, { "authors": [ "Kexin Yi", "Jiajun Wu", "Chuang Gan", "Antonio Torralba", "Pushmeet Kohli", "Joshua B. Tenenbaum" ], "title": "Neural-symbolic vqa: Disentangling reasoning from vision and language understanding", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Kexin Yi", "Chuang Gan", "Yunzhu Li", "Pushmeet Kohli", "Jiajun Wu", "Antonio Torralba", "Joshua B. 
Tenenbaum" ], "title": "CLEVRER: Collision events for video representation and reasoning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Yang You", "Jing Li", "Sashank Reddi", "Jonathan Hseu", "Sanjiv Kumar", "Srinadh Bhojanapalli", "Xiaodan Song", "James Demmel", "Cho-Jui Hsieh" ], "title": "Large batch optimization for deep learning: Training BERT in 76 minutes", "venue": "Technical Report UCB/EECS-2019-103, EECS Department,", "year": 2019 }, { "authors": [ "Vinicius Zambaldi", "David Raposo", "Adam Santoro", "Victor Bapst", "Yujia Li", "Igor Babuschkin", "Karl Tuyls", "David Reichert", "Timothy Lillicrap", "Edward Lockhart", "Murray Shanahan", "Victoria Langston", "Razvan Pascanu", "Matthew Botvinick", "Oriol Vinyals", "Peter Battaglia" ], "title": "Deep reinforcement learning with relational inductive biases", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "‖x‖·‖y" ], "title": "We found that these variations did not significantly change the performance of the model (and the optimal temperature setting was close to τ = 1), and leave to future work more careful analysis of these contrastive losses and the representations they encourage", "venue": null, "year": 2020 }, { "authors": [ "Girdhar", "Ramanan" ], "title": "For CLEVRER, we resize videos to 64 by 64 resolution and sample 25 random frames, as in Yi et al. (2020). We divide our batch of 16 videos and questions in two, a supervised sub-batch and an unsupervised sub-batch (where question answers are not provided). The supervised sub-batch is used to calculate the classification", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Artificial intelligence research has long been divided into rule-based approaches and statistical models. Neural networks, a classic example of the statistical approach, certainly have limitations despite their massive popularity and success. For example, experiments with two recently released video question-answering datasets, CLEVRER (Yi et al., 2020) and CATER (Girdhar & Ramanan, 2020), demonstrate that neural networks fail to adequately reason about spatio-temporal and compositional structure in visual scenes. While the networks perform adequately when asked to describe their inputs, they tend to fail when asked to predict, explain, or consider counterfactual possibilities.\nBy contrast, a neuro-symbolic model called NS-DR (Yi et al., 2020) appears to be much better suited to predicting, explaining, and considering counterfactual possibilities with this data. The model leverages independent neural networks to detect objects, infer dynamics, and syntactically parse the question. A hand-coded symbolic executor interprets the questions grounded on the outputs of the networks. The fact that hybrid models employing both distributed (neural) representations and symbolic logic can sometimes perform better has led some to consider neuro-symbolic hybrids to be a more promising model class compared to end-to-end neural networks (Andreas et al., 2016; Yi et al., 2018; Marcus, 2020).\nThere is evidence from other domains, however, that neural networks can indeed adequately model higher-level cognitive processes. For example, in some symbolic domains (such as language), neural networks outperform hybrid neuro-symbolic approaches when tasked to classify or predict (Devlin et al., 2018). Neural models have also had some success in mathematics, a domain that, intuitively, would seem to require the execution of formal rules and manipulation of symbols (Lample & Charton, 2020). Somewhat surprisingly, large-scale neural language models such as GPT-3 (Brown et al., 2020) can acquire a propensity for arithmetic reasoning and analogy-making without being trained explicitly for such tasks, suggesting that current neural network limitations are ameliorated when scaling to more data and using larger, more efficient architectures (Brown et al., 2020; Mitchell, 2020). A key motivation of our work, therefore, is to reconcile existing neural network limitations in video domains with their (perhaps surprising) successes in symbolic domains.\nOne common element of these latter results is the repeated application of self-attention processes (Vaswani et al., 2017) to sequences of discrete ‘entities’. Here, we apply this insight to videos of physical interactions between sets of objects, where the input data to models are continuously-valued pixel arrays at multiple timesteps (together with symbolic questions in certain cases).\nA key design decision is the appropriate level of granularity for the discrete units underlying the selfattention computation. What is the visual analogue to a word in language, or a symbol in mathematics? We hypothesize that the discrete entities acted upon by self-attention should correspond to semantic entities relevant to the task. For tasks based on visual data derived from physical interactions, these entities are often times objects (van Steenkiste et al., 2019; Battaglia et al., 2018). 
To extract representations of these entities, we use MONet, an unsupervised object segmentation model (Burgess et al., 2019), but we leave open the possibility that other object-estimation algorithms might work better. We propose that a sufficiently expressive self-attention model acting on entities corresponding to physical objects will exhibit, on video datasets, a similar level of higher-level cognition and ‘reasoning’ seen when these models are applied to language or mathematics.\nAltogether, our results demonstrate that self-attention-based neural nets can outperform hybrid neurosymbolic models on visual tasks that require high-level cognitive processes, such as causal reasoning and physical understanding. We show that choosing the right level of discretization is critical for successfully learning these higher-order capabilities: pixels and local features are too fine, and entire scenes are too coarse. Moreover, we identify the value of self-supervised tasks, especially in low data regimes. These tasks ask the model to infer future arrangements of objects given the past, or to infer what must have happened for objects to look as they do in the present. We verify these conclusions in two video datasets, one in which the input is exclusively visual (CATER) and one that requires the combination of language (questions) and vision (CLEVRER)." }, { "heading": "2 METHODS", "text": "Our principal motivation is the converging evidence for the value of self-attention mechanisms operating on a finite sequences of discrete entities. Written language is inherently discrete and hence is well-suited to self-attention-based approaches. In other domains, such as raw audio or vision, it is less clear how to leverage self-attention. We hypothesize that the application of self-attention-based models to visual tasks could benefit from an approximate ’discretization’ process analogous to the segmentation of speech into words or morphemes, and that determining the appropriate level of discretization is an important choice that can significantly affect model performance.\nAt the finest level, data could simply be discretized into pixels (as is already the case for most machine-processed visual data). But since pixels are too-fine grained, some work considers the downsampled “hyper-pixel” outputs of a convolutional network to comprise the set of discrete units (e.g. Zambaldi et al. (2019); Lu et al. (2019)). In the case of videos, an even courser discretization scheme is often used: representations of frames or subclips (Sun et al., 2019b).\nThe neuroscience literature, however, suggests that biological visual systems infer and exploit the existence of objects, rather than use spatial or temporal blocks with artificial boundaries (Roelfsema et al., 1998; Spelke, 2000; Chen, 2012). Because objects are the atomic units that tasks we consider here focus on, it makes sense to discretize on the level of objects. Numerous object segmentation algorithms have been proposed (Ren et al., 2015; He et al., 2017; Greff et al., 2019). We chose to use MONet, an unsupervised object segmentation algorithm that produces object representations with disentangled features (Burgess et al., 2019). Because MONet is unsupervised, we can train it directly in our domain of interest without the need for object segmentation labels.\nTo segment each frame into object representations, MONet first uses a recurrent attention network to obtain a set of No “object attention masks” (No is a fixed parameter). 
Each attention mask represents the probability that any given pixel belongs to that mask’s object. The pixels assigned to the mask are encoded into latent variables with means µti ∈ Rd, where i indexes the object and t the frame. These means are used as the object representations in our model. More details are provided in Appendix A.1.\nThe self-attention component is a transformer model (Vaswani et al., 2017) over the sequence µti. In addition to this sequence of vectors, we include a trainable vector CLS ∈ Rd that is used to generate classification results; this plays a similar role to the CLS token in BERT (Devlin et al., 2018). Finally, for our CLEVRER experiments, where the inputs include a question and potentially several\nchoices, we embed each question word wi (and choice for multiple choice questions) in Rd and include these in the sequence of inputs to the transformer. We also append a two-dimensional one-hot vector to µti and wi to indicate whether the input is a word or an object.\nWe pass the sequence consisting of the object latent means µti, the classification token, and the word embeddings (for CLEVRER) through a transformer with NT layers. We add a relative positional encoding at each layer of the transformer to give the model knowledge of the word and frame order (Dai et al., 2019). The transformed value of the classification token CLS is passed through an MLP (with one hidden layer of size NH ) to generate the final answer.\nThis general approach is adapted to each of our datasets according to the format of expected answers. The final layer, which operates on the transformed value of CLS, is a softmax over answers for CLEVRER descriptive questions, a logit for each choice for CLEVRER multiple-choice questions, or a softmax over the grid-index of the final location of the snitch for CATER. A schema of our architecture is shown in Figure 1.\nNote that in this model, an object in one frame can attend to every object in every frame. We also consider an alternative model with hierarchical attention, which consists of two stages of selfattention with two different transformers. The first transformer acts independently on the objects within each frame along with the word embeddings. The outputs of the first transformer for each frame are concatenated into a single feature vector, one for each frame. The second transformer acts on these feature vectors, treating each frame as an atomic entity. We study the importance of global attention (objects as the atomic entities) vs hierarchical attention (objects, and subsequently frames as the atomic entities). The comparison is shown in Table 1." }, { "heading": "2.1 SELF-SUPERVISED LEARNING", "text": "Self-supervised learning—unsupervised learning where the data provides its own supervision—has a long history in language processing, where it allows models to acquire useful representations of words, phrases and sentences (Mikolov et al., 2013; Kiros et al., 2015; Devlin et al., 2018). Such techniques are also effective in visual domains, for image classification (Chen et al., 2020), video analysis (Han et al., 2019), and RL agents (Gregor et al., 2019). In these cases, it is proposed that the learned representations are more informationally dense, allowing models to satisfy their ultimate objectives more effectively. 
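Before detailing the auxiliary objectives, the supervised pathway of the model described above (MONet slot means plus a trainable CLS token and word embeddings, processed by a transformer whose transformed CLS value is read out by an MLP) can be sketched in a few lines. The following is a minimal, randomly initialized NumPy illustration with hypothetical shapes; the relative positional encodings, the word/object indicator bits, and training are all omitted, and none of the names come from the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N_o, d, n_ans = 25, 8, 16, 21   # frames, slots per frame, latent dim, answers

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_layer(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

# mu[t, i] stands in for the MONet slot means; words for embedded question tokens.
mu = rng.normal(size=(T, N_o, d))
words = rng.normal(size=(10, d))
cls = rng.normal(size=(1, d))

# Flatten slots across all frames: every object can attend to every object
# in every frame (the "global attention" variant, as opposed to hierarchical).
tokens = np.concatenate([cls, mu.reshape(T * N_o, d), words], axis=0)

n_layers = 2
params = [tuple(rng.normal(0, 0.1, (d, d)) for _ in range(3)) for _ in range(n_layers)]
for Wq, Wk, Wv in params:
    tokens = tokens + self_attention_layer(tokens, Wq, Wk, Wv)  # residual SA layer

# Classification MLP applied to the transformed CLS token.
W1, W2 = rng.normal(0, 0.1, (d, 32)), rng.normal(0, 0.1, (32, n_ans))
logits = np.maximum(tokens[0] @ W1, 0.0) @ W2
print(softmax(logits))  # distribution over candidate answers
```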
MONet, the technique we use to estimate object-centric visual representations, can also be considered a self-supervised learning method.
We explored whether self-supervised learning could improve the performance of our model beyond the benefits conveyed by object-level representation, i.e., in ways that support the model's interpretation of the dynamics of the scenes rather than just via improved perception of static observations. Our approach is inspired by the loss used in BERT (Devlin et al., 2018), where a bidirectional transformer model is trained to predict certain words that are masked from the input. In our case, we mask object representations and train the model to infer the content of the masked object representations using its knowledge from all unmasked objects.
More formally, we set
$$\text{transformer input} = \langle \mathrm{CLS};\ m_{ti}\mu_{ti}|_{t,i};\ w_i|_i \rangle,$$
where $m_{ti} \in \{0, 1\}$ is a masking indicator. We write the output of the transformer as
$$\text{transformer output} = \langle \mathrm{CLS}';\ \mu'_{ti}|_{t,i};\ w'_i|_i \rangle.$$
We expect the transformer to understand the underlying dynamics of the video, so that the masked-out slot $\mu_{ti}$ can be predicted from $\mu'_{ti}$. We add an auxiliary loss to guide the transformer in learning effective representations capable of this type of dynamics prediction:
$$\text{auxiliary loss} = \sum_{t,i} \tau_{ti}\, l(f(\mu'_{ti}), \mu_{ti}),$$
where $f$ is a learned linear mapping to $\mathbb{R}^d$, $l$ is a loss function, and $\tau_{ti} \in \{0, 1\}$ are indicator variables identifying the prediction targets. We propagate gradients only to the parameters of $f$ and the transformer. This auxiliary loss is added to the main classification loss with weighting $\lambda$, and both losses are minimized simultaneously by the optimizer. In particular, we do not pretrain the model with only the auxiliary loss.
We tested two different loss functions, an L2 loss and a contrastive loss (formulas given in Appendix A.2), and six different masking schemes, as illustrated in Figure 2. This exploration was motivated by the observation that video inputs at adjacent timesteps are highly correlated in a way that adjacent words are not. We thus hypothesized that BERT-style prediction of adjacent words might not be optimal. A different masking strategy, in which prediction targets are separated from the context by more than a single timestep, may stimulate capacity in the network to acquire the environment knowledge that permits context-based unrolls and better long-horizon predictions.
First, we set $m_{ti} = 1$ uniformly at random across $t$ and $i$ and $\tau_{ti} = 1 - m_{ti}$, fixing the expected proportion of the $m_{ti}$ set to 1 (schema b in Figure 2). While simple, this has the downside of masking out multiple objects per frame, which is potentially problematic since MONet does not assign objects to slot indices in a well-defined way. MONet usually switches object-to-slot assignments multiple times in a single video, and these switches occur unpredictably. If multiple slots are masked out, the transformer cannot determine with certainty which missing object to assign to each slot, and so the auxiliary loss could penalize the model even if it predicted all the objects correctly. To avoid this problem, we also try constraining the mask such that exactly one $m_{ti} = 0$ for each $t$ (schema a); this ensures only one slot per frame is masked out, eliminating the ambiguity.
To pose harder prediction challenges, we add a buffer between the context (where $m_{ti} = 1$) and the infilling targets (where $\tau_{ti} = 1$).
For t in this buffer zone, both mti = 0 and τti = 0 (schemas c–f ). In the presence of this buffer, we compared prediction (where the context is strictly before the targets; schema c, d) versus infilling (where the context surrounds the targets; schema e, f ). We also compared setting the targets as individual objects (schema c, e) versus targets as entire scenes (schema d, f ).\nWe visually inspect the efficacy of this self-supervised loss in encouraging better representations (beyond improvements of scores on tasks) in Appendix C." }, { "heading": "3 EXPERIMENTS", "text": "We tested our model on two datasets, CLEVRER (Yi et al., 2020) and CATER (Girdhar & Ramanan, 2020). For each dataset, we pretrained a MONet model on individual frames. More training details and a table of hyperparameters are given in Appendix A.3." }, { "heading": "3.1 CLEVRER", "text": "CLEVRER features videos of CLEVR objects (Johnson et al., 2016) that move and collide with each other; these objects are not necessarily visible in every frame. For each video, several questions are posed to test the model’s understanding of what happened (or might happen). Unlike most other visual question answering datasets, which test for only descriptive understanding (“what happened in the video?”), CLEVRER explicitly poses other more causally-complex questions, including explanatory questions (“why did something happen?”), predictive questions (“what will happen next?”), and counterfactual questions (“what would happen in a different circumstance?”) (Yi et al., 2020).\nWe compare our model to state of the art models reported in the literature: MAC (V+) and NS-DR from Yi et al. (2020), as well as the DCL model from Anonymous (2021a) (simultaneous to our work). MAC (V+) (based on the MAC network introduced in Hudson & Manning (2018)) is an end-to-end network that combines visual and language representations. It is augmented with object information and trained using ground truth labels for object segmentation masks and features (e.g. color, shape). NS-DR and DCL are hybrid models that apply a symbolic logic engine to outputs of various neural networks. The neural networks are used to detect objects, predict dynamics, and parse the question into a program, and the symbolic executor runs the parsed program to obtain the final output. NS-DR is trained using ground truth labels and ground truth parsed programs, while DCL requires only the ground truth parsed programs.\nTable 1 shows the result of our model compared to these models; our model is also listed in the public leaderboard provided by Yi et al. (2020) under the name “neural”1. Across all categories, our model significantly outperforms the previous best models. Moreover, compared to the other models, our model does not use any labeled data other than the correct answer for the questions, nor does it require pretraining on any other dataset. Our model also was not specifically designed for this task, and it straightforwardly generalizes to other tasks as well, such as CATER (Girdhar & Ramanan, 2020).\nDetailed analysis Some sample model classifications on a randomly selected set of videos and questions are provided in Appendix D.1. These examples suggest qualitatively that, for most instances where the model was incorrect, humans would plausibly furnish the same answer. During further analysis of our results, we observed a spectrum of difficulties for CLEVRER’s counterfactual questions, which we discuss in Appendix B. 
We find that our model scores 59.8% on the hardest counterfactual questions, which is still substantially better than both chance and all other models, albeit with some room for improvement remaining. Finally, to shed light on how our model arrives at its predictions, we provide detailed analysis of attention weights in Appendix C.\nModel ablation Table 1 shows the contributions of various components of our model. First, selfattention is necessary for solving this problem. For comparison, we replace our model’s transformer with four fully connected layers with 2048 units per layer2. We find that an MLP is unable to answer non-descriptive questions effectively, despite using more parameters (20M vs 15M parameters).\nSecond, we verify that an object-based discretization scheme is essential to the performance of our model. We compare with a version of the architecture where the MONet object representations µti are replaced with ResNet hyperpixels as in Zambaldi et al. (2019)3. Concretely, we flatten the output of the final convolutional layer of the ResNet to obtain a sequence of feature vectors that is fed into the transformer as the discrete entities. We find that an object level representation, such as one output by MONet, greatly outperforms the locality-aware but object-agnostic ResNet representation.\nWe also observe the importance of global attention between all objects across all frames, compared to a hierarchical attention model where objects within a frame could attend to each other but frames could only attend to each other as an atomic entity. We hypothesize that global attention may be important because with hierarchical attention, objects in different frames can only attend to each other at the “frame” granularity. A cube attending to a cube in a different frame would then gather information about the other non-cube objects, muddling the resulting representation. Since we care about how objects evolve over time, not operating at the level of objects is intuitively problematic.\nFinally, we see that an auxiliary self-supervised (infill) loss improves the performance of the model by between 4 and 6 percentage points, with the greatest improvement on the counterfactual questions.\nSelf-supervision strategies We compared the various masking schemes and loss functions for our auxiliary loss; a detailed figure is provided in Appendix A (Figure 4). We find that for all question types of the CLEVRER task, an L2 loss performs better than a contrastive loss, and among the masking schemes, masking one object per frame is the most effective. This particular result runs counter to our hypothesis that predictions or infilling in which the target is temporally removed from the context could encourage the model to learn more about scene dynamics and object interactions than (BERT-style) local predictions of adjacent targets. Of course, there may be other settings or loss functions that reveal the benefits of non-local prediction or constrastive losses; we leave this investigation to future work.\nData efficiency We investigated how model performance varies as a function of the number of labelled (question-answer) pairs it learns from. To do so, we train models on N% of the videos and their associated labeled data. We evaluate the effect of including the auxiliary self-supervised loss\n1https://evalai.cloudcv.org/web/challenges/challenge-page/667/ leaderboard/1813\n2We also tried a bidirectional LSTM, which achieved even lower performance. 
This may be because the structure of our inputs requires the learning of long-range dependencies.\n3Note that in contrast to the ResNet-based models in Yi et al. (2020), our ResNet was not pretrained on ImageNet or any other dataset, but was simply trained with the rest of the model on the task.\n(applied to the entire dataset, not just the labelled portion) in this low data regime. This scenario, where unlabeled data is plentiful while labeled data is scarce, occurs frequently in practice, since collecting labeled data is much more expensive than collecting unlabeled data.\nFigure 3 shows that our best model reaches the approximate level of the previous state-of-the-art approaches using only 50%-60% of the data. We see that the self-supervised auxiliary loss makes a particular improvement to validation performance in low-data regimes. For instance, when trained on only 50% of the available labelled data, self-supervised learning enables the model to reach a performance of 37% on counterfactual questions (compared to 25% by MAC (V+) and 42% by NS-DR), while without self-supervised learning, the model only reaches a performance of 13% (compared to the 10% achieved by answering randomly (Yi et al., 2020))." }, { "heading": "3.2 CATER", "text": "We also tested our model on CATER, an object-tracking dataset Girdhar & Ramanan (2020). In CATER, objects from the CLEVR dataset (Johnson et al., 2016) move and potentially cover other such objects, and the goal is to predict the location of a target object (called the snitch) in the final frame. Because the target object could be covered by multiple objects that could move in the meantime, the model must be sensitive to notions such as such as object permanence in order to track the target. There are two main variants of the CATER dataset, static camera and moving camera, differing in whether or not the camera that produces the video could move. A moving camera introduces additional complexity in that the model has to understand the camera motion and take that into account when making its prediction.\nTable 2 shows our model compared to state of the art models in the literature on both static and moving camera videos. R3D is an implementation of I3D (Carreira & Zisserman, 2017) using ResNets in Wang et al. (2018), which also introduced the addition of non-local interactions; R3D and R3D non-local are the strongest two models evaluated by Girdhar & Ramanan (2020) OPNet, or the Object Permanence network (Shamsian et al., 2020), is an architecture with inductive biases designed for object tracking tasks; it was trained with extra supervised labels, namely the bounding boxes for all objects (including occluded ones). Hopper is a multi-hop transformer model from Anonymous (2021b) developed simultaneously with this work.\nWe train our model simultaneously on both static and moving camera videos. Our model outperforms the R3D models for both static and moving cameras. We also ran our model with an additional auxiliary loss consisting of the L1 distance between the predicted cell and the actual cell. With this additional loss, we get comparable results in the moving camera case as the R3D models for the static camera case. Moreover, we achieve comparable accuracy as OPNet for accuracy and L1 distance, despite requiring less supervision to train. 
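For concreteness, a minimal sketch (ours) of this objective; the paper states only a cross-entropy over grid cells plus an L1 distance between the predicted and actual cells, so the differentiable expected-coordinate form of the L1 term and the 6×6 grid constant are our assumptions:

```python
import torch
import torch.nn.functional as F

GRID = 6  # CATER reports the snitch position as one of 6x6 = 36 grid cells

def cater_loss(logits, target_cell, l1_weight=1.0):
    """logits: (B, 36) scores from the prediction head; target_cell: (B,)."""
    ce = F.cross_entropy(logits, target_cell)
    # Soft L1 surrogate: distance between the expected coordinate under the
    # softmax and the true cell coordinate, so the term stays differentiable.
    cells = torch.arange(GRID * GRID, device=logits.device)
    coords = torch.stack([cells // GRID, cells % GRID], dim=-1).float()  # (36, 2)
    expected = logits.softmax(-1) @ coords                               # (B, 2)
    true_rc = torch.stack([target_cell // GRID, target_cell % GRID], dim=-1).float()
    l1 = (expected - true_rc).abs().sum(-1).mean()
    return ce + l1_weight * l1
```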
Appendix D.2 gives a few sample outputs of our model;\nin particular we note that it is able to find the target object in several cases where the object was occluded, demonstrating that our model is able to do some level of object tracking. Finally,we find that an auxiliary self-supervised loss helps the model perform well in the low data regime for CATER as well, as shown in Figure 3." }, { "heading": "4 RELATED WORK", "text": "Self-attention for reasoning Various studies have shown that transformers (Vaswani et al., 2017) can manipulate symbolic data in a manner traditionally associated with symbolic computation. For example, in Lample & Charton (2020), a transformer model learned to do symbolic integration and solve ordinary differential equations symbolically, tasks traditionally reserved for symbolic computer algebra systems. Similarly, in Hahn et al. (2020), a transformer model learned to solve formulas in propositional logic and demonstrated some degree of generalization to out of distribution formulas. Finally, Brown et al. (2020) showed that a transformer trained for language modeling can also do simple analogical reasoning tasks without explicit training. Although these models do not necessarily beat carefully tuned symbolic algorithms (especially on out of distribution data), they are an important motivation for our proposed recipe for attaining strong reasoning capabilities from self-attention-based models on visually grounded tasks.\nObject representations A wide body of research points to the importance of object segmentation and representation learning. Various supervised and unsupervised methods have been proposed for object detection and feature extraction (Ren et al., 2015; He et al., 2017; Burgess et al., 2019; Greff et al., 2019; Lin et al., 2020; Du et al., 2020). Past research have also investigated using object based representations in downstream tasks (Raposo et al., 2017; Desta et al., 2018).\nSelf-supervised learning Another line of research concerns learning good representations through self-supervised learning, with an unsupervised auxiliary loss to encourage the discovery of better representations. These better representations could lead to improved performance on supervised tasks, especially when labeled data is scarce. In Devlin et al. (2018), for instance, an auxiliary infill loss allows the BERT model to benefit from pretraining on a large corpus of unlabeled data. Our approach to object-centric self-supervised learning is heavily inspired by the BERT infilling loss. Other studies have shown similar benefits to auxiliary learning in the vision domain as well (Gregor et al., 2019; Han et al., 2019; Chen et al., 2020). These works apply various forms of contrastive losses to predict scene dynamics. The better representations that these contrastive losses encourage carry downstream benefits to supervised and reinforcement learning tasks.\nVision and language in self-attention models Recently, many works have emerged on applying transformer models to visual and multimodal data, for static images (Li et al., 2019; Lu et al., 2019; Tan & Bansal, 2019; Su et al., 2020) and videos (Zambaldi et al., 2019; Sun et al., 2019b;a). These approaches combine the output of convolutional networks with language in various ways using self-attention. While these previous works focused on popular visual question answering tasks, which typically consist of descriptive questions (Yi et al., 2020), we focus on understanding deeper causal dynamics of videos. 
Together with these works, we provide more evidence that self-attention between visual and language elements enables good performance on a diverse set of tasks. In addition, while\nthe use of object representations for discretization in tasks involving static images is becoming more popular, the right way to discretize videos is less clear. We provide strong evidence in the form of ablation studies for architectural decisions that we claim are essential for higher reasoning for this type of data: visual elements should correspond to physical objects in the videos and inter-frame attention between sub-frame entities (as opposed to inter-frame attention of entire frames) is crucial. We also demonstrate the success of using unsupervised object segmentation methods as opposed to the supervised methods used in past work." }, { "heading": "5 CONCLUSION", "text": "We apply a self-attention model to videos discretized into objects and show that such a model is able to understand video dynamics and perform causal reasoning. Our model substantially outperforms all previous state of the art methods on all metrics for two different datasets, including hybrid neuro-symbolic architectures hand-coded and explicitly designed to solve a specific task. This result adds to the growing body of evidence that neural networks, particularly self-attention architectures, could do reasoning tasks that traditionally only symbolic logic AI excel at (Lample & Charton, 2020; Mitchell, 2020). We also show that discretization at the object level is essential to the performance of our model. Finally, we demonstrate that many techniques from natural language processing, such as the classification token and masking/infilling, apply in the visual domain as well. We hope that this bridge between video understanding research and natural language research will facilitate idea sharing between the domains." }, { "heading": "A METHODS DETAILS", "text": "" }, { "heading": "A.1 MONET", "text": "To segment each w × h frame Ft into No object representations, MONet uses a recurrent attention network to obtainNo attention masks Ati ∈ [0, 1]w×h for i = 1, . . . , No that represent the probability of each pixel in Ft belonging to the i-th object, with ∑No i=1 Ati = 1. This attention network is coupled with a component VAE with latents zti ∈ Rd for i = 1, . . . , No that reconstructs Ati Ft, the i-th object in the image. The latent posterior distribution q(zt|Ft,Ati) is a diagonal Gaussian with mean µti, and we use µti as the representation of the i-th object." }, { "heading": "A.2 SELF-SUPERVISED TRAINING", "text": "Recall in the main text that we wrote the auxiliary self-supervised loss as auxiliary loss = ∑ t,i τtil (f(µ ′ ti), µ).\nWe tested an L2 loss and a contrastive loss (inspired by the loss used in Han et al. (2019)), and the formulas for the two losses are respectively:\nlL2 (f(µ ′ ti), µ) = ‖f(µ′ti)− µti‖ 2 2\nlcontrastive (f(µ ′ ti), µ) = − log exp(f(µ′ti) · µti)∑ s,j exp (f(µ ′ ti) · µsj) .\nA comparison of these losses and the masking schemes is given in Figure 4.\nWe also tested a few variations of the contrastive loss inspired by literature and tested all combinations of variations. 
The first variation is where the negative examples all come from the same frame:\nlcontrastive (f(µ ′ ti), µ) = − log exp(f(µ′ti) · µti)∑ j exp (f(µ ′ ti) · µtj) .\nThe second variation is adding a temperature τ to the softmax (Chen et al., 2020):\nlcontrastive (f(µ ′ ti), µ) = − log exp(f(µ′ti) · µti)/τ∑ s,j exp (f(µ ′ ti) · µsj/τ) .\nThe final variation we tested is using cosine similarity instead of dot product:\nlcontrastive (f(µ ′ ti), µ) = − log exp(sim(f(µ′ti), µti))∑ s,j exp (sim(f(µ ′ ti), µsj)) .\nwhere sim(x,y) = x·y‖x‖·‖y‖ . We found that these variations did not significantly change the performance of the model (and the optimal temperature setting was close to τ = 1), and leave to future work more careful analysis of these contrastive losses and the representations they encourage." }, { "heading": "A.3 TRAINING DETAILS", "text": "We generally follow similar training procedures as for the models described in Yi et al. (2020) and Girdhar & Ramanan (2020).\nFor CLEVRER, we resize videos to 64 by 64 resolution and sample 25 random frames, as in Yi et al. (2020). We divide our batch of 16 videos and questions in two, a supervised sub-batch and an unsupervised sub-batch (where question answers are not provided). The supervised sub-batch is used to calculate the classification loss, and the unsupervised sub-batch is used to calculate the unsupervised auxiliary loss. This division was made so that we can use a subset of available data for the supervised sub-batch while using all data for the unsupervised sub-batch. The supervised sub-batch is further subdivided into two sub-batches of size 4, for descriptive and multiple choice questions (this division was made since the output format is different for the two types of questions).\nFor CATER, we also resize videos to 64 by 64 resolution and sample 80 random frames. We train on static and moving camera data simultaneously, with the batch of 8 videos divided equally between the two.\nFor both datasets, we pretrain a MONet model on frames extracted from the respective dataset. The training of the MONet models follow the procedures described in Burgess et al. (2019).\nMotivated by findings from language modeling, we trained the main transformer model using the LAMB optimizer (You et al., 2019) and found that it offered a significant performance boost over the ADAM optimizer (Kingma & Ba, 2014) for the CLEVRER dataset (data not shown). Results converge after 200,000 steps for CLEVRER and 60,000 steps for CATER. All error bars are computed over at least 5 seeds. The below table lists the hyperparameters used in our model.\nParameter Value Batch-size 16\nTransformer heads 10 Transformer layers 28 Embedding size d 16\nNumber of objects No 8 Prediction head hidden layer size 128\nLearning rate 0.002 Learning rate warmup steps 4000\nInfill cost λ 0.01 (a) Hyperparameters for CLEVRER.\nParameter Value Batch-size 8\nTransformer heads 8 Transformer layers 16 Embedding size d 36\nNumber of objects No 8 Prediction head hidden layer size 144\nLearning rate 0.002 Learning rate warmup steps 4000\nInfill cost λ 2.0 (b) Hyperparameters for CATER." }, { "heading": "B ANALYSIS OF CLEVRER DATASET", "text": "During analysis of our results, we noticed that some counterfactual questions in the CLEVRER dataset can be solved without using counterfactual reasoning. 
In particular, about 47% of the counterfactual questions ask about the effect of removing an object that did not collide with any other object, hence having no effect on object dynamics; an example is given in Figure 5. Moreover, even for the questions where the removed object is causally connected to the other objects, about 45% can be answered perfectly by an algorithm answering the question as if it were a descriptive question. To quantify this, we wrote a symbolic executor that uses the provided ground-truth video annotations\nand parsed questions to determine causal connectivity and whether each choice happened in the non-counterfactual scenario.\nAlthough determining whether or not a given counterfactual question can be answered this way still requires counterfactual reasoning, we want to eliminate the possibility that our model achieved its 75% accuracy on counterfactual questions without learning counterfactual reasoning; instead it might have reached that score simply by answering all counterfactual questions as descriptive questions. To verify this is not the case, we evaluated our model on only the harder category of counterfactual questions where the removed object does collide with other objects and which cannot be answered by a descriptive algorithm. We find that our model achieves a performance of 59.8% on this harder category. This is significantly above chance, suggesting that our model is indeed able to do some amount of true counterfactual reasoning." }, { "heading": "C QUALITATIVE ANALYSIS", "text": "We provide qualitative analysis of attention weights in order to shed light on how our model arrives at its predictions. These examples illustrate broad patterns evident from informal observation of the model’s attention weights. We focus on the following video from CLEVRER:\nIn this video, a yellow rubber cube collides with a cyan rubber cylinder. The yellow cube then collides with a brown metallic cube, while the cyan cylinder and a green rubber cube approach each other but do not collide. Finally, the green cube approaches but does not collide with the brown cube.\nCross-modal attention We analyzed the cross-modal attention between words in the question and the MONet objects in one frame of the video. For each word, we determined the MONet object that attended to that word the most. In particular, we looked at the attention weights in the last layer of the transformer for one head (of the multi-head attention). The result is shown in the visualization below. We drew bounding boxes corresponding to the objects as determined by MONet, and we colored each word in the question according to the MONet object that attended to that word with highest weight (black represents a MONet slot without any objects). We observe that, for this head, objects attend heavily to the words that describe them. For example the cyan cylinder attends heavily to the words “cylinder” and “removed”.\nQ: If the cylinder is removed, which event will not happen?\n1. The brown object collides with the green object. 2. The yellow object and the metal cube collide. 3. The yellow cube collides with the green object.\nMost important objects For each frame of the video, we look at the objects that were most heavily attended upon in determining the final answer. We measure the attention weights (for one head) in the last layer of the transformer for the CLS token attending on each object. 
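This probe can be implemented in a few lines; in the sketch below (ours), token 0 is the CLS token and the next T × N_o tokens are the object slots in frame-major order, which is an assumption about the implementation:

```python
import torch

def top_objects_per_frame(attn, T, n_obj, head=0, k=2):
    """attn: (heads, L, L) last-layer attention weights. Returns, per frame,
    the indices of the k object slots the CLS token attends to most."""
    cls_to_obj = attn[head, 0, 1:1 + T * n_obj].reshape(T, n_obj)
    return cls_to_obj.topk(k, dim=-1).indices   # (T, k)
```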
The image below\nillustrates the results, when the model is tasked with assessing the likelihood of the first choice of the above counterfactual question (whether the green object will collide with the brown object if the cyan cylinder is removed). We drew bounding boxes for the two most important objects of each frame, as determined by the attention weights. Note that our MONet was configured to detect eight objects (including the background); the transformer did not select the “empty” object slots nor the background slot. We observe that this head generally focuses on the green and brown objects, but switches its focus to the cyan cylinder and brown cube when it looks like the cylinder’s interaction with the yellow cube could potentially change the outcome.\nThe relative importance of the various objects depends on the question the model is answering. When the model is tasked with a predictive question (whether or not the cylinder and the green cube will collide) a different attention pattern emerges. Here, we observe one head of the transformer focusing on collisions: first the collision of the cylinder and the yellow cube, then on the cylinder and the green cube when they move towards each other.\nObject alignment Recall that MONet does not assign objects to slots in a well-determined manner— tiny changes in an image can cause MONet to unpredictably assign objects to slots in a different permutation. The transformer is able to maintain object identity when MONet outputs objects in a different order. The image below, where we again show the two most attended-upon objects for each frame, illustrate instances where MONet changes the permutation of objects. In this image, we plot time on the x-axis and MONet slot index on the y-axis; the slots containing the two most important objects are grayed out. We observe that the transformer is able to align objects across time, maintaining consistent attention to the green and brown objects.\nEffectiveness of the auxiliary loss Finally, we visually inspect our hypothesis that our selfsupervised loss encourages the transformer in learning better representations. For clarity of the subsequent illustration, we use the scene prediction masking scheme, as described in Figure 2. In this scheme, the transformer has to predict the contents of the last few frames (the target frames) given the beginning of the video. To pose harder predictive challenges, we mask out the three frames preceding\nthe target frames in addition to the target frames themselves. The two images below compare the predicted frames (second image) to the true frames (first image). In the second image, the black frames are the three masked out frames preceding the target frames. The frames following the black frames are the target frames; they contain the MONet-reconstructed images obtained from latents predicted by the transformer. The frames preceding the black frames are MONet-reconstructed images obtained from the original latents (the latents input into the transformer).\nWe observe that with the self-supervised loss, we get coherent images from the transformer-predicted latents with all the right objects (in the absence of the auxiliary loss, the transformed latents generate incoherent rainbow blobs). We also observe the rudiments of prediction, as seen in the movement of the yellow object in the predicted image. Nevertheless, it is also clear that the transformer’s predictions are not perfect, and we leave improvements of this predictive infilling to future work." 
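A sketch (ours) of how the masks for this visualization can be generated; the 3-frame buffer follows the description above, while `monet.decode` in the closing comment is a hypothetical name for rendering predicted latents:

```python
import torch

def scene_prediction_mask(T, n_obj, n_target=5, n_buffer=3):
    """Hide the last n_target frames plus an n_buffer-frame gap before them;
    only the last n_target frames are scored as prediction targets."""
    m = torch.ones(T, n_obj)
    tau = torch.zeros(T, n_obj)
    m[T - n_target - n_buffer:] = 0.0   # masked input: buffer + target frames
    tau[T - n_target:] = 1.0            # loss targets: final frames only
    return m, tau

# After running the transformer on the masked input (see the auxiliary-loss
# sketch in the Estimation steps section), the predicted latents could be
# rendered for inspection, e.g. [monet.decode(f(mu_out[t])) for the target t].
```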
}, { "heading": "D EXAMPLE MODEL PREDICTIONS", "text": "In this section, we provide a few sample classifications produced by our model. All examples are produced at random from the validation set; in particular we did not cherry-pick any examples to highlight the performance of our model." }, { "heading": "D.1 CLEVRER", "text": "We provide four videos and up to two questions per question type for the video (many videos in the dataset come with only one explanatory or predictive question). For each question type with more than one question, we try to choose one correct classification and one misclassification if available to provide for greater diversity. Besides this editorial choice, all classifications are sampled randomly.\nQ: How many metal objects are moving? Model: 1 Label: 1 Q: Which of the following is not responsible for the collision between the metal cube and the yellow cube? 1. the presence of the\ngray cube 2. the gray object’s en-\ntrance 3. the presence of the\nred rubber cube 4. the collision between\nthe gray cube and the metal cube\nModel: 3 Label: 3\nQ: Which event will happen next? 1. The gray object col-\nlides with the red object\n2. The gray object and the cylinder collide\nModel: 1 Label: 1\nQ: Which event will happen if the red object is removed? 1. The gray object and\nthe brown object collide\n2. The gray object collides with the cylinder\n3. The gray cube collides with the yellow object\n4. The brown cube and the yellow object collide\nModel: 1, 4 Label: 1, 4\nQ: What is the shape of the stationary metal object when the red cube enters the scene? Model: cylinder Label: cylinder\nQ: What will happen if the cylinder is removed? 1. The brown cube col-\nlides with the red cube\n2. The red object and the yellow object collide\n3. The gray cube collides with the red cube\n4. The gray object collides with the brown object\nModel: 3, 4 Label: 3, 4\nQ: What color is the metal object that is stationary when the metal cube enters the scene? Model: blue Label: blue Q: Which of the following is not responsible for the collision between the cyan object and the sphere? 1. the presence of the\nred rubber object 2. the red object’s enter-\ning the scene 3. the collision between\nthe sphere and the blue cube\nModel: 1, 2, 3 Label: 1, 2, 3\nQ: What will happen next? 1. The metal cube and\nthe red cube collide 2. The sphere collides\nwith the metal cube Model: 1 Label: 1\nQ: Without the red cube, which event will happen? 1. The sphere collides\nwith the blue cube 2. The cyan object and\nthe blue cube collide Model: 1 Label: 1\nQ: What material is the last object that enters the scene? Model: metal Label: rubber\nQ: What will not happen without the sphere? 1. The cyan object col-\nlides with the red cube\n2. The cyan object collides with the metal cube\n3. The metal cube and the red cube collide\nModel: 1, 3 Label: 3\nQ: Are there any moving brown objects when the red object enters the scene? Model: no Label: no Q: Which of the following is not responsible for the collision between the red object and the gray sphere? 1. the presence of the\ngray cube 2. the collision between\nthe red object and the cyan object\n3. the rubber cube’s entering the scene\n4. the presence of the cyan object\nModel: 1, 3 Label: 1, 3\nQ: What will happen next? 1. The gray cube and\nthe brown object collide\n2. The red object collides with the rubber cube\nModel: 2 Label: 2\nQ: If the cylinder is removed, which of the following will not happen? 1. The gray cube and\nthe brown cube collide\n2. 
The red object and the cyan object collide\n3. The red sphere and the rubber cube collide\n4. The cyan object and the brown cube collide\nModel: 1, 4 Label: 1, 4\nQ: How many rubber objects are moving? Model: 3 Label: 3\nQ: How many objects are stationary when the sphere enters the scene? Model: 1 Label: 1 Q: Which of the following is not responsible for the yellow object’s colliding with the green object? 1. the presence of the\npurple sphere 2. the blue object’s en-\ntrance 3. the collision between\nthe blue object and the rubber cube\n4. the sphere’s entering the scene\nModel: 2, 3 Label: 2, 3\nQ: What will happen next? 1. The sphere collides\nwith the rubber cube 2. The yellow cube and\nthe green object collide\nModel: 1 Label: 1\nQ: Which event will not happen if the green cube is removed? 1. The yellow object\nand the blue object collide\n2. The sphere collides with the blue cube\n3. The sphere and the yellow object collide\n4. The sphere collides with the yellow cube\nModel: 2 Label: 2\nQ: What is the shape of the last object that enters the scene? Model: cube Label: cube\nQ: Which of the following will happen if the yellow object is removed? 1. The blue cube and the\ngreen cube collide 2. The sphere collides\nwith the blue cube 3. The sphere collides\nwith the green cube Model: 1, 3 Label: 1" }, { "heading": "D.2 CATER", "text": "We include ten random videos from the validation subset of the static camera CATER dataset. In the final frame of the video, the correct grid cell of the target snitch is drawn in blue, and the model’s prediction is drawn in red. We note that the model is able to find the snitch in scenarios where the snitch is hidden under a cone that later moves (along with the still hidden snitch); in the sixth example, the model also handled a case where the snitch was hidden under two cones at some point in time." } ]
2020
null
SP:1d3fbd26ee829b120b08d1d474743606d3f72292
[ "The paper presents an influence estimation method for GANs. It discusses why previous approaches on influence estimation cannot be easily extended to GANs. It proposes to use Jacobian of the gradient of discriminator’s loss with respect to the generator’s parameters to learn how absence of an instance in the discriminator’s training affects the generator’s parameters. The authors evaluate whether an instance is harmful based on its influence on GAN evaluation metrics. They show that removing these harmful instances improves performance of GANs on MNIST with respect to three metrics: Inception Score, FID and Average Log Likelihood (ALL). " ]
Identifying harmful instances, whose absence in a training dataset improves model performance, is important for building better machine learning models. Although previous studies have succeeded in estimating harmful instances under supervised settings, they cannot be trivially extended to generative adversarial networks (GANs). This is because previous approaches require that (i) the absence of a training instance directly affects the loss value and that (ii) the change in the loss directly measures the harmfulness of the instance for the performance of a model. In GAN training, however, neither of the requirements is satisfied. This is because, (i) the generator’s loss is not directly affected by the training instances as they are not part of the generator’s training steps, and (ii) the values of GAN’s losses normally do not capture the generative performance of a model. To this end, (i) we propose an influence estimation method that uses the Jacobian of the gradient of the generator’s loss with respect to the discriminator’s parameters (and vice versa) to trace how the absence of an instance in the discriminator’s training affects the generator’s parameters, and (ii) we propose a novel evaluation scheme, in which we assess harmfulness of each training instance on the basis of how GAN evaluation metric (e.g., inception score) is expected to change due to the removal of the instance. We experimentally verified that our influence estimation method correctly inferred the changes in GAN evaluation metrics. We also demonstrated that the removal of the identified harmful instances effectively improved the model’s generative performance with respect to various GAN evaluation metrics.
[ { "affiliations": [], "name": "SARIAL NETWORKS" }, { "affiliations": [], "name": "Naoyuki Terashita" }, { "affiliations": [], "name": "Hiroki Ohashi" }, { "affiliations": [], "name": "Yuichi Nonaka" }, { "affiliations": [], "name": "Takashi Kanemaru" } ]
[ { "authors": [ "Antreas Antoniou", "Amos Storkey", "Harrison Edwards" ], "title": "Data augmentation generative adversarial networks", "venue": "arXiv preprint arXiv:1711.04340,", "year": 2017 }, { "authors": [ "Martin Arjovsky", "Soumith Chintala", "Léon Bottou" ], "title": "Wasserstein generative adversarial networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Ashish Bora", "Eric Price", "Alexandros G. Dimakis" ], "title": "AmbientGAN: Generative models from lossy measurements", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Ali Borji" ], "title": "Pros and cons of gan evaluation measures", "venue": "Computer Vision and Image Understanding,", "year": 2019 }, { "authors": [ "Markus M Breunig", "Hans-Peter Kriegel", "Raymond T Ng", "Jörg Sander" ], "title": "Lof: identifying densitybased local outliers", "venue": "In Proceedings of the 2000 ACM SIGMOD international conference on Management of data,", "year": 2000 }, { "authors": [ "R Dennis Cook", "Sanford Weisberg" ], "title": "Characterizations of an empirical influence function for detecting influential cases in regression", "venue": null, "year": 1980 }, { "authors": [ "Gauthier Gidel", "Hugo Berard", "Gaëtan Vignoud", "Pascal Vincent", "Simon Lacoste-Julien" ], "title": "A variational inequality perspective on generative adversarial networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Satoshi Hara", "Atsushi Nitanda", "Takanori Maehara" ], "title": "Data cleansing for models trained with sgd", "venue": "Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Peter J Huber" ], "title": "Robust statistics, volume 523", "venue": null, "year": 2004 }, { "authors": [ "Takuhiro Kaneko", "Tatsuya Harada" ], "title": "Noise robust generative adversarial networks", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Tero Karras", "Samuli Laine", "Miika Aittala", "Janne Hellsten", "Jaakko Lehtinen", "Timo Aila" ], "title": "Analyzing and improving the image quality of stylegan", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Rajiv Khanna", "Been Kim", "Joydeep Ghosh", "Sanmi Koyejo" ], "title": "Interpreting black box predictions using fisher kernels", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Diederik P. 
Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Pang Wei Koh", "Percy Liang" ], "title": "Understanding black-box predictions via influence functions", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Christian Ledig", "Lucas Theis", "Ferenc Huszar", "Jose Caballero", "Andrew Cunningham", "Alejandro Acosta", "Andrew Aitken", "Alykhan Tejani", "Johannes Totz", "Zehan Wang", "Wenzhe Shi" ], "title": "Photorealistic single image super-resolution using a generative adversarial network", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Fei Tony Liu", "Kai Ming Ting", "Zhi-Hua Zhou" ], "title": "Isolation forest", "venue": "Eighth IEEE International Conference on Data Mining,", "year": 2008 }, { "authors": [ "Shaohui Liu", "Yi Wei", "Jiwen Lu", "Jie Zhou" ], "title": "An improved evaluation framework for generative adversarial networks", "venue": "arXiv preprint arXiv:1803.07474,", "year": 2018 }, { "authors": [ "Xudong Mao", "Qing Li", "Haoran Xie", "Raymond YK Lau", "Zhen Wang", "Stephen Paul Smolley" ], "title": "Least squares generative adversarial networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Bernhard Schölkopf", "John C Platt", "John Shawe-Taylor", "Alex J Smola", "Robert C Williamson" ], "title": "Estimating the support of a high-dimensional distribution", "venue": "Neural computation,", "year": 2001 }, { "authors": [ "C. Szegedy", "V. Vanhoucke", "S. Ioffe", "J. Shlens", "Z. Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Ilya O Tolstikhin", "Sylvain Gelly", "Olivier Bousquet", "Carl-Johann Simon-Gabriel", "Bernhard Schölkopf" ], "title": "Adagan: Boosting generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Jiajun Wu", "Chengkai Zhang", "Tianfan Xue", "Bill Freeman", "Josh Tenenbaum" ], "title": "Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "Zhiming Zhou", "Han Cai", "Shu Rong", "Yuxuan Song", "Kan Ren", "Weinan Zhang", "Jun Wang", "Yong Yu" ], "title": "Activation maximization generative adversarial nets", "venue": "In International Conference on Learning Representations,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Generative adversarial networks (GANs) proposed by Goodfellow et al. (2014) are a powerful subclass of generative model, which is successfully applied to a number of image generation tasks (Antoniou et al., 2017; Ledig et al., 2017; Wu et al., 2016). The expansion of the applications of GANs makes improvements in the generative performance of models increasingly crucial.\nAn effective approach for improving machine learning models is to identify training instances that harm the model performance. Traditionally, statisticians manually screen a dataset for harmful instances, which misguide a model into producing biased predictions. Recent influence estimation methods (Khanna et al., 2019; Hara et al., 2019) automated the screening of datasets for deep learning settings, in which the sizes of both datasets and data dimensions are too large for users to manually determine the harmful instances. Influence estimation measures the effect of removing an individual training instance on a model’s prediction without the computationally prohibitive cost of model retraining. The recent studies identified harmful instances by estimating how the loss value changes if each training instance is removed from the dataset.\nAlthough previous studies have succeeded in identifying the harmful instances in supervised settings, the extension of their approaches to GAN is non-trivial. Previous approaches require that (i) the existence or absence of a training instance directly affects a loss value, and that (ii) the decrease in the loss value represents the harmfulness of the removed training instance. In GAN training, however, neither of the requirements is satisfied. (i) As training instances are only fed into the discriminator, they only indirectly affect the generator’s loss, and (ii) the changes in the losses of GAN\ndo not necessarily capture how the removed instances harm the generative performance. This is because the ability of the loss to evaluate the generator is highly dependent on the performance of the discriminator.\nTo this end, (i) we propose an influence estimation method that uses the Jacobian of the gradient of the discriminator’s loss with respect to the generator’s parameters (and vice versa), which traces how the absence of an instance in the discriminator’s training affects the generator’s parameters. In addition, (ii) we propose a novel evaluation scheme to judge if an instance is harmful or not on the basis of influence on GAN evaluation metric, that is, how a GAN evaluation metric (e.g., inception score (Salimans et al., 2016)) changes if a given training instance is removed from the dataset. We identify harmful instances by estimating the influence on GAN evaluation metric by leveraging our influence estimation method.\nWe verified that the proposed influence estimation method correctly estimated the influence on GAN evaluation metrics across different settings of the dataset, model architecture, and GAN evaluation metrics. 
We also demonstrated that removing harmful instances, which were identified by the proposed method, effectively improved various GAN evaluation metrics.1\nOur contributions are summarized as follows:\n• We propose an influence estimation method that uses the Jacobian of the gradient of the discriminator’s loss with respect to the generator’s parameters (and vice versa), which traces how the absence of an instance in the discriminator’s training affects the generator’s parameters.\n• We propose a novel evaluation scheme to judge if an instance is harmful or not on the basis of influence on GAN evaluation metrics rather than that on the loss value, and to leverage the proposed influence estimation method to identify harmful instances.\n• We experimentally verified that our influence estimation method correctly inferred the influence on GAN evaluation metrics. Further, we demonstrated that the removal of the harmful instances suggested by the proposed method effectively improved the generative performance with respect to various GAN evaluation metrics." }, { "heading": "2 PRELIMINARIES", "text": "Notation For column vectors a, b ∈ Rp, we denote the inner product by 〈a, b〉 = ∑p i=1 aibi. For a function f(a), we denote its gradient with respect to a by∇af(a). We denote the identity matrix of size p by Ip, the zero vector of length p by 0p, and the ones vector of length p by 1p.\nGenerative Adversarial Networks (GAN) For simplicity, we consider an unconditional GAN that consists of the generator G : Rdz → Rdx and the discriminator D : Rdx → R, where dz and dx are the number of dimensions of latent variable z ∼ p(z) and data point x ∼ p(x), respectively. The parameters of generator θG ∈ RdG and discriminator θD ∈ RdD are learned though the adversarial training;G tries to sample realistic data whileD tries to identify whether the data is real or generated.\nFormulation of GAN Objectives For the generality, we adopt the formulation of Gidel et al. (2019) in which G and D try to minimize LG and LD, respectively, to obtain the following Nash equilibrium (θ∗G,θ ∗ D):\nθ∗G ∈ arg min θG LG (θG,θ ∗ D) and θ ∗ D ∈ arg min θD LD (θ ∗ G,θD) . (1)\nFor the latter part of this paper, we use a coupled parameter vector θ := (θG,θD)> ∈ Rdθ=dG+dD when we refer to the whole parameters of GAN.\nIn this paper, we assume that LG and LD have the following forms2: LG (θ) := Ez∼p(z) [fG (z;θ)] , LD (θ) := Ez∼p(z) [ f [z] D (z;θ) ] + Ex∼p(x) [ f [x] D (x;θ) ] . (2)\n1Code is at https://github.com/hitachi-rd-cv/influence-estimation-for-gans 2This covers the common settings of GAN objectives: the non-zero-sum game proposed by Goodfellow\net al. (2014), Wasserstein distance (Arjovsky et al., 2017), and the least squares loss (Mao et al., 2017).\nWe can recover the original minimax objective by taking fG (z;θ) = log (1−DθD (GθG (z))), f [z] D = −fG, and f [x] D (x;θ) = − logDθD (x).\nAdversarial SGD (ASGD) To make our derivation easier to understand, we newly formulate the parameter update of a GAN trained by stochastic gradient descent, which we call adversarial SGD (ASGD). For simplicity, this paper considers simultaneous training, in which the generator and the discriminator are simultaneously updated at a single step. We denote the dataset by Dx := {xn ∼ p(x)}Nn=1, which consists of N data points. Let St ⊂ {1, . . . , N} be a set of sample indices at the t-th step. 
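Before completing the ASGD definition, here is a minimal sketch (ours) of the loss components in (2) under the original minimax instantiation noted above; `G` and `D` are assumed callables with `D` outputting a probability, and `eps` is our numerical-stability addition:

```python
import torch

def f_G(z, G, D, eps=1e-8):
    return torch.log(1.0 - D(G(z)) + eps)    # generator term log(1 - D(G(z)))

def f_D_z(z, G, D, eps=1e-8):
    return -torch.log(1.0 - D(G(z)) + eps)   # fake-data term, equal to -f_G

def f_D_x(x, D, eps=1e-8):
    return -torch.log(D(x) + eps)            # real-data term -log D(x)
```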
We assume that the mini-batch of the $t$-th step consists of instances $\{x_i\}_{i \in S_t}$ and a set of latent variables $\mathcal{Z}_t = \{z^{[t]}_l \sim p(z)\}_{l=1}^{|S_t|}$, which are sampled independently at each step $t$.

We denote the means of $L_G$ and $L_D$ across the mini-batch by $\bar{L}_G(\mathcal{Z};\theta) := \frac{1}{|\mathcal{Z}|}\sum_{z \in \mathcal{Z}} f_G(z;\theta)$ and $\bar{L}_D(S,\mathcal{Z};\theta) := \frac{1}{|\mathcal{Z}|}\big(\sum_{z \in \mathcal{Z}} f^{[z]}_D(z;\theta) + \sum_{i \in S} f^{[x]}_D(x_i;\theta)\big)$, respectively. The $t$-th step of ASGD updates the coupled parameters by $\theta^{[t+1]} = \theta^{[t]} - B_t\, g(S_t,\mathcal{Z}_t;\theta^{[t]})$, where

$$B_t := \begin{pmatrix} \eta^{[t]}_G I_{d_G} & O \\ O & \eta^{[t]}_D I_{d_D} \end{pmatrix} \in \mathbb{R}^{d_\theta \times d_\theta}, \qquad g(S,\mathcal{Z};\theta) := \begin{pmatrix} \nabla_{\theta_G}\bar{L}_G(\mathcal{Z};\theta) \\ \nabla_{\theta_D}\bar{L}_D(S,\mathcal{Z};\theta) \end{pmatrix} \in \mathbb{R}^{d_\theta}. \tag{3}$$

$\eta^{[t]}_G \in \mathbb{R}_+$ and $\eta^{[t]}_D \in \mathbb{R}_+$ are the learning rates of the $t$-th step for $\theta_G$ and $\theta_D$, respectively." }, { "heading": "3 PROPOSED METHOD", "text": "This section explains the two main contributions of our paper: the influence estimation method for GANs that predicts how the removal of a training instance changes the output of the generator and the discriminator (Section 3.1), and the two key parts of our instance evaluation scheme, namely the definition of the influence on a GAN evaluation metric and its estimation algorithm (Section 3.2)." }, { "heading": "3.1 INFLUENCE ESTIMATION FOR GAN", "text": "We refer to influence estimation as the estimation of changes in a model's output under a training instance's absence. As the model's output changes through the changes in the model's parameters, we start with the definition of ASGD-Influence, which represents the changes in parameters, and then formulate its estimator.

ASGD-Influence  ASGD-Influence is defined on the basis of the following counterfactual ASGD. Let $\theta^{[t]}_{-j}$ denote the parameters at the $t$-th step trained without using the $j$-th training instance. Counterfactual ASGD starts optimization from $\theta^{[1]}_{-j} = \theta^{[1]}$ and updates the parameters at the $t$-th step by $\theta^{[t+1]}_{-j} = \theta^{[t]}_{-j} - B_t\, g(S_t \setminus \{j\}, \mathcal{Z}_t; \theta^{[t]}_{-j})$. We define the ASGD-Influence $\Delta\theta_{-j}$ as the parameter difference between counterfactual ASGD and ASGD at the final step $t = T$, namely $\Delta\theta_{-j} := \theta^{[T]}_{-j} - \theta^{[T]}$.

Estimator of ASGD-Influence  Our estimator uses an approximation of the mean of the gradient. Let $\big(\nabla_{\theta_G}\bar{L}_G(\mathcal{Z};\theta),\ \nabla_{\theta_D}\bar{L}_D(S,\mathcal{Z};\theta)\big)^{\top}$ be the joint gradient vector of the mini-batch. We introduce the Jacobian of the joint gradient vector of the $t$-th mini-batch with respect to $\theta$:

$$J_t := \begin{pmatrix} J^{[t]}_{GG} & J^{[t]}_{GD} \\ J^{[t]}_{DG} & J^{[t]}_{DD} \end{pmatrix} = \begin{pmatrix} \nabla^2_{\theta_G}\bar{L}_G(\mathcal{Z}_t;\theta^{[t]}) & \nabla_{\theta_D}\nabla_{\theta_G}\bar{L}_G(\mathcal{Z}_t;\theta^{[t]}) \\ \nabla_{\theta_G}\nabla_{\theta_D}\bar{L}_D(S_t,\mathcal{Z}_t;\theta^{[t]}) & \nabla^2_{\theta_D}\bar{L}_D(S_t,\mathcal{Z}_t;\theta^{[t]}) \end{pmatrix}. \tag{4}$$

When we assume that both $L_G(\theta)$ and $L_D(\theta)$ are second-order differentiable with respect to $\theta$, the first-order Taylor approximation gives $g(S_t,\mathcal{Z}_t;\theta^{[t]}_{-j}) - g(S_t,\mathcal{Z}_t;\theta^{[t]}) \approx J_t\,(\theta^{[t]}_{-j} - \theta^{[t]})$. With this approximation, we have

$$\theta^{[t+1]}_{-j} - \theta^{[t+1]} = \big(\theta^{[t]}_{-j} - \theta^{[t]}\big) - B_t\big(g(S_t,\mathcal{Z}_t;\theta^{[t]}_{-j}) - g(S_t,\mathcal{Z}_t;\theta^{[t]})\big) \approx (I_{d_\theta} - B_t J_t)\big(\theta^{[t]}_{-j} - \theta^{[t]}\big), \quad \forall j \notin S_t. \tag{5}$$

For simplicity, we first focus on 1-epoch ASGD, in which each instance appears only once. Let $\pi(j)$ be the step where the $j$-th instance is used. Considering the absence of $\nabla_{\theta_D} f^{[x]}_D(x_j;\theta^{[\pi(j)]})$ in the $\pi(j)$-th step of counterfactual ASGD, we have $\theta^{[\pi(j)+1]}_{-j} - \theta^{[\pi(j)+1]} = \frac{\eta^{[\pi(j)]}_D}{|S_{\pi(j)}|}\big(0_{d_G},\ \nabla_{\theta_D} f^{[x]}_D(x_j;\theta^{[\pi(j)]})\big)^{\top}$. By denoting $Z_t := I_{d_\theta} - B_t J_t$ and recursively applying the approximation (5), we obtain

$$\Delta\theta_{-j} \approx \frac{\eta^{[\pi(j)]}_D}{|S_{\pi(j)}|}\, Z_{T-1} Z_{T-2} \cdots Z_{\pi(j)+1} \begin{pmatrix} 0_{d_G} \\ \nabla_{\theta_D} f^{[x]}_D\big(x_j;\theta^{[\pi(j)]}\big) \end{pmatrix}. \tag{6}$$

For the practical situation of K-epoch ASGD, in which the $j$-th instance is sampled $K$ times at $t = \pi_1(j), \ldots, \pi_K(j)$, the estimator of the ASGD-Influence is given by

$$\Delta\hat{\theta}_{-j} := \sum_{k=1}^{K} \left(\prod_{s=1}^{T-\pi_k(j)-1} Z_{T-s}\right) \frac{\eta^{[\pi_k(j)]}_D}{|S_{\pi_k(j)}|} \begin{pmatrix} 0_{d_G} \\ \nabla_{\theta_D} f^{[x]}_D\big(x_j;\theta^{[\pi_k(j)]}\big) \end{pmatrix}. \tag{7}$$

Linear Influence  To estimate the influence on outputs, we introduce the linear influence $L^{[T]}_{-j}(u) := \langle u, \Delta\theta_{-j}\rangle$ of a given query vector $u \in \mathbb{R}^{d_\theta}$. If we take $u = \nabla_\theta f_G(z;\theta^{[T]})$, the linear influence approximates the influence on the generator's loss: $L^{[T]}_{-j}(u) \approx f_G(z;\theta^{[T]}_{-j}) - f_G(z;\theta^{[T]})$.

Let $\big(u^{[t]\top}_G \in \mathbb{R}^{d_G},\ u^{[t]\top}_D \in \mathbb{R}^{d_D}\big) := u^{\top} Z_{T-1} Z_{T-2} \cdots Z_{t+1}$. The linear influence of the $j$-th instance is approximated by the proposed estimator:

$$L^{[T]}_{-j}(u) \approx \big\langle u, \Delta\hat{\theta}_{-j}\big\rangle = \sum_{k=1}^{K} \frac{\eta^{[\pi_k(j)]}_D}{|S_{\pi_k(j)}|} \Big\langle u^{[\pi_k(j)]}_D,\ \nabla_{\theta_D} f^{[x]}_D\big(x_j;\theta^{[\pi_k(j)]}\big)\Big\rangle. \tag{8}$$

The estimation algorithm consists of two phases: the training phase performs K-epoch ASGD while storing the information $\mathcal{A}^{[t]} \leftarrow (S_t, \eta^{[t]}_G, \eta^{[t]}_D, \theta^{[t]}, \mathcal{Z}_t)$, and the inference phase calculates (8) using $\mathcal{A}^{[1]}, \ldots, \mathcal{A}^{[T-1]}$. See Appendix A for the detailed algorithm." }, { "heading": "3.2 INFLUENCE ON GAN EVALUATION METRIC", "text": "This section explains our proposal of a new evaluation approach for data screening for GANs. First, we propose to evaluate the harmfulness of an instance on the basis of its influence on GAN evaluation metrics. Second, we propose to leverage the influence-estimation algorithm explained in Section 3.1 to identify harmful instances with respect to the GAN evaluation metrics.

Influence on GAN Evaluation Metric  Let $V(\mathcal{D})$ be a GAN evaluation metric that maps a set of data points $\mathcal{D} := \{\tilde{x}_m \in \mathbb{R}^{d_x}\}_{m=1}^{M}$ into a scalar value that gives the performance measure of $G$. Let the generated dataset be $\mathcal{D}_G(\mathcal{Z};\theta_G) := \{G(z;\theta_G) \mid z \in \mathcal{Z}\}$. Using a set of latent variables $\mathcal{Z} := \{\tilde{z}_m \sim p(z)\}_{m=1}^{M}$ that is sampled independently from the training, we define the influence on a GAN evaluation metric by

$$\Delta V^{[T]}_{-j} := V\big(\mathcal{D}_G\big(\mathcal{Z};\theta^{[T]}_{G,-j}\big)\big) - V\big(\mathcal{D}_G\big(\mathcal{Z};\theta^{[T]}_{G}\big)\big), \tag{9}$$

where $\theta^{[T]}_{G,-j}$ and $\theta^{[T]}_{G}$ are the generator parameters of counterfactual ASGD and of ASGD at the $T$-th step, respectively.

Estimation Algorithm  In order to build the estimation algorithm for the influence on a GAN evaluation metric, we focus on an important property of some common evaluation metrics: the gradient with respect to an element of their input, $\nabla_{\tilde{x}_m} V(\mathcal{D})$, is computable. For example, the Monte Carlo estimate of the inception score has the form $\exp\big(\frac{1}{|\mathcal{D}|}\sum_{\tilde{x}_m \in \mathcal{D}} \mathrm{KL}\big(p_c(y|\tilde{x}_m)\,\big\|\,p_c(y)\big)\big)$, where $p_c$ is a distribution of the class label $y$ drawn by a pretrained classifier. When the classifier is trained using back-propagation, $\nabla_{\tilde{x}_m} V(\mathcal{D})$ is computable.

Here, we assume $V(\mathcal{D})$ is first-order differentiable with respect to $\tilde{x}_m$. From the chain rule, we have the gradient of the GAN evaluation metric with respect to $\theta$:

$$\nabla_\theta V\big(\mathcal{D}_G(\mathcal{Z};\theta^{[T]}_G)\big) = \begin{pmatrix} \sum_{m=1}^{M} \nabla_{\theta_G} \nabla_{\tilde{x}_m} V\big(\mathcal{D}_G\big(\mathcal{Z};\theta^{[T]}_G\big)\big) \\ 0_{d_D} \end{pmatrix}. \tag{10}$$

Our estimation algorithm performs the inference phase of the linear influence with $u = \nabla_\theta V(\mathcal{D}_G(\mathcal{Z};\theta^{[T]}_G))$ in order to obtain the approximation $L^{[T]}_{-j}\big(\nabla_\theta V(\mathcal{D}_G(\mathcal{Z};\theta^{[T]}_G))\big) \approx \Delta V^{[T]}_{-j}$.
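As a rough sketch of how the two-phase procedure can be realized (ours, not the authors' released implementation), each multiplication by $Z_t^{\top} = (I_{d_\theta} - B_t J_t)^{\top}$ needed for (8) can be computed with double back-propagation, so $J_t$ is never materialized; the mini-batch losses are recomputed at the stored $\theta^{[t]}$ from $\mathcal{A}^{[t]}$, and the parameter-list handling below is an assumption:

```python
import torch
from torch.autograd import grad

def apply_Zt_transpose(u_G, u_D, loss_G, loss_D, params_G, params_D, eta_G, eta_D):
    """Return Z_t^T u = u - J_t^T (B_t u). J_t^T v is the gradient of <g, v>
    with v held constant, where g is the joint gradient vector of (4)."""
    g_G = grad(loss_G, params_G, create_graph=True)
    g_D = grad(loss_D, params_D, create_graph=True)
    inner = sum((g * (eta_G * u)).sum() for g, u in zip(g_G, u_G)) + \
            sum((g * (eta_D * u)).sum() for g, u in zip(g_D, u_D))
    jv = grad(inner, params_G + params_D, allow_unused=True)
    jv_G, jv_D = jv[:len(params_G)], jv[len(params_G):]
    u_G = [u - (0 if j is None else j) for u, j in zip(u_G, jv_G)]
    u_D = [u - (0 if j is None else j) for u, j in zip(u_D, jv_D)]
    return u_G, u_D

# Schematic inference loop: for t = T-1, ..., 1, first accumulate the term of
# (8) with the *current* u_D whenever j was sampled in S_t,
#   infl += eta_D[t] / len(S_t) * dot(u_D, grad(f_D_x(x_j; theta_t), params_D)),
# then apply apply_Zt_transpose(...) to move from u^[t] to u^[t-1].
```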
" }, { "heading": "4 RELATED STUDIES", "text": "SGD-Influence  Hara et al. (2019) proposed a novel definition of influence called SGD-Influence together with its estimator, which greatly inspired us to propose the influence estimation method for GANs. Suppose a machine learning model with parameters $\phi \in \mathbb{R}^{d_\phi}$ is trained to minimize the mean of the loss $\frac{1}{N}\sum_{n=1}^{N} L(\chi_n;\phi)$ across the training instances $\chi_1, \ldots, \chi_N$. Let the mean of the loss of the mini-batch be $\bar{L}(S;\phi) := \frac{1}{|S|}\sum_{i \in S} L(\chi_i;\phi)$. They introduced two SGD steps with learning rate $\eta_t \in \mathbb{R}_+$: SGD, given by $\phi^{[t+1]} = \phi^{[t]} - \eta_t \nabla_\phi \bar{L}(S_t;\phi^{[t]})$, and counterfactual SGD, given by $\phi^{[t+1]}_{-j} = \phi^{[t]}_{-j} - \eta_t \nabla_\phi \bar{L}(S_t \setminus \{j\};\phi^{[t]}_{-j})$. Their estimator of the SGD-Influence $\phi^{[T]}_{-j} - \phi^{[T]}$ is based on the following approximation:

$$\phi^{[t+1]}_{-j} - \phi^{[t+1]} \approx \big(I_{d_\phi} - \eta_t \nabla^2_\phi \bar{L}(S_t;\phi^{[t]})\big)\big(\phi^{[t]}_{-j} - \phi^{[t]}\big), \quad \forall j \notin S_t. \tag{11}$$

Hara et al. (2019) also identified harmful instances for classification based on the linear influence of the cross-entropy loss estimated using a validation dataset. Removing the estimated harmful instances with their approach demonstrated improvements in classification accuracy.

Our approach differs from that of Hara et al. (2019) in two ways. First, our approach uses the Jacobian of the joint gradient vector $J_t$ instead of the Hessian of the mean loss $\nabla^2_\phi \bar{L}(S_t;\phi^{[t]})$. As long as $L_G \neq L_D$, $J_t$ is asymmetric and inherently different from the Hessian. Moreover, a source of the asymmetry, $J^{[t]}_{GD}$, plays an important role in transferring the effect of the removal of a training instance from the discriminator to the generator. Let $\theta^{[t]}_{G,-j} - \theta^{[t]}_{G} \in \mathbb{R}^{d_G}$ and $\theta^{[t]}_{D,-j} - \theta^{[t]}_{D} \in \mathbb{R}^{d_D}$ be the ASGD-Influence on $\theta_G$ and on $\theta_D$ at the $t$-th step, respectively. The upper blocks of (5) can be rewritten as

$$\theta^{[t+1]}_{G,-j} - \theta^{[t+1]}_{G} \approx \big(I_{d_G} - \eta^{[t]}_G J^{[t]}_{GG}\big)\big(\theta^{[t]}_{G,-j} - \theta^{[t]}_{G}\big) - \eta^{[t]}_G J^{[t]}_{GD}\big(\theta^{[t]}_{D,-j} - \theta^{[t]}_{D}\big). \tag{12}$$

Note that $J^{[t]}_{GD}$ transfers the $t$-th step of the ASGD-Influence on $\theta_D$ to the next step of the ASGD-Influence on $\theta_G$. The Hessian of Hara et al. (2019), which uses a single combination of the parameters and the loss function, cannot handle this transfer between the two models. Second, we use the influence on GAN evaluation metrics for identifying harmful instances rather than the influence on the loss value. This alleviates the problem of the GAN's loss not representing the generative performance.

Influence Function  Koh & Liang (2017) proposed an influence estimation method that incorporates the idea of the influence function (Cook & Weisberg, 1980) from robust statistics. They showed that influences on parameters and predictions can be estimated with the influence function, assuming the satisfaction of the optimality condition and strong convexity of the loss function. They also identified harmful instances on the basis of the influence on the loss value, assuming consistency of the loss value with the task performance.

Our influence estimation method is designed to eliminate these assumptions, because GAN training normally satisfies none of them: neither the optimality condition, nor convexity of the loss function, nor consistency of the loss value with the performance." }, { "heading": "5 EXPERIMENTS", "text": "We evaluated the effectiveness of the proposed method in two aspects: the accuracy of influence estimation on GAN evaluation metrics (Section 5.1), and the improvement in generative performance obtained by removing estimated harmful instances (Section 5.2).

GAN Evaluation Metrics  In both experiments, we used three GAN evaluation metrics: average log-likelihood (ALL), inception score (IS), and Fréchet inception distance (FID) (Heusel et al., 2017). ALL is the de-facto standard for evaluating generative models (Tolstikhin et al., 2017). Let $\mathcal{Z}' := \{z'_n \sim p(z)\}_{n=1}^{N'}$ and $\mathcal{D}'_x := \{x'_n \sim p(x)\}_{n=1}^{N'}$, which are sampled separately from $p(z)$ and from the training dataset $\mathcal{D}_x$, respectively. ALL measures the likelihood of the true data under the distribution that is estimated from generated data using kernel density estimation.
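For reference, a sketch of ALL under these definitions (ours; SciPy's default bandwidth rule is an assumption, and the paper defers its exact KDE settings to its Appendix C.1):

```python
import numpy as np
from scipy.stats import gaussian_kde

def average_log_likelihood(generated, held_out):
    """generated: (N', d) samples G(z); held_out: (M, d) real data points."""
    kde = gaussian_kde(generated.T)            # scipy expects (d, N') layout
    return float(np.log(kde(held_out.T) + 1e-12).mean())
```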
We calculated ALL of D′x under the distribution estimated from generated dataset DG(Z ′;θ [T ] G ). Recall Z ′ is the set of latent variables sampled independently from the training (Section 3.2). FID measures Fréchet distance between two sets of feature vectors of real images D′x and those of generated images DG(Z ′;θ [T ] G ). The feature vectors are calculated on the basis of a pre-trained classifier. Larger values of ALL and IS and a smaller value of FID indicate the better generative performance. See Appendix C.1 for the detailed setting of each GAN evaluation metric." }, { "heading": "5.1 EXPERIMENT 1: ESTIMATION ACCURACY", "text": "We ran the influence estimation method on GANs to estimate influence on various GAN evaluation metrics, and then compared the estimated influence with true influence. The detailed setup can be found in Appendix C.2.\nSetup ALL is known to be effective for low-dimensional data distributions (Borji, 2019) and both FID and IS are effective for image distributions. We thus prepared two different setups: fullyconnected GAN (FCGAN) trained with 2D multivariate normal distribution (2D-Normal) for ALL, and DCGAN (Radford et al., 2016) trained with MNIST (LeCun et al., 1998) for IS and FID. IS and FID require classifiers to obtain class label distribution and feature vectors, respectively. We thus trained CNN classifier of MNIST3 using D′x. We set N = 10k and N ′ = |D′x| = |Z ′| = 10k. The experiment was conducted as follows. Firstly, we ran the K-epoch of the training phase of linear influence with the training dataset Dx. We determined K = 50 since we observed the convergence of GAN evaluation metrics at K = 50. For IS and FID, we trained the classifier using D′x and corresponding labels. We then randomly selected 200 target instances from Dx. We obtained estimated influence on GAN evaluation metrics of each target instance by performing the inference phase of linear influence with u = ∇θV (DG(Z ′;θ[T ]G )). The true influence of each target instance was computed by running the counterfactual ASGD.\nWe used the same evaluation measures as the previous work (Hara et al., 2019): Kendall’s Tau and the Jaccard index. Kendall’s Tau measures the ordinal correlation between the estimated and true influence on GAN evaluation metrics. It has a value of 1 when the orders of the two sets of values are identical. For the Jaccard index, we selected 10 instances with the largest positive and largest negative influence values to construct a set of 20 critical instances. The Jaccard index is equal to 1 when a set of estimated critical instances is identical to that of true critical instances.\nTo investigate the relationship between a number of tracing back steps and the estimation accuracy, we also evaluated the influence on GAN evaluation metrics of k-epoch ASGD. In k-epoch training, both inference phase of linear influence and the counterfactual ASGD traced back only k ≤ K epochs from the latest epoch K. We varied k = 1, 5, 10, 20, 50 and ran the experiment ten times for each k by changing the random seeds of the experiments.\nResults Figure 1 shows the average Kendal’s Tau and the Jaccard index of the repeated experiments. Hereinafter, we use p < .05 to judge the statistical significance of the results. For all k, Kendall’s Tau and the Jaccard index of estimated influence on ALL were statistically significantly better than the result in which the order of estimated influence values were random (random case). 
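The two accuracy measures described above can be computed as in the following sketch (ours), where `true_infl` and `est_infl` hold the per-instance influence values for the 200 target instances:

```python
import numpy as np
from scipy.stats import kendalltau

def estimation_accuracy(true_infl, est_infl, n=10):
    tau, _ = kendalltau(true_infl, est_infl)
    def critical(v):  # n most positive plus n most negative influences
        order = np.argsort(v)
        return set(order[:n].tolist()) | set(order[-n:].tolist())
    a, b = critical(np.asarray(true_infl)), critical(np.asarray(est_infl))
    return tau, len(a & b) / len(a | b)        # Kendall's tau, Jaccard index
```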
To investigate the relationship between the number of tracing-back steps and the estimation accuracy, we also evaluated the influence on the GAN evaluation metrics of $k$-epoch ASGD. In $k$-epoch training, both the inference phase of linear influence and the counterfactual ASGD traced back only $k \leq K$ epochs from the latest epoch $K$. We varied $k = 1, 5, 10, 20, 50$ and ran the experiment ten times for each $k$ by changing the random seeds of the experiments.

Results Figure 1 shows the average Kendall's Tau and the Jaccard index of the repeated experiments. Hereinafter, we use $p < .05$ to judge the statistical significance of the results. For all $k$, Kendall's Tau and the Jaccard index of the estimated influence on ALL were statistically significantly better than the result in which the order of the estimated influence values was random (the random case). Even in the more difficult setups of IS and FID, which handled a high-dimensional dataset and a complex architecture, the results were statistically significantly better than those of the random case, except for the Jaccard index of IS with $k = 50$. We also observed that the estimation accuracy dropped as $k$ increased. This reflects the nature of our estimator, which recursively performs the linear approximation as many times as the number of steps. We thus conclude that when the required number of tracing-back steps is small enough, our influence estimation method is effective and the estimated influence on a GAN evaluation metric is useful for identifying harmful instances.

3Although the original IS and FID use Inception Net (Szegedy et al., 2016) trained with ImageNet, we instead adopted a domain-specific classifier, as encouraged by several studies (Zhou et al., 2018; Liu et al., 2018), to alleviate the domain mismatch with ImageNet." }, { "heading": "5.2 EXPERIMENT 2: DATA CLEANSING", "text": "We investigated whether removing the identified harmful instances actually improved the generative performance, to evaluate the effectiveness of our proposed method for data cleansing. We define data cleansing as an attempt to improve GAN evaluation metrics by removing a set of training instances. See Appendix C.3 for the detailed settings.

Setup We studied data cleansing for the two setups explained in the previous section: 2D-Normal with FCGAN and MNIST with DCGAN. We mostly followed the settings of Section 5.1 but set the training dataset size to $N = 50$k for both setups.

We identified harmful instances in the 2D-Normal training dataset using the estimated influence on ALL, and those in MNIST using the estimated influence on IS and FID. We considered a training instance to be harmful when it had negative (positive) influence on FID (ALL or IS).

For both setups, we also selected instances using baseline approaches: an anomaly detection method, the influence on the discriminator loss, and random values. For anomaly detection, we adopted isolation forest (Liu et al., 2008). Isolation forest fitted the model using the data points of $D_x$ for 2D-Normal and the classifier feature vectors of $D_x$ for MNIST. We adopted the selection based on the influence on the discriminator loss to verify our assumption that the influence on the loss does not represent the harmfulness of the instances. The influence on the discriminator loss was calculated on the expected loss $L_D(\theta)$ with $D_G(Z'; \theta^{[T]}_G)$ and $D'_x$. We considered instances with negative influence to be harmful.
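A sketch of the harmfulness criterion above, with an illustrative helper name; the sign convention follows the text (negative influence on FID, or positive influence on ALL or IS, predicts that removal improves the metric), and ranking by the magnitude of the predicted effect is our assumption:

```python
import numpy as np

def select_harmful(influence, metric, n_h):
    # Harmful if removal is predicted to improve the metric:
    # FID should decrease (lower is better); ALL and IS should increase.
    sign = -1.0 if metric == "FID" else 1.0
    harmful = np.where(sign * influence > 0)[0]
    # Keep the n_h instances with the largest predicted effect magnitude.
    return harmful[np.argsort(-np.abs(influence[harmful]))[:n_h]]
```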
We conducted the experiments as follows. After the training phase of $K$ epochs, we determined $n_h < N$ harmful instances with the proposed approach and the baselines. Then, we ran counterfactual ASGD with the determined harmful instances excluded. For reliable estimation accuracy of the influence and reasonable computation and storage costs, the inference phase traced back only one epoch from the last epoch, and counterfactual ASGD only re-ran the latest epoch. We tested with various $n_h$.

We refer to the generator of the final model as the cleansed generator and denote its parameters by $\theta^{\star}_G$. We evaluated the cleansed generator with test GAN evaluation metrics $V(D_G(Z_{\text{test}}); \theta^{\star}_G)$, in which the set of test latent variables $Z_{\text{test}}$ was obtained by sampling $N_{\text{test}}$ times from $p(z)$ independently of $Z'$ and $Z_1, \ldots, Z_T$. Test ALL and FID used a test dataset $D_{\text{test}} := \{x^{[n]}_{\text{test}} \sim p(x)\}_{n=1}^{N_{\text{test}}}$ that consists of instances newly sampled from 2D-Normal and instances in the original test dataset of MNIST, respectively. We set $N_{\text{test}} = 10$k and ran the experiment 15 times with different random seeds.

Quantitative Results Figure 2 shows the average test GAN evaluation metrics of the repeated experiments for each selection approach. For data cleansing on 2D-Normal, the proposed approach with influence on ALL showed a statistically significant improvement over the original model, and it outperformed the baselines (Figure 2a). For the MNIST setup, our approach with influence on FID and IS statistically significantly improved FID (Figure 2c) and IS (Figure 2b), respectively. They also outperformed the baselines. In addition, the results indicate that data cleansing based on the influence on a specific GAN evaluation metric is also effective for another metric that is not used for the selection; removing harmful instances based on the influence on FID (IS) statistically significantly improved IS (FID). However, we make no claim that the proposed method can improve all other evaluation metrics, such as the Kullback-Leibler divergence. This is because all the current GAN evaluation metrics have their own weaknesses (e.g., IS fails to detect whether a model is trapped in one bad mode (Zhou et al., 2018)), and the proposed method based on those GAN evaluation metrics cannot inherently avoid their weaknesses. These improvements thus can be observed only in a subclass of GAN evaluation metrics. Further evaluation of data cleansing with our method should incorporate future improvements of the GAN evaluation metrics.

While the improvements were smaller than those of the proposed approach, we also observed that data cleansing based on the influence on the discriminator loss improved all the GAN evaluation metrics. This counter-intuitive result indicates that the discriminator loss weakly measures the performance of the generator that is trained along with the discriminator.

Qualitative Results We examined the characteristics of instances that were evaluated to be harmful by our method. Overall, we observed that our method tends to judge instances as harmful when they belong to regions from which the generator samples too frequently compared to the true distribution. Figure 3 shows the estimated harmfulness of the training instances of 2D-Normal and the distribution of the generated samples. The proposed approach with influence on ALL evaluated the instances around the lower-left and upper-right regions to be harmful (Figure 3a). These regions correspond to the regions where the generated distribution has higher density than the true distribution (Figure 3b, “No removal” and “True”). Similar characteristics were seen in the harmful MNIST instances suggested by our approach with influence on FID. A large number of samples from class 1 were regarded as harmful, as shown in Figure 4a, when the generator sampled images of the digit 1 too frequently (Figure 4b).

We also investigated how the data cleansing by our approach visually changed the generated samples. As seen from the distributions in Figure 3b, the probability density in the upper-right region decreased after the data cleansing (from “No removal” to “Cleansed”). As a result, the generator distribution moved closer to the true distribution. The same effect was observed in a visually more interesting form in the data cleansing for MNIST. The generated samples originating from some latent variables changed from the image of digit 1 to that of other digits after the data cleansing based on the estimated influence on FID (highlighted samples in Figure 4c).
We suppose this effect improved the diversity of the generated samples, resulting in better FID and IS." }, { "heading": "6 CONCLUSION", "text": "We proposed an influence estimation method for GANs that uses the Jacobian of the gradient of the discriminator's loss with respect to the generator's parameters (and vice versa), which traces how the absence of an instance in the discriminator's training affects the generator's parameters. We also proposed a novel evaluation scheme to judge whether an instance is harmful or not on the basis of the influence on GAN evaluation metrics rather than that on the loss value, and to leverage the proposed influence estimation method to identify harmful instances. We experimentally verified that the estimated and true influence on GAN evaluation metrics had a statistically significant correlation. We also demonstrated that removing the identified harmful instances effectively improved the generative performance with respect to various GAN evaluation metrics." }, { "heading": "A ALGORITHM FOR LINEAR INFLUENCE", "text": "The proposed estimation algorithm for linear influence, which is explained in Section 3.1, is divided into the training phase (Algorithm 1) and the inference phase (Algorithm 2).

The training phase executes ASGD training while storing the mini-batch indices $S_t$, the learning rates $\eta^{[t]}_G, \eta^{[t]}_D$, the parameters $\theta^{[t]}$, and the sampled latent variable $Z_t$ into the information $A^{[t]}$ at each step.

In the inference phase, $L^{[T]}_{-j}(u)$ is estimated by a recursive calculation. First, we set $L^{[T]}_{-j}(u)$ to 0 and set the query vector $u$. The information $A^{[t]}$, which is obtained in the training phase, is read in the order of $t = T-1, T-2, \ldots, 1$. When $j \in S_t$, $L^{[T]}_{-j}(u)$ is updated using (8). Let $u_t = (u^{[t]}_G, u^{[t]}_D)^{\top}$. Each step updates $u$ based on $u_{t+1}^{\top} = u_t^{\top} Z_t = u_t^{\top} (I_{d_\theta} - B_t J_t)$. A naive calculation of $u_t^{\top} J_t$ requires $O(d_\theta^2)$ memory to store the matrix $J_t$, which can be prohibitive for very large models. We can avoid this difficulty by directly computing $u_t^{\top} J_t$ without the explicit computation of $J_t$. Because $u_t^{\top} J_t = \nabla_\theta \langle u_t, (\nabla_{\theta_G} L_G, \nabla_{\theta_D} L_D)^{\top} \rangle$, we need only compute the derivative of the inner product of $u_t$ and the joint gradient vector (a short PyTorch-style sketch of this trick is given below).

Our algorithm also covers alternating gradient descent, in which the two models alternately update their parameters at each step. By taking $\eta^{[t]}_G$ and $\eta^{[t]}_D$ such that they alternately take 0 at each step, we can obtain ASGD and the estimator of ASGD-Influence for alternating gradient descent. The implementation of linear influence for alternating gradient descent is available in our repository4.

4https://github.com/hitachi-rd-cv/influence-estimation-for-gans" }, { "heading": "B OTHER RELATED WORKS", "text": "Anomaly Detection A typical approach for identifying harmful instances is outlier detection. Outlier detection is used to remove abnormal instances from the training set before training the model to ensure that the model is not affected by the abnormal instances. For tabular data, there are several popular methods, such as the one-class support vector machine (Schölkopf et al., 2001), local outlier factor (Breunig et al., 2000), and isolation forest (Liu et al., 2008). Although these methods can find abnormal instances, the instances are not necessarily harmful for the resulting models, as we showed in the experiment.
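Returning to the inner-product trick of Appendix A above: a minimal PyTorch-style sketch of computing $u_t^{\top} J_t$ without materializing $J_t$, by differentiating the scalar inner product of $u_t$ and the joint gradient vector (the function name is illustrative, and we assume both losses were built with a differentiable graph):

```python
import torch

def u_transpose_J(u, loss_G, loss_D, params_G, params_D):
    # Joint gradient vector (grad_{theta_G} L_G, grad_{theta_D} L_D),
    # kept differentiable so we can differentiate through it once more.
    g = torch.autograd.grad(loss_G, params_G, create_graph=True) \
      + torch.autograd.grad(loss_D, params_D, create_graph=True)
    # Scalar <u_t, joint gradient>; its gradient w.r.t. all parameters
    # yields u_t^T J_t without ever storing the d_theta x d_theta Jacobian.
    inner = sum((ui * gi).sum() for ui, gi in zip(u, g))
    return torch.autograd.grad(inner, list(params_G) + list(params_D))
```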
Training GAN from Noisy Images One typical type of data that harms generative performance is noisy images. AmbientGAN (Bora et al., 2018) and noise-robust GAN (Kaneko & Harada, 2020) are learning algorithms that make it possible to train a clean image generator from noisy images. The difference between these studies and ours is that these studies assume that noise (e.g., Gaussian noise on pixels) given independently of the data distribution of the clean images is the only problem. However, some instances can affect the performance even if the instances are drawn only from the data distribution, which is the case that robust statistics (Huber, 2004) typically focuses on. Our experiment in Section 5.2 indicates that the model performance depends not only on noisy images but also on a non-negligible number of harmful instances in the original dataset." }, { "heading": "C DETAILED EXPERIMENTAL SETTINGS AND RESULTS", "text": "" }, { "heading": "C.1 GAN EVALUATION METRICS", "text": "We adopted a Gaussian kernel with bandwidth 1 for the kernel density estimation used in ALL. The architecture of the CNN classifier of MNIST used for IS and FID can be found in Table 1. We selected the output of the 4th layer as the feature vectors for FID." }, { "heading": "C.2 EXPERIMENT 1: ESTIMATION ACCURACY", "text": "Setup In the experiment of Section 5.1, we adopted the hyperparameters shown in Table 2. We trained a fully-connected GAN (FCGAN) for the 2D multivariate normal distribution, in which both $G$ and $D$ have one hidden layer of $h_G$ and $h_D$ units, respectively (Table 3). 2D-Normal is given by $N(\mu, \Sigma)$, in which the mean vector is $\mu = \mathbf{1}_2$ and the covariance matrix is $\Sigma = \begin{pmatrix} 1 & 0.8 \\ 0.8 & 1 \end{pmatrix}$. DCGAN consists of transposed convolution (or deconvolution) layers and convolution layers (Table 4). The numbers of channels of both layer types in $G$ and $D$ were determined by $h_G$ and $h_D$, respectively. We used Layer Normalization (Ba et al., 2016) for the layers shown in Table 4 for the stability of the training. We also introduced $L_2$-norm regularization with rate $\gamma \in \mathbb{R}_+$ for all the kernels of both FCGAN and DCGAN. We used the non-zero-sum game objective of the original paper (Goodfellow et al., 2014), in which $G$ tries to minimize $-D_{\theta_D}(G_{\theta_G}(z))$, for both models." }, { "heading": "C.3 EXPERIMENT 2: DATA CLEANSING", "text": "Setup We adopted the same architecture as in Section 5.1 (Table 3) for FCGAN and a slightly different architecture (Table 4), in which $h_G$ and $h_D$ are larger (Table 5), for DCGAN. The other hyperparameters followed Table 5. We also provide visual explanations of the data settings in the experiments with influence on ALL, IS, and FID in Figures 5, 6, and 7, respectively.

Results Tables 6-8 show the detailed results of Figure 2 and clarify with which $n_h$ and selection approach the test GAN evaluation metrics were statistically significantly improved." }, { "heading": "D DETAILED DISCUSSION ON EXPERIMENT 2", "text": "This section first discusses three aspects of the results in Section 5.2: Section D.1 explains the common characteristics of the harmful instances suggested by our approach, Section D.2 discusses qualitative aspects of the data cleansing using generated samples, and Section D.3 discusses how the characteristics of harmful instances and the effects of the data cleansing are consistent among trainings with different random seeds. Finally, we explain the limitation of our method and present future directions in Section D.4."
}, { "heading": "D.1 CHARACTERISTICS OF HARMFUL INSTANCE", "text": "In this section, we examine the characteristics of instances that are evaluated to be harmful or helpful by our method. We regard a sample is helpful if its influence on a metric is opposite of harmful instances.\nTable 9 shows the estimated harmfulness of the training instances of 2D-Normal and the distribution of the generated samples. The proposed approach with influence on ALL evaluated the instances around lower-left and upper-right regions to be harmful (Table 9 (a, i)). These regions correspond to the regions where the generated distribution has higher density than that of the true distribution; The generator before the cleansing (Table 9 (a, ii, No removal)) sampled too frequently from lower-left and upper-right regions compared to the true distribution (Table 9 (a, ii, True)). This characteristics was not observed in the plots of baseline approaches. The approach based on influence on the discriminator loss seems to ignore the difference in the density around the lower-left region (Table 9 (b, i)) and isolation forest did not take the generator’s distribution into account (Table 9 (c, i)).\nSimilar characteristics were seen in harmful MNIST instances suggested by our approach with influence on IS and FID. When the generator over-sampled a specific digit (e.g., the digit 1 in Table 10 (a, iii)), our approach tended to judge the images of the digit to be harmful (e.g., a large number of 1 in Table 10 (b-c, i)). Similarly, our method judged instances of a specific digit as helpful (e.g., the digit 6 in Table 10 (b-c, ii)) when the generator failed to sample the digit (e.g., the absence of 6 in Table 10 (a, iii)). On the contrary, harmful instances suggested on the basis of influence on the discriminator loss did not show the tendency (Table 10 (d, i)). The baseline approach with isolation forest based on the classifier feature-space seems to have judged the images that were difficult to be classified as harmful, rather than the over-sampled digit (Table 10 (e, i)). It regarded that instances are helpful when they belong to a digit that seems to have been easy to be classified (Table 10 (e, ii)).\nTo summarize, our method tends to judge instances as harmful when they belong to regions from which the generators sample too frequently compared to the true distribution." }, { "heading": "D.2 QUALITATIVE STUDY OF DATA CLEANSING", "text": "We then investigate how the data cleansing using the suggested harmful instances visually change generated samples.\nAs seen from Table 9 (a, ii), the probability density in the upper-right region decreased after the data cleansing (from “No removal” to “Cleansed”). As a result, the generator distribution got closer to the true distribution. Although the baselines indicated the same direction of changes in the distributions (Table 9 (b-c, ii)), these were not as significant as ours.\nThe same effect was observed in visually more interesting form in the data cleansing for MNIST. The generated samples originating from some latent variables changed from the image of digit 1 to that of other digits after the data cleansing based on the estimated influence on IS and FID (highlighted samples in Table 10 (b-c, iii)). This implies that a certain amount of density that are over-allocated for the digit 1 moved to the regions of other digits. We assume this effect improved the diversity in the generated samples, resulting in better FID and IS. 
This characteristic was not clearly observed in the baselines (highlighted samples in Table 10 (d-f, iii)).

These observations suggest that our method helps the GAN's training so that the generator re-assigns the densities that were over-allocated to certain regions to other regions." }, { "heading": "D.3 CONSISTENCY OF QUALITATIVE CHARACTERISTICS AMONG DIFFERENT TRAININGS", "text": "We show additional visual results to confirm the consistency of the findings on the characteristics of harmful instances and on the generated samples after data cleansing, which we described in Section D.1 and Section D.2, respectively.

Table 11 shows the harmfulness of the training instances and the distribution of the generated samples obtained using 5 different random seeds in the 2D-Normal case. As seen from the table, regardless of which region a generator assigns high density to, our method consistently regards the training samples around that region as harmful. In addition, the distributions of the generated samples get closer to the true distribution by removing these harmful training instances in the data cleansing.

Table 12 visualizes the MNIST examples of harmful instances, helpful instances, and generated images before and after the data cleansing. Different rows correspond to different random seeds. We found that the consistency in visual characteristics was moderate in the MNIST case. A few results demonstrated the common qualitative characteristics when the improvements in the GAN evaluation metrics were large (Table 12 (a) and (d)). In the training with the 4th random seed (d), the suggestion of harmful instances showed some tendency; many instances of digit 7 were regarded as harmful whereas those of digit 4 were not at all (Table 12 (d, i)). The data cleansing based on this suggestion seems to have improved the diversity of the generated samples by reducing the samples of digit 7 and increasing those of digit 4 (highlighted samples in Table 12 (d, iv)). This indicates, to some extent, the consistent characteristic of the data cleansing discussed in the previous section: it helps the GAN's training so that the generator re-assigns the densities that were over-allocated to certain data regions to other regions." }, { "heading": "D.4 CURRENT LIMITATION AND FUTURE DIRECTION", "text": "The limitation of our method is that it does not guarantee that the harmful instances suggested on the basis of the influence on one GAN evaluation metric are also harmful from the viewpoint of other metrics. For example, we have demonstrated that removing instances predicted to have negative influence on FID improved both test FID and IS (Figure 2) and increased the visual diversity of the generated images (Tables 10 and 12). However, it does not seem to have improved the visual quality (e.g., sharpness, reality, etc.) of the individual generated samples. Therefore, it is possible that these instances are harmful only for some particular aspect of generative performance, i.e., the diversity in this case, and not harmful for other aspects, i.e., the visual quality in this case.

We would argue that this limitation is closely tied to the limitations of the current GAN evaluation metrics. For example, FID takes the diversity of generated samples into account, but it only partly takes the visual quality into account; e.g., FID based on Inception Net was shown to focus on textures rather than the shapes of objects (Karras et al., 2020).
In this sense, we clarify that we never claim our method can improve the “true” generative performance in all aspects, given that there is no “true” evaluation metric that measures all aspects of generative performance.

The advantage of our method is that it does not have to care about how the evaluation metrics are defined, as long as they are differentiable with respect to the generated samples. Furthermore, our evaluation method makes no assumption about what the harmful characteristics of instances are. This means that it is expected to be easily applicable to another evaluation metric if a better metric is developed in the future. One of our main contributions in this sense is that we experimentally verified that our method successfully improved the generative performance in terms of a targeted metric, using limited but currently widely accepted metrics.

Our future work includes incorporating such future improvements in GAN evaluation metrics to obtain better insights into the relationship between training instances and generative performance. In addition, we would like to relax the current constraint on the optimizer. Our method is currently applicable only to SGD, but we would like to find a way to extend it to other optimizers such as Adam (Kingma & Ba, 2015) to deal with the latest GAN models." } ]
2021
INFLUENCE ESTIMATION FOR GENERATIVE ADVERSARIAL NETWORKS
SP:e9cb82d442fd1f42348d33be29e2735da7e13dbe
[ "This paper studies a very interesting new problem of assessing unrolled models in a broader context using NAS methods. LISTA-style unrolling has been popular for deep learning-based inverse problems. But it is quantitatively unclear how good the unrolled models are, among all possible model variations. To fill in this gap, the authors first define a proper search space based on the varying connections and neurons from the unrolled LISTA backbone architecture. NAS is then exploited as the tool to find the best subsect of architecture from the large space. " ]
In recent years, great success has been witnessed in building problem-specific deep networks from unrolling iterative algorithms, for solving inverse problems and beyond. Unrolling is believed to incorporate the model-based prior with the learning capacity of deep learning. This paper revisits the role of unrolling as a design approach for deep networks: to what extent its resulting special architecture is superior, and can we find better? Using LISTA for sparse recovery as a representative example, we conduct the first thorough design space study for the unrolled models. Among all possible variations, we focus on extensively varying the connectivity patterns and neuron types, leading to a gigantic design space arising from LISTA. To efficiently explore this space and identify top performers, we leverage the emerging tool of neural architecture search (NAS). We carefully examine the searched top architectures in a number of settings, and are able to discover networks that are consistently better than LISTA. We further present more visualization and analysis to “open the black box”, and find that the searched top architectures demonstrate highly consistent and potentially transferable patterns. We hope our study will spark more reflections and explorations on how to better mingle model-based optimization priors and data-driven learning.
[ { "affiliations": [], "name": "Tianjian Meng" }, { "affiliations": [], "name": "Xiaohan Chen" }, { "affiliations": [], "name": "Yifan Jiang" }, { "affiliations": [], "name": "Zhangyang Wang" } ]
[ { "authors": [ "Aviad Aberdam", "Alona Golts", "Michael Elad" ], "title": "Ada-lista: Learned solvers adaptive to varying models", "venue": "arXiv preprint arXiv:2001.08456,", "year": 2020 }, { "authors": [ "Pierre Ablin", "Thomas Moreau", "Mathurin Massias", "Alexandre Gramfort" ], "title": "Learning step sizes for unfolded sparse coding", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Alexios Balatsoukas-Stimming", "Christoph Studer" ], "title": "Deep unfolding for communications systems: A survey and some new directions", "venue": "arXiv preprint arXiv:1906.05774,", "year": 2019 }, { "authors": [ "Amir Beck", "Marc Teboulle" ], "title": "A fast iterative shrinkage-thresholding algorithm for linear inverse problems", "venue": "SIAM journal on imaging sciences,", "year": 2009 }, { "authors": [ "José M Bioucas-Dias", "Mário AT Figueiredo" ], "title": "A new twist: Two-step iterative shrinkage/thresholding algorithms for image restoration", "venue": "IEEE Transactions on Image processing,", "year": 2007 }, { "authors": [ "Thomas Blumensath", "Mike E Davies" ], "title": "Iterative thresholding for sparse approximations", "venue": "Journal of Fourier analysis and Applications,", "year": 2008 }, { "authors": [ "Mark Borgerding", "Philip Schniter" ], "title": "Onsager-corrected deep learning for sparse linear inverse problems", "venue": "In Signal and Information Processing (GlobalSIP),", "year": 2016 }, { "authors": [ "Xiaohan Chen", "Jialin Liu", "Zhangyang Wang", "Wotao Yin" ], "title": "Theoretical linear convergence of unfolded ista and its practical weights and thresholds", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Xiangxiang Chu", "Bo Zhang", "Ruijun Xu", "Jixiang Li" ], "title": "Fairnas: Rethinking evaluation fairness of weight sharing neural architecture search", "venue": null, "year": 1907 }, { "authors": [ "Benjamin Cowen", "Apoorva Nandini Saridena", "Anna Choromanska" ], "title": "Lsalsa: accelerated source separation via learned sparse coding", "venue": "Machine Learning,", "year": 2019 }, { "authors": [ "Sören Dittmer", "Tobias Kluth", "Peter Maass", "Daniel Otero Baguer" ], "title": "Regularization by architecture: A deep prior approach for inverse problems", "venue": "Journal of Mathematical Imaging and Vision,", "year": 2019 }, { "authors": [ "Raja Giryes", "Yonina C Eldar", "Alex Bronstein", "Guillermo Sapiro" ], "title": "Tradeoffs between convergence speed and reconstruction accuracy in inverse problems", "venue": "IEEE Transactions on Signal Processing,", "year": 2018 }, { "authors": [ "Dong Gong", "Zhen Zhang", "Qinfeng Shi", "Anton van den Hengel", "Chunhua Shen", "Yanning Zhang" ], "title": "Learning deep gradient descent optimization for image deconvolution", "venue": "IEEE Transactions on Neural Networks and Learning Systems,", "year": 2020 }, { "authors": [ "Karol Gregor", "Yann LeCun" ], "title": "Learning fast approximations of sparse coding", "venue": "In Proceedings of the 27th International Conference on Machine Learning,", "year": 2010 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Howard Heaton", "Xiaohan Chen", "Zhangyang Wang", "Wotao Yin" ], "title": "Safeguarded learned convex optimization", "venue": "arXiv preprint arXiv:2003.01880,", 
"year": 2020 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Kuldeep Kulkarni", "Suhas Lohit", "Pavan Turaga", "Ronan Kerviche", "Amit Ashok" ], "title": "ReconNet: Noniterative reconstruction of images from compressively sensed measurements", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Liam Li", "Ameet Talwalkar" ], "title": "Random search and reproducibility for neural architecture search", "venue": "arXiv preprint arXiv:1902.07638,", "year": 2019 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Jialin Liu", "Xiaohan Chen", "Zhangyang Wang", "Wotao Yin" ], "title": "Alista: Analytic weights are as good as learned weights in lista", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Morteza Mardani", "Qingyun Sun", "David Donoho", "Vardan Papyan", "Hatef Monajemi", "Shreyas Vasanawala", "John Pauly" ], "title": "Neural proximal gradient descent for compressive imaging", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Vishal Monga", "Yuelong Li", "Yonina C Eldar" ], "title": "Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing", "venue": null, "year": 1912 }, { "authors": [ "Thomas Moreau", "Joan Bruna" ], "title": "Understanding trainable sparse coding with matrix factorization", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Xi Peng", "Ivor W Tsang", "Joey Tianyi Zhou", "Hongyuan Zhu" ], "title": "k-meansnet: When k-means meets differentiable programming", "venue": "arXiv preprint arXiv:1808.07292,", "year": 2018 }, { "authors": [ "Hieu Pham", "Melody Y Guan", "Barret Zoph", "Quoc V Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameter sharing", "venue": "arXiv preprint arXiv:1802.03268,", "year": 2018 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In Proceedings of the aaai conference on artificial intelligence,", "year": 2019 }, { "authors": [ "Pablo Sprechmann", "Alexander M Bronstein", "Guillermo Sapiro" ], "title": "Learning efficient sparse and low rank models", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2015 }, { "authors": [ "Ilya Sutskever", "James Martens", "George Dahl", "Geoffrey Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Satoshi Takabe", "Tadashi Wadayama" ], "title": "Theoretical interpretation of learned step size in deepunfolded gradient descent", "venue": "arXiv preprint arXiv:2001.05142,", "year": 2020 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Deep image prior", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Zhangyang Wang", "Qing Ling", "Thomas Huang" ], "title": "Learning deep l0 encoders", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2016 }, { "authors": [ 
"Zhangyang Wang", "Ding Liu", "Shiyu Chang", "Qing Ling", "Yingzhen Yang", "Thomas S Huang" ], "title": "Deep dual-domain based fast restoration of jpeg-compressed images", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Xingyu Xie", "Jianlong Wu", "Guangcan Liu", "Zhisheng Zhong", "Zhouchen Lin" ], "title": "Differentiable linearized admm", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Bo Xin", "Yizhou Wang", "Wen Gao", "David Wipf", "Baoyuan Wang" ], "title": "Maximal sparsity with deep networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Yangyang Xu", "Wotao Yin" ], "title": "A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion", "venue": "SIAM Journal on imaging sciences,", "year": 2013 }, { "authors": [ "Jian Zhang", "Bernard Ghanem" ], "title": "Ista-net: Interpretable optimization-inspired deep network for image compressive sensing", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Bin Zhao", "Li Fei-Fei", "Eric P Xing" ], "title": "Online detection of unusual events in videos via dynamic sparse coding", "venue": "In Computer Vision and Pattern Recognition (CVPR),", "year": 2011 }, { "authors": [ "Shuai Zheng", "Sadeep Jayasumana", "Bernardino Romera-Paredes", "Vibhav Vineet", "Zhizhong Su", "Dalong Du", "Chang Huang", "Philip HS Torr" ], "title": "Conditional random fields as recurrent neural networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "Joey Tianyi Zhou", "Kai Di", "Jiawei Du", "Xi Peng", "Hao Yang", "Sinno Jialin Pan", "Ivor W Tsang", "Yong Liu", "Zheng Qin", "Rick Siow Mong Goh" ], "title": "SC2Net: Sparse LSTMs for sparse coding", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Barret Zoph", "Quoc V Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "arXiv preprint arXiv:1611.01578,", "year": 2016 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The signal processing and optimization realm has an everlasting research enthusiasm on addressing ill-conditioned inverse problems, that are often regularized by handcrafted model-based priors, such as sparse coding, low-rank matrix fitting and conditional random fields. Since closed-form solutions are typically unavailable for those model-based optimizations, many analytical iterative solvers arise to popularity. More recently, deep learning based approaches provide an interesting alternative to inverse problems. A learning-based inverse problem solver attempts to approximate the inverse mapping directly by optimizing network parameters, by fitting “black box” regression from observed measurements to underlying signals, using synthetic or real-world sample pairs.\nBeing model-based and model-free respectively, the analytical iterative solvers and the learningbased regression make two extremes across the spectrum of inverse problem solutions. A promising direction arising in-between them is called algorithm unrolling (Monga et al., 2019). Starting from an analytical iterative solver designed for model-based optimization, its unrolled network architecture can be generated by cascading the iteration steps for a finite number of times, or equivalently, by running the iterative algorithm with early stopping. The original algorithm parameters will also turn into network parameters. Those parameters are then trained from end to end using standard deep network training, rather than being derived analytically or selected from cross-validation.\nUnrolling was first proposed to yield faster trainable regressors for approximating iterative sparse solvers (Gregor & LeCun, 2010), when one needs to solve sparse inverse problems on similar data repeatedly. Later on, the unrolled architectures were believed to incorporate model-based priors while enjoying the learning capacity of deep networks empowered by training data, and therefore became a rising direction in designing principled and physics-informed deep architectures. The growing popularity of unrolling lies in their demonstrated effectiveness in developing compact, data-efficient, interpretable and high-performance architectures, when the underlying optimization model is assumed available. Such approaches have witnessed prevailing success in applications such as compressive\n∗The authors Tianjian Meng and Xiaohan Chen contribute equally to the work.\nsensing (Zhang & Ghanem, 2018), computational imaging (Mardani et al., 2018), wireless communication (Cowen et al., 2019; Balatsoukas-Stimming & Studer, 2019), computer vision (Zheng et al., 2015; Peng et al., 2018), and other algorithms such as ADMM (Xie et al., 2019).\nThe empirical success of unrolling has sparkled many curiosities towards its deeper understanding. A series of efforts (Moreau & Bruna, 2017; Giryes et al., 2018; Chen et al., 2018; Liu et al., 2019; Ablin et al., 2019; Takabe & Wadayama, 2020) explored the theoretical underpinning of unrolling as a specially adapted iterative optimizer to minimizing the specific objective function, and proved the favorable convergence rates achieved over classical iterative solver, when the unrolled architectures are trained to (over)fit particular data. Orthogonally, this paper reflects on unrolling as a design approach for deep networks. The core question we ask is:\nFor solving model-based inverse problems, what is the role of unrolling in designing deep architectures? 
What can we learn from unrolling, and how to go beyond?" }, { "heading": "1.1 RELATED WORKS: PRACTICES AND THEORIES OF UNROLLING", "text": "(Gregor & LeCun, 2010) pioneered the development of a learning-based model for solving sparse coding, by unrolling the iterative shrinkage thresholding algorithm (ISTA) (Blumensath & Davies, 2008) as a recurrent neural network (RNN). The unrolled network, called Learned ISTA (LISTA), treated the ISTA algorithm parameters as learnable and varied by iteration. These were then fine-tuned to obtain optimal performance on the data for a small number of iterations. Numerous works (Sprechmann et al., 2015; Wang et al., 2016a; Zhang & Ghanem, 2018; Zhou et al., 2018) followed this idea to unroll various iterative algorithms for sparse, low-rank, or other regularized models.

As the most classical unrolling example, a line of recent efforts has been made towards theoretically understanding LISTA. Moreau & Bruna (2017) re-factorized the Gram matrix of the dictionary, and thus re-parameterized LISTA to show its acceleration gain, but still sublinearly. Giryes et al. (2018) interpreted LISTA as projected gradient descent (PGD) where the projection step is inaccurate. Chen et al. (2018) and Liu et al. (2019) for the first time introduced necessary conditions for LISTA to converge linearly, and showed that the faster asymptotic rate can be achieved with only minimal learnable parameters (e.g., iteration-wise thresholds and step sizes). Ablin et al. (2019) further proved that learning only step sizes improves the LISTA convergence rate by leveraging the sparsity of the iterate. Besides, several other works examined the theoretical properties of unrolling other iterative algorithms, such as iterative hard thresholding (Xin et al., 2016), approximate message passing (Borgerding & Schniter, 2016), and linearized ADMM (Xie et al., 2019). Besides, a safeguarding mechanism was also introduced for guiding learned updates to ensure convergence, even when the test problem shifts from the training distribution (Heaton et al., 2020).

On a separate note, many empirical works (Wang et al., 2016a;b; Gong et al., 2020) advocated that the unrolled architecture, when used as a building block for an end-to-end deep model, implicitly enforces some structural prior towards the model training (resulting from the original optimization objective) (Dittmer et al., 2019). That could be viewed as a special example of “architecture as prior” (Ulyanov et al., 2018). A recent survey (Monga et al., 2019) presents a comprehensive discussion. Specifically, the authors suggested that since iterative algorithms are grounded in domain-specific formulations, they embed a reasonably accurate characterization of the target function. The unrolled networks, by expanding the learnable capacity of iterative algorithms, become “tunable” to approximate the target function more accurately. Meanwhile, compared to generic networks, they span a relatively small subset of the function space and can therefore be trained more data-efficiently." }, { "heading": "1.2 MOTIVATIONS AND CONTRIBUTIONS", "text": "This paper aims to quantitatively assess “how good the unrolled architectures actually are”, using LISTA for sparse recovery as a representative example. We present the first design space ablation study1 on LISTA: starting from the original unrolled architecture, we extensively vary the connectivity patterns and neuron types.
We seek and assess good architectures in a number of challenging settings, and hope to expose successful design patterns from those top performers.

As we enable layer-wise different skip connections and neuron types, the LISTA-oriented design space is dauntingly large (see Sections 2.1 and 2.2 for explanations). As its manual exploration is infeasible, we introduce the tool of neural architecture search (NAS) into the unrolling field. NAS can explore a gigantic design space much more efficiently, and can quickly identify the subset of top-performing candidates on which we can focus our analysis. Our intention is not to innovate on NAS, but instead, to answer a novel question in LISTA using NAS as an experimental tool.

1We define a design space as a family of models derived from the same set of architecture-varying rules.

From our experiments, a befitting quote may be: for designing neural networks to solve model-based inverse problems, unrolling is a good answer, but usually not the best. Indeed, we are able to discover consistently better networks in all explored settings. From the top candidate models, we observe highly consistent and potentially transferable patterns. We conclude this paper with more discussions on how to better leverage and further advance the unrolling field." }, { "heading": "2 TECHNICAL APPROACH", "text": "Assume a sparse vector $x^* = [x^*_1, \cdots, x^*_M]^T \in \mathbb{R}^M$; we observe its noisy linear measurements:

$b = \sum_{m=1}^{M} d_m x^*_m + \varepsilon = Dx^* + \varepsilon,$ (1)

where $b \in \mathbb{R}^N$, $D = [d_1, \cdots, d_M] \in \mathbb{R}^{N \times M}$ is the dictionary, and $\varepsilon \in \mathbb{R}^N$ is additive Gaussian white noise. For simplicity, each column of $D$ is normalized, that is, $\|d_m\|_2 = \|D_{:,m}\|_2 = 1$, $m = 1, 2, \cdots, M$. Typically, we have $N \ll M$. A popular approach for solving the inverse problem $b \rightarrow x$ is to solve the LASSO below ($\lambda$ is a scalar):

$\min_x \frac{1}{2} \|b - Dx\|_2^2 + \lambda \|x\|_1$ (2)

using iterative algorithms such as the iterative shrinkage thresholding algorithm (ISTA):

$x^{(k+1)} = \eta_{\lambda/L} \big( x^{(k)} + D^T (b - Dx^{(k)}) / L \big), \quad k = 0, 1, 2, \ldots$ (3)

where $\eta_\theta$ is the soft-thresholding function2 and $L$ is a smoothness constant that decides the step size.

Inspired by ISTA, (Gregor & LeCun, 2010) proposed to learn the weight matrices in ISTA rather than fixing them. LISTA unrolls ISTA iterations as a recurrent neural network (RNN): if truncated to $K$ iterations, LISTA becomes a $K$-layer feed-forward neural network with side connections:

$x^{(k+1)} = \eta_{\theta^{(k)}} \big( W_b b + \alpha^{(k)} W_x^{(k)} x^{(k)} \big), \quad k = 0, 1, \cdots, K-1.$ (4)

If we set $W_b \equiv D^T/L$, $W_x^{(k)} \equiv I - D^T D / L$, $\alpha^{(k)} \equiv 1$, $\theta^{(k)} \equiv \lambda/L$, then LISTA recovers ISTA3. In practice, we start with $x^{(0)}$ set to zero. As suggested by the seminal work (Liu et al., 2019), sharing $W_b$ and $W_x$ across layers does not hurt the performance while reducing the parameter complexity. We follow their weight-tying scheme in this paper, while learning layer-wise $\alpha^{(k)}$ and $\theta^{(k)}$. $\alpha^{(k)}$ is separated from $W_x$ to preserve flexibility.
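To ground these formulas, here is a minimal NumPy sketch of the soft-thresholding operator, the ISTA update (3), and a single LISTA layer (4); variable names are illustrative:

```python
import numpy as np

def soft_threshold(x, theta):
    # Component-wise soft-thresholding: eta_theta(x) = sign(x) * max(0, |x| - theta).
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(D, b, lam, L, num_iter=16):
    # Plain ISTA iterations (3) for the LASSO objective (2).
    x = np.zeros(D.shape[1])
    for _ in range(num_iter):
        x = soft_threshold(x + D.T @ (b - D @ x) / L, lam / L)
    return x

def lista_layer(x, b, W_b, W_x, alpha, theta):
    # One unrolled LISTA layer (4); W_b, W_x, alpha, theta are learnable.
    return soft_threshold(W_b @ b + alpha * (W_x @ x), theta)
```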
We note one slight difference between our formulation (4) and the parameterization scheme suggested by Theorem 1 of (Chen et al., 2018). The latter also restricted $\{W_b, W_x\}$ to be coupled layer-wise, and showed that it yields the same representation capability as the original LISTA in Gregor & LeCun (2010). We instead have $\{W_b, W_x\}$ as two independent learnable weight matrices. The main reason we did not follow (Chen et al., 2018) is that the $W_b$-$W_x$ coupling was directly derived for unrolling ISTA, which is no longer applicable nor intuitive for other non-LISTA architecture variants. Our empirical results also back that our parameterization in (4) is easier to train for the varied architectures from the search, and performs equally well when adopted in training the original LISTA.

Now, given pairs of a sparse vector and its noisy measurement $(x^*, b)$, our goal is to learn the parameters $\Theta = \{W_b, W_x, \alpha^{(k)}, \theta^{(k)}\}_{k=0}^{K-1}$ such that $x^{(K)}$ is close to $x^*$ for all sparse $x^*$ following some distribution $P$. Therefore, all parameters in $\Theta$ are subject to end-to-end learning:

$\min_\Theta \; \mathbb{E}_{x^*, b \sim P} \| x^{(K)}(\Theta, b, x^{(0)}) - x^* \|_2^2.$ (5)

This problem is approximately solved over a training dataset $\{(x^*_i, b_i)\}_{i=1}^{N}$ sampled from $P$.

2The soft-thresholding function is defined in a component-wise way: $\eta_\theta(x) = \mathrm{sign}(x)\max(0, |x| - \theta)$.

3Those can be the parameter initialization in LISTA, which we follow (Chen et al., 2018) to use by default." }, { "heading": "2.1 VARYING THE CONNECTIVITY IN LISTA", "text": "We first present how we construct new LISTA-oriented architectures based on extensively varying the connectivity patterns, starting from the original unrolled LISTA. The way we add connections to the original unrolled model is called Learnable Weighted Average (LWA). Specifically, before the output of the $k$-th layer $x^{(k)}$ is fed into the next layer, we allow the outputs of previous layers to be possibly (not necessarily) connected to $x^{(k)}$. The input to the next layer is calculated with a learnable average of all the connections. Formally,

$\tilde{x}^{(k)} = \sum_{i=1}^{k} g_{i,k} \cdot \big( c_{i,k} \, x^{(i)} \big),$ (6)

where $c_{i,k}$ are (unbounded) model parameters that will be trained with data, and $g_{i,k}$ are gates whose values are chosen to decide the specific connectivity. Note that LWA introduces a few extra parameters (the coefficients $c_{i,k}$) compared to LISTA, but those constitute only a very minor portion w.r.t. the size of $\Theta$.

We completely recognize that there are other possibilities to vary connections, and did our due diligence to make the informed choice of LWA. First, we only focus on adding connections, motivated from both the deep learning and the empirical unrolling fields. On one hand, prevailing success in deep learning (He et al., 2016; Huang et al., 2017) endorses that more densely connected networks are more likely to yield better performance. Prior unrolling works (Wang et al., 2016a) also empirically found that removing connections from an unrolled architecture would deteriorate its effectiveness. We also provide additional experiments in Appendix A.4 to demonstrate that pruning existing connections in LISTA will be consistently harmful. Second, besides injecting new connections by LWA, we have also experimented with several other options, including a naive averaging way (simply averaging the outputs of all chosen layers), and a momentum-inspired averaging way (inspired by placing a momentum term into the unrolled algorithm). Through various experiments, we have confirmed that both options are almost consistently inferior to LWA, and therefore focus on LWA in the main paper. The details of the two other options, as well as the experimental comparisons, can be found in Appendix A.3.

We by default keep all existing connections in LISTA itself, i.e., the gate function $g_{k,k}$ that controls the connection from the immediately preceding layer is always 1. In this case, for a $K$-layer network, the total number of possible architectures is $2^{1+2+\cdots+(K-2)} = 2^{\frac{(K-1)(K-2)}{2}}$.
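A small sketch of the LWA aggregation in (6), plus a check of the architecture count just derived, assuming NumPy (names illustrative):

```python
import numpy as np

def lwa_aggregate(xs, gates, coeffs):
    # Learnable Weighted Average (6): xs holds x^(1), ..., x^(k); gates are the
    # binary architecture choices g_{i,k}; coeffs are the learnable scalars c_{i,k}.
    return sum(g * c * x for g, c, x in zip(gates, coeffs, xs))

K = 16
num_connectivity = 2 ** ((K - 1) * (K - 2) // 2)  # 2^105 connectivity patterns
print(f"{num_connectivity:.3e}")                  # ~4.056e+31
```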
When $K = 16$, as commonly set in previous unrolling works (Chen et al., 2018; Borgerding & Schniter, 2016), varying only the connectivity will lead to a total of over $2^{105} \approx 4 \times 10^{31}$ different architectures." }, { "heading": "2.2 VARYING THE NEURON TYPE IN LISTA", "text": "Besides connection patterns, another important factor that might affect the model performance is the choice of the neuron (non-linear activation function) type. The default neuron in LISTA is the soft-thresholding operator with learnable thresholds (bias terms), which arises naturally from unrolling ISTA.

To study whether there might be better neuron options or combinations, we augment the search space to allow each layer to select its neuron from three types: {soft-thresholding, ReLU, LeakyReLU($\alpha = 0.1$)}. That further increases the total number of architectures by $3^{16}$ times, resulting in the final gigantic space of $\sim 1.75 \times 10^{39}$ candidate architectures deriving from a 16-layer LISTA." }, { "heading": "2.3 HOW TO EFFICIENTLY FIND AND ANALYZE GOOD ARCHITECTURES", "text": "The daunting size of our LISTA search space begs the question: how do we effectively find the best small subset of architectures, from which we can observe and reliably summarize patterns? Manual exploration is infeasible, and hence we refer to the tool of neural architecture search (NAS) (Zoph & Le, 2016) to fulfill our goal. Here NAS is leveraged as an off-the-shelf tool to explore the search space efficiently and to quickly focus on the most promising subset of top candidates.

Are NAS results stable and trustworthy? Since our main goal is to analyze the found top-performer models and to identify design patterns, the effectiveness and stability of NAS are crucial to the trustworthiness of our conclusions. We take several actions to address this potential concern.

First, to avoid the instability arising in training and evaluating the sampled candidate models, we adopt no parameter sharing (Pham et al., 2018; Liu et al., 2018). Parameter sharing is currently a popular scheme in NAS to save search time, but can introduce search bias (Chu et al., 2019). Despite the tremendous accompanying computation costs, we train each individual model from scratch to full convergence, to ensure the most stable, thorough, and fully reproducible search results possible.

Second, to avoid the possible artifacts of any single NAS algorithm, we repeat our experiments using three different state-of-the-art, non-weight-sharing NAS algorithms: a reinforcement learning (RL) based method (Zoph & Le, 2016), a regularized evolution based method (Real et al., 2019), and a random sampling based method (Li & Talwalkar, 2019). Those algorithms usually yield diverse search behaviors, and in general there is no clear winner among the three (Li & Talwalkar, 2019). We use the negative validation NMSE as the optimization reward for them all. Somewhat surprisingly, we find: (1) the best candidate architectures (e.g., top-50 in terms of NMSE) found by the three methods reach almost the same good NMSE values; and (2) those best subsets are highly overlapped (nearly identical) in the specific models found, and show quite consistent architecture patterns. Hence, due to the space limit, we focus on reporting all results and findings from RL-based NAS, as the other two produce almost the same. We hypothesize that our gigantic LISTA search space might have a smoother and friendlier landscape compared to typical NAS benchmarks, which facilitates state-of-the-art NAS algorithms' exploration.
We leave that for future work to verify.

Third, the top-1 architecture found by NAS will inevitably be subject to sampling randomness and other search algorithm fluctuations (e.g., hyperparameters). Therefore, instead of overly focusing on analyzing the top-1 or top-few architectures, we take one additional step to create an “average architecture” from the top-50 architectures, to further eliminate random factors that might affect our results' credibility. Specifically, for every possible connection between layers, we compute the percentage (between 0 and 1) of models from the top-50 set that have this connection. We then use a threshold of 0.5 to decide whether the average architecture will have this connection (if the percentage ≥ 0.5) or not (if < 0.5). As we will visualize in Section 4, the top-50 architectures usually have a high agreement level on connections, making most percentages naturally close to either 0 or 1. The average architecture helps more clearly perceive the “good patterns” shared by the top models.
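This averaging amounts to a simple thresholding of connection frequencies; a minimal NumPy sketch, assuming the top-50 models are encoded as binary connection matrices:

```python
import numpy as np

def average_architecture(top_archs, threshold=0.5):
    # top_archs: array of shape (50, K, K); entry [m, i, k] is 1 if model m
    # connects the output of layer i to the input of layer k + 1.
    freq = top_archs.mean(axis=0)            # per-connection agreement in [0, 1]
    return (freq >= threshold).astype(int)   # keep connections most models share
```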
" }, { "heading": "3 EXPERIMENTS AND ANALYSIS", "text": "" }, { "heading": "3.1 EXPERIMENT SETTING AND IMPLEMENTATION DETAILS", "text": "Searching. As discussed in Section 2.3, we focus on RL-based NAS (Zoph & Le, 2016) without weight sharing, i.e., training sampled models from scratch to full convergence. Our experiments are performed on 400 Cloud TPUs, with more than 25,000 architectures sampled during each search4.

After the search is complete, we focus on the “average architecture” from the top-50 sampled set, as defined in Section 2.3. Typical NAS works often find the few best models to have notable performance gaps between each other; different from them, our top-50 searched models have only minor NMSE deviations, in addition to their consistent architecture patterns. That endorses the stability of our search, and potentially reaffirms our smoothness assumption of the LISTA search space.

To emphasize, we have only performed NAS in the problem setting of Section 3.2 (synthetic, noiseless). For all remaining experiments, we directly re-train and evaluate the same average architecture on the corresponding training and testing sets. The main motivation is to avoid the hassle of re-doing the tedious model search for every new problem. Instead, we wish to show that by searching only once in a problem setting, the searched architecture not only performs best in the same setting, but can also be directly applied (with re-training) to other related settings with superior performance. Such a transferability study has also been a popular subject in the NAS literature (Zoph et al., 2018), where one searches on one dataset and re-trains the searched model on another similar dataset.

4Our codes, and searched & trained models are published on GitHub.

Training and evaluating sampled models. We adopt the same stage-wise training strategy from (Chen et al., 2018; Liu et al., 2019) and use the same default $K = 16$ layers. Every model is trained with batch size 128 and learning rate 0.0005 on 2 Cloud TPUs. In most experiments, the training, validation and testing sets are sampled i.i.d. without overlap, except for the training/testing mismatch experiments where we purposely have the training and testing sets sampled from different distributions. Also by default, we sample 102,400 samples for training, and 10,240 each for validation and testing, constituting data-abundant settings. A data-limited setting will be discussed later. For all synthetic data experiments, the model performance is evaluated with normalized MSE (NMSE) in decibels (Chen et al., 2018), averaged over the testing set: the lower, the better.

Baselines. We compare with three strong hand-crafted baselines: the original unrolled LISTA model; the densely connected variant of LISTA (every layer connected to all its preceding layers), denoted as Dense-LISTA; and another improved unrolled sparse recovery model from FISTA (Moreau & Bruna, 2017), called LFISTA, which could also be perceived as a model-based injection of more connections into LISTA. For a fair comparison with LFISTA, we adopt the same parameterization as in (4) and (6) to model its Nesterov acceleration term. We find this to consistently improve the training stability and final performance of LFISTA too. More details are in Appendix A.1." }, { "heading": "3.2 SEARCH AND EVALUATION IN THE NOISELESS SETTING", "text": "We perform our search on the simplest synthetic setting with no additive noise, i.e., $\varepsilon = 0$ in (1). We generate all our data with a randomly sampled dictionary $D \in \mathbb{R}^{m \times n}$ with $m = 250$, $n = 500$, following (Chen et al., 2018; Borgerding & Schniter, 2016). In Appendix A.2, we also investigate the effect of the problem size by varying $m$ and $n$, as well as the effect of an ill-conditioned dictionary (instead of Gaussian). We sample the entries of $D$ i.i.d. from the Gaussian distribution $D_{ij} \sim N(0, 1/m)$ and normalize its columns to unit $\ell_2$ norm. To sample the sparse vectors $x$, we decide each of its entries to be non-zero following the Bernoulli distribution with $p = 0.1$, and then generate its non-zero entries from $N(0, 1)$. We tried adjusting the dictionary formulation, sparsity level, or nonzero magnitude distributions in some experiments, and observed the conclusions to be highly consistent. We therefore only report this setting due to the space limit.

The results are reported in row [b] of Table 1, where we compare our searched top-50 average architecture with the three baselines. Our main observations are:

• The searched average model (significantly) outperforms hand-crafted ones. That proves our concept: much stronger models can be found by NAS in the LISTA-oriented design space. Particularly, it surpasses the vanilla LISTA by a large gap of 10.9 dB.

• The choices of neuron types are “embarrassingly uniform”. Even though we allow each of the 16 layers to independently select its neuron, none shows a preference for ReLU or leaky ReLU. All our top-50 architectures unanimously adopt soft-thresholding as the only neuron type for all their layers. It is somewhat of a surprise, and hints at the important role of the model-based prior in unrolled networks.

• Non-trivial connectivity patterns are discovered. While LFISTA and Dense-LISTA both improve over LISTA, our search result indicates that “the denser the better” is not the true claim here (Table 1, row [a]). That differs from the prevailing wisdom in computer vision models (Huang et al., 2017). More discussions on the searched connectivity are in Section 4." }, { "heading": "3.3 TRANSFERABILITY STUDY FOR THE SEARCHED MODEL", "text": "We now present the transferability study, where we re-train and evaluate the above-searched average architecture in more challenging settings. The baselines are also re-trained and compared in fair settings.

#1. Noisy measurement [c, d] We first add non-zero Gaussian noise $\varepsilon$ (i.i.d. across training and testing) to the synthetic data. We use two noise levels, corresponding to SNR = 40 dB and 20 dB, respectively.
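A sketch of producing such noisy measurements at a prescribed SNR, together with the NMSE metric used throughout; we assume the common energy-ratio-in-dB convention for SNR (NumPy, illustrative names):

```python
import numpy as np

def add_noise(b, snr_db, rng=np.random.default_rng(0)):
    # Scale white Gaussian noise so that 20 * log10(||b|| / ||noise||) = snr_db.
    noise = rng.standard_normal(b.shape)
    noise *= np.linalg.norm(b) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
    return b + noise

def nmse_db(x_hat, x_star):
    # Normalized MSE in decibels; the experiments average it over the test set.
    return 10 * np.log10(np.sum((x_hat - x_star) ** 2) / np.sum(x_star ** 2))
```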
Results in rows [c], [d] of Table 1 show trends similar to [a], though with small gaps.\n#2. Non-exactly sparse vector [e] We next challenge LISTA to recover the non-exactly sparse x∗ in (1). We generate x∗ from the Gamma distribution Γ(α, β) with α = 1.0 and β = 0.1. In this way, x∗ has a handful of coordinates with large magnitudes, while the remaining coordinates have small yet nonzero values, making it only approximately sparse. In order to disentangle different factors’ influences, no additive noise is added in this case (SNR = ∞). From row [e] of Table 1, the same trend is observed, and so is the superiority of the searched model.\n#3. Training-testing mismatch [f, g, h, i] We proceed to check more challenging and practical scenarios of model robustness, where the training and testing distributions may mismatch. We consider three cases in Table 1: (1) Transfer-Noise (Gaussian): we train models on the noiseless training set (SNR = ∞), and directly apply them to the two noisy testing sets, i.e., SNR = 40 dB as in row [f] and 20 dB as in [g]. (2) Transfer-Noise (Gaussian → salt & pepper): the models trained with SNR = ∞ encounter an unseen noise type in testing: salt & pepper noise with 1% density; (3) Perturbed dictionary5: we keep our training/validation sets generated by the same dictionary D (SNR = ∞), while re-generating a testing set in the following way: we sample a small ∆D from a Laplacian distribution L(0, 10^{-3}), form D̄ = D + ∆D to create an unseen basis, normalize D̄ to unit column norms, and then use it to generate new testing samples (no additive noise); a sketch of this construction is given at the end of this subsection.\nNot surprisingly, all models see degraded performance under those challenging settings, but our searched architecture remains the best (or tied for best) among all. LFISTA and Dense-LISTA are also able to outperform LISTA. Besides, the gaps between Dense-LISTA and ours are generally reduced in all those mismatch cases, implying that denser connections might benefit model robustness here.\n#4. Limited training data [j, k] We lastly verify a hypothesis raised in (Monga et al., 2019), that unrolling provides a model-based prior and helps train more generalizable networks in the data-limited training regime. To prove this concept, we reduce the training set to 10,240 and 5,120 samples, i.e., 10% and 5% of the default training size, respectively.6 The validation set is also reduced to 1,024 samples (10% of the default size), but the testing set size remains unchanged.\nWe observe two particularly interesting phenomena: (1) for the first time in our experiments, LFISTA outperforms Dense-LISTA notably, and the margin seems to widen as the training size decreases; (2) the searched architecture largely outperforms the other three when the training size is 10,240; yet it becomes comparable to LFISTA at the smaller 5,120 training size, albeit still clearly surpassing LISTA and Dense-LISTA. The lessons we learned seem to convey compound information: (i) LFISTA’s robust performance suggests that model-based unrolling indeed provides a strong inductive prior, which is advantageous under data-limited training; (ii) compared to [b− i] where Dense-LISTA has consistently strong performance, adding overly dense connections seems not to be favored in data-limited regimes; (iii) our searched architecture appears to be the right blend of model-based prior and data-driven capacity, which also possesses robustness to data-limited training."
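As referenced in #3 above, the following minimal NumPy sketch constructs the perturbed-dictionary test basis; reading L(0, 10^{-3}) as a zero-mean Laplacian with scale 10^{-3} is our assumption.

```python
import numpy as np

def perturbed_dictionary(D, scale=1e-3, seed=0):
    """Mismatch setting #3: add i.i.d. Laplacian noise to the training
    dictionary and re-normalize columns to obtain an unseen test basis."""
    rng = np.random.default_rng(seed)
    D_bar = D + rng.laplace(loc=0.0, scale=scale, size=D.shape)
    return D_bar / np.linalg.norm(D_bar, axis=0, keepdims=True)
```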
}, { "heading": "4 OPEN THE BOX: WHAT GOOD DESIGN PATTERNS ARE INSIDE?", "text": "A closer look at the averaged connectivity. Experiments in Section 3 indicate that significantly improved architectures can be spotted by NAS, whose strong performance generalizes in many unseen settings. We now dive into analyzing our searched average architecture. Figure 2 plots the average of the top-50 architectures before threshold, i.e., each value indicate the “percentage” of top-50 models having that connection (1 denotes all and 0 none). The following patterns are observed:\n• The consensus on connectivity is overall high. There are nearly 50% connections that over 90% of the top-50 models agree not to use. It seems the sparse recovery task obeys certain model-based prior and does not favor overly densely connections, as seen in Dense-LISTA.\n5Many applications (such as video-based) can be formulated as sparse coding models whose dictionaries are subject to small dynamic perturbations (e.g, slowly varied over time) (Liu et al., 2019; Zhao et al., 2011).\n6We did not go smaller since all performance would catastrophically drop when training size is below 5%.\n• Besides the default connection (diagonal line), every layer uses at least one extra skip connection. If counting in threshoding (0.5), all layers except layers 12, 14, 15 “activate” no less than 1/3 possible input skips, and more than half layers “activate” no less than 1/2. Those extra connections might represent or be interpreted as acceleration terms, potentially more sophisticated than studied in (Moreau & Bruna, 2017): we leave for future research.\n• Early and late layers have better consensus on connections, while middle layers (9-11) display some diversity of choice. The last layer (16) looks particularly special: it has connections from most preceding layers with high agreements. Our further experiments find that those connections to the last layer contribute a notable part to our searched performance: if we only pick those extra connections to layer-16 from our average architecture, and add them to LISTA/LFISTA, then we can immediately boost LISTA from -43.3 dB to -48.2 dB, and LFISTA from -47.3 dB to -48.4 dB (noiseless setting). Understanding the role of ”densely connected last layer” in unrolling seems to be an interesting open question.\nTransferring found pattern to other unrolling. We briefly explore whether the LISTA-found design patterns can generalize to other unrolling. We choose differentiable linearized ADMM (DLADMM) (Xie et al., 2019), another competitive unrolling model for sparse recovery, as the test bed. We directly transplant the connectivity in Figure 2, by adding those extra (non-diagonal) connections to augment D-LADMM, with no other change. We compare (a) the original D-LADMM; (b) random architectures with 42 extra connections sampled i.i.d. (the same total number as the transferred pattern); (c) D-LADMM with dense connections added only to the latter half layers (77 extra connections in total); (d) a densely-connected D-LADMM constructed similar to Dense-LISTA with 105 extra connections; (e) our transferred one with 42 extra searched connections. 
For the randomly connected models, we sample 5 patterns and report the average NMSEs.\nThe five models are trained and evaluated in three representative settings: (1) noiseless: the five models’ NMSEs are -54.2 / -55.0 / -55.3 / -55.4 / -55.6 dB, respectively; (2) noisy (SNR = 20): NMSEs -18.2 / -18.9 / -18.9 / -19.0 / -19.1 dB; (3) data-limited (training size 10,240): NMSEs -24.8 / -25.6 / -29.8 / -22.6 / -33.5 dB. Several observations and conclusions can be drawn:\n• The transferred connectivity pattern immediately boosts D-LADMM in all three settings;\n• The denser variants perform comparably with our augmented one when training data is sufficient, but degrade to even below the original in the data-limited regime;\n• The Dense-in-Latter-Half variant does not outperform the transferred pattern, and especially degrades the performance when the training data becomes limited. It seems some earlier-layer extra connectivity might help the trainability of the early layers in those cases. That indicates that our learned connectivity pattern is more subtle than a naive “denser for later layers” rule;\n• The randomly connected models with the same number of extra connections as the transferred one are in general on par with the Dense-in-Latter-Half variant, but even worse on limited data. This shows that although an appropriate connection percentage is a useful factor, it does not constitute the major value of our searched specific connectivity." }, { "heading": "5 DISCUSSIONS AND FUTURE WORK", "text": "While unrolling often yields reasonably good architectures, we seem to discover consistently better networks in this study. But are we denying the advances and promise of the unrolling field? Absolutely not. As veterans studying unrolling, we hope this work provides a reference and sparks more reflections on when unrolling is useful, how to improve it further, and (broadly) how to mingle model-based optimization and data-driven learning better. A good perspective was taken in (Monga et al., 2019), suggesting that with the original model-based optimization (Dittmer et al., 2019) induced as a prior, the unrolled architectures behave as “an intermediate state between generic networks and iterative algorithms”, possessing relatively low bias and variance simultaneously. Unrolled networks might be more data-efficient to learn (as supported by our data-limited training experiments). Meanwhile, if training data is sufficient, or if the linear sparse model is not an accurate prior, then unrolling does not have to be superior. We then recommend data-driven model search or selection, perhaps leveraging the unrolled model as a robust starting point in the design space.\nWe shall further comment that this study is on a completely different track from theoretically understanding LISTA as a learned optimization algorithm (Moreau & Bruna, 2017; Giryes et al., 2018; Chen et al., 2018; Liu et al., 2019; Aberdam et al., 2020). Those works’ interests mainly lie in interpreting the unrolled architecture as an early-truncated iterative optimizer (with adaptive weights): unrolling plays the central role in bridging this optimization-wise interpretability. Unrolling can also connect to more desired stability or robustness results from the optimization field (Aberdam et al., 2020; Heaton et al., 2020), which raises great research questions yet is beyond this paper’s scope."
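To ground the connectivity-transplant discussion in Section 4, below is a minimal PyTorch sketch of one unrolled LISTA-style layer that fuses gated skip inputs. For brevity it uses the naive-averaging fusion of Appendix A.3.1 (the paper's default LWA additionally learns fusion weights), and the class and variable names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class GatedLISTALayer(nn.Module):
    """One unrolled layer whose iterate input is an average of gated skips."""

    def __init__(self, m, n, theta_init=0.1):
        super().__init__()
        self.W1 = nn.Linear(m, n, bias=False)   # acts on the measurement b
        self.W2 = nn.Linear(n, n, bias=False)   # acts on the fused iterate
        self.theta = nn.Parameter(torch.tensor(theta_init))

    def forward(self, b, past_iterates, gates):
        # past_iterates: list of (batch, n) tensors x^(1..k); gates: 0/1 flags.
        chosen = [x for x, g in zip(past_iterates, gates) if g]
        x_tilde = torch.stack(chosen, dim=0).mean(dim=0)
        pre = self.W1(b) + self.W2(x_tilde)
        # Soft-thresholding, the neuron type unanimously picked by the search.
        return torch.sign(pre) * torch.clamp(pre.abs() - self.theta, min=0.0)
```

Transplanting a searched pattern then simply amounts to setting each layer's 0/1 `gates` from the thresholded average-architecture matrix.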
}, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 HAND-CRAFTED CONNECTIVITY BASELINES", "text": "Other than the original unrolled LISTA, we also compare our searched connectivity pattern with two hand-crafted connectivity baselines LFISTA and Dense-LISTA. Figure 3 shows a straightforward visualization for these two hard-crafted connectivity patterns." }, { "heading": "A.1.1 LFISTA", "text": "The formulation of FISTA (Beck & Teboulle, 2009) is\nt(k+1) = 1 + √ 1 + 4t(k+1)2\n2\ny(k+1) = x(k) + t(k) − 1 t(k+1) (x(k) − x(k−1)) (7)\nx(k+1) = ηλ/L ( y(k+1) +DT (b−Dy(k+1))/L ) .\nIt is natural to parameterize the second step as\ny(k+1) = c (k) 1 x (k) + c (k) 2 x (k−1), (8)\nwhich will reduce FISTA to LWA that is introduced in the main text. Of course we can plug y(k+1) into the third equation in (7), re-arrange and combine terms and parameterize the iteration as\nx(k+1) = ηλ/L ( W (k+1) 1 x (k) +W (k+1) 2 x (k−1) +Wbb ) .\nHowever, this will introduce more parameters than LWA. As we mainly consider LWA in the main text, we use the parameterization in (8) in this paper for fair comparison. Introducing FISTA to our design space would add 14 extra skip connections." }, { "heading": "A.1.2 DENSE-LISTA", "text": "Inspired by successful application of dense connections in deep learning (Huang et al., 2017), we manually enable all possible skip connections in our connectivity design space. For this denselyconnected LISTA instantiation, each layer will take the outputs of all its previous layers as input, leading to 105 extra skip connections comparing to the original LISTA." }, { "heading": "A.2 COMPLEMENTARY EXPERIMENTS", "text": "" }, { "heading": "A.2.1 DIFFERENT PROBLEM DIMENSIONALITIES", "text": "We are also curious whether our finding is also applicable to sparse coding problems with different dimensionalities. Here we follow the same noiseless setting as Section3.2 in the main text, but enlarge the dictionary to 512×1024, i.e. m = 512, n = 1024, and regenerate the training, valiation and test data with the new dictionary. As shown in the first row of Table 2, our results with new dimensionality setting are highly consistent to the noiseless measurement setting, indicating our searched pattern does not overfit to a specific problem size.\nA.2.2 ILL-CONDITIONED DICTIONARIES\nMost dictionaries used in the experiments are well-conditioned Gaussian random matrices that enjoy ideal properties such as natural incoherence. However, in real-world applications, this is too ideal to be true. Here we manually create ill-conditioned matrices and use them as the dictionaries. We sample two Gaussian matrices U ∈ R250×200 and V ∈ R200×500. Then we use D = UV as the dictionary, which is inherently low-rank. With an ill-conditioned dictionary, our searched architecture still outperforms other counterparts as demonstrated in the second row of Table 2." }, { "heading": "A.2.3 REAL-WORLD COMPRESSIVE SENSING", "text": "Beyond synthesis experiments with generated signals with ideal sparsity, we further re-used the architecture searched on the synthetic data, and train/test this architecture on Natural Im-\nage Compressive Sensing, following the setting in Section 4.2 of (Chen et al., 2018). We extract 16×16 patches from the natural images in BSD500 dataset and downsample the patches to 128-dimension measurements using a 128×256 Gaussian sensing matrix (e.g., compressive sensing ratio = 50%). The dictionary (256×512) is learned using a dictionary learning algorithm (Xu & Yin, 2013). 
We train LISTA, LFISTA, Dense-LISTA and the searched architecture on 400 training images from BSD500 and test the trained models on 11 standard testing images (Kulkarni et al., 2016). The results are shown in Table 3, where the average PSNR in decibels on the testing images is reported. We run three repetitions of training for all architectures and report the average PSNR over the three runs. We observe that although the architecture was searched on a synthetic data/task and then directly reused for training on a different new task with new data (natural image patches), the searched architecture still performs robustly, outperforming LISTA by 0.11 dB, and slightly surpassing LFISTA and Dense-LISTA.\nNote that the synthetic data and natural images have significant gaps. Although the transferred architecture is not dedicatedly searched for the compressive sensing problem, it still performs very robustly; a re-searched architecture on natural images could naturally only be expected to perform better." }, { "heading": "A.3 CONNECTION INJECTION OPTIONS", "text": "Other than the Learnable Weighted Average (LWA) we use by default in the main text, we also consider two other options to inject new connections." }, { "heading": "A.3.1 NAIVE AVERAGE (NA)", "text": "The most naive approach to fuse extra connectivity into the unrolled LISTA (4) is to simply take the average of the outputs of all chosen layers, before we apply the linear transform W^{(k)}. Formally, we replace x^{(k)} in (4) with\n$$\tilde{x}^{(k)} = \frac{1}{\sum_{i=1}^{k} g_{i,k}} \sum_{i=1}^{k} g_{i,k} \cdot x^{(i)}, \quad \text{(NA)}$$\nwhere g_{i,k} ∈ {0, 1} is a gate that controls whether the i-th iterate is chosen as an input to the k-th layer (same hereinafter). Denote the gates that connect to the k-th layer, i.e., all gates in (NA), as the vector $g_k = (g_{1,k}, \ldots, g_{k,k})^T$, and then concatenate them into a long ordered vector $g = [g_1^T, \ldots, g_{K-1}^T]^T$. All possible combinations of the gate values form the search space $\mathcal{H} = \{g\}$." }, { "heading": "A.3.2 MOMENTUM (MM)", "text": "We also consider a more complicated way to manipulate extra skip connections, inspired by momentum approaches in optimization (Sutskever et al., 2013). Several acceleration approaches for ISTA leverage momentum terms, such as the renowned Fast ISTA (FISTA) (Beck & Teboulle, 2009; Bioucas-Dias & Figueiredo, 2007). To allow for flexibly learnable momentum, we illustrate our idea with a simple example, where only the last two iterates are considered in the momentum. In this way, the ISTA iteration (3) is extended as\n$$x^{(k+1)} = \eta_{\theta^{(k)}}\left(x^{(k)} + \beta^{(k)} \nu^{(k)}\right), \quad (9)$$\nwhere β^{(k)} is a hyperparameter that controls the strength of the momentum. Note that (9) reduces to ISTA (3) if we set β^{(k)} ≡ 1/L, θ^{(k)} ≡ λ/L and ν^{(k)} = ∇_x f(x^{(k)}) = D^T(b − Dx^{(k)}). ν^{(k)} is the update direction that already takes momentum into consideration, involving the last two iterates:\n$$\nu^{(k)} = \gamma^{(k)} \nabla_x f(x^{(k)}) + (1 - \gamma^{(k)}) \nabla_x f(x^{(k-1)}) \quad (10)$$\n$$= \gamma^{(k)} D^T(b - Dx^{(k)}) + (1 - \gamma^{(k)}) D^T(b - Dx^{(k-1)}) \quad (11)$$\n$$= D^T b - \gamma^{(k)} D^T D x^{(k)} - (1 - \gamma^{(k)}) D^T D x^{(k-1)}. \quad (12)$$\nSubstituting (12) in (9) gives\n$$x^{(k+1)} = \eta_{\theta^{(k)}}\left( \beta^{(k)} D^T b + (I - \beta^{(k)}\gamma^{(k)} D^T D) x^{(k)} - \beta^{(k)}(1 - \gamma^{(k)}) D^T D x^{(k-1)} \right). \quad (13)$$
(13)" }, { "heading": "Denote Ŵ(k)b = β", "text": "(k)DT , Ŵ(k)1 = I−β(k)γ(k)DTD and Ŵ (k) 2 = −β(k)(1−γ(k))DTD, and then we get the untied version (do not share Ŵ(k)b ,Ŵ (k) 1 ,Ŵ (k) 2 within layers) of (8):\nx(k+1) = ηθ(k)(Ŵ (k) b b + Ŵ (k) 1 x (k) + Ŵ (k) 2 x (k−1)), (14)\nAs suggested by the seminal work (Liu et al., 2019), sharing the above weight matrices (corresponding to using layer-invariant β and γ) across layers does no hurt to the performance while reducing the parameter complexity. However, we introduce two scaling parameters α(k)1 and α (k) 2 (will be trained using data) to loosen the constraint, yielding a simpler form of (14):\nx(k+1) = ηθ(k)(Ŵbb + α (k) 1 Ŵ1x (k) + α (k) 2 Ŵ2x (k−1)). (15)\nExtending (15) to the general case where all previous iterates could be chosen by a gate to get involved in the momentum term ν(k), we get\nx(k+1) = ηθ(k) ( Ŵbb + ∑k i=1 gk+1−i,k · ( α (k) i Ŵix (k+1−i) )) , (MM)\nwhere the subscript in Ŵi means its input to come from the i-th last iterate and α (k) i is the step size parameter associated with Ŵi in the k-th layer. Step sizes α (k) i are initialized with 1." }, { "heading": "A.3.3 EMPIRICAL RESULTS", "text": "Before launching our large-scale search experiment, we first compare three of our connection injection methods (LWA, NA and MM) by randomly sampling about 500 architectures from our design space, and train each of them individually following our single model training protocol in Section 3. Since we are focusing on evaluating injection methods, we use soft-thresholding as our neuron type by default.\nNoiseless measurement and Gaussian noise. We first conduct experiments under three most common settings (noiseless measurement, noisy measurement with 40dB and 20dB additive Gaussian noise). As shown in Figure 5, NA always hurts the performance in all three settings. While MM could perform slightly better than LWA in noiseless setting, LWA clearly beats MM in noisy setting for both SNR=40dB and SNR=20dB. Based on our observation that NA constantly yields worse performance, we conduct the rest comparative experiments using only LWA and MM.\nMore complicated settings. Other than additive Gaussian noise cases, we also evaluate and compare LWA and MM on the two more challenging and realistic settings: non-exactly sparse vector and limited training data. We follow the same setting as described in Section 3.3. We use 10240 as training data size in limited training data setting. From NMSE distributions in both Figure 6 and noisy setting in Figure 5, we can clearly tell that LWA has better robustness when the same connectivity pattern is applied. At the same time, probably due to the larger number of parameters introduced, MM suffers more from limited training data.\nSince MM only wins in the easiest noiseless case with sufficient training data, we can safely tell that LWA has more practical values as it is consistently the winner of settings where we have different noise in the data and where the data is limited. Since we are not in an ideal world, we choose to conduct all our main experiments using LWA to fuse injected connections." }, { "heading": "A.4 SIDE CONNECTION PRUNING", "text": "Despite of our focus on adding skip connections in this paper, we also empirically evaluate what will happen if we also allow for pruning connections. As shown in Figure 7(a), we insert gate functions to side connections to decide whether they will be removed from the original unrolled LISTA. 
Specifically, we allow all the side connections except the first one to have the possibility of being removed from the architecture. As we use K = 16 LISTA layers as our default setting, our pruning search space consists of 2^{15} potential candidates for the original unrolled LISTA architecture. For our joint injection and pruning setting, the search space is enlarged to 2^{105} × 2^{15} = 2^{120}. Note that we also utilize random sampling here, as in Section A.3.3, and choose to use soft-thresholding for all layers." }, { "heading": "A.4.1 PRUNING FROM ORIGINAL LISTA", "text": "We first apply the side connection pruning search space to the original LISTA. As shown in Figure 7(b), removing side connections always does harm to the unrolled LISTA model, which echoes the finding observed by (Wang et al., 2016a)." }, { "heading": "A.4.2 JOINT INJECTION AND PRUNING", "text": "Next, we apply both connection injection and side connection removal at the same time. Our results in Figure 8(a) illustrate that although connection injection can greatly reduce the influence of side connection pruning, most architectures from this design space suffer from the removal of side connections. We also randomly select some models in this setting, keep their injected skip connections, and reconnect all their side connections. Figure 8(b) shows how the NMSEs change for LWA models. The arrows point from the “pruned side connections” case to the “full side connections” case, and the x-axis denotes the total number of skip connections and side connections. As we can see, we can almost always gain performance by reconnecting all side connections, which is aligned with our observations in A.4.1. Our observation here indicates that we should always keep all side connections." }, { "heading": "A.5 STABILITY OF AVERAGED ARCHITECTURE", "text": "To further demonstrate the stability of our averaged architecture, we recalculate an averaged architecture from the top-30 models, and compute its connection differences with the default top-50 average one. The difference map is visualized in Figure 9. Clearly, they are highly aligned and echo our observation that those top architectures already have high agreement on the connections to be activated." } ]
2021
null
SP:287426061a33fd5cef9b00660c06e98f3af010d2
[ "The paper studies a method for mitigating robust overfitting. Rice et al., and others have observed that when training a neural network robustly on say CIFAR10, then the robust test error often overfits, i.e., it has a U-shaped curve as a function of training epochs. Rice et al. demonstrated that early stopping the robust training enables state-of-the-art robust performance. However, to realize this performance, it is necessary to find a good early stopping point, which can be difficult (but can be found with testing on a validation set). The paper proposes an alternative to early stopping: smoothing the logits and smoothing the weights, by using two existing techniques, namely self-training and stochastic weight averaging. The paper finds that smoothing mitigates robust overfitting, and reports even a slight improvement over early stopping at the optimal point." ]
A recent study (Rice et al., 2020) revealed overfitting to be a dominant phenomenon in adversarially robust training of deep networks, and that appropriate early-stopping of adversarial training (AT) could match the performance gains of most recent algorithmic improvements. This intriguing problem of robust overfitting motivates us to seek more remedies. As a pilot study, this paper investigates two empirical means to inject more learned smoothening during AT: one leveraging knowledge distillation and self-training to smooth the logits, the other performing stochastic weight averaging (Izmailov et al., 2018) to smooth the weights. Despite the embarrassing simplicity, the two approaches are surprisingly effective and hassle-free in mitigating robust overfitting. Experiments demonstrate that by plugging them into AT, we can simultaneously boost the standard accuracy by 3.72% ∼ 6.68% and robust accuracy by 0.22% ∼ 2.03%, across multiple datasets (STL-10, SVHN, CIFAR-10, CIFAR-100, and Tiny ImageNet), perturbation types (ℓ∞ and ℓ2), and robustified methods (PGD, TRADES, and FGSM), establishing the new state-of-the-art bar in AT. We present systematic visualizations and analyses to dive into their possible working mechanisms. We also carefully exclude the possibility of gradient masking by evaluating our models’ robustness against transfer attacks. Codes are available at: https://github.com/VITA-Group/Alleviate-Robust-Overfitting.
[ { "affiliations": [], "name": "ERLY LEARNED SMOOTHENING" }, { "affiliations": [], "name": "Tianlong Chen" }, { "affiliations": [], "name": "Zhenyu Zhang" }, { "affiliations": [], "name": "Sijia Liu" }, { "affiliations": [], "name": "Shiyu Chang" }, { "affiliations": [], "name": "Zhangyang Wang" } ]
[ { "authors": [ "Maksym Andriushchenko", "Nicolas Flammarion" ], "title": "Understanding and improving fast adversarial training", "venue": "arXiv preprint arXiv:2007.02617,", "year": 2020 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": "arXiv preprint arXiv:1802.00420,", "year": 2018 }, { "authors": [ "Ben Athiwaratkun", "Marc Finzi", "Pavel Izmailov", "Andrew Gordon Wilson" ], "title": "There are many consistent explanations of unlabeled data: Why you should average", "venue": "arXiv preprint arXiv:1806.05594,", "year": 2018 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machinelearning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In 2017 ieee symposium on security and privacy (sp),", "year": 2017 }, { "authors": [ "Tianlong Chen", "Sijia Liu", "Shiyu Chang", "Yu Cheng", "Lisa Amini", "Zhangyang Wang" ], "title": "Adversarial robustness: From self-supervised pre-training to fine-tuning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Xuxi Chen", "Wuyang Chen", "Tianlong Chen", "Ye Yuan", "Chen Gong", "Kewei Chen", "Zhangyang Wang" ], "title": "Self-pu: Self boosted and calibrated positive-unlabeled training", "venue": "arXiv preprint arXiv:2006.11280,", "year": 2020 }, { "authors": [ "Minhao Cheng", "Qi Lei", "Pin-Yu Chen", "Inderjit Dhillon", "Cho-Jui Hsieh" ], "title": "Cat: Customized adversarial training for improved robustness", "venue": "arXiv preprint arXiv:2002.06789,", "year": 2020 }, { "authors": [ "Adam Coates", "Andrew Ng", "Honglak Lee" ], "title": "An analysis of single-layer networks in unsupervised feature learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Francesco Croce", "Matthias Hein" ], "title": "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Guneet S. Dhillon", "Kamyar Azizzadenesheli", "Jeremy D. Bernstein", "Jean Kossaifi", "Aran Khanna", "Zachary C. 
Lipton", "Animashree Anandkumar" ], "title": "Stochastic activation pruning for robust adversarial defense", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Gavin Weiguang Ding", "Luyu Wang", "Xiaomeng Jin" ], "title": "AdverTorch v0.1: An adversarial robustness toolbox based on pytorch", "venue": "arXiv preprint arXiv:1902.07623,", "year": 2019 }, { "authors": [ "Laurent Dinh", "Razvan Pascanu", "Samy Bengio", "Yoshua Bengio" ], "title": "Sharp minima can generalize for deep nets", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Minjing Dong", "Yanxi Li", "Yunhe Wang", "Chang Xu" ], "title": "Adversarially robust neural architectures", "venue": "arXiv preprint arXiv:2009.00902,", "year": 2020 }, { "authors": [ "Yinpeng Dong", "Fangzhou Liao", "Tianyu Pang", "Hang Su", "Jun Zhu", "Xiaolin Hu", "Jianguo Li" ], "title": "Boosting adversarial attacks with momentum", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Gintare Karolina Dziugaite", "Zoubin Ghahramani", "Daniel M Roy" ], "title": "A study of the effect of jpg compression on adversarial images", "venue": "arXiv preprint arXiv:1608.00853,", "year": 2016 }, { "authors": [ "Logan Engstrom", "Andrew Ilyas", "Anish Athalye" ], "title": "Evaluating and understanding the robustness of adversarial logit pairing", "venue": "arXiv preprint arXiv:1807.10272,", "year": 2018 }, { "authors": [ "Chaohao Fu", "Hongbin Chen", "Na Ruan", "Weijia Jia" ], "title": "Label smoothing and adversarial robustness", "venue": "arXiv preprint arXiv:2009.08233,", "year": 2020 }, { "authors": [ "Tommaso Furlanello", "Zachary Lipton", "Michael Tschannen", "Laurent Itti", "Anima Anandkumar" ], "title": "Born again neural networks", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Timur Garipov", "Pavel Izmailov", "Dmitrii Podoprikhin", "Dmitry P Vetrov", "Andrew G Wilson" ], "title": "Loss surfaces, mode connectivity, and fast ensembling of dnns", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Morgane Goibert", "Elvis Dohmatob" ], "title": "Adversarial robustness via adversarial label-smoothing", "venue": "arXiv preprint arXiv:1906.11567,", "year": 2019 }, { "authors": [ "Ian Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Edward Grefenstette", "Robert Stanforth", "Brendan O’Donoghue", "Jonathan Uesato", "Grzegorz Swirszcz", "Pushmeet Kohli" ], "title": "Strength in numbers: Trading-off robustness and computation via adversarially-trained ensembles", "venue": "arXiv preprint arXiv:1811.09300,", "year": 2018 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens van der Maaten" ], "title": "Countering adversarial images using input transformations", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Vipul Gupta", "Santiago Akle Serrano", "Dennis DeCoste" ], "title": "Stochastic weight averaging in parallel: Large-batch training that generalizes well", "venue": "arXiv preprint arXiv:2001.02312,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", 
"venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Matthias Hein", "Maksym Andriushchenko" ], "title": "Formal guarantees on the robustness of a classifier against adversarial manipulation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Matthias Hein", "Maksym Andriushchenko", "Julian Bitterwolf" ], "title": "Why relu networks yield highconfidence predictions far away from the training data and how to mitigate the problem", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Ting-Kuei Hu", "Tianlong Chen", "Haotao Wang", "Zhangyang Wang" ], "title": "Triple wins: Boosting accuracy, robustness and efficiency together by enabling input-adaptive inference", "venue": "arXiv preprint arXiv:2002.10025,", "year": 2020 }, { "authors": [ "Pavel Izmailov", "Dmitrii Podoprikhin", "Timur Garipov", "Dmitry Vetrov", "Andrew Gordon Wilson" ], "title": "Averaging weights leads to wider optima and better generalization", "venue": "arXiv preprint arXiv:1803.05407,", "year": 2018 }, { "authors": [ "Gauri Jagatap", "Animesh Basak Chowdhury", "Siddharth Garg", "Chinmay Hegde" ], "title": "Adversarially robust learning via entropic regularization", "venue": "arXiv preprint arXiv:2008.12338,", "year": 2020 }, { "authors": [ "Ziyu Jiang", "Tianlong Chen", "Ting Chen", "Zhangyang Wang" ], "title": "Robust pre-training by adversarial contrastive learning", "venue": "arXiv preprint arXiv:2010.13337,", "year": 2020 }, { "authors": [ "Daniel Kang", "Yi Sun", "Dan Hendrycks", "Tom Brown", "Jacob Steinhardt" ], "title": "Testing robustness against unforeseen adversaries", "venue": "arXiv preprint arXiv:1908.08016,", "year": 2019 }, { "authors": [ "A. Krizhevsky", "G. Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Master’s thesis,", "year": 2009 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Zhizhong Li", "Derek Hoiem" ], "title": "Learning without forgetting", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "F. Liao", "M. Liang", "Y. Dong", "T. Pang", "X. Hu", "J. 
Zhu" ], "title": "Defense against adversarial attacks using high-level representation guided denoiser", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Xuanqing Liu", "Minhao Cheng", "Huan Zhang", "Cho-Jui Hsieh" ], "title": "Towards robust neural networks via random self-ensemble", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Wesley J Maddox", "Pavel Izmailov", "Timur Garipov", "Dmitry P Vetrov", "Andrew Gordon Wilson" ], "title": "A simple baseline for bayesian uncertainty in deep learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Pratyush Maini", "Eric Wong", "J Zico Kolter" ], "title": "Adversarial robustness against the union of multiple perturbation models", "venue": "arXiv preprint arXiv:1909.04068,", "year": 2019 }, { "authors": [ "Chengzhi Mao", "Ziyuan Zhong", "Junfeng Yang", "Carl Vondrick", "Baishakhi Ray" ], "title": "Metric learning for adversarial robustness", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Jonathan Uesato", "Pascal Frossard" ], "title": "Robustness via curvature regularization, and vice versa", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Marius Mosbach", "Maksym Andriushchenko", "Thomas Trost", "Matthias Hein", "Dietrich Klakow" ], "title": "Logit pairing methods can fool gradient-based attacks", "venue": null, "year": 1810 }, { "authors": [ "Rafael Müller", "Simon Kornblith", "Geoffrey E Hinton" ], "title": "When does label smoothing help", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Preetum Nakkiran", "Gal Kaplun", "Yamini Bansal", "Tristan Yang", "Boaz Barak", "Ilya Sutskever" ], "title": "Deep double descent: Where bigger models and more data hurt", "venue": null, "year": 1912 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y. 
Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In NIPS Workshop on Deep Learning and Unsupervised Feature Learning", "year": 2011 }, { "authors": [ "Behnam Neyshabur", "Srinadh Bhojanapalli", "David McAllester", "Nati Srebro" ], "title": "Exploring generalization in deep learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Tianyu Pang", "Kun Xu", "Yinpeng Dong", "Chao Du", "Ning Chen", "Jun Zhu" ], "title": "Rethinking softmax cross-entropy loss for adversarial robustness", "venue": null, "year": 1905 }, { "authors": [ "Tianyu Pang", "Xiao Yang", "Yinpeng Dong", "Kun Xu", "Hang Su", "Jun Zhu" ], "title": "Boosting adversarial training with hypersphere embedding", "venue": "arXiv preprint arXiv:2002.08619,", "year": 2020 }, { "authors": [ "Henning Petzka", "Linara Adilova", "Michael Kamp", "Cristian Sminchisescu" ], "title": "A reparameterizationinvariant flatness measure for deep neural networks", "venue": "arXiv preprint arXiv:1912.00058,", "year": 2019 }, { "authors": [ "Leslie Rice", "Eric Wong", "J Zico Kolter" ], "title": "Overfitting in adversarially robust deep learning", "venue": "arXiv preprint arXiv:2002.11569,", "year": 2020 }, { "authors": [ "Jérôme Rony", "Luiz G Hafemann", "Luiz S Oliveira", "Ismail Ben Ayed", "Robert Sabourin", "Eric Granger" ], "title": "Decoupling direction and norm for efficient gradient-based l2 adversarial attacks and defenses", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ali Shafahi", "Amin Ghiasi", "Furong Huang", "Tom Goldstein" ], "title": "Label smoothing and logit squeezing: A replacement for adversarial training", "venue": "arXiv preprint arXiv:1910.11585,", "year": 2019 }, { "authors": [ "Ali Shafahi", "Mahyar Najibi", "Mohammad Amin Ghiasi", "Zheng Xu", "John Dickerson", "Christoph Studer", "Larry S Davis", "Gavin Taylor", "Tom Goldstein" ], "title": "Adversarial training for free", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "arXiv preprint arXiv:1409.1556,", "year": 2014 }, { "authors": [ "David Stutz", "Matthias Hein", "Bernt Schiele" ], "title": "Confidence-calibrated adversarial training: Generalizing to unseen attacks", "venue": "In Proc. of the International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "C. Szegedy", "V. Vanhoucke", "S. Ioffe", "J. Shlens", "Z. 
Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": "In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Jiaxi Tang", "Rakesh Shivanna", "Zhe Zhao", "Dong Lin", "Anima Singh", "Ed H Chi", "Sagar Jain" ], "title": "Understanding and improving knowledge distillation", "venue": "arXiv preprint arXiv:2002.03532,", "year": 2020 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Haotao Wang", "Tianlong Chen", "Shupeng Gui", "Ting-Kuei Hu", "Ji Liu", "Zhangyang Wang" ], "title": "Once-forall adversarial training: In-situ tradeoff between robustness and accuracy for free", "venue": "arXiv preprint arXiv:2010.11828,", "year": 2020 }, { "authors": [ "Jianyu Wang", "Haichao Zhang" ], "title": "Bilateral adversarial training: Towards fast training of more robust models against adversarial attacks", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Xin Wang", "Jie Ren", "Shuyun Lin", "Xiangming Zhu", "Yisen Wang", "Quanshi Zhang" ], "title": "A unified approach to interpreting and boosting adversarial transferability", "venue": "In ICLR,", "year": 2021 }, { "authors": [ "Yisen Wang", "Xingjun Ma", "James Bailey", "Jinfeng Yi", "Bowen Zhou", "Quanquan Gu" ], "title": "On the convergence and robustness of adversarial training", "venue": null, "year": 2019 }, { "authors": [ "Yisen Wang", "Difan Zou", "Jinfeng Yi", "James Bailey", "Xingjun Ma", "Quanquan Gu" ], "title": "Improving adversarial robustness requires revisiting misclassified examples", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Eric Wong", "Leslie Rice", "J. 
Zico Kolter" ], "title": "Fast is better than free: Revisiting adversarial training", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Dongxian Wu", "Yisen Wang", "Shu-tao Xia" ], "title": "Revisiting loss landscape for adversarial robustness", "venue": "arXiv preprint arXiv:2004.05884,", "year": 2020 }, { "authors": [ "Dongxian Wu", "Yisen Wang", "Shu-Tao Xia", "James Bailey", "Xingjun Ma" ], "title": "Skip connections matter: On the transferability of adversarial examples generated with resnets", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Cihang Xie", "Jianyu Wang", "Zhishuai Zhang", "Zhou Ren", "Alan Yuille" ], "title": "Mitigating adversarial effects through randomization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Weilin Xu", "David Evans", "Yanjun Qi" ], "title": "Feature squeezing: Detecting adversarial examples in deep neural networks", "venue": "arXiv preprint arXiv:1704.01155,", "year": 2017 }, { "authors": [ "Guandao Yang", "Tianyi Zhang", "Polina Kirichenko", "Junwen Bai", "Andrew Gordon Wilson", "Chris De Sa" ], "title": "Swalp: Stochastic weight averaging in low precision training", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Yuzhe Yang", "Guo Zhang", "Dina Katabi", "Zhi Xu" ], "title": "ME-Net: Towards effective adversarial robustness with matrix estimation", "venue": "In Proceedings of the 36th International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Li Yuan", "Francis EH Tay", "Guilin Li", "Tao Wang", "Jiashi Feng" ], "title": "Revisiting knowledge distillation via label smoothing regularization", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "arXiv preprint arXiv:1611.03530,", "year": 2016 }, { "authors": [ "Dinghuai Zhang", "Tianyuan Zhang", "Yiping Lu", "Zhanxing Zhu", "Bin Dong" ], "title": "You only propagate once: Accelerating adversarial training via maximal principle", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P Xing", "Laurent El Ghaoui", "Michael I Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Jingfeng Zhang", "Xilie Xu", "Bo Han", "Gang Niu", "Lizhen Cui", "Masashi Sugiyama", "Mohan Kankanhalli" ], "title": "Attacks which do not kill training make adversarial learning stronger", "venue": "In International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Jingfeng Zhang", "Jianing Zhu", "Gang Niu", "Bo Han", "Masashi Sugiyama", "Mohan Kankanhalli" ], "title": "Geometry-aware instance-reweighted adversarial training", "venue": "arXiv preprint arXiv:2010.01736,", "year": 2020 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nAdversarial training (AT) (Madry et al., 2018), i.e., training a deep network to minimize the worst-case training loss under input perturbations, is recognized as the current best defense method to adversarial attacks. However, one of its pitfalls was exposed by a recent work (Rice et al., 2020): in contrast to the commonly-held belief that overparameterized deep networks hardly overfit in standard training (Zhang et al., 2016; Neyshabur et al., 2017; Belkin et al., 2019), overfitting turns out to be a dominant phenomenon in adversarially robust training of deep networks. After a certain point in AT, e.g., immediately after the first learning rate decay, the robust test errors will only continue to substantially increase with further training (see Figure 1 bottom for example).\nThat surprising phenomenon, termed as “robust overfitting”, has been prevalent on many datasets and models. As Rice et al. (2020) pointed out, it poses serious challenges to assess recent algorithmic\n∗Equal Contribution.\n1\nadvances upon AT: by just using an earlier checkpoint, the performance of AT be drastically boosted to match the more recently reported state-of-the-arts (Yang et al., 2019b; Zhang et al., 2019b). Even worse, Rice et al. (2020) tested several other implicit and explicit regularization methods, including weight decay, data augmentation and semi-supervised learning; they reported that none of those alternatives seem to combat robust overfitting (stably) better than simple early stopping. The authors thus advocated using the validation set to select a stopping point, although the manual picking would inevitably trade off between selecting either the peak point of robust test accuracy or that of standard accuracy, which often do not coincide (Chen et al., 2020a).\nDoes there exist more principled, hands-off, and hassle-free mitigation for this robust overfitting, for us to further unleash the competency of AT? This paper explores two options along the way, that draw two more sophisticated ideas from enhancing standard deep models’ generalization. Both could be viewed as certain types of learned smoothening, and are directly plugged into AT:\n• Our first approach is to smooth the logits in AT via self-training, using knowledge distillation with the same model pre-trained as a self-teacher. The idea is inspired by two facts: (1) label smoothening (Szegedy et al., 2016) can calibrate the notorious overconfidence of deep networks (Hein et al., 2019), and that was found to improve their standard generalization; (2) label smoothening can be viewed as a special case of knowledge distillation (Yuan et al., 2020), and self-training can produce more semantic-aware and discriminative soft label “self-teachers” than naive label smoothening (Chen et al., 2020b; Tang et al., 2020). • Our second approach is to smooth the weights in AT via stochastic weight averaging (SWA)\n(Izmailov et al., 2018), a popular training technique that leads to better standard generalization than SGD, with almost no computational overhead. While SWA has not yet be applied to AT, it is known to find flatter minima which are widely believed to indicate stronger robustness (Hein & Andriushchenko, 2017; Wu et al., 2020a). Meanwhile, SWA could also be interpreted as a temporal model ensemble, and therefore might bring the extra robustness of ensemble defense (Tramèr et al., 2018; Grefenstette et al., 2018) with the convenience of a single model. 
These suggest that applying SWA is natural and promising for AT.\nTo be clear, neither knowledge-distillation/self-training nor SWA was invented by this paper: they have been utilized in standard training to alleviate (standard) overfitting and improve generalization, by fixing over-confidence and by finding flatter solutions, respectively. By introducing and adapting them to AT, our aim is to complement the existing study, demonstrating that while simpler regularizations were unable to fix robust overfitting, as Rice et al. (2020) found, our learned logit/weight smoothening can effectively regularize and mitigate it, without needing early stopping.\nExperiments demonstrate that by plugging the two techniques into AT, we can simultaneously boost the standard accuracy by 3.72% ∼ 6.68% and robust accuracy by 0.22% ∼ 2.03%, across multiple datasets (STL-10, SVHN, CIFAR-10, CIFAR-100, and Tiny ImageNet), perturbation types (ℓ∞ and ℓ2), and robustified methods (PGD, TRADES, and FGSM), establishing the new state-of-the-art in AT. As shown in the Figure 1 example, our method eliminates the robust overfitting phenomenon in AT, even when training up to 200 epochs. Our results imply that although robust overfitting is more challenging than standard overfitting, its mitigation is still feasible with properly-chosen, advanced regularizations that were developed for the latter. Overall, our findings join (Rice et al., 2020) in re-establishing the competitiveness of the simplest AT baseline." }, { "heading": "1.1 BACKGROUND WORK", "text": "Deep networks are easily fooled by imperceptible adversarial samples. To tackle this vulnerability, numerous defense methods were proposed (Goodfellow et al., 2015; Kurakin et al., 2016; Madry et al., 2018), yet many of them (Liao et al., 2018; Guo et al., 2018; Xu et al., 2017; Dziugaite et al., 2016; Dhillon et al., 2018; Xie et al., 2018; Jiang et al., 2020) were later found to owe their gains to training artifacts, such as obfuscated gradients (Athalye et al., 2018) caused by input transformation or randomization. Among them, adversarial training (AT) (Madry et al., 2018) remains one of the most competitive options. Recently, more improved defenses have been reported (Dong et al., 2018; Yang et al., 2019b; Mosbach et al., 2018; Hu et al., 2020; Wang et al., 2020a; Dong et al., 2020; Zhang et al., 2020a;b), with some of them also being variants of AT, e.g., TRADES (Zhang et al., 2019b) and AT with metric learning regularizers (Mao et al., 2019; Pang et al., 2019; 2020).\nWhile overfitting has become less of a practical concern in training deep networks nowadays, it was not noticed or addressed in the adversarial defense field until recently. An overfitting phenomenon was first observed in a few fast adversarial training methods (Zhang et al., 2019a; Shafahi et al., 2019b; Wong et al., 2020) based on FGSM (Goodfellow et al., 2015): e.g., sometimes the robust accuracy against a PGD adversary suddenly drops to nearly zero after some training. (Andriushchenko & Flammarion, 2020) suggested it to be rooted in those methods’ local linearization assumptions of the loss landscape in those “fast” AT. The recently reported robust overfitting (Rice et al., 2020) seems to raise a completely new challenge for classical AT (not fast): the model starts to irreversibly lose robustness after training with AT for a period, even though the double-descent generalization curves still seemed to hold (Belkin et al., 2019; Nakkiran et al., 2019). Among various options tried in Rice et al.
(2020), early-stopping was so far the only effective remedy found." }, { "heading": "2 METHODOLOGY", "text": "" }, { "heading": "2.1 LEARNING TO SMOOTH LOGITS IN AT", "text": "Rationale: Why. AT enforces models to be robust against adversarial attacks of a specific type and certain magnitudes. However, it has been shown to “overfit” the threat model “seen” during training (Kang et al., 2019; Maini et al., 2019; Stutz et al., 2020), and its gained robustness does not extrapolate to larger perturbations or unseen attack types. Stutz et al. (2020) hypothesized this to be an unwanted consequence of enforcing high-confidence predictions on adversarial examples, since high-confidence predictions are difficult to extrapolate to arbitrary regions beyond the examples seen during training. We generalize this observation: during AT, the attacks generated at every iteration can be naturally considered as continuously varying/evolving along with the model training. Therefore, we hypothesize that one source of robust overfitting might be that the model “overfits” the attacks generated in the early stage of AT and fails to generalize or adapt to the attacks in the late stage.\nTo alleviate the overconfidence problem, we adapt the label smoothening (LS) technique from standard training (Szegedy et al., 2016). LS creates uncertainty in the one-hot labels, by computing cross-entropy not with the “hard” targets from the dataset, but with a weighted mixture of these one-hot targets and the uniform distribution. This uncertainty helps alleviate the overconfidence problem (Hein et al., 2019) and improves the standard generalization. The idea of LS was previously investigated in other defense methods (Shafahi et al., 2019a; Goibert & Dohmatob, 2019), but much of the observed robustness gains were later attributed to obfuscated gradients (Athalye et al., 2018). Two recent works (Stutz et al., 2020; Cheng et al., 2020) have integrated LS with AT to inject label uncertainty: Stutz et al. (2020) used a convex combination of uniform and one-hot distributions as the target for the cross-entropy loss in AT, which resembles the LS regularizer, while Cheng et al. (2020) concurrently used an LS regularizer for AT.\nHowever, there is one pitfall of the naive LS in (Szegedy et al., 2016): over-smoothening labels in a data-blind way could cause loss of information in the logits, and hence weakened discriminative power of the trained models (Müller et al., 2019). That calls for a careful and adaptive balance between the discriminative capability and the confidence calibration of the model. In the context of AT, Stutz et al. (2020) crafted a perturbation-dependent parameter to explicitly control the transition from the one-hot to the uniform distribution when the attack magnitude grows from small to large. To identify more automated and principled means, we turn to another recent work (Yuan et al., 2020), which explicitly connected knowledge distillation (KD) (Hinton et al., 2015) to LS. The authors pointed out that LS equals a special case of KD using a virtual and hand-crafted teacher; on the contrary, conventional KD provides data-driven softened labels rather than simply mixing one-hot and uniform vectors. Together with many others (Furlanello et al., 2018; Chen et al., 2020b), these works demonstrated that using model-based and learned soft labels supplies much superior confidence calibration and logit geometry compared to the naive LS (Tang et al., 2020).\nFurthermore, (Furlanello et al., 2018; Chen et al., 2020b; Yuan et al., 2020) unanimously revealed that another strong teacher model with extra privileged information is NOT critical to the success of KD. Yuan et al. (2020) shows that even a poorly-trained teacher with much lower accuracy can still improve the student. Moreover, Chen et al. (2020b); Yuan et al. (2020) find a self-teacher to be sufficiently effective for KD, that is, using soft-logit outputs from the student itself, or designed manually, as the KD regularization to train itself (also called teacher-free KD
Together with many others (Furlanello et al., 2018; Chen et al., 2020b), these works demonstrated that using model-based and learned soft labels supplies much superior confidence calibration and logit geometry compared to the naive LS (Tang et al., 2020).\nFurthermore, (Furlanello et al., 2018; Chen et al., 2020b; Yuan et al., 2020) unanimously revealed that another strong teacher model with extra privileged information is NOT critical to the success of KD. Yuan et al. (2020) shows that even a poorly-trained teacher with much lower accuracy can still improve the student. Moreover, Chen et al. (2020b); Yuan et al. (2020) find self-teacher to be sufficiently effective for KD, that is, using soft-logit outputs from the student or designed manually as the KD regularization to train itself (also called teacher-free KD\n3\n(Tf-KD) in (Yuan et al., 2020)). These observations make the main cornerstone for our learned logit smoothening approach next.\nApproach: How We follow (Chen et al., 2020b; Yuan et al., 2020) to use self-training with the same model, but introduce one specific modification. The one model could be trained with at least two different ways: standard training, or robust training (AT or other cheaper ways; see ablation experiments). That can yield two self-teachers. We assume both to be available; and let x be the input, y the one-hot ground truth label, δ the adversarial perturbation bounded by `p norm ball with radius , and θr/θs the weights of the robust-/standard-trained self-teachers, respectively. Note the two self-teachers share the identical network architecture and training data with our target model. Our self-training smoothed loss function is expressed below (λ1 and λ2 are two hyperparameters):\nmin θ E(x,y)∈D {(1− λ1 − λ2) · max δ∈B (x) LXE(f(θ,x + δ), y)+\nλ1 · KDadv(f(θ,x + δ), f(θr,x + δ)) + λ2 · KDstd(f(θ,x + δ), f(θs,x + δ))}, (1)\nwhere LXE is robustified cross-entropy loss adopted in the original AT; KDadv and KDstd are the Kullback–Leibler divergence loss with the robust-trained and standard-trained self-teachers, respectively. λ1 = 0.5 and λ2 = 0.25 are default in all experiments. More details are in Appendix A2.1.\nFigure 2 visualizes an example of logit distributions, generated by naive LS (Szegedy et al., 2016), the Tf-KD regularizer using manually-designed self-teacher in (Yuan et al., 2020), as well our standard- and robust-trained teachers, respectively. We observe both standard and robust selfteachers are more discriminative than the other two baseline smoothenings, while the robust selfteacher is relatively more conservative as one shall expect." }, { "heading": "2.2 LEARNING TO SMOOTH WEIGHTS IN AT", "text": "Rationale: Why Another measure that is often believed to indicate the standard generalization is the flatness: the loss surface at the final learned weights for well-generalizing models is relatively “flat”. Similarly, Wu et al. (2020a) advocated that a flatter adversarial loss landscape shrinks the robustness generalization gap. This is aligned with (Hein & Andriushchenko, 2017) where the authors called it local Lipschitz and proved that the Lipschitz constant can be used to formally measure the robustness of machine learning models. 
The flatness preference of a robust model has been echoed by many empirical defense methods, such as hessian/curvature-based regularization (Moosavi-Dezfooli et al., 2019), gradient magnitude penalty (Wang & Zhang, 2019), smoothening with random noise (Liu et al., 2018), or entropy regularization (Jagatap et al., 2020). However, all those methods incur (sometimes heavy) computational or memory overhead, and many can cause standard accuracy drops, e.g., hessian/curvature-based methods (Gupta et al., 2020).\nStochastic weight averaging (SWA) (Izmailov et al., 2018) was proposed to enforce weight smoothness by simply averaging multiple checkpoints along the training trajectory. SWA is known to find much flatter solutions than SGD, is extremely easy to implement, improves standard generalization, and has almost no computational overhead. SWA has been successfully adopted in semi-supervised learning (Athiwaratkun et al., 2018), Bayesian inference (Maddox et al., 2019), and low-precision training (Yang et al., 2019a). In this paper, we introduce SWA to AT for the first time, in order to smooth the weights and find flatter minima that may improve adversarially robust generalization. Note that we choose SWA mainly due to its simplicity, as a proof-of-concept; extensively comparing alternative “flatness” regularizations is beyond the scope of the current work.\nOne additional bonus of adopting SWA in AT is its temporal ensemble effect. It has been widely observed (Tramèr et al., 2018; Grefenstette et al., 2018; Wu et al., 2020b; Wang et al., 2021) that training a model with attacks transferred from another model could reduce the “trivial robustness” caused by locally nonlinear loss surfaces; these works therefore constructed model ensembles for a stronger defense. SWA can be interpreted as approximating fast geometric ensembling (Garipov et al., 2018) by aggregating multiple checkpoint weights from different training times. Applying SWA to AT may therefore lead to stronger and more transferable attacks, and consequently a stronger defense due to ensembling, with the convenience of a single model.\nApproach: How. Following (Izmailov et al., 2018), applying SWA to AT is straightforward:\n$$W^{T}_{\mathrm{SWA}} = \frac{W^{T-1}_{\mathrm{SWA}} \times n + W^{T}}{n+1}, \qquad W^{T} = W^{T-1} + \Delta W^{T} \quad (2)$$\nwhere $T$ indexes the training epoch, $n$ is the number of past checkpoints to be averaged, $W_{\mathrm{SWA}}$ the averaged network weights, $W$ the current network weights, and $\Delta W$ the SGD update." }, { "heading": "3 EXPERIMENT AND ANALYSIS", "text": "Datasets We consider five datasets in our experiments: CIFAR-10, CIFAR-100 (Krizhevsky & Hinton, 2009), SVHN (Netzer et al., 2011), STL-10 (Coates et al., 2011) and Tiny-ImageNet (Deng et al., 2009). In all experiments, we randomly split the original training set into a training set and a validation set with a 9:1 ratio. Due to limited space, we place the SVHN results in Appendix A1.3. The ablation studies and the visualizations are mainly on CIFAR-10 and CIFAR-100.\nAttack Methods We consider three representative attacks: FGSM (Goodfellow et al., 2015), PGD (Madry et al., 2018), and TRADES (Zhang et al., 2019b). All of them are applied with the $(\ell_2, \epsilon = 128/255)$ or $(\ell_\infty, \epsilon = 8/255)$ setting as in (Madry et al., 2018) to generate adversarial samples. We use FGSM-1/PGD-10/TRADES-10 for training and PGD-20 for testing as the default setting, following Madry et al. (2018); Chen et al. (2020a). In addition, we use Auto-Attack (Croce & Hein, 2020) and CW Attack (Carlini & Wagner, 2017) for a more rigorous evaluation.
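Putting the two ingredients together, one training step can be sketched as below (hypothetical helper names; the inner maximization is abstracted as `pgd_attack`, the KD terms follow Eqn. (1) with $\lambda_1 = 0.5$, $\lambda_2 = 0.25$, and the running average follows Eqn. (2)):

```python
import torch
import torch.nn.functional as F

def kd(student_logits, teacher_logits, T=2.0):
    # KL divergence between temperature-softened distributions (Appendix A2.1, T = 2).
    return F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1), reduction="batchmean")

def train_step(model, t_rob, t_std, x, y, pgd_attack, opt, lam1=0.5, lam2=0.25):
    # One step of the smoothed-logit AT objective of Eqn. (1).
    x_adv = x + pgd_attack(model, x, y)                  # inner maximization
    logits = model(x_adv)
    with torch.no_grad():
        logits_rob, logits_std = t_rob(x_adv), t_std(x_adv)
    loss = ((1 - lam1 - lam2) * F.cross_entropy(logits, y)
            + lam1 * kd(logits, logits_rob) + lam2 * kd(logits, logits_std))
    opt.zero_grad(); loss.backward(); opt.step()

def swa_update(w_swa, model, n):
    # Eqn. (2): running average of the model weights over the past n checkpoints.
    for k, p in model.state_dict().items():
        w_swa[k] = (w_swa[k] * n + p) / (n + 1)
```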
More details on these attack settings are provided in Appendix A2.2.\nTraining and Evaluation Details For all experiments, we by default use ResNet-18 (He et al., 2016), with the exception of VGG-16 (Simonyan & Zisserman, 2014) and Wide-ResNet (Zagoruyko & Komodakis, 2016) adopted in Table 3. For training, we adopt an SGD optimizer with a momentum of 0.9 and weight decay of 5×10−4, for a total of 200 epochs, with a batch size of 128. The learning rate starts from 0.1 (0.01 for SVHN (Rice et al., 2020)) and is decayed to one-tenth at epochs 50 and 150, respectively. For Tiny-ImageNet, we train for 100 epochs and decay the learning rate at epochs 50 and 80, with other settings unchanged. The self-training KD regularization is applied throughout the entire training, and SWA is employed after the first learning rate decay (when robust overfitting usually starts to occur). We evaluate two common metrics that are widely adopted (Zhang et al., 2019b; Chen et al., 2020a): Standard Testing Accuracy (SA) and Robust Testing Accuracy (RA), which are the classification accuracies on the original and the attacked test sets, respectively." }, { "heading": "3.1 TACKLING ROBUST OVERFITTING", "text": "Superior Performance Across Datasets Table 1 demonstrates our proposal on STL-10, CIFAR-10, CIFAR-100, and Tiny-ImageNet. We consider PGD-AT (Madry et al., 2018) as the Baseline, and denote our two training techniques as +KDstd&adv (KD with standard and robust self-teachers) and +SWA, respectively. To numerically show the gap of robust overfitting, we report the best RA value obtained by early stopping during training, the final RA in the last epoch, and the difference between the final and best values. For reference, we also report the corresponding SA for the same best-RA checkpoint (not the best SA value throughout training), the final epoch SA, and their difference.\nWe first observe that robust overfitting prevails in all Baseline cases, with RA differences between final and best early-stopping values as large as 9.34% (CIFAR-10). In comparison, SA stays stable (with negative gaps on STL-10 and CIFAR-10) or continues to improve with more training epochs (with small positive gaps on CIFAR-100 and Tiny-ImageNet). Fortunately, the gaps were significantly reduced by +KDstd&adv, and further diminished to only 0.4% to 0.6% when SWA is also applied.\nFurther, we observe our methods to push the best RA higher by 0.22% ∼ 2.03%. For example, the best RA on Tiny-ImageNet rises from 19.81% to 21.84%. Meanwhile, since there is no longer robust overfitting early in training, the best-RA checkpoints are now selected from late epochs (often close to the end). Consequently, the SA values of the selected best-RA models are all substantially improved. For example, on CIFAR-100 the standard accuracy of our methods (at the best-RA checkpoint) surpasses the baseline’s best-RA checkpoint by 6.68%, and by 7.29% for the final checkpoint.\nFigure 3 further plots the RA and SA curves during training, from which we can clearly observe the diminishing of robust overfitting after applying KDstd&adv, SWA, and the combination of the two methods. The training curves improve steadily until the end, without compromising the best achievable RA results, and further lead to a much-improved trade-off between RA and SA by avoiding early stopping (e.g., selecting an early checkpoint for RA when SA might still be half-baked).\nAcross Perturbations and Robustified Methods Our success extends beyond PGD-AT. Table 2 presents more results under different perturbations (i.e.
$\ell_2$, $\ell_\infty$) and diverse robustified methods (i.e. FGSM in (Wong et al., 2020), TRADES in (Zhang et al., 2019b)). Consistent observations can be made: almost eliminated robust overfitting gaps, and significant gains in RA (by 0.61% ∼ 3.11%) and SA (by 1.80% ∼ 4.22%). We also compare with previous state-of-the-art results in (Rice et al., 2020) under the same setting. As shown in Table A7 (Appendix), our methods shrink the gap between the best-RA checkpoint and the final epoch RA from 5.70% to 0.17% and simultaneously improve RA by 4.50% and SA by 3.04%. More results can be found in Appendix A1.\nAcross Architectures and Improved Attacks Table 3 demonstrates the effectiveness of our methods across different architectures, including VGG-16, Wide-ResNet-34-4, and Wide-ResNet-34-10. Specifically, our methods reduce the drop of robust accuracy from 5.83% to 0.06% with VGG-16 on CIFAR-10, while achieving extra robust accuracy improvements of 2.57%, 1.69% and 1.23% with VGG-16, Wide-ResNet-34-4 and Wide-ResNet-34-10 on CIFAR-10, respectively. To further verify the improvements achieved by our methods, we conduct extra evaluations under improved attacks. As shown in Table 4, after applying the combination of KD and SWA, the overfitting problem is largely mitigated under both Auto-Attack (Croce & Hein, 2020) and the CW attack (Carlini & Wagner, 2017). Taking the CIFAR-10 $\ell_\infty$ adversary as an example, our approaches shrink the drop of robust accuracy from 7.04% to −0.09% under Auto-Attack, and from 14.96% to 0.79% under the CW attack, when comparing the best model to the eventually converged model. These results indicate that our methods generalize to different architectures and improved attacks.\nExcluding Obfuscated Gradients An often-argued “counterfeit” source of improved robustness is the reduced effectiveness of the generated adversarial examples due to obfuscated gradients (Athalye et al., 2018). To exclude this possibility, we show that our methods maintain improved robustness under unseen transfer attacks. To start with, the left figure in Figure 4 shows the transfer testing performance on an unseen robust model (here we use a separately robustified ResNet-50 with PGD-10 on CIFAR-100), using attacks generated by checkpoints from different epochs of the PGD-AT Baseline, Baseline + KDstd&adv, and Baseline + KDstd&adv + SWA. A higher robust accuracy on the unseen robust model corresponds to a weaker attack. Apparently, our methods consistently yield stronger and more transferable attacks, while the quality of the attacks generated by the baseline quickly drops with deteriorated transferability. Similarly, the right figure of Figure 4 transfers the attack from an unseen robust model to the above three methods, and our methods consistently defend better. These empirical pieces of evidence suggest that our RA gains are not a result of gradient masking." }, { "heading": "3.2 ABLATION STUDY AND VISUALIZATION", "text": "KDadv, KDstd and SWA We study the effectiveness of each component in logit and weight smoothening. We also specifically decompose KDstd&adv into two ablation methods: KDstd (by setting $\lambda_1 = 0$ in Eqn. (1)) and KDadv (by setting $\lambda_2 = 0$), respectively. Table 5 shows that KDstd, KDadv and SWA all substantially contribute to suppressing robust overfitting and enhancing the SA-RA trade-off.
We notice that while KDstd seems to (understandably) sacrifice the best RA a bit for improving SA, combining it with KDadv brings the RA compromise back and boosts them both.\nNaive LS versus learned logit smoothening As KD can be viewed as a learned version of LS (Yuan et al., 2020), we next quantify the benefit of using KDstd&adv compared to naive LS (Szegedy et al., 2016) and the teacher-free knowledge distillation regularization (Tf-KDreg) in (Yuan et al., 2020), all incorporated with PGD-AT on CIFAR-10. Table 6 shows that both naive LS and Tf-KDreg also reduce robust overfitting to some extent, but are far less competitive than KDstd&adv. Moreover, the robustness gains of naive LS and Tf-KDreg no longer hold under transfer attacks, implying that they are susceptible to obfuscated gradients. Further visualization in Figure 5 demonstrates that our methods smooth the logits without compromising the class-wise discriminative information, while naive LS and Tf-KD might suffer from weaker gradients here.\nQuality of Self-Teachers An extra price of our learned logit smoothening is the pre-training of self-teachers, although this is already quite common in the related literature (Chen et al., 2020a;b). To further reduce this burden, we explore whether high-quality, more expensive pre-training is necessary, and fortunately find that it is not the case. For example, Table 6 shows only a marginal performance difference when the robust self-teacher is pre-trained using FGSM or PGD-10/100.\nVisualizing Flatness and Local Linearity We expect SWA to find flatter minima for AT to improve its generalization, and we show that this indeed happens by visualizing the loss landscape w.r.t. both the input and weight spaces. Figure 6 shows that our methods notably flatten the rugged landscape w.r.t. the input space, compared to the PGD-AT baseline, which aligns with the robust generalization claims in (Moosavi-Dezfooli et al., 2019; Wu et al., 2020a). Figure 7 follows (Izmailov et al., 2018) to perturb the trained model in the weight space and show how the robust testing loss changes over the perturbation radius. We perturb 10 different random directions at each $\ell_2$ distance. Our methods also present better weight smoothness around the achieved local minima, which suggests improved generalization (Dinh et al., 2017; Petzka et al., 2019).\nWe additionally look at the local linearity measurement proposed in (Andriushchenko & Flammarion, 2020), which originally addresses catastrophic overfitting in fast AT. As shown in Figure A11, our methods also achieve consistently better local linearity." }, { "heading": "4 CONCLUSION", "text": "This paper takes one more step towards addressing the recently discovered robust overfitting issue in AT. We present two empirical solutions to smooth the logits and the weights, respectively; both are motivated by successful practices in improving standard generalization, and we adapt them for AT. While Rice et al. (2020) found simpler regularizations unable to fix robust overfitting, our learned smoothening regularization seems to largely mitigate it. Extensive experiments show that our proposal establishes new state-of-the-art performance for AT. While promising progress has been made, the underlying cause of robust overfitting is not yet fully explained.
Our future work will connect to more theoretical understandings of this issue (Wang et al., 2019; 2020b)." }, { "heading": "A1 MORE EXPERIMENT RESULTS", "text": "" }, { "heading": "A1.1 STATE-OF-THE-ART BENCHMARK ON CIFAR-100", "text": "We implement our methods with exactly the same setting as (Rice et al., 2020) and compare them with the baseline results reported in the original paper. As shown in Table A7 and Figure A8, our methods achieve great improvements in both robust accuracy and standard accuracy (1.64% in RA and 3.78% in SA for $\ell_\infty$; 4.50% in RA and 3.04% in SA for $\ell_2$), which establishes a new state-of-the-art bar.\nTable A7: Comparative experiment on CIFAR-100; we follow the same setting and compare with the baseline results from (Rice et al., 2020). Best refers to the model with the best robust accuracy during training and Final is an average of the accuracy over the last 5 epochs.\nAdversary | Norm | Radius | Settings | RA Best | RA Final | RA Diff. | SA Best | SA Final | SA Diff.\nPGD | $\ell_2$ | $\epsilon = 128/255$ | Baseline | 43.20 | 37.50±0.09 | 5.70 | 62.50 | 60.10±0.22 | 2.40\nPGD | $\ell_2$ | $\epsilon = 128/255$ | Our Methods | 47.70 | 47.53±0.03 | 0.17 | 65.54 | 65.56±0.01 | -0.02\nPGD | $\ell_\infty$ | $\epsilon = 8/255$ | Baseline | 28.10 | 21.40±0.39 | 6.70 | 52.70 | 54.10±0.23 | -1.40\nPGD | $\ell_\infty$ | $\epsilon = 8/255$ | Our Methods | 29.74 | 29.40±0.02 | 0.34 | 56.48 | 57.69±0.03 | -1.21\nFigure A8: Results of testing accuracy over epochs for ResNet-18 trained on CIFAR-100 with the same setting as Rice et al. (2020). Dashed lines show the standard accuracy (SA); solid lines represent the robust accuracy (RA). Blue, Green and Orange curves represent the performance of Baseline, KD and KD&SWA respectively." }, { "heading": "A1.2 T-SNE RESULT ON CIFAR-100", "text": "We visualize the learned feature space with all training images and their corresponding adversarial images from PGD-10 on CIFAR-100. As shown in Figure A9, our learned features have a larger distance between classes while being more clustered within the same class. The more distinguishable feature embedding justifies the improvement in both robust and standard accuracy." }, { "heading": "A1.3 SUPERIOR PERFORMANCE ON SVHN", "text": "We conduct our experiments on SVHN with the ResNet-18 (He et al., 2016) architecture and adopt an SGD optimizer with a momentum of 0.9 and a weight decay of 5 × 10−4 for 80 epochs in total, with a batch size of 128. The learning rate starts from 0.01 and follows a cosine annealing schedule. The results can be found in Table A8 and Figure A10. As we can see, the robust accuracy of the best checkpoint for $\ell_\infty$ is improved from 52.60% to 53.65%, and robust overfitting is alleviated by 6.30%. In the meantime, the standard accuracy has also been improved by 2.47%. The superior performance on SVHN aligns with the results on the other datasets, which shows the effectiveness of our methods.\nFigure A9: t-SNE results of different models trained on CIFAR-100. Dots and stars represent clean and adversarial images respectively. Red, Blue and Green represent classes A, B and C respectively. For each class, we visualize all training images and their corresponding adversarial images from PGD-10.
The left figure is Baseline; the right figure is Our Methods.\nFigure A10: Results of testing accuracy over epochs for ResNet-18 trained on SVHN. Dashed lines show the standard accuracy (SA); solid lines represent the robust accuracy (RA). Blue, Green, Black and Orange curves represent the performance of Baseline, KD, SWA and KD&SWA respectively.\nTable A8: Performance showing the occurrence of robust overfitting and the effectiveness of our proposed remedies with ResNet-18 on SVHN. The difference between the best and final robust accuracy indicates the degradation in performance during training. We pick the checkpoint with the best robust accuracy on the validation dataset. The best results and the minimum performance difference are marked in bold.\nSettings | RA Best | RA Final | RA Diff. | SA Best | SA Final | SA Diff.\nBaseline | 52.60 | 43.30 | 9.30 | 87.93 | 89.94 | -2.01\nBaseline + KDstd&adv | 52.93 | 48.46 | 4.47 | 87.62 | 91.36 | -3.74\nBaseline + KDstd&adv + SWA | 53.65 | 50.65 | 3.00 | 90.40 | 91.70 | -1.30" }, { "heading": "A1.4 LOCAL LINEARITY", "text": "As proposed by (Andriushchenko & Flammarion, 2020), the catastrophic overfitting problem is mainly due to a reduction of local linearity when adversarially training with FGSM (Rice et al., 2020). We therefore borrow this measurement for the robust overfitting scenario; it calculates the expected cosine similarity between the gradients at the original input and at a uniformly randomly perturbed one, as shown in Eqn. 3. The result shown in Figure A11 indicates that our methods help to slow the decline of local linearity, and that the maintenance of local linearity is also helpful for preventing robust overfitting.\n$$\mathbb{E}_{(x,y)\in\mathcal{D},\, \eta \sim U([-\epsilon,\epsilon]^d)} \big[ \cos\big( \nabla_x L(f(\theta, x), y),\, \nabla_x L(f(\theta, x+\eta), y) \big) \big] \quad (3)$$" }, { "heading": "The Local Linearity", "text": "" }, { "heading": "A1.5 ABLATION OF TRANSFER ATTACK", "text": "With the purpose of fully comparing the effects of label smoothing and knowledge distillation, we introduce a transfer attack with an unseen non-robust model of the same architecture, following the same setting as (Fu et al., 2020). A higher accuracy on the unseen model indicates a weaker attack generated by the corresponding setting, while a higher accuracy from the unseen model means better robustness. As shown in Table A9, only knowledge distillation shows a significant improvement in both accuracies compared with the baseline (PGD-AT) method. The strength of the generated adversarial images is improved by 4.32% and the robustness is improved by 2.91% for the best model. We also experiment with an unseen robust model and obtain consistent improvements. This improvement indicates that knowledge distillation introduces more discriminating information from teacher models, which is better than manually designed label smoothing methods.\nTable A9: Ablation of the transfer attack. The accuracy on the unseen model is the accuracy of the unseen model on adversarial images generated by the source models from different settings, and the accuracy from the unseen model means the opposite. We generated adversarial images for all test images on CIFAR-10 with $\ell_\infty$ PGD-20.
Baseline represents the PGD-AT method.\nSettings | Accuracy on unseen model (Best / Final) | Accuracy from unseen model (Best / Final)\nBaseline | 69.87 / 80.43 | 79.72 / 81.77\nLabel smoothing | 70.94 / 81.24 | 78.44 / 82.82\nTf-KDreg | 73.29 / 82.21 | 79.47 / 82.71\nKDstd&adv−PGD10 | 65.55 / 72.63 | 82.63 / 84.01" }, { "heading": "A1.6 SWA VERSUS ISWA", "text": "One possible extension of SWA is to replace $W^{T-1}$ with $W^{T-1}_{\mathrm{SWA}}$ in Eqn. 2. We name this variant iSWA and compare it with the original SWA in Table A10. Both weight smoothing techniques can mitigate robust overfitting, and iSWA performs slightly better on RA while sacrificing some SA.\nTable A10: Ablation of SWA on CIFAR-10. Best refers to the model selected with the best robust accuracy on the validation dataset, and Final is the model at the end of the training process.\nSettings | RA Best | RA Final | RA Diff. | SA Best | SA Final | SA Diff.\nKDstd&adv−PGD10 + SWA | 52.14 | 51.53 | 0.61 | 84.65 | 85.40 | -0.75\nKDstd&adv−PGD10 + iSWA | 52.36 | 52.33 | 0.03 | 83.17 | 83.54 | -0.37" }, { "heading": "A2 MORE METHODOLOGY AND IMPLEMENTATION DETAILS", "text": "" }, { "heading": "A2.1 KNOWLEDGE DISTILLATION", "text": "We state KD as follows:\n$$\mathrm{KD}(y, \hat{y}) = -H(t(y), t(\hat{y})) = -\sum_j t(y)_j \log t(\hat{y})_j$$\nwhere $t(y)_i = \frac{(y_i)^{1/T}}{\sum_j (y_j)^{1/T}}$, with $T = 2$ in our case, following the standard setting in (Hinton et al., 2015; Li & Hoiem, 2017)." }, { "heading": "A2.2 ADVERSARIAL TRAINING", "text": "Adversarial training incorporates generated adversarial examples into the training process and significantly improves the robustness of networks. In our paper, we implemented three different adversarial training schemes, FGSM, PGD and TRADES, which can be described by the optimization problems below: Eqn. 4 for FGSM and PGD, Eqn. 5 for TRADES.\n$$\min_\theta \mathbb{E}_{(x,y)\in\mathcal{D}} \max_{\delta \in B_\epsilon(x)} L(f(\theta, x+\delta), y) \quad (4)$$\n$$\min_\theta \mathbb{E}_{(x,y)\in\mathcal{D}} \Big[ L(f(\theta, x), y) + \beta \cdot \max_{\delta \in B_\epsilon(x)} \mathrm{KL}(f(\theta, x+\delta), f(\theta, x)) \Big] \quad (5)$$\nAs for the maximization process, FGSM perturbs the input with a single step in the direction of the sign of the gradient, and PGD is the iterative form of FGSM with random restarts, which works as follows:\n$$\delta^{t+1} = \mathrm{proj}_P\big( \delta^t + \alpha \cdot \mathrm{sgn}\big( \nabla_x L(f(\theta, x+\delta^t), y) \big) \big) \quad (6)$$\n$$\delta^{t+1} = \mathrm{proj}_P\big( \delta^t + \alpha \cdot \mathrm{sgn}\big( \nabla_x \mathrm{KL}(f(\theta, x+\delta^t), f(\theta, x)) \big) \big) \quad (7)$$\nTRADES (Eqn. 7) replaces the cross-entropy loss in PGD with the Kullback–Leibler divergence between the network outputs for the clean input and the adversarial input. Here $f$ is the network with parameters $\theta$, $(x, y)$ is the data, $\alpha$ is the step size, and $\delta^t$ is the adversarial perturbation after $t$ iterations. The perturbation is constrained in an $\ell_p$ norm ball, i.e. $\|\delta\|_p \le \epsilon$, which is realized by projection. We consider both $\ell_\infty$ and $\ell_2$ in our paper. For the $\ell_\infty$ adversary, we use $\epsilon = 8/255$ and $\alpha = 2/255$ for PGD and TRADES with 10 steps in training and 20 steps in testing, while using $\alpha = 7/255$ for FGSM during training. As for the $\ell_2$ adversary, we use $\epsilon = 128/255$ and $\alpha = 15/255$ with the same steps as the $\ell_\infty$ adversary in all three attack methods.\nFor a comprehensive evaluation, we consider two improved attacks, i.e., Auto-Attack (Croce & Hein, 2020) and the CW attack (Carlini & Wagner, 2017). We use the official implementation and default settings for Auto-Attack ($\ell_\infty$ with $\epsilon = 8/255$ and $\ell_2$ with $\epsilon = 128/255$) and the implementation from AdverTorch (Ding et al., 2019) for the CW attack with the same setting as Rony et al. (2019); specifically, 1 search step on $C$ with an initial constant of 0.1, 100 iterations for each search step, and a 0.01 learning rate.
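As a concrete reference for the PGD update of Eqn. (6) under the $\ell_\infty$ constraint (with $\epsilon = 8/255$, $\alpha = 2/255$ as above), a minimal sketch; this is illustrative and not the exact implementation used:

```python
import torch

def pgd_linf(model, loss_fn, x, y, eps=8/255, alpha=2/255, steps=10):
    # Iterated sign-gradient ascent with projection onto the l-infinity ball (Eqn. (6)).
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)  # random restart
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += alpha * grad.sign()   # ascent step
            delta.clamp_(-eps, eps)        # projection proj_P
    return delta.detach()
```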
Detailed links are provided below:\n• The Official Repository: https://github.com/fra31/auto-attack\n• The Leaderboard: https://robustbench.github.io/" } ]
2021
null
SP:7699e91f9cf5149401b1adf811bcdaa643869262
[ "This paper presents a novel Equivariant Continous COnvolution (ECCO) method for vehicle and pedestrian trajectory prediction. ECCO extends the previous continuous convolution method and makes it rotationally equivariant. To achieve that, they constrain the convolution kernel function K and make only the K(0, r) component freely trainable, and use it to derive the other components. They also propose a torus kernel that can be used on functions on circles. They evaluated their approach on two real-world trajectory prediction datasets, Argoverse and TrajNet++. They compared to basic baselines including constant velocity, nearest neighbor, and LSTM, as well as CtsConv (non-equivariant continuous convolution) and VectorNet. Results show that ECCO achieves significantly lower prediction errors than CtsConv and has fewer model weights. However, their prediction error is slightly higher than VectorNet at 2s and 3s." ]
Trajectory prediction is a critical part of many AI applications, for example, the safe operation of autonomous vehicles. However, current methods are prone to making inconsistent and physically unrealistic predictions. We leverage insights from fluid dynamics to overcome this limitation by considering internal symmetry in real-world trajectories. We propose a novel model, Equivariant Continuous COnvolution (ECCO), for improved trajectory prediction. ECCO uses rotationally equivariant continuous convolutions to embed the symmetries of the system. On both vehicle and pedestrian trajectory datasets, ECCO attains competitive accuracy with significantly fewer parameters. It is also more sample efficient, generalizing automatically from few data points in any orientation. Lastly, ECCO improves generalization with equivariance, resulting in more physically consistent predictions. Our method provides a fresh perspective towards increasing trust and transparency in deep learning models. Our code and data can be found at https://github.com/Rose-STL-Lab/ECCO.
[ { "affiliations": [], "name": "Robin Walters" }, { "affiliations": [], "name": "Jinxi Li" } ]
[ { "authors": [ "Alexandre Alahi", "Kratarth Goel", "Vignesh Ramanathan", "Alexandre Robicquet", "Li Fei-Fei", "Silvio Savarese" ], "title": "Social lstm: Human trajectory prediction in crowded spaces", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Matan Atzmon", "Haggai Maron", "Yaron Lipman" ], "title": "Point convolutional neural networks by extension operators", "venue": "ACM Transactions on Graphics (TOG),", "year": 2018 }, { "authors": [ "Erkao Bao", "Linqi Song" ], "title": "Equivariant neural networks and equivarification", "venue": "arXiv preprint arXiv:1906.07172,", "year": 2019 }, { "authors": [ "Peter W Battaglia", "Jessica B Hamrick", "Victor Bapst", "Alvaro Sanchez-Gonzalez", "Vinicius Zambaldi", "Mateusz Malinowski", "Andrea Tacchetti", "David Raposo", "Adam Santoro", "Ryan Faulkner" ], "title": "Relational inductive biases, deep learning, and graph networks", "venue": "arXiv preprint arXiv:1806.01261,", "year": 2018 }, { "authors": [ "Erik J Bekkers" ], "title": "B-spline CNNs on Lie groups", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Richard C Brown", "Gerton Lunter" ], "title": "An equivariant Bayesian convolutional network predicts recombination hotspots and accurately resolves binding", "venue": "motifs. Bioinformatics,", "year": 2019 }, { "authors": [ "Ming-Fang Chang", "John Lambert", "Patsorn Sangkloy", "Jagjeet Singh", "Slawomir Bak", "Andrew Hartnett", "De Wang", "Peter Carr", "Simon Lucey", "Deva Ramanan" ], "title": "Argoverse: 3d tracking and forecasting with rich maps", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Benjamin Chidester", "Minh N. Do", "Jian Ma" ], "title": "Rotation equivariance and invariance in convolutional neural networks", "venue": "arXiv preprint arXiv:1805.12301,", "year": 2018 }, { "authors": [ "Taco S. Cohen", "Max Welling" ], "title": "Group equivariant convolutional networks", "venue": "In International conference on machine learning (ICML),", "year": 2016 }, { "authors": [ "Taco S Cohen", "Mario Geiger", "Maurice Weiler" ], "title": "A general theory of equivariant CNNs on homogeneous spaces", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Taco S. 
Cohen", "Maurice Weiler", "Berkay Kicanaoglu", "Max Welling" ], "title": "Gauge equivariant convolutional networks and the icosahedral CNN", "venue": "In Proceedings of the 36th International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Sander Dieleman", "Jeffrey De Fauw", "Koray Kavukcuoglu" ], "title": "Exploiting cyclic symmetry in convolutional neural networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Nemanja Djuric", "Vladan Radosavljevic", "Henggang Cui", "Thi Nguyen", "Fang-Chieh Chou", "Tsung-Han Lin", "Jeff Schneider" ], "title": "Short-term motion prediction of traffic actors for autonomous driving using deep convolutional networks", "venue": "arXiv preprint arXiv:1808.05819,", "year": 2018 }, { "authors": [ "Carlos Esteves", "Christine Allen-Blanchette", "Xiaowei Zhou", "Kostas Daniilidis" ], "title": "Polar transformer networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Marc Finzi", "Samuel Stanton", "Pavel Izmailov", "Andrew Gordon Wilson" ], "title": "Generalizing convolutional neural networks for equivariance to Lie groups on arbitrary continuous data", "venue": "arXiv preprint arXiv:2002.12880,", "year": 2020 }, { "authors": [ "Fabian B Fuchs", "Daniel E Worrall", "Volker Fischer", "Max Welling" ], "title": "SE(3)-transformers: 3D roto-translation equivariant attention networks", "venue": "arXiv preprint arXiv:2006.10503,", "year": 2020 }, { "authors": [ "Jiyang Gao", "Chen Sun", "Hang Zhao", "Yi Shen", "Dragomir Anguelov", "Congcong Li", "Cordelia Schmid" ], "title": "Vectornet: Encoding HD maps and agent dynamics from vectorized representation", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Brian Hall" ], "title": "Lie groups, Lie algebras, and representations: an elementary introduction, volume 222", "venue": null, "year": 2015 }, { "authors": [ "Pedro Hermosilla", "Tobias Ritschel", "Pere-Pau Vázquez", "Àlvar Vinacua", "Timo Ropinski" ], "title": "Monte carlo convolution for learning on non-uniformly sampled point clouds", "venue": "ACM Transactions on Graphics (TOG),", "year": 2018 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Binh-Son Hua", "Minh-Khoi Tran", "Sai-Kit Yeung" ], "title": "Pointwise convolutional neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Gurtej Kanwar", "Michael S Albergo", "Denis Boyda", "Kyle Cranmer", "Daniel C Hackett", "Sébastien Racanière", "Danilo Jimenez Rezende", "Phiala E Shanahan" ], "title": "Equivariant flow-based sampling for lattice gauge theory", "venue": null, "year": 2003 }, { "authors": [ "Arne Kesting", "Martin Treiber", "Dirk Helbing" ], "title": "Enhanced intelligent driver model to access the impact of driving strategies on traffic capacity", "venue": "Philosophical Transactions of the Royal Society A: Mathematical,", "year": 1928 }, { "authors": [ "Thomas N. 
Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Anthony W Knapp" ], "title": "Lie groups beyond an introduction, volume 140", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Risi Kondor", "Shubhendu Trivedi" ], "title": "On the generalization of equivariance and convolution in neural networks to the action of compact groups", "venue": "In Proceedings of the 35th International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Parth Kothari", "Sven Kreiss", "Alexandre Alahi" ], "title": "Human trajectory forecasting in crowds: A deep learning perspective", "venue": "arXiv preprint arXiv:2007.03639,", "year": 2020 }, { "authors": [ "Huan Lei", "Naveed Akhtar", "Ajmal Mian" ], "title": "Octree guided CNN with spherical kernels for 3D point clouds", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Karel Lenc", "Andrea Vedaldi" ], "title": "Understanding image representations by measuring their equivariance and equivalence", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Yangyan Li", "Rui Bu", "Mingchao Sun", "Wei Wu", "Xinhan Di", "Baoquan Chen" ], "title": "Pointcnn: Convolution on x-transformed points", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Ming Liang", "Bin Yang", "Rui Hu", "Yun Chen", "Renjie Liao", "Song Feng", "Raquel Urtasun" ], "title": "Learning lane graph representations for motion forecasting", "venue": "arXiv preprint arXiv:2007.13732,", "year": 2020 }, { "authors": [ "Haggai Maron", "Or Litany", "Gal Chechik", "Ethan Fetaya" ], "title": "On learning sets of symmetric elements", "venue": "In Proceedings of the 37th International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Louis Albert Pipes" ], "title": "Car following models and the fundamental diagram of road traffic", "venue": "Transportation Research/UK/,", "year": 1966 }, { "authors": [ "Andrey Rudenko", "Luigi Palmieri", "Michael Herman", "Kris M Kitani", "Dariu M Gavrila", "Kai O Arras" ], "title": "Human motion trajectory prediction: A survey", "venue": "The International Journal of Robotics Research,", "year": 2020 }, { "authors": [ "Amir Sadeghian", "Vineet Kosaraju", "Agrim Gupta", "Silvio Savarese", "Alexandre Alahi" ], "title": "Trajnet: Towards a benchmark for human trajectory prediction", "venue": null, "year": 2018 }, { "authors": [ "Alvaro Sanchez-Gonzalez", "Jonathan Godwin", "Tobias Pfaff", "Rex Ying", "Jure Leskovec", "Peter W Battaglia" ], "title": "Learning to simulate complex physics with graph networks", "venue": "arXiv preprint arXiv:2002.09405,", "year": 2020 }, { "authors": [ "Connor Schenck", "Dieter Fox" ], "title": "Spnets: Differentiable fluid dynamics for deep neural networks", "venue": "In Conference on Robot Learning,", "year": 2018 }, { "authors": [ "Rajiv Shah", "Rob Romijnders" ], "title": "Applying deep learning to basketball trajectories", "venue": "arXiv preprint arXiv:1608.03793,", "year": 2016 }, { "authors": [ "Hang Su", "Varun Jampani", "Deqing Sun", "Subhransu Maji", "Evangelos Kalogerakis", "Ming-Hsuan Yang", "Jan Kautz" ], "title": "Splatnet: Sparse lattice networks for point cloud processing", "venue": "In Proceedings of the IEEE 
Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Nathaniel Thomas", "Tess Smidt", "Steven Kearnes", "Lusann Yang", "Li Li", "Kai Kohlhoff", "Patrick Riley" ], "title": "Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds", "venue": "arXiv preprint arXiv:1802.08219,", "year": 2018 }, { "authors": [ "Benjamin Ummenhofer", "Lukas Prantl", "Nils Thuerey", "Vladlen Koltun" ], "title": "Lagrangian fluid simulation with continuous convolutions", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Matthew Veres", "Medhat Moussa" ], "title": "Deep learning for intelligent transportation systems: A survey of emerging trends", "venue": "IEEE Transactions on Intelligent transportation systems,", "year": 2019 }, { "authors": [ "Rui Wang", "Robin Walters", "Rose Yu" ], "title": "Incorporating symmetry into deep dynamics models for improved generalization", "venue": "arXiv preprint arXiv:2002.03061,", "year": 2020 }, { "authors": [ "Shenlong Wang", "Simon Suo", "Wei-Chiu Ma", "Andrei Pokrovsky", "Raquel Urtasun" ], "title": "Deep parametric continuous convolutional neural networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Maurice Weiler", "Gabriele Cesa" ], "title": "General E(2)-equivariant steerable CNNs", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Maurice Weiler", "Fred A. Hamprecht", "Martin Storath" ], "title": "Learning steerable filters for rotation equivariant CNNs", "venue": "Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Xinshuo Weng", "Shangxuan Wu", "Fares Beainy", "Kris M Kitani" ], "title": "Rotational rectification network: Enabling pedestrian detection for mobile vision", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2018 }, { "authors": [ "Daniel E Worrall", "Stephan J Garbin", "Daniyar Turmukhambetov", "Gabriel J Brostow" ], "title": "Harmonic networks: Deep translation and rotation equivariance", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Wenxuan Wu", "Zhongang Qi", "Li Fuxin" ], "title": "Pointconv: Deep convolutional networks on 3d point clouds", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Shi Xingjian", "Zhourong Chen", "Hao Wang", "Dit-Yan Yeung", "Wai-Kin Wong", "Wang-chun Woo" ], "title": "Convolutional lstm network: A machine learning approach for precipitation nowcasting", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Yifan Xu", "Tianqi Fan", "Mingye Xu", "Long Zeng", "Yu Qiao" ], "title": "Spidercnn: Deep learning on point sets with parameterized convolutional filters", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 } ]
[ { "heading": null, "text": "Trajectory prediction is a critical part of many AI applications, for example, the safe operation of autonomous vehicles. However, current methods are prone to making inconsistent and physically unrealistic predictions. We leverage insights from fluid dynamics to overcome this limitation by considering internal symmetry in real-world trajectories. We propose a novel model, Equivariant Continous COnvolution (ECCO) for improved trajectory prediction. ECCO uses rotationallyequivariant continuous convolutions to embed the symmetries of the system. On both vehicle and pedestrian trajectory datasets, ECCO attains competitive accuracy with significantly fewer parameters. It is also more sample efficient, generalizing automatically from few data points in any orientation. Lastly, ECCO improves generalization with equivariance, resulting in more physically consistent predictions. Our method provides a fresh perspective towards increasing trust and transparency in deep learning models. Our code and data can be found at https://github.com/Rose-STL-Lab/ECCO." }, { "heading": "1 INTRODUCTION", "text": "Trajectory prediction is one of the core tasks in AI, from the movement of basketball players to fluid particles to car traffic (Sanchez-Gonzalez et al., 2020; Gao et al., 2020; Shah & Romijnders, 2016). A common abstraction underlying these tasks is the movement of many interacting agents, analogous to a many-particle system. Therefore, understanding the states of these particles, their dynamics, and hidden interactions is critical to accurate and robust trajectory forecasting.\nEven for purely physical systems such as in particle physics, the complex interactions among a large number of particles makes this a difficult problem. For vehicle or pedestrian trajectories, this challenge is further compounded with latent factors such as human psychology. Given these difficulties, current approaches require large amounts of training data and many model parameters. State-of-the-art methods in this domain such as Gao et al. (2020) are based on graph neural networks. They do not exploit the physical properties of system and often make predictions which are not self-consistent or physically meaningful. Furthermore, they\npredict a single agent trajectory at a time instead of multiple agents simultaneously.\n∗Equal Contribution\nOur model is built upon a key insight of many-particle systems pertaining to intricate internal symmetry. Consider a model which predicts the trajectory of cars on a road. To be successful, such a model must understand the physical behavior of vehicles together with human psychology. It should distinguish left from right turns, and give consistent outputs for intersections rotated with different orientation. As shown in Figure 1, a driver’s velocity rotates with the entire scene, whereas vehicle interactions are invariant to such a rotation. Likewise, psychological factors such as reaction speed or attention may be considered vectors with prescribed transformation properties. Data augmentation is a common practice to deal with rotational invariance, but it cannot guarantee invariance and requires longer training. Since rotation is a continuous group, augmentation requires sampling from infinitely many possible angles.\nIn this paper, we propose an equivariant continuous convolutional model, ECCO, for trajectory forecasting. Continuous convolution generalizes discrete convolution and is adapted to data in manyparticle systems with complex local interactions. 
Ummenhofer et al. (2019) designed a model using continuous convolutions for particle-based fluid simulations. Meanwhile, equivariance to group symmetries has proven to be a powerful tool for integrating physical intuition in physical science applications (Wang et al., 2020; Brown & Lunter, 2019; Kanwar et al., 2020). Here, we test the hypothesis that an equivariant model can also capture internal symmetry in non-physical human behavior. Our model utilizes a novel weight sharing scheme, torus kernels, and is rotationally equivariant.\nWe evaluate our model on two real-world trajectory datasets: the Argoverse autonomous vehicle dataset (Chang et al., 2019) and the TrajNet++ pedestrian trajectory forecasting challenge (Kothari et al., 2020). We demonstrate on-par or better prediction accuracy compared to baseline models and data augmentation, with fewer parameters, better sample efficiency, and stronger generalization properties. Lastly, we demonstrate theoretically and experimentally that our polar coordinate-indexed filters have lower equivariance discretization error due to being better adapted to the symmetry group.\nOur main contributions are as follows:\n• We propose Equivariant Continuous COnvolution (ECCO), a rotationally equivariant deep neural network that can capture internal symmetry in trajectories.\n• We design ECCO using a novel weight sharing scheme based on orbit decomposition and polar coordinate-indexed filters. We implement equivariance for both the standard and the regular representation L2(SO(2)).\n• On the benchmark Argoverse and TrajNet++ datasets, ECCO demonstrates comparable accuracy while enjoying better generalization, fewer parameters, and better sample complexity." }, { "heading": "2 RELATED WORK", "text": "Trajectory Forecasting For vehicle trajectories, classic models in transportation include the Car-Following model (Pipes, 1966) and the Intelligent Driver model (Kesting et al., 2010). Deep learning has also received considerable attention; for example, Liang et al. (2020) and Gao et al. (2020) use graph neural networks to predict vehicle trajectories. Djuric et al. (2018) use rasterizations of the scene with a CNN. See the review paper by Veres & Moussa (2019) for deep learning in transportation. For human trajectory modeling, Alahi et al. (2016) propose Social LSTM to learn these human-human interactions. TrajNet (Sadeghian et al., 2018) and TrajNet++ (Kothari et al., 2020) introduce benchmarks for human trajectory forecasting. We refer readers to Rudenko et al. (2020) for a comprehensive survey. Nevertheless, many deep learning models are purely data-driven. They require large amounts of data, have many parameters, and can generate physically inconsistent predictions.\nContinuous Convolution Continuous convolutions over point clouds (CtsConv) have been successfully applied to classification and segmentation tasks (Wang et al., 2018; Lei et al., 2019; Xu et al., 2018; Wu et al., 2019; Su et al., 2018; Li et al., 2018; Hermosilla et al., 2018; Atzmon et al., 2018; Hua et al., 2018). More recently, a few works have used continuous convolution for modeling trajectories or flows. For instance, Wang et al. (2018) use CtsConv for inferring flow on LIDAR data. Schenck & Fox (2018) and Ummenhofer et al. (2019) model fluid simulation using CtsConv. Closely related to our work is Ummenhofer et al. (2019), who design a continuous convolution network for particle-based fluid simulations.
However, they use a ball-to-sphere mapping which is not well-adapted for rotational equivariance, and they only encode 3 frames of input. Graph neural networks (GNNs) are a related strategy which have been used for modeling particle system dynamics (Sanchez-Gonzalez et al., 2020). GNNs are also permutation invariant, but they do not natively encode relative positions and local interactions as a CtsConv-based network does.\nEquivariant and Invariant Deep Learning Developing neural nets that preserve symmetries has been a fundamental task in image recognition (Cohen et al., 2019b; Weiler & Cesa, 2019; Cohen & Welling, 2016a; Chidester et al., 2018; Lenc & Vedaldi, 2015; Kondor & Trivedi, 2018; Bao & Song, 2019; Worrall et al., 2017; Cohen & Welling, 2016b; Weiler et al., 2018; Dieleman et al., 2016; Maron et al., 2020). Equivariant networks have also been used to predict dynamics: for example, Wang et al. (2020) predict fluid flow using Galilean equivariance, but only for gridded data. Fuchs et al. (2020) use SE(3)-equivariant transformers to predict trajectories for a small number of particles as a regression task. As in this paper, both Bekkers (2020) and Finzi et al. (2020) address the challenge of parameterizing a kernel over continuous Lie groups. Finzi et al. (2020) apply their method to trajectory prediction on point clouds using a small number of points following strict physical laws. Worrall et al. (2017) also parameterize convolutional kernels using polar coordinates, but map these onto a rectilinear grid for application to image data. Weng et al. (2018) address rotational equivariance by inferring a global canonicalization of the input. Similar to our work, Esteves et al. (2018) use functions evenly sampled on the circle; however, their features are only at a single point, whereas we assign feature vectors to each point in a point cloud. Thomas et al. (2018) introduce Tensor Field Networks, which are SO(3)-equivariant continuous convolutions. Unlike our work, both Worrall et al. (2017) and Thomas et al. (2018) define their kernels using harmonic functions. Our weight sharing method using orbits and stabilizers is simpler, as it does not require harmonic functions or Clebsch-Gordan coefficients. Unlike previous work, we implement a regular representation for the continuous rotation group SO(2) which is compatible with pointwise nonlinearities and enjoys an empirical advantage over irreducible representations." }, { "heading": "3 BACKGROUND", "text": "We first review the necessary background on continuous convolution and rotational equivariance." }, { "heading": "3.1 CONTINUOUS CONVOLUTION", "text": "Continuous convolution (CtsConv) generalizes the discrete convolution to point clouds. It provides an efficient and spatially aware way to model the interactions of nearby particles. Let $f^{(i)} \in \mathbb{R}^{c_{in}}$ denote the feature vector of particle $i$. Thus $f$ is a vector field which assigns to the points $x^{(i)}$ a vector in $\mathbb{R}^{c_{in}}$. The kernel of the convolution $K : \mathbb{R}^2 \to \mathbb{R}^{c_{out} \times c_{in}}$ is a matrix field: for each point $x \in \mathbb{R}^2$, $K(x)$ is a $c_{out} \times c_{in}$ matrix. Let $a$ be a radial local attention map with $a(r) = 0$ for $r > R$. The output feature vector $g^{(i)}$ of particle $i$ from the continuous convolution is given by\n$$g^{(i)} = \mathrm{CtsConv}_{K,R}(x, f; x^{(i)}) = \sum_j a(\|x^{(j)} - x^{(i)}\|)\, K(x^{(j)} - x^{(i)}) \cdot f^{(j)}. \quad (1)$$\nCtsConv is naturally equivariant to permutation of labels and is translation invariant. Equation 1 is closely related to graph neural networks (GNNs) (Kipf & Welling, 2017; Battaglia et al., 2018), which are also permutation invariant.
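A minimal sketch of Equation 1 (unbatched, in NumPy; a practical implementation would restrict the sum to neighbors within radius R rather than looping over all particles):

```python
import numpy as np

def cts_conv(x, f, K, a, R, i):
    """Eq. 1: g_i = sum_j a(|x_j - x_i|) K(x_j - x_i) f_j.

    x: (N, 2) positions; f: (N, c_in) features;
    K: map from R^2 to a (c_out, c_in) matrix; a: radial attention, a(r) = 0 for r > R.
    """
    g = np.zeros(K(np.zeros(2)).shape[0])
    for j in range(len(x)):
        rel = x[j] - x[i]
        r = np.linalg.norm(rel)
        if r <= R:
            g += a(r) * (K(rel) @ f[j])
    return g
```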
Here the graph is dynamic and implicit, with nodes $x^{(i)}$ and edges $e_{ij}$ whenever $\|x^{(i)} - x^{(j)}\| < R$. Unlike a GNN, which applies the same weights to all neighbours, the kernel $K$ depends on the relative position vector $x^{(i)} - x^{(j)}$." }, { "heading": "3.2 ROTATIONAL EQUIVARIANCE", "text": "Continuous convolution is not naturally rotationally equivariant. Fortunately, we can translate the techniques for rotational equivariance in CNNs to continuous convolutions. We use the language of Lie groups and their representations; for more background, see Hall (2015) and Knapp (2013).\nMore precisely, we denote the symmetry group of 2D rotations by $SO(2) = \{\mathrm{Rot}_\theta : 0 \le \theta < 2\pi\}$. As a Lie group, it has a group structure, $\mathrm{Rot}_{\theta_1} \circ \mathrm{Rot}_{\theta_2} = \mathrm{Rot}_{(\theta_1+\theta_2) \bmod 2\pi}$, which is a continuous map with respect to its topological structure. As a manifold, $SO(2)$ is homeomorphic to the circle $S^1 \cong \{x \in \mathbb{R}^2 : \|x\| = 1\}$. The group $SO(2)$ can act on a vector space $\mathbb{R}^c$ by specifying a representation map $\rho : SO(2) \to GL(\mathbb{R}^c)$, which assigns to each element of $SO(2)$ an element of the set of invertible $c \times c$ matrices $GL(\mathbb{R}^c)$. The map $\rho$ must be a homomorphism:
It is important to remember that the input, output, and hidden layers are all vector fields over the particles. Oftentimes, there is also\nenvironmental information available in the form of road lane markers. Denote marker positions by xmap and direction vectors by vmap. This data is thus also a particle field, but static.\nTo design an equivariant network, one must choose the group representation. This choice plays an important role in shaping the learned hidden states. We focus on two representations of SO(2): ρ1 and ρreg. The representation ρ1 is that of our input features, and ρreg is for the hidden layers. For ρ1, we constrain the kernel in Equation 1. For ρreg, we further introduce a new operator, convolution with torus kernels.\nIn order to make continuous convolution rotationally equivariant, we translate the general condition for discrete CNNs developed in Weiler & Cesa (2019) to continuous convolution. We define the convolution kernel K in polar coordinates K(θ, r). Let Rcout and Rcin be SO(2)-representations ρout and ρin respectively, then the equivariance condition requires the kernel to satisfy\nK(θ + φ, r) = ρout(Rotθ)K(φ, r)ρin(Rot −1 θ ). (3)\nImposing such a constraint for continuous convolution requires us to develop an efficient weight sharing scheme for the kernels, which solve Equation 3." }, { "heading": "4.2 WEIGHT SHARING BY ORBITS AND STABILIZERS.", "text": "Given a point x ∈ R2 and a group G, the set Ox = {gx : g ∈ G} is the orbit of the point x. The set of orbits gives a partition of R2 into the origin and circles of radius r > 0. The set of group elements Gx = {g : gx = x} fixing x is called the stabilizer of the point x. We use the orbits and stabilizers to constrain the weights of K. Simply put, we share weights across orbits and constrain weights according to stabilizers, as shown in Figure 3-Left.\nThe ray D = {(0, r) : r ≥ 0} is a fundamental domain for the action of G = SO(2) on base space R2. That is, D contains exactly one point from each orbit. We first define K(0, r) for each (0, r) ∈ D. Then we compute K(θ, r) from K(0, r) by setting φ = 0 in Equation 3 as such\nK(θ, r) = ρout(Rotθ)K(0, r)ρin(Rot −1 θ ). (4)\nFor r > 0, the group acts freely on (0, r), i.e. the stabilizer contains only the identity. This means that Equation 3 imposes no additional constraints on K(0, r). Thus K(0, r) ∈ Rcout×cin is a matrix of freely learnable weights.\nFor r = 0, however, the orbit O(0,0) is only one point. The stabilizer of (0, 0) is all of G, which requires\nK(0, 0) = ρout(Rotθ)K(0, 0)ρin(Rot −1 θ ) for all θ. (5)\nThus K(0, 0) is an equivariant per-particle linear map ρin → ρout.\nWe can analytically solve Equation 5 for K(0, 0) using representation theory. Table 1 shows the unique solutions for different combinations of ρ1 and ρreg. For details see subsection A.3.\nNote that 2D and 3D rotation equivariant continuous convolutions are implemented in Worrall et al. (2017) and Thomas et al. (2018) respectively. They both use harmonic functions which require expensive evaluation of analytic functions at each point. Instead, we provide a simpler solution. We require only knowledge of the orbits, stabilizers, and input/output representations. Additionally, we bypass Clebsch-Gordon decomposition used in Thomas et al. (2018) by mapping directly between the representations in our network. Next, we describe an efficient implementation of equivariant continuous convolution." 
}, { "heading": "4.3 POLAR COORDINATE KERNELS", "text": "Rotational equivariance informs our kernel discretization and implementation. We store the kernel K of continuous convolution as a 4-dimensional tensor by discretizing the domain. Specifically, we\ndiscretize R2 using polar coordinates with kθ angular slices and kr radial steps. We then evaluate K at any (θ, r) using bilinear interpolation from four closest polar grid points. This method accelerates computation since we do not need to use Equation 4 to repeatedly compute K(θ, r) from K(0, r). The special case of K(0, 0) results in a polar grid with a “bullseye” at the center (see Figure 3-Left).\nWe discretize angles finely and radii more coarsely. This choice is inspired by real-world observation that drivers tend to be more sensitive to the angle of an incoming car than its exact distance, Our equivariant kernels are computationally efficient and have very few parameters. Moreover, we will discuss later in Section 4.5 that despite discretization, the use of polar coordinates allows for very low equivariance error." }, { "heading": "4.4 HIDDEN LAYERS AS REGULAR REPRESENTATIONS", "text": "Regular representation ρreg has shown better performance than ρ1 for finite groups (Cohen et al., 2019a; Weiler & Cesa, 2019). But the naive ρreg = {ϕ : G→ R} for an infinite groupG is too large to work with. We choose the space of square-integrable functions L2(G). It contains all irreducible representations of G and is compatible with pointwise non-linearities.\nDiscretization. However, L2(SO(2)) is still infinite-dimensional. We resolve this by discretizing the manifold S1 underlying SO(2) into kreg even intervals. We represent functions f ∈ L2(SO(2)) by the vector of values [f(Rot2πi/kreg)]0≤i<kreg . We then evaluate f(Rotθ) using interpolation.\nWe separate the number of angular slices kθ and the size of the kernel kreg. If we tie them together and set kθ = kreg, this is equivalent to implementing cyclic group Ckreg symmetry with the regular representation. Then increasing kθ would also increases kreg, which incurs more parameters.\nConvolution with Torus Kernel. In addition to constraining the kernel K of Equation 1 as in ρ1, ρreg poses an additional challenge as it is a function on a circle. We introduce a new operator from functions on the circle to functions on the circle called a torus kernel.\nFirst, we replace input feature vectors in f ∈ Rc with elements of L2(SO(2)). The input feature f becomes a ρreg-field, that is, for each x ∈ R2, f (x) is a real-value function on the circle S1 → R. For the kernel K, we replace the matrix field with a map K : R2 → ρreg⊗ ρreg. Instead of a matrix, K(x) is a map S1 × S1 → R. Here (φ1, φ2) ∈ S1 × S1 plays the role of continuous matrix indices and we may consider K(x)(φ1, φ2) ∈ R analogous to a matrix entry. Topologically, S1 × S1 is a torus and hence we call K(x) a torus kernel. The matrix multiplication K(x) · f (x) in Equation 1\nmust be replaced by the integral transform K(x) } f (x)(φ2) = ∫ φ1∈S1 K(x)(φ2, φ1)f (x)(φ1)dφ1, (6) which is a linear transformation L2(SO(2)) → L2(SO(2)). K(θ, r)(φ2, φ1) denotes the (φ2, φ1) entry of the matrix at point x = (θ, r), see the illustration in Figure 3-Right. We compute Equation 3 for ρreg → ρreg as K(Rotθ(x))(φ2, φ1) = K(x)(φ2 − θ, φ1 − θ). We can use the same weight sharing scheme as in Section 4.2.\n4.5 ANALYSIS: EQUIVARIANCE ERROR\nThe practical value of equivariant neural networks has been demonstrated in a variety of domains. 
However, theoretical analysis (Kondor & Trivedi, 2018; Cohen et al., 2019a; Maron et al., 2020) of continuous Lie group symmetries is usually performed assuming continuous functions and using the integral representation of the convolution operator. In practice, discretization can cause the model f to be not exactly equivariant, with some equivariance error (EE), $\mathrm{EE} = \|f(T(x)) - T'(f(x))\|$, with respect to group transformations T and T′ of the input and output respectively (Wang et al., 2020, A6). Rectangular grids are well-suited to translations, but poorly suited to rotations. The resulting equivariance error can be so large as to practically undermine the advantages of a theoretically equivariant network.

Our polar-coordinate indexed circular filters are designed specifically to adapt well to the rotational symmetry. In Figure 4, we demonstrate experimentally that the expected EE is inversely proportional to the number of angular slices kθ. For example, choosing kθ ≥ 16 gives very low EE and does not increase the number of parameters. We also prove for ρ1 features that the equivariance error is low in expectation. See Appendix A.6 for the precise statement and proof.

Proposition. Let $\alpha = 2\pi/k_\theta$, let $\bar\theta$ be $\theta$ rounded to the nearest value in $\mathbb{Z}\alpha$, and let $\hat\theta = |\theta - \bar\theta|$. Let $F = \mathrm{CtsConv}_{K,R}$ and $T = \rho_1(\mathrm{Rot}_\theta)$. For some constant C, the expected EE is bounded by

$$\mathbb{E}_{K,f,x}\big[\|T(F(f,x)) - F(T(f), T(x))\|\big] \leq |\sin(\hat\theta)|\,C \leq 2\pi C/k_\theta.$$" }, { "heading": "5 EXPERIMENTS", "text": "In this section, we present experiments in two different domains, traffic and pedestrian trajectory prediction, where interactions among agents are frequent and influential. We first introduce the statistics of the datasets and the evaluation metrics. Secondly, we compare different feature encoders and hidden feature representation types. Lastly, we compare our model with baselines." }, { "heading": "5.1 EXPERIMENTAL SET UP", "text": "Dataset We discuss the performance of our models on (1) Argoverse autonomous vehicle motion forecasting (Chang et al., 2019), a recently released vehicle trajectory prediction benchmark, and (2) the TrajNet++ pedestrian trajectory forecasting challenge (Kothari et al., 2020). For Argoverse, the task is to predict three-second trajectories based on all vehicles' histories over the past 2 seconds. We split 32K samples from the validation set as our test set.

Baselines We compare against several state-of-the-art baselines used on Argoverse and TrajNet++. We use three original baselines from Chang et al. (2019): Constant Velocity, Nearest Neighbour, and Long Short Term Memory (LSTM). We also compare with a non-equivariant continuous convolutional model, CtsConv (Ummenhofer et al., 2019), and a hierarchical GNN model, VectorNet (Gao et al., 2020). Note that VectorNet only predicts a single agent at a time, which is not directly comparable with ours. We include VectorNet as a reference nevertheless.

Evaluation Metrics We use domain-standard metrics to evaluate the trajectory prediction performance, including (1) Average Displacement Error (ADE): the average L2 displacement error over all 30 timestamps between prediction and ground truth, and (2) Displacement Error at t seconds (DE@ts): the L2 displacement error at a given timestep t. DE@ts for the last timestamp is also called the Final Displacement Error (FDE). For Argoverse, we report ADE and DE@ts for t ∈ {1, 2, 3}. For TrajNet++, we report ADE and FDE."
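As a concrete reference for these metrics, here is a small NumPy sketch of ADE and DE@ts as defined above; the function names and the example index mapping are our own (indices assume a 30-step horizon at 10 Hz as described in the setup):

```python
import numpy as np

def ade(pred: np.ndarray, gt: np.ndarray) -> float:
    """Average Displacement Error: mean L2 error over all timestamps.
    pred, gt: arrays of shape (T, 2), or (N, T, 2) for batches."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def de_at(pred: np.ndarray, gt: np.ndarray, t_index: int) -> float:
    """Displacement Error at a given timestep index (DE@ts).
    The final index gives the Final Displacement Error (FDE)."""
    return float(np.linalg.norm(pred[..., t_index, :] - gt[..., t_index, :], axis=-1).mean())

# With a 30-step, 10 Hz horizon (Argoverse's 3-second task),
# DE@1s, DE@2s, DE@3s correspond to indices 9, 19 and 29.
```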
}, { "heading": "5.2 PREDICTION PERFORMANCE COMPARISON", "text": "We evaluate the performance of different models from multiple aspects: forecasting accuracy, parameter efficiency and the physical consistency of the predictions. The goal is to provide a comprehensive view of various characteristics of our model to guide practical deployment. See Appendix A.9 for an additional ablative study.\nForecasting Accuracy We compare the trajectory prediction accuracy across different models on Argoverse and TrajNet++. Table 2 displays the prediction ADE and FDE comparision. We can see that ECCO with the regular representation ρreg achieves on par or better forecasting accuracy on both datasets. Comparing ECCO and a non-equivariant counterpart of our model CtsConv, we observe a significant 14.8% improvement in forecasting accuracy. Compare with data augmentation, we also observe a 9% improvement over the non-equivariant CtsConv trained on random-rotationaugmented dataset. These results demonstrate the benefits of incorporating equivariance principles into deep learning models.\nParameter Efficiency Another important feature in deploying deep learning models to embedded systems such as autonomous vehicles is parameter efficiency. We report the number of parameters in each of the models in Table 2. Compare with LSTM, our forecasting performance is significantly better. CtsConv and VectorNet have competitive forecasting performance, but uses much more parameters than ECCO. By encoding equivariance into CtsConv, we drastically reduce the number of the parameters needed in our model. For VectorNet, Gao et al. (2020) only provided the number of parameters for their encoder; a fair decoder size can be estimated based on MLP using 59 polygraphs with each 64 dimensions as input, predicting 30 timestamps, that is 113K.\nRuntime and Memory Efficiency We compare the runtime and memory usage with VectorNet Gao et al. (2020). Since VectorNet is not open-sourced, we compare with a version of VectorNet that we implement. Firstly, we compare floating point operations (FLOPs). VectorNet reported n × 0.041 GFLOPs for the encoder part of their model alone, where n is the number of predicted vehicles. We tested ECCO on a scene with 30 vehicles and approximately 180 lane marker nodes, which is similar to the test conditions used to compute FLOPs in Gao et al. (2020). Our full model used 1.03 GFLOPs versus 1.23 GFLOPs for VectorNet’s encoder. For runtimes on the same test machine, ECCO runs 684ms versus 1103ms for VectorNet. Another disadvantage of VectorNet is needing to reprocess the scene for each agent, whereas ECCO predicts all agents simultaneously. For memory usage in the same test ECCO uses 296 MB and VectorNet uses 171 MB.\nSample Efficiency A major benefit of incorporating the inductive bias of equivariance is to improve the sample efficiency of learning. For each sample which an equivariant model is trained on, it learns as if it were trained on all transformations of that sample by the symmetry group (Wang et al., 2020, Prop 3). Thus ECCO requires far fewer samples to learn from. In Figure 5, we plot a comparison of validation FDE over number of training samples and show the equivariant models converge faster.\nPhysical Consistency We also visualize the predictions from ECCO and non-equivariant\nCtsConv, as shown in Figure 6. Top row visualizes the predictions on the original data. In the bottom row, we rotate the whole scene by 160◦ and make predictions on rotated data. 
This mimics covariate shift in the real world. Note that CtsConv predicts inconsistently: a right turn in the top row, but a left turn after the scene has been rotated. We see similar results for TrajNet++ (see Figure 8 in Appendix A.10)." }, { "heading": "6 CONCLUSION", "text": "We propose Equivariant Continuous Convolution (ECCO), a novel model for trajectory prediction that imposes symmetries as inductive biases. On two real-world vehicle and pedestrian trajectory datasets, ECCO attains competitive accuracy with significantly fewer parameters. It is also more sample efficient, generalizing automatically from few data points in any orientation. Lastly, equivariance gives ECCO improved generalization performance. Our method provides a fresh perspective towards increasing trust in deep learning models through guaranteed properties. Future directions include applying equivariance to probabilistic predictions with many possible trajectories, and developing a faster version of ECCO that does not require autoregressive computation. Moreover, our methods may be generalized from 2-dimensional space to $\mathbb{R}^n$. The orbit-stabilizer weight sharing scheme and discretized regular representation may be generalized by replacing SO(2) with SO(n), and polar coordinate kernels may be generalized using spherical coordinates." }, { "heading": "ACKNOWLEDGEMENT", "text": "This work was supported in part by a Google Faculty Research Award, NSF Grant #2037745, and the U.S. Army Research Office under Grant W911NF-20-1-0334. Walters is supported by a Postdoctoral Fellowship from the Institute for Experiential AI at the Roux Institute." }, { "heading": "A APPENDIX", "text": "A.1 CONTINUOUS CONVOLUTION INVOLVING ρreg

This section is a more detailed version of Section 4.4.

Define the input f to be a ρreg-field, that is, a distribution over $\mathbb{R}^2$ valued in ρreg. Define $K : \mathbb{R}^2 \to \rho_{reg} \otimes \rho_{reg}$. After identifying SO(2) with its underlying manifold $S^1$, we can identify $K(x)$ as a map $S^1 \times S^1 \to \mathbb{R}$ and $f(x) : S^1 \to \mathbb{R}$. Define the integral transform

$$\big(K(x) \circledast f(x)\big)(\varphi_2) = \int_{\varphi_1 \in S^1} K(x)(\varphi_2, \varphi_1) f(x)(\varphi_1)\, d\varphi_1.$$

For $y \in \mathbb{R}^2$, define the convolution $g = K \star f$ by

$$g(y) = \int_{x \in \mathbb{R}^2} K(x) \circledast f(x + y)\, dx.$$

The $\circledast$-operation parameterizes linear maps ρreg → ρreg and is thus analogous to matrix multiplication. If we chose to restrict our choice of κ to $\kappa(\varphi_2, \varphi_1) = \tilde\kappa(\varphi_2 - \varphi_1)$ for some function $\tilde\kappa : S^1 \to \mathbb{R}$, then this would become the circular convolution operation.

The SO(2)-action on ρreg by $\mathrm{Rot}_\theta(f)(\varphi) = f(\varphi - \theta)$ induces an action on $\kappa : S^1 \times S^1 \to \mathbb{R}$ by

$$\mathrm{Rot}_\theta(\kappa)(\varphi_2, \varphi_1) = \kappa(\varphi_2 - \theta, \varphi_1 - \theta).$$

This, in turn, gives an action on the torus-field K by

$$\mathrm{Rot}_\theta(K)(x)(\varphi_2, \varphi_1) = K(\mathrm{Rot}_{-\theta}(x))(\varphi_2 - \theta, \varphi_1 - \theta).$$

Thus Equation 3, the convolutional kernel constraint, implies that K is equivariant if and only if

$$K(\mathrm{Rot}_\theta(x))(\varphi_2, \varphi_1) = K(x)(\varphi_2 - \theta, \varphi_1 - \theta).$$

We use this to define a weight sharing scheme as described in Section 4.2. The cases of continuous convolution ρ1 → ρreg and ρreg → ρ1 may be derived similarly." }, { "heading": "A.2 COMPLEXITY OF CONVOLUTION WITH TORUS KERNEL", "text": "The complexity class of the convolution with torus kernel is $O(n \cdot k_{reg}^2 \cdot c_{out} \cdot c_{in})$, where n is the number of particles, the regular representation is discretized into kreg pieces, and the input and output contain cin and cout copies of the regular representation respectively. We are not counting the complexity of the interpolation operation for looking up $K(\theta, r)$."
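As an illustration of how the discretized ⊛ operation could look in practice, below is a minimal NumPy sketch of the torus-kernel transform of Equation 6, approximating the integral over φ1 by a Riemann sum on the kreg grid points; the array shapes and names are our own assumptions, not the paper's code.

```python
import numpy as np

def torus_apply(K: np.ndarray, f: np.ndarray) -> np.ndarray:
    """Single-channel discretization of Equation 6 at one kernel point x.
    K: (k_reg, k_reg), with K[p, q] ~ K(x)(phi_p, phi_q);
    f: (k_reg,), the regular-representation feature sampled on S^1."""
    k_reg = f.shape[0]
    return (K @ f) * (2 * np.pi / k_reg)  # Riemann sum over phi_1

def torus_apply_channels(K: np.ndarray, f: np.ndarray) -> np.ndarray:
    """Multi-channel version: K has shape (c_out, c_in, k_reg, k_reg)
    and f has shape (c_in, k_reg). The per-particle cost is
    O(c_out * c_in * k_reg^2), matching the complexity stated in A.2."""
    k_reg = f.shape[-1]
    return np.einsum('oipq,iq->op', K, f) * (2 * np.pi / k_reg)
```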
}, { "heading": "A.3 EQUIVARIANT PER-PARTICLE LINEAR LAYERS", "text": "Since this operation is pointwise, unlike positive radius continuous convolution, we cannot map between different irreducible representations of SO(2). Consider as input a ρin-field I and output a ρout-field O where ρin and ρout are finite-dimensional representations of SO(2). We define O(i) = WI(i) using the same W , an equivariant linear map, for each particle 1 ≤ i ≤ N . Denote the decomposition of ρin and ρout into irreducible representations of SO(2) as ρin ∼= ρi11 ⊕ . . .⊕ρinn and ρout ∼= ρj11 ⊕ . . .⊕ ρjnn respectively. By Schur’s lemma, the equivariant linear map W : ρin → ρout is defined by a block diagonal matrix with blocks {Wk}nk=1 where Wk is an ik × jk matrix. That is, maps between different irreducible representations are zero and each map ρk → ρk is given by a single scalar.\nPer-particle linear mapping ρ1 → ρreg and ρ1 → ρreg. Since the input and output features are ρ1-fields, but the hidden features may be represented by ρreg, we need mappings between ρ1 and ρreg. In all cases we pair continuous convolutions with dense per-particle mappings, this we must describe per-particle mappings between ρ1 and ρreg.\nBy the Peter-Weyl theorem, L2(SO(2)) ∼= ⊕∞\ni=0 ρi. In the case of SO(2), this decomposition is also called the Fourier decomposition or decomposition into circular harmonics. Most importantly, there is one copy of ρ1 inside of L2(SO(2)). Hence, up to scalar, there is a unique linear map i1 : ρ1 → L2(SO(2)) given by (a, b) 7→ a cos(θ) + b sin(θ). The reverse mapping pr1 : L2(SO(2))→ ρ1 is projection onto the ρ1 summand and is given by the Fourier transform pri(f) = ( ∫ S1 f(θ) cos(θ)dθ, ∫ S1 f(θ) sin(θ)dθ).\nPer-particle linear mapping ρreg → ρreg. Though ρreg is not finite-dimensional, the fact that it decomposes into a direct sum of irreducible representations means that we may take ρin = ρout = ρreg above. Practically, however, it is easier to realize the linear equivariant map ρireg → ρjreg as a convolution over S1,\nO(θ) = ∫ φ∈S1 κ(θ − φ)I(φ)\nwhere κ(θ) is an i× j matrix of trainable weights, independent for each θ." }, { "heading": "A.4 ENCODING INDIVIDUAL PARTICLE PAST BEHAVIOR", "text": "We can encode these individual attributes using a per vehicle LSTM (Hochreiter & Schmidhuber, 1997). Let X(i)t denote the position of car i at time t. Denote a fully connected LSTM cell by ht, ct = LSTM(X (i) t , ht−1, ct−1). Define h0 = c0 = 0. We then use the concatenation of the hidden states [h(1)tin . . . h (n) tin ] of all particles as Z ∈ R N ⊗ Rk as the encoded per-vehicle latent features." }, { "heading": "A.5 ENCODING PAST INTERACTIONS", "text": "In addition, we also encode past interactions of particles by introducing a continuous convolution LSTM. Similar to convLSTM we replace the fully connected layers of the original LSTM above with another operation Xingjian et al. (2015). While convLSTM is well-suited for capturing spatially local interactions over time, it requires gridded information. Since the particle system we consider are distributed in continuous space, we replace the standard convolution with rotation-equivariant continuous convolutions.\nWe can now define Ht, Ct = CtsConvLSTM(Xt, Ht−1, Ct−1) which is an LSTM cell using equivariant continuous convolutions throughout. Note that in this case Xt, Ht−1, Ct−1 are all particle feature fields, that is, functions {1, . . . , n} → Rk. 
Define CtsConvLSTM by

$$\begin{aligned}
i_t &= \sigma(W_{ix} \star_{cts} X_t^{(i)} + W_{ih} \star_{cts} h_{t-1} + W_{ic} \circ c_{t-1} + b_i)\\
f_t &= \sigma(W_{fx} \star_{cts} X_t^{(i)} + W_{fh} \star_{cts} h_{t-1} + W_{fc} \circ c_{t-1} + b_f)\\
c_t &= f_t \circ c_{t-1} + i_t \circ \tanh(W_{cx} \star_{cts} X_t^{(i)} + W_{ch} \star_{cts} h_{t-1} + b_c)\\
o_t &= \sigma(W_{ox} \star_{cts} X_t^{(i)} + W_{oh} \star_{cts} h_{t-1} + W_{oc} \circ c_t + b_o)\\
h_t &= o_t \circ \tanh(c_t),
\end{aligned}$$

where $\star_{cts}$ denotes CtsConv. We can then use $H_{t_{in}}$ as the input feature for the prediction network." }, { "heading": "A.6 EQUIVARIANCE ERROR", "text": "We prove the proposition in Section 4.5.

Proposition. Let $\alpha = 2\pi/k_\theta$. Let $\bar\theta$ be $\theta$ rounded to the nearest value in $\mathbb{Z}\alpha$. Set $\hat\theta = |\theta - \bar\theta|$. Assume n particles sampled uniformly in a ball of radius R with features $f \in \rho_1^c$. Let f and K have entries sampled uniformly in $[-a, a]$. Let the bullseye have radius $0 < R_e < R$. Let $F = \mathrm{CtsConv}_{K,R}$ and $T_\theta = \rho_1(\mathrm{Rot}_\theta)$. Then the expected EE is bounded by

$$\mathbb{E}_{K,f,x}\big[\|T(F(f,x)) - F(T(f), T(x))\|\big] \leq |\sin(\hat\theta)|\,C \leq 2\pi C/k_\theta,$$

where $C = 4cna^2(1 - R_e^2/R^2)$.

Proof. We may compute for a single particle $x = (\psi, r)$ and multiply our result by n by linearity. We separate two cases: x in the bullseye, with probability $R_e^2/R^2$, and x in an angular slice, with probability $1 - R_e^2/R^2$. If x is in the bullseye, then there is no equivariance error since $K(x)$ is a scalar matrix. Assume x is in an angular sector.

For nearest interpolation, the equivariance error is then

$$\|\rho_1(\bar\theta)K(x)\rho_1(-\bar\theta)\rho_1(\theta)f - \rho_1(\theta)K(x)f\|.$$

Since $\rho_1(\theta)$ is length preserving, this is

$$\|\rho_1(-\theta)\rho_1(\bar\theta)K(x)\rho_1(-\bar\theta)\rho_1(\theta)f - K(x)f\| = \|\rho_1(\beta)K(x)\rho_1(-\beta)f - K(x)f\| \tag{7}$$

where $\beta = \pm\hat\theta$. We consider only a single factor of ρ1 in f; the result will then be multiplied by c. Let

$$K(x) = \begin{pmatrix} k_{11} & k_{12} \\ k_{21} & k_{22} \end{pmatrix}, \qquad f = \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}.$$

We can factor out an a from $K(x)$ and an a from f, and assume $k_{ij}, f_i$ are sampled from $\mathrm{Uniform}([-1, 1])$. One may then directly compute that Equation 7 equals

$$\sqrt{\big((k_{21} + k_{12})^2 + (k_{11} - k_{22})^2\big)\big(f_1^2 + f_2^2\big)\sin^2(\beta)}.$$

This is bounded above by $4|\sin(\beta)| = 4|\sin(\hat\theta)|$. Collecting the above factors proves the bound $C|\sin(\hat\theta)|$. The further bound follows from the first-order bound

$$|\sin(\hat\theta)| \leq |\hat\theta| \leq 2\pi/k_\theta.$$

The relationship $\mathrm{EE} \approx 2\pi C/k_\theta$ is visible in Figure 4. We can also see clearly the significance of the term $|\sin(\hat\theta)|$ by plotting the equivariance error against θ, as in Figure 7." }, { "heading": "A.7 DATA DETAILS", "text": "The Argoverse dataset includes 324K samples, which are split into a 206K training set, a 39K validation set and a 78K test set. All the samples are real data extracted from Miami and Seattle, and the dataset provides HD maps of lanes in each city. Every sample contains data 5 seconds long, sampled at a 10 Hz frequency.

The TrajNet++ Real dataset contains 200K samples. All the tracking in this dataset is captured in both indoor and outdoor locations, for example, a university, a hotel, Zara, and train stations. Every sample in this dataset contains 21 timestamps, and the goal is to predict the 2D spatial positions of each pedestrian for the future 12 timestamps.

A.8 IMPLEMENTATION DETAILS

The Argoverse dataset is not fully observed, so we only use cars with complete observations as our input. Since not every sample includes the same number of cars, we only choose scenes with at most 60 cars and insert dummy cars into them to achieve a consistent car count. The TrajNet++ Real dataset is also not fully observed, and here we keep the pedestrian count consistent at 160.

Moreover, for each car, we use the average velocity in the past 0.1 second as an approximation to the current instantaneous velocity, i.e., $v_t = (p_t - p_{t-1})/2$. As for map information, we only include center lanes with lane directions as features.
Also, we introduce dummy lane nodes into each scene to make the number of lane nodes consistently equal to 650.

In the TrajNet++ task, no map information is included. Since pedestrians do not have speedometers to tell them exactly how fast they are moving, as drivers do, and instead depend more on their relative velocities and relative positions to other pedestrians, we tried different combinations of features in the ablative study besides using only history velocities.

Our models are all trained with the Adam optimizer with a base learning rate of 0.001, and the gamma rate for the linear rate scheduler is set to 0.95. All our models without map information are trained for 15K iterations with batch size 16, and the learning rate is updated every 300 iterations; for models with map information, we train for 30K iterations with batch size 16, and the learning rate is updated every 600 iterations.

For CtsConv, we set the layer sizes to 32, 64, 64, 64, and the kernel size to 4 × 4 × 4; for ρ1-ECCO, the layer sizes are 16, 32, 32, 32, kθ is 16, and kr is 3; for ρreg-ECCO, we choose layer sizes 8, 16, 8, 8, kθ = 16, kr = 3, and the regular feature dimension is set to 8. For the Argoverse task, we set the CtsConv radius to 40, and for the TrajNet++ task we set it to 6." }, { "heading": "A.9 ABLATIVE STUDY", "text": "We perform an ablative study for ECCO to further diagnose different encoders, the usage of HD maps, and other model design choices.

Choice of encoders Unlike fluid simulations (Ummenhofer et al., 2019), where the dynamics are Markovian, human behavior exhibits long-term dependencies. We experiment with three different encoders, referred to as Enc, to model such long-term dependencies: (1) concatenating the velocities from the past m frames as the input feature, (2) passing the past velocities of each particle to the same LSTM to encode the individual behavior of each particle, and (3) implementing a continuous convolution LSTM to encode past particle interactions. Our continuous convolution LSTM is similar to convLSTM (Xingjian et al., 2015) but uses continuous convolutions instead of discrete gridded convolutions.

We use the different encoders to time-aggregate features and compare their performance (Table 3).

Use of HD Maps In Table 4, we compare performance with and without map input features.

Choice of features for pedestrians Unlike vehicles, people do not have a velocity meter to tell them how fast they actually walk. We observe that people tend to adjust their velocities based on others' relative velocities and relative positions. We experiment with different combinations of features (Table 5), finding that using relative velocities and relative positions as features gives the best performance." }, { "heading": "A.10 QUALITATIVE RESULTS FOR TRAJNET++", "text": "Figure 8 shows qualitative results for TrajNet++. Note that the non-equivariant baseline (2nd column) depends highly on the global orientation, whereas the ground truth and equivariant models do not." } ]
2021
null
SP:f2f1c3e0201395340e06f5873299639a8f4d16ee
[ "This paper presents a latent variable model where the variables in the latent space are causally disentangled, i.e. the disentanglement is ensured according to a structural causal model (SCM). The resulting model is made up of two parts. The first one, a generative unsupervised part, is essentially a VAE and is defined with the VAE ELBO loss. The second part is supervised and accounts for the causal disentanglement of the factors that are assumed to underlie the distribution; the authors claim the fewer supervised samples are required to estimate the second part of the loss alone. The two parts are then combined with a hyperparameter." ]
This paper proposes a Disentangled gEnerative cAusal Representation (DEAR) learning method. Unlike existing disentanglement methods that enforce independence of the latent variables, we consider the general case where the underlying factors of interest can be causally correlated. We show that previous methods with independent priors fail to disentangle causally correlated factors. Motivated by this finding, we propose a new disentangled learning method called DEAR that enables causal controllable generation and causal representation learning. The key ingredient of this new formulation is to use a structural causal model (SCM) as the prior for a bidirectional generative model. The prior is then trained jointly with a generator and an encoder using a suitable GAN loss incorporated with supervision. Theoretical justification for the proposed formulation is provided, which guarantees disentangled causal representation learning under appropriate conditions. We conduct extensive experiments on both synthesized and real datasets to demonstrate the effectiveness of DEAR in causal controllable generation, and the benefits of the learned representations for downstream tasks in terms of sample efficiency and distributional robustness.
[]
[ { "authors": [ "Martin Arjovsky", "Léon Bottou", "Ishaan Gulrajani", "David Lopez-Paz" ], "title": "Invariant risk minimization", "venue": "arXiv preprint arXiv:1907.02893,", "year": 2019 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Yoshua Bengio", "Tristan Deleu", "Nasim Rahaman", "Rosemary Ke", "Sébastien Lachapelle", "Olexa Bilaniuk", "Anirudh Goyal", "Christopher Pal" ], "title": "A meta-transfer objective for learning to disentangle causal mechanisms", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Christopher P. Burgess", "Irina Higgins", "Arka Pal", "Loı̈c Matthey", "Nicholas Watters", "Guillaume Desjardins", "Alexander Lerchner" ], "title": "Understanding disentangling in beta-vae", "venue": "NIPS Workshop of Learning Disentangled Features,", "year": 2017 }, { "authors": [ "Tian Qi Chen", "Xuechen Li", "Roger B. Grosse", "David K. Duvenaud" ], "title": "Isolating sources of disentanglement in variational autoencoders", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Infogan: Interpretable representation learning by information maximizing generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Jeff Donahue", "Philipp Krähenbühl", "Trevor Darrell" ], "title": "Adversarial feature learning", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Vincent Dumoulin", "Ishmael Belghazi", "Ben Poole", "Alex Lamb", "Martı́n Arjovsky", "Olivier Mastropietro", "Aaron C. 
Courville" ], "title": "Adversarially learned inference", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Irina Higgins", "Loı̈c Matthey", "Arka Pal", "Christopher Burgess", "Xavier Glorot", "Matthew Botvinick", "Shakir Mohamed", "Alexander Lerchner" ], "title": "beta-vae: Learning basic visual concepts with a constrained variational framework", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Ilyes Khemakhem", "Diederik Kingma", "Ricardo Monti", "Aapo Hyvarinen" ], "title": "Variational autoencoders and nonlinear ica: A unifying framework", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2020 }, { "authors": [ "Hyunjik Kim", "Andriy Mnih" ], "title": "Disentangling by factorising", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Murat Kocaoglu", "Christopher Snyder", "Alexandros G Dimakis", "Sriram Vishwanath" ], "title": "Causalgan: Learning causal implicit generative models with adversarial training", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Abhishek Kumar", "Prasanna Sattigeri", "Avinash Balakrishnan" ], "title": "Variational inference of disentangled latent concepts from unlabeled observations", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Felix Leeb", "Yashas Annadani", "Stefan Bauer", "Bernhard Schölkopf" ], "title": "Structural autoencoders improve representations for generation and transfer", "venue": "arXiv preprint arXiv:2006.07796,", "year": 2020 }, { "authors": [ "Zinan Lin", "Kiran K Thekumparampil", "Giulia Fanti", "Sewoong Oh" ], "title": "Infogan-cr and modelcentrality: Self-supervised model training and selection for disentangling gans", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep learning face attributes in the wild", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2015 }, { "authors": [ "F. Locatello", "S. Bauer", "M. Lucic", "G. Raetsch", "S. Gelly", "B. Schölkopf", "O. 
Bachem" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": "In Proceedings of the 36th International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Francesco Locatello", "Ben Poole", "Gunnar Rätsch", "Bernhard Schölkopf", "Olivier Bachem", "Michael Tschannen" ], "title": "Weakly-supervised disentanglement without compromises", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Francesco Locatello", "Michael Tschannen", "Stefan Bauer", "Gunnar Rätsch", "Bernhard Schölkopf", "Olivier Bachem" ], "title": "Disentangling factors of variation using few labels", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Lars Mescheder", "Sebastian Nowozin", "Andreas Geiger" ], "title": "Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Mehdi Mirza", "Simon Osindero" ], "title": "Conditional generative adversarial nets", "venue": "arXiv preprint arXiv:1411.1784,", "year": 2014 }, { "authors": [ "Raha Moraffah", "Bahman Moraffah", "Mansooreh Karami", "Adrienne Raglin", "Huan Liu" ], "title": "Can: A causal adversarial network for learning observational and interventional distributions", "venue": "arXiv preprint arXiv:2008.11376,", "year": 2020 }, { "authors": [ "Ignavier Ng", "AmirEmad Ghassami", "Kun Zhang" ], "title": "On the role of sparsity and dag constraints for learning linear dags", "venue": "arXiv preprint arXiv:2006.10201,", "year": 2020 }, { "authors": [ "Judea Pearl" ], "title": "Probabilistic reasoning in intelligent systems: networks of plausible inference", "venue": null, "year": 2014 }, { "authors": [ "Judea Pearl" ], "title": "Models, reasoning and inference", "venue": null, "year": 2000 }, { "authors": [ "Ali Razavi", "Aaron van den Oord", "Oriol Vinyals" ], "title": "Generating diverse high-fidelity images with vq-vae-2", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Shiori Sagawa", "Pang Wei Koh", "Tatsunori B Hashimoto", "Percy Liang" ], "title": "Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization", "venue": null, "year": 1911 }, { "authors": [ "Bernhard Schölkopf" ], "title": "Causality for machine learning", "venue": "arXiv preprint arXiv:1911.10500,", "year": 2019 }, { "authors": [ "Bernhard Schölkopf", "Dominik Janzing", "Jonas Peters", "Eleni Sgouritsa", "Kun Zhang", "Joris Mooij" ], "title": "On causal and anticausal learning", "venue": "In ICML,", "year": 2012 }, { "authors": [ "Xinwei Shen", "Tong Zhang", "Kani Chen" ], "title": "Bidirectional generative modeling using adversarial gradient estimation", "venue": "arXiv preprint arXiv:2002.09161,", "year": 2020 }, { "authors": [ "Rui Shu", "Yining Chen", "Abhishek Kumar", "Stefano Ermon", "Ben Poole" ], "title": "Weakly supervised disentanglement with guarantees", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Mengyue Yang", "Furui Liu", "Zhitang Chen", "Xinwei Shen", "Jianye Hao", "Jun Wang" ], "title": "Causalvae: Structured causal disentanglement in variational autoencoder", "venue": "arXiv preprint arXiv:2004.08697,", "year": 2020 }, { "authors": [ "Yue Yu", "Jie Chen", "Tian Gao", "Mo Yu" ], "title": "Dag-gnn: Dag structure learning with graph neural networks", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Han Zhang", "Ian 
Goodfellow", "Dimitris Metaxas", "Augustus Odena" ], "title": "Self-attention generative adversarial networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Jiji Zhang", "Peter Spirtes" ], "title": "Intervention, determinism, and the causal minimality", "venue": "condition. Synthese,", "year": 2011 }, { "authors": [ "Shengjia Zhao", "Jiaming Song", "Stefano Ermon" ], "title": "Learning hierarchical features from generative models", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Xun Zheng", "Bryon Aragam", "Pradeep K Ravikumar", "Eric P Xing" ], "title": "Dags with no tears: Continuous optimization for structure learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Shen" ], "title": "Let l be the dimension of parameter β. To simplify notation, let random vector Z = Fβ($) and X = G(Z) ∈ R and Y = (X,Z) ∈ R, and let p be the probability density of Y", "venue": "For each i =", "year": 2020 }, { "authors": [ "Shen" ], "title": "Specifically, for such realistic data, we adopt the SAGAN (Zhang et al., 2019) architecture for D and G. The D network consists of three modules as shown in Figure 7 and detailed described in (Shen et al., 2020). Details for newtork G and Dx are given in Figure 7 and Table 3. The encoder architecture is the ResNet50 (He et al., 2016) followed by a 4-layer MLP of size 1024", "venue": "Network architectures", "year": 2020 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nConsider the observed data x from a distribution qx on X ⊆ Rd and the latent variable z from a prior pz on Z ⊆ Rk. In bidirectional generative models (BGMs), we are normally interested in learning an encoder E : X → Z to infer latent variables and a generator G : Z → X to generate data, to achieve both representation learning and data generation. Classical BGMs include Variational Autoencoder (VAE) (Kingma & Welling, 2014) and BiGAN (Donahue et al., 2017). In representation learning, it was argued that an effective representation for downstream learning tasks should disentangle the underlying factors of variation (Bengio et al., 2013). In generation, it is highly desirable if one can control the semantic generative factors by aligning them with the latent variables such as in StyleGAN (Karras et al., 2019). Both goals can be achieved with the disentanglement of latent variable z, which informally means that each dimension of z measures a distinct factor of variation in the data (Bengio et al., 2013).\nEarlier unsupervised disentanglement methods mostly regularize the VAE objective to encourage independence of learned representations (Higgins et al., 2017; Burgess et al., 2017; Kim & Mnih, 2018; Chen et al., 2018; Kumar et al., 2018). Later, Locatello et al. (2019) show that unsupervised learning of disentangled representations is impossible: many existing unsupervised methods are actually brittle, requiring careful supervised hyperparameter tuning or implicit inductive biases. To promote identifiability, recent work resorts to various forms of supervision (Locatello et al., 2020b; Shu et al., 2020; Locatello et al., 2020a). In this work, we also incorporate supervision on the ground-truth factors in the form stated in Section 3.2.\nMost of these existing methods are built on the assumption that the underlying factors of variation are mutually independent. However, in many real world cases the semantically meaningful factors of interests are not independent (Bengio et al., 2020). Instead, semantically meaningful high-level variables are often causally correlated, i.e., connected by a causal graph. In this paper, we prove formally that methods with independent priors fail to disentangle causally correlated factors. Motivated by this observation, we propose a new method to learn disentangled generative causal representations called DEAR. The key ingredient of our formulation is a structured causal model (SCM) (Pearl et al.,\n2000) as the prior for latent variables in a bidirectional generative model. With some background knowledge on the binary causal structure, the causal prior is then learned jointly with a generator and an encoder using a suitable GAN (Goodfellow et al., 2014) loss. We establish theoretical guarantees for DEAR to learn disentangled causal representations under appropriate conditions.\nAn immediate application of DEAR is causal controllable generation, which can generate data from any desired interventional distributions of the latent factors. Another useful application of disentangled representations is to use such representations in downstream tasks, leading to better sample complexity (Bengio et al., 2013; Schölkopf et al., 2012). Moreover, it is believed that causal disentanglement is invariant and thus robust under distribution shifts (Schölkopf, 2019; Arjovsky et al., 2019). 
In this paper, we demonstrate these conjectures in various downstream prediction tasks for the proposed DEAR method, which has theoretically guaranteed disentanglement property.\nWe summarize our main contributions as follows: • We formally identify a problem with previous disentangled representation learning methods using\nthe independent prior assumption, and prove that they fail to disentangle when the underlying factors of interests are causally correlated. • We propose a new disentangled learning method, DEAR, which integrates an SCM prior into a bidirectional generative model, trained with a suitable GAN loss. • We provide theoretical justification on the identifiability of the proposed formulation. • Extensive experiments are conducted on both synthesized and real data to demonstrate the effec-\ntiveness of DEAR in causal controllable generation, and the benefits of the learned representations for downstream tasks in terms of sample efficiency and distributional robustness.\n2 OTHER RELATED WORK\nGAN-based disentanglement methods. Existing methods, including InfoGAN (Chen et al., 2016) and InfoGAN-CR (Lin et al., 2020), differ from our proposed formulation mainly in two folds. First they still assume an independent prior for latent variables, so suffer from the same problem with previous VAE-based methods mentioned above. Besides, the idea of InfoGAN-CR is to encourage each latent code to make changes that are easy to detect, which actually applies well only when the underlying factors are independent. Second, InfoGAN as a bidirectional generative modeling method further requires variational approximation apart from adversarial training, which is inferior to the principled formulation in BiGAN and AGES (Shen et al., 2020) that we adopt.\nCausality with generative models. CausalGAN (Kocaoglu et al., 2018) and a concurrent work (Moraffah et al., 2020) of ours, are unidirectional generative models (i.e., a generative model that learns a single mapping from the latent variable to data) that build upon a cGAN (Mirza & Osindero, 2014). They assign an SCM to the conditional attributes while leave the latent variables as independent Gaussian noises. The limit of a cGAN is that it always requires full supervision on attributes to apply conditional adversarial training. And the ground-truth factors are directly fed into the generator as the conditional attributes, without an extra effort to align the dimensions between the latent variables and the underlying factors, so their models have nothing to do with disentanglement learning. Moreover their unidirectional nature makes it impossible to learn representations. Besides they only consider binary factors whose consequent semantic interpolations appear nonsmooth, as shown in Appendix D. CausalVAE (Yang et al., 2020) assigns the SCM directly on the latent variables, but built upon iVAE (Khemakhem et al., 2020), it adopts a conditional prior given the ground-truth factors so is also limited to fully supervised setting.\n3 PROBLEM SETTING\n3.1 GENERATIVE MODEL\nWe first describe the probabilistic framework of disentangled learning with supervision. We follow the commonly assumed two-step data generating process that first samples the underlying generative factors, and then conditional on those factors, generates the data (Kingma & Welling, 2014). During the generation process, the generator induces the generated conditional pG(x|z) and generated joint distribution pG(x, z) = pz(z)pG(x|z). 
During the inference process, the encoder induces the encoded conditional qE(z|x) which can be a factorized Gaussian and the encoded joint distribution\nqE(x, z) = qx(x)qE(z|x). We consider the following objective for generative modeling: Lgen = DKL(qE(x, z), pG(x, z)), (1)\nwhich is shown to be equivalent to the evidence lower bound used in VAEs up to a constant, and allows a closed form only with factorized Gaussian prior, encoder and generator (Shen et al., 2020).\nSince constraints on the latent space are required to enforce disentanglement, it is desired that the distribution family of qE(x, z) and pG(x, z) should be large enough, especially for complex data like images. Normally more general implicit distributions are favored over factorized Gaussians in terms of expressiveness (Karras et al., 2019; Mescheder et al., 2017). Then minimizing (1) requires adversarial training, as discussed detailedly in Section 4.3.\n3.2 SUPERVISED REGULARIZER\nTo guarantee disentanglement, we incorporate supervision when training the BGM, following the similar idea in Locatello et al. (2020b) but with a different formulation. Specifically, let ξ ∈ Rm be the underlying ground-truth factors of interests of x, following distribution pξ, and yi be some continuous or discrete observation of the underlying factor ξi satisfying ξi = E(yi|x) for i = 1, . . . ,m. For example, in the case of human face images, y1 can be the binary label indicating whether a person is young or not, and ξ1 = E(y1|x) = P(y1 = 1|x) is the probability of being young given one image x.\nLet Ē(x) be the deterministic part of the stochastic transformation E(x), i.e., Ē(x) = E(E(x)|x), which is used for representation learning. We consider the following objective:\nL(E,G) = Lgen(E,G) + λLsup(E), (2)\nwhere Lsup = ∑m\ni=1 Ex,y[CE(Ēi(x), yi)] if yi is the binary or bounded continuous label of the i-th factor ξi, where CE(l, y) = y log σ(l) + (1 − y) log(1 − σ(l)) is the cross-entropy loss with σ(·) being the sigmoid function; Lsup = ∑m i=1 Ex,y[Ēi(x) − yi]2 if yi is the continuous observation of ξi, and λ > 0 is the coefficient to balance both terms. We empirically find the choice of λ quite insensitive to different tasks and datasets, and hence set λ = 5 in all experiments. Estimating of Lgen requires the unlabelled dataset {x1, . . . , xN} while estimating Lsup requires a labeled dataset {(xj , yj) : j = 1, . . . , Ns} where Ns can be much smaller than N . In contrast, Locatello et al. (2020b) propose the regularizer Lsup = ∑m i=1 Ex,z[CE(Ēi(x), zi)] involving only the latent variable z which is a part of the generative model, without distinguishing from the ground-truth factor ξ and its observation y. Hence they do not establish any theoretical justification on disentanglement. Besides, they adopt a VAE loss for Lgen with an independent prior, which suffers from the unidentifiability problem described in the next section.\n3.3 UNIDENTIFIABILITY WITH AN INDEPENDENT PRIOR\nIntuitively, the above supervised regularizer aims at ensuring some alignment between factor ξ and latent variable z. We start with the definition of a disentangled representation following this intuition.\nDefinition 1 (Disentangled representation). Given the underlying factor ξ ∈ Rm of data x, a deterministic encoder E is said to learn a disentangled representation with respect to ξ if ∀i = 1, . . . ,m, there exists a 1-1 function gi such that Ei(x) = gi(ξi). 
Further, a stochastic encoder E is said to be disentangled wrt ξ if its deterministic part $\bar{E}(x)$ is disentangled wrt ξ.

As stated above, we consider the general case where the underlying factors of interest are causally correlated. Then the goal becomes to disentangle the causal factors. Previous methods mostly use an independent prior for z, which contradicts the truth. We make this formal through the following proposition, which indicates that the disentangled representation is generally unidentifiable with an independent prior.

Proposition 1. Let $E^*$ be any encoder that is disentangled wrt ξ. Let $b^* = \mathcal{L}_{sup}(E^*)$, $a = \min_G \mathcal{L}_{gen}(E^*, G)$, and $b = \min_{\{(E,G):\, \mathcal{L}_{gen}=0\}} \mathcal{L}_{sup}(E)$. Assume the elements of ξ are connected by a causal graph whose adjacency matrix $A_0$ is not a zero matrix. Suppose the prior $p_z$ is factorized, i.e., $p_z(z) = \prod_{i=1}^k p_i(z_i)$. Then we have $a > 0$, and either when $b^* \geq b$, or when $b^* < b$ and $\lambda < \frac{a}{b - b^*}$, there exists a solution $(E', G')$ such that for any generator G, we have $\mathcal{L}(E', G') < \mathcal{L}(E^*, G)$.

All proofs are given in Appendix A. This proposition directly suggests that minimizing (2) favors the solution $(E', G')$ over one with a disentangled encoder $E^*$. Thus, with an independent prior, we have no way to identify the disentangled solution with a λ that is not large enough. However, in real applications it is impossible to estimate the threshold, and a λ that is too large makes it difficult to learn the BGM. In the following section we propose a solution to this problem.

4 CAUSAL DISENTANGLEMENT LEARNING

4.1 GENERATIVE MODEL WITH A CAUSAL PRIOR

We propose to use a causal model as the prior $p_z$. Specifically, we use the generalized nonlinear Structural Causal Model (SCM) proposed by Yu et al. (2019) as follows:

$$z = f\big((I - A^\top)^{-1} h(\varepsilon)\big) := F_\beta(\varepsilon), \tag{3}$$

where A is the weighted adjacency matrix of the directed acyclic graph (DAG) over the k elements of z (i.e., $A_{ij} \neq 0$ if and only if $z_i$ is the parent of $z_j$), ε denotes the exogenous variables following $N(0, I)$, f and h are element-wise nonlinear transformations, and $\beta = (f, h, A)$ denotes the set of parameters of f, h and A, with parameter space $\mathcal{B}$. Further, let $\mathbb{1}_A = I(A \neq 0)$ denote the corresponding binary adjacency matrix, where I is the element-wise indicator function.

When f is invertible, (3) is equivalent to

$$f^{-1}(z) = A^\top f^{-1}(z) + h(\varepsilon), \tag{4}$$

which indicates that the factors z satisfy a linear SCM after the nonlinear transformation f, and enables interventions on latent variables as discussed later. The model structure is presented in Figure 1. Note that, different from our model, where z is the latent variable following the prior (3) with the goal of causal disentanglement, Yu et al. (2019) proposed a causal discovery method where the variables z are observed, with the aim of learning the causal structure among them.

In causal structure learning, the graph is required to be acyclic. Zheng et al. (2018) propose an equality constraint whose satisfaction ensures acyclicity and solve the problem with the augmented Lagrangian method, which however leads to optimization difficulties (Ng et al., 2020). In this paper, to avoid dealing with the non-convex constraint and to focus on disentangling, we assume some prior knowledge of the binary causal structure. Specifically, we assume that a super-graph of the true binary graph $\mathbb{1}_{A^*}$ is given, the best case of which is the true graph, while the worst is that only the causal ordering is available.
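To make this concrete, the following is a minimal PyTorch sketch of such a masked SCM prior (Equations 3-4), in which only the entries allowed by the given binary super-graph carry learnable weights. For simplicity we take h as the identity and f as an element-wise affine (hence invertible) map; these are our own assumptions for illustration, not the paper's exact parameterization, and the composite prior described below would simply concatenate this output with additional standard Gaussian dimensions.

```python
import torch
import torch.nn as nn

class SCMPrior(nn.Module):
    """Sketch of z = f((I - A^T)^{-1} h(eps)) from Equation 3.
    `mask` is the known binary super-graph 1_A (e.g., strictly upper
    triangular under a causal ordering), so the learned A stays a DAG."""

    def __init__(self, m: int, mask: torch.Tensor):
        super().__init__()
        self.mask = mask                          # (m, m) binary DAG mask
        self.A = nn.Parameter(torch.zeros(m, m))  # edge weights (sign and scale)
        self.scale = nn.Parameter(torch.ones(m))  # element-wise affine f (assumed)
        self.shift = nn.Parameter(torch.zeros(m))

    def forward(self, eps: torch.Tensor) -> torch.Tensor:
        A = self.A * self.mask                    # keep only edges allowed by prior knowledge
        I = torch.eye(A.shape[0], device=eps.device)
        h = eps                                   # h = identity (assumed for the sketch)
        u = torch.linalg.solve((I - A).T, h.T).T  # (I - A^T)^{-1} h(eps)
        return self.scale * u + self.shift        # invertible element-wise f

# Usage: m = 4 causal dimensions with a known causal ordering.
prior = SCMPrior(4, torch.triu(torch.ones(4, 4), diagonal=1))
z = prior(torch.randn(8, 4))   # a batch of latent causal factors
```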
Then we learn the weights of the non-zero elements of the prior adjacency matrix, which indicate the sign and scale of the causal effects, jointly with the other parameters, using the formulation and algorithm described in later sections. Incorporating structure learning methods to jointly learn the structure from scratch with identifiability guarantees could be explored in future work. An ablation study regarding this prior knowledge is given in Appendix B.

To enable causal controllable generation, we use invertible f and h and describe the mechanism to generate images from interventional distributions of the latent variables. Note that interventions can be formalized as operations that modify a subset of the equations in (4) (Pearl et al., 2000). Suppose we would like to intervene on the i-th dimension of z, i.e., $\mathrm{Do}(z_i = c)$, where c is a constant. Once we have the latent factors z inferred from data x, i.e., $z = E(x)$, or sampled from the prior $p_z$, we follow the intervened equations in (4) to obtain $z'$ on the left-hand side using ancestral sampling, by performing (4) iteratively. Then we decode the intervened latent factor $z'$ to generate the sample $G(z')$. In Section 5.1 we define the two types of interventions of most interest in applications.

Another issue of the model is the latent dimension, which we handle with a so-called composite prior. Recall that m is the number of generative factors that we are interested in disentangling, e.g., all the semantic concepts related to some field, where m tends to be smaller than the total number M of generative factors. The latent dimension k of the generative model should be no less than M, to allow sufficient degrees of freedom to generate or reconstruct the data well. Since M is generally unknown in reality, we set a sufficiently large k, at least larger than m, which is a trivial lower bound of M. The role of the remaining k − m dimensions is to capture other factors necessary for generation whose structure is not of interest and not explicitly modeled. We therefore propose a prior that is a composition of a causal model for the first m dimensions and another distribution, such as a standard Gaussian, for the other k − m dimensions.

4.2 FORMULATION AND IDENTIFIABILITY OF DISENTANGLEMENT

In this section, we present the formulation of DEAR and establish its theoretical justification. Compared with the BGM described in Section 3.1, here we have one more module to learn: the SCM prior. Thus $p_G(x, z)$ becomes $p_{G,F}(x, z) = p_F(z) p_G(x|z)$, where $p_F(z)$ or $p_\beta(z)$ denotes the marginal distribution of $F_\beta(\varepsilon)$ with $\varepsilon \sim N(0, I)$. We then rewrite the generative loss as follows:

$$\mathcal{L}_{gen}(E, G, F) = D_{KL}(q_E(x, z), p_{G,F}(x, z)). \tag{5}$$

Then we propose the following formulation to learn disentangled generative causal representations:

$$\min_{E,G,F} \mathcal{L}(E, G, F) := \mathcal{L}_{gen}(E, G, F) + \lambda \mathcal{L}_{sup}(E). \tag{6}$$

In order to achieve causal disentanglement, we make two assumptions on the causal model. Assumption 1 supposes a sufficiently large capacity of the SCM in (3) to contain the underlying distribution $p_\xi$, which is reasonable due to the generality of the nonlinear SCM. Assumption 2 states the identifiability of the true causal structure $\mathbb{1}_{A_0}$ of ξ, which is applicable given the true causal ordering under basic Markov and causal minimality conditions (Pearl, 2014; Zhang & Spirtes, 2011).

Assumption 1 (SCM capacity). The underlying distribution $p_\xi$ belongs to the distribution family $\{p_\beta : \beta \in \mathcal{B}\}$, i.e., there exists $\beta_0 = (f_0, h_0, A_0)$ such that $p_\xi = p_{\beta_0}$.
Assumption 2 (Structure identifiability). For all $\beta = (f, h, A) \in \mathcal{B}$ with $p_\beta = p_{\beta_0}$, it holds that $\mathbb{1}_A = \mathbb{1}_{A_0}$.

The following theorem then guarantees that, under appropriate conditions, the DEAR formulation can learn the disentangled representations defined in Definition 1.

Theorem 1. Assume the infinite capacity of E and G. Then, under Assumptions 1-2, the DEAR formulation (6) learns the disentangled encoder $E^*$. Specifically, we have $g_i(\xi_i) = \sigma^{-1}(\xi_i)$ if the CE loss is used for the supervised regularizer, and $g_i(\xi_i) = \xi_i$ if the L2 loss is used.

Note that the identifiability we establish in this paper differs from some previous work on parameter identifiability, e.g., Khemakhem et al. (2020). We argue that, to learn disentangled representations, the form in Definition 1, i.e., the existence but not the uniqueness of the $g_i$'s, is sufficient to identify the relation between the representations and the data. In contrast, parameter identifiability may not be achievable in many cases, such as over-parametrization. Thus the identifiability discussed here is more realistic in terms of the goal of disentangling. Later we provide empirical evidence to support the theory directly through the application to causal controllable generation.

4.3 ALGORITHM

In this section we propose the algorithm to solve formulation (6). The SCM prior $p_F(z)$ and the implicit generated conditional $p_G(x|z)$ make (5) lose an analytic form. Hence we adopt a GAN method to adversarially estimate the gradient of (5). We parametrize $E_\varphi(x)$ and $G_\theta(z)$ by neural networks. Different from Shen et al. (2020), the prior also involves learnable parameters. We present in the following lemma the gradient formulas of (5).

Lemma 1. Let $r(x, z) = q(x, z)/p(x, z)$ and $D(x, z) = \log r(x, z)$. Then we have

$$\begin{aligned}
\nabla_\theta \mathcal{L}_{gen} &= -\mathbb{E}_{z \sim p_\beta(z)}\big[s(x, z)\, \nabla_x D(x, z)^\top\big|_{x = G_\theta(z)}\, \nabla_\theta G_\theta(z)\big],\\
\nabla_\varphi \mathcal{L}_{gen} &= \mathbb{E}_{x \sim q_x}\big[\nabla_z D(x, z)^\top\big|_{z = E_\varphi(x)}\, \nabla_\varphi E_\varphi(x)\big],\\
\nabla_\beta \mathcal{L}_{gen} &= -\mathbb{E}_\varepsilon\big[s(x, z)\big(\nabla_x D(x, z)^\top \nabla_\beta G(F_\beta(\varepsilon)) + \nabla_z D(x, z)^\top \nabla_\beta F_\beta(\varepsilon)\big)\big|_{x = G(F_\beta(\varepsilon)),\, z = F_\beta(\varepsilon)}\big],
\end{aligned} \tag{7}$$

where $s(x, z) = e^{D(x,z)}$ is the scaling factor.

We then estimate the gradients in (7) by training a discriminator $D_\psi$ via empirical logistic regression:

$$\min_{\psi'} \Big[\frac{1}{|S_e|} \sum_{(x,z) \in S_e} \log(1 + e^{-D_{\psi'}(x,z)}) + \frac{1}{|S_g|} \sum_{(x,z) \in S_g} \log(1 + e^{D_{\psi'}(x,z)})\Big],$$

where $S_e$ and $S_g$ are finite samples from $q_E(x, z)$ and $p_G(x, z)$ respectively, leading to a GAN approach.

Based on the above, we propose Algorithm 1 to learn disentangled generative causal representations.

Algorithm 1: Disentangled gEnerative cAusal Representation (DEAR) Learning
Input: training set $\{x_1, \ldots, x_N, y_1, \ldots, y_{N_s}\}$, initial parameters $\varphi, \theta, \beta, \psi$, batch size n
1: while not converged do
2:   for multiple steps do
3:     Sample $\{x_1, \ldots, x_n\}$ from the training set and $\{\varepsilon_1, \ldots, \varepsilon_n\}$ from $N(0, I)$
4:     Generate from the causal prior: $z_i = F_\beta(\varepsilon_i)$, $i = 1, \ldots, n$
5:     Update ψ by descending the stochastic gradient: $\frac{1}{n}\sum_{i=1}^n \nabla_\psi \big[\log(1 + e^{-D_\psi(x_i, E_\varphi(x_i))}) + \log(1 + e^{D_\psi(G_\theta(z_i), z_i)})\big]$
6:   Sample $\{x_1, \ldots, x_n, y_1, \ldots, y_{n_s}\}$ and $\{\varepsilon_1, \ldots, \varepsilon_n\}$ as above; generate $z_i = F_\beta(\varepsilon_i)$
7:   Compute the θ-gradient: $-\frac{1}{n}\sum_{i=1}^n s(G_\theta(z_i), z_i)\, \nabla_\theta D_\psi(G_\theta(z_i), z_i)$
8:   Compute the φ-gradient: $\frac{1}{n}\sum_{i=1}^n \nabla_\varphi D_\psi(x_i, E_\varphi(x_i)) + \frac{1}{n_s}\sum_{i=1}^{n_s} \nabla_\varphi \mathcal{L}_{sup}(\varphi; x_i, y_i)$
9:   Compute the β-gradient: $-\frac{1}{n}\sum_{i=1}^n s(G(z_i), z_i)\, \nabla_\beta D_\psi(G_\theta(F_\beta(\varepsilon_i)), F_\beta(\varepsilon_i))$
10:  Update the parameters $\varphi, \theta, \beta$ using the gradients
Return: $\varphi, \theta, \beta$

Remark: without loss of generality, assume the first $N_s$ samples in the training set and the first $n_s$ samples in each mini-batch have available labels; $n_s$ may vary across iterations.

5 EXPERIMENTS

We evaluate our methods on two datasets.
The first one is a synthesized dataset, Pendulum, similar to the one in Yang et al. (2020). As shown in Figure 3, each image is generated by four continuous factors: pendulum angle, light angle, shadow length and shadow position, whose underlying structure is given in Figure 2(a) following physical mechanisms. To make the dataset realistic, we introduce random noise when generating the two effects from the causes, representing measurement error. We further introduce 20% corrupted data whose shadow is randomly generated, mimicking some environmental disturbance. The sample sizes for the training, validation and test sets are all 6,724.¹ (¹The Pendulum dataset will be released as a causal disentanglement benchmark soon.)

The second one is a real human face dataset, CelebA (Liu et al., 2015), containing 202,599 images with 40 labelled binary attributes. Among them we consider two groups of causally correlated factors, shown in Figure 2(b,c). We believe these two datasets are diverse enough to assess our methods. All the details of the experimental setup and architectures are given in Appendix C.

[Figure 2: Underlying causal structures. (a) Pendulum: pendulum_angle, light_angle, shadow_length, shadow_position. (b) CelebA-Smile: smile, gender, cheekbone, mouth_open, narrow_eye, chubby. (c) CelebA-Attractive: young, gender, receding_hairline, make_up, chubby, eye_bag.]

5.1 CONTROLLABLE GENERATION

We first investigate the performance of our methods in disentanglement through applications in causal controllable generation (CG). Traditional CG methods mainly manipulate independent generative factors (Karras et al., 2019), while we consider the general case where the factors are causally correlated. With a learned SCM as the prior, we are able to generate images from any desired interventional distributions of the latent factors. For example, we can manipulate only the
We see that in each line when manipulating one latent dimension, the generated images from our model vary only in a single factor, indicating that our method can disentangle the causally correlated factors. It is worth pointing out that we are the first to achieve the disentanglement between the cause and its effects, while other methods tend to entangle them. In block (d), we show the results of intervention on the latent variables representing the cause factors, which clearly show that intervening on a cause variable changes its effect variables. Results in Appendix D further show that intervening on an effect node does not influence its cause.\nSince the underlying factors are causally correlated, all previous quantitative metrics for disentanglement no longer apply. We provide more qualitative traversals in Appendix D to show the overall performance. A quantitative metric for causal disentanglement is worth exploring in future work.\n5.2 DOWNSTREAM TASK\nThe previous section verifies the good disentanglement performance of DEAR. In this section, equipped with DEAR, we investigate and demonstrate the benefits of learned disentangled causal representations in sample efficiency and distributional robustness.\nWe state the downstream tasks. On CelebA, we consider the structure CelebA-Attractive in Figure 2(c). We artificially create a target label τ = 1 if young=1, gender=0, receding hairline=0, make up=1, chubby=0, eye bag=0, and τ = 0 otherwise, indicating one kind of attractiveness as a slim young woman with makeup and thick hair.2 On the pendulum dataset, we regard the label of data corruption as the target τ , i.e., τ = 1 if the data is corrupted and τ = 0 otherwise. We consider the downstream tasks of predicting the target label. In both cases, the factors of interests in Figure 2(a,c) are causally related to τ , which are the features that humans use to do the task. Hence it is conjectured that a disentangled representation of these causal factors tends to be more data efficient and invariant to distribution shifts.\n5.2.1 SAMPLE EFFICIENCY\nFor a BGM including the previous state-of-the-art supervised disentangling methods S-VAEs (Locatello et al., 2020b) and DEAR, we use the learned encoder to embed the training data to the latent space and train a MLP classifier on the representations to predict the target label. Without an encoder, one normally needs to train a convolutional neural network with raw images as the input. Here we adopt the ResNet50 as the baseline classifier which is the architecture of the BGM encoder. Since disentangling methods use additional supervision of the generative factors, we consider another baseline that is pretrained using multi-label prediction of the factors on the same training set.\nTo measure the sample efficiency, we use the statistical efficiency score defined as the average test accuracy based on 100 samples divided by the average accuracy based on 10,000/all samples, following Locatello et al. (2019). Table 1 presents the results, showing that DEAR owns the highest sample efficiency on both datasets. ResNet with raw data inputs has the lowest efficiency, although multi-label pretraining improves its performance to a limited extent. 
Returning to Table 1, S-VAEs have better efficiency than the ResNet baselines but lower accuracy when more training data are available, which we think is mainly because the independent prior conflicts with the supervised loss, as indicated in Proposition 1, making the learned representations entangled (as shown in the previous section) and less informative. Besides, we also investigate the performance of DEAR in the semi-supervised setting where only 10% of the labels are available. We find that DEAR with fewer labels has sample efficiency comparable to that in the fully supervised setting, with a sacrifice in accuracy that is still comparable to the other baselines with more supervision.\n2Note that the definition of attractiveness here only refers to one kind of attractiveness and has nothing to do with the linguistic definition of attractiveness.\n5.2.2 DISTRIBUTIONAL ROBUSTNESS\nWe manipulate the training data to inject spurious correlations between the target label and some spurious attributes. On CelebA, we regard mouth open as the spurious factor; on Pendulum, we choose background color ∈ {blue(+), white(−)}. We manipulate the training data such that the target label is more strongly correlated with the spurious attribute, i.e., the target label and the spurious attribute of 80% of the examples are both positive or both negative, while those of the remaining 20% of examples are opposite. For example, in the manipulated training set, 80% of the smiling examples in CelebA have an open mouth, and 80% of the corrupted examples in Pendulum are masked with a blue background. The test set, however, does not have these correlations, leading to a distribution shift.\nIntuitively, these spurious attributes are not causally related to the target label, but standard methods based on independent and identically distributed (IID) data, like empirical risk minimization (ERM), tend to exploit these easily learned spurious correlations in prediction, and hence face performance degradation when such correlations no longer exist at test time. In contrast, causal factors are regarded as invariant and thus robust under such shifts. The previous sections justify both theoretically and empirically that DEAR can learn disentangled causal representations. We then apply those representations by training a classifier upon them, which is conjectured to be invariant and robust. Baseline methods include ERM, multi-label ERM that predicts the target label together with all the factors considered in disentangling (to have the same amount of supervision), and S-VAEs, which cannot disentangle well in the causal case.\nTable 2 shows the average and worst-case (Sagawa et al., 2019) test accuracy to assess both overall classification performance and distributional robustness, where we group the test set according to the two binary labels, the target one and the spurious attribute, into four cases and regard the one with the worst accuracy as the worst case, which usually exhibits the opposite correlation to the training data. We see that the classifiers trained upon DEAR representations outperform the baselines in both metrics. In particular, when comparing the worst-case accuracy with the average one, we observe a slump from around 80% to around 60% for the other methods on CelebA, while DEAR suffers only an acceptably small decline.
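As a concrete reading of the worst-case metric, the following minimal sketch (our illustration; array names are placeholders) groups the test set by the binary target and spurious attribute and reports the lowest per-group accuracy:

```python
import numpy as np

def worst_group_accuracy(pred, target, spurious):
    """Group examples by (target, spurious attribute) into four cells and
    return the lowest per-group accuracy (in the style of Sagawa et al., 2019)."""
    accs = []
    for t in (0, 1):
        for s in (0, 1):
            mask = (target == t) & (spurious == s)
            if mask.any():
                accs.append(float((pred[mask] == target[mask]).mean()))
    return min(accs)
```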
The results in Table 2 support the above conjecture and demonstrate the benefits of causal disentanglement in distributional robustness.\n6 CONCLUSION\nThis paper showed that previous methods with the independent latent prior assumption fail to learn disentangled representations when the underlying factors of interest are causally correlated. We then proposed a new disentangled learning method called DEAR with theoretical guarantees. Extensive experiments demonstrated the effectiveness of DEAR in causal generation, and the benefits of the learned representations for downstream tasks.\nREFERENCES\nMartin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019.\nYoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013.\nYoshua Bengio, Tristan Deleu, Nasim Rahaman, Rosemary Ke, Sébastien Lachapelle, Olexa Bilaniuk, Anirudh Goyal, and Christopher Pal. A meta-transfer objective for learning to disentangle causal mechanisms. In ICLR, 2020.\nChristopher P. Burgess, Irina Higgins, Arka Pal, Loïc Matthey, Nicholas Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in beta-vae. NIPS Workshop of Learning Disentangled Features, 2017.\nTian Qi Chen, Xuechen Li, Roger B. Grosse, and David K. Duvenaud. Isolating sources of disentanglement in variational autoencoders. In NeurIPS, 2018.\nXi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in neural information processing systems, pp. 2172–2180, 2016.\nJeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. In ICLR, 2017.\nVincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martín Arjovsky, Olivier Mastropietro, and Aaron C. Courville. Adversarially learned inference. In ICLR, 2017.\nIan Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680, 2014.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.\nIrina Higgins, Loïc Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. In ICLR, 2017.\nTero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4401–4410, 2019.\nIlyes Khemakhem, Diederik Kingma, Ricardo Monti, and Aapo Hyvarinen. Variational autoencoders and nonlinear ica: A unifying framework. In International Conference on Artificial Intelligence and Statistics, pp. 2207–2217, 2020.\nHyunjik Kim and Andriy Mnih. Disentangling by factorising. In ICML, 2018.\nDiederik P Kingma and Max Welling. Auto-encoding variational bayes. In ICLR, 2014.\nMurat Kocaoglu, Christopher Snyder, Alexandros G Dimakis, and Sriram Vishwanath. Causalgan: Learning causal implicit generative models with adversarial training.
In ICLR, 2018.\nAbhishek Kumar, Prasanna Sattigeri, and Avinash Balakrishnan. Variational inference of disentangled latent concepts from unlabeled observations. In ICLR, 2018.\nFelix Leeb, Yashas Annadani, Stefan Bauer, and Bernhard Schölkopf. Structural autoencoders improve representations for generation and transfer. arXiv preprint arXiv:2006.07796, 2020.\nZinan Lin, Kiran K Thekumparampil, Giulia Fanti, and Sewoong Oh. Infogan-cr and modelcentrality: Self-supervised model training and selection for disentangling gans. In ICML, 2020.\nZiwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE international conference on computer vision, pp. 3730–3738, 2015.\nF. Locatello, S. Bauer, M. Lucic, G. Raetsch, S. Gelly, B. Schölkopf, and O. Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In Proceedings of the 36th International Conference on Machine Learning (ICML), volume 97 of Proceedings of Machine Learning Research, pp. 4114–4124. PMLR, June 2019. URL http://proceedings.mlr.press/v97/locatello19a.html.\nFrancesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, and Michael Tschannen. Weakly-supervised disentanglement without compromises. In ICML, 2020a.\nFrancesco Locatello, Michael Tschannen, Stefan Bauer, Gunnar Rätsch, Bernhard Schölkopf, and Olivier Bachem. Disentangling factors of variation using few labels. In ICLR, 2020b.\nLars Mescheder, Sebastian Nowozin, and Andreas Geiger. Adversarial variational bayes: Unifying variational autoencoders and generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2391–2400. JMLR.org, 2017.\nMehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.\nRaha Moraffah, Bahman Moraffah, Mansooreh Karami, Adrienne Raglin, and Huan Liu. Can: A causal adversarial network for learning observational and interventional distributions. arXiv preprint arXiv:2008.11376, 2020.\nIgnavier Ng, AmirEmad Ghassami, and Kun Zhang. On the role of sparsity and dag constraints for learning linear dags. arXiv preprint arXiv:2006.10201, 2020.\nJudea Pearl. Probabilistic reasoning in intelligent systems: networks of plausible inference. Elsevier, 2014.\nJudea Pearl et al. Models, reasoning and inference. Cambridge, UK: Cambridge University Press, 2000.\nAli Razavi, Aaron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with vq-vae-2. In Advances in Neural Information Processing Systems, pp. 14866–14876, 2019.\nShiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv preprint arXiv:1911.08731, 2019.\nBernhard Schölkopf. Causality for machine learning. arXiv preprint arXiv:1911.10500, 2019.\nBernhard Schölkopf, Dominik Janzing, Jonas Peters, Eleni Sgouritsa, Kun Zhang, and Joris Mooij. On causal and anticausal learning. In ICML, 2012.\nXinwei Shen, Tong Zhang, and Kani Chen. Bidirectional generative modeling using adversarial gradient estimation. arXiv preprint arXiv:2002.09161, 2020.\nRui Shu, Yining Chen, Abhishek Kumar, Stefano Ermon, and Ben Poole. Weakly supervised disentanglement with guarantees. In ICLR, 2020.\nMengyue Yang, Furui Liu, Zhitang Chen, Xinwei Shen, Jianye Hao, and Jun Wang.
Causalvae: Structured causal disentanglement in variational autoencoder. arXiv preprint arXiv:2004.08697, 2020.\nYue Yu, Jie Chen, Tian Gao, and Mo Yu. Dag-gnn: Dag structure learning with graph neural networks. In ICML, 2019.\nHan Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. Self-attention generative adversarial networks. In International Conference on Machine Learning, pp. 7354–7363. PMLR, 2019.\nJiji Zhang and Peter Spirtes. Intervention, determinism, and the causal minimality condition. Synthese, 182(3):335–347, 2011.\nShengjia Zhao, Jiaming Song, and Stefano Ermon. Learning hierarchical features from generative models. In ICML, 2017.\nXun Zheng, Bryon Aragam, Pradeep K Ravikumar, and Eric P Xing. Dags with no tears: Continuous optimization for structure learning. In Advances in Neural Information Processing Systems, pp. 9472–9483, 2018.\nAPPENDIX A PROOFS\nA.1 PROOF OF PROPOSITION 1\nProof. On one hand, by assumption the elements of ξ are connected by a causal graph whose adjacency matrix is not a zero matrix. Hence there exist i ≠ j such that ξi and ξj are not independent, indicating that the probability density of ξ cannot be factorized. Since E∗ is disentangled w.r.t. ξ, by Definition 1, for all i = 1, . . . , m there exists gi such that E∗i(x) = gi(ξi). This implies that the probability density of E∗(x) is not factorized.\nOn the other hand, notice that the distribution family of the latent prior is {pz : pz is factorized}. Hence the intersection of the marginal distribution families of z and E∗(x) is empty. Then the joint distribution families of (x, E∗(x)) and (G(z), z) also have an empty intersection.\nWe know that Lgen(E, G) = 0 implies qE(x, z) = pG(x, z), which contradicts the above. Therefore, we have a = minG Lgen(E∗, G) > 0.\nLet (E′, G′) be the solution of the optimization problem min{(E,G): Lgen=0} Lsup(E). Then we have L′ = L(E′, G′) = λb, and L∗ = L(E∗, G) ≥ a + λb∗ > λb∗ for any generator G. When b∗ ≥ b we directly have L′ < L∗. When b∗ < b and λ is not large enough, i.e., λ < a/(b − b∗), we also have L′ < L∗.\nA.2 PROOF OF THEOREM 1\nProof. Assume E is deterministic.\nOn one hand, for each i = 1, . . . , m, first consider the cross-entropy loss\nLsup,i(E) = E(x,y)[CE(Ei(x), yi)] = −∫ p(x) p(yi|x) (yi log σ(Ei(x)) + (1 − yi) log(1 − σ(Ei(x)))) dx dyi,\nwhere p(yi|x) is the probability mass function of the binary label yi given x, characterized by P(yi = 1|x) = E(yi|x) and P(yi = 0|x) = 1 − E(yi|x). Setting\n∂Lsup,i/∂σ(Ei(x)) = −∫ p(x) p(yi|x) ( yi/(σ(Ei(x))(1 − σ(Ei(x)))) − 1/(1 − σ(Ei(x))) ) dx dyi = 0,\nwe know that E∗i(x) = σ−1(E(yi|x)) = σ−1(ξi) minimizes Lsup,i.\nNext consider the L2 loss\nLsup,i(E) = E(x,y)[(Ei(x) − yi)2] = ∫ p(x) p(yi|x) ‖Ei(x) − yi‖2 dx dyi.\nSetting\n∂Lsup,i/∂Ei(x) = 2∫ p(x) p(yi|x) (Ei(x) − yi) dx dyi = 0,\nwe know that E∗i(x) = E(yi|x) = ξi minimizes Lsup,i in this case.\nOn the other hand, by Assumption 1 there exists β0 = (f0, h0, A0) such that pξ = pβ0. Then the distribution of E∗(x) is given by pβ∗ with β∗ = (g ◦ f0, h0, A0). Assumption 2 ensures that there is no β′ = (f′, h′, A′) such that A′ ≠ A0 but pβ′ = pβ∗. Let F∗ = Fβ∗. Further, due to the infinite capacity of G, the distribution family of pG,F∗(x, z) contains qE∗(x, z).
Then by minimizing the loss in (6) over G, we can find G∗ such that pG∗,F∗(x, z) matches qE∗(x, z) and thus Lgen(E∗, G∗, F∗) reaches 0.\nHence minimizing L = Lgen + λLsup, which is the DEAR formulation (6), leads to the solution with E∗i(x) = gi(ξi), where gi(ξi) = σ−1(ξi) if the CE loss is used and gi(ξi) = ξi if the L2 loss is used, together with the true binary adjacency matrix.\nFor a stochastic encoder, we establish the disentanglement of its deterministic part as above, and follow Definition 1 to obtain the desired result.\nA.3 PROOF OF LEMMA 1\nWe follow the same proof scheme as in Shen et al. (2020), where the only difference lies in the gradient w.r.t. the prior parameter β. To make this paper self-contained, we restate some proof steps here using our notation.\nLet ‖ · ‖ denote the vector 2-norm. For a scalar function h(x, y), let ∇xh(x, y) denote its gradient with respect to x. For a vector function g(x, y), let ∇xg(x, y) denote its Jacobian matrix with respect to x. Given a differentiable vector function g(x) : Rk → Rk, we use ∇ · g(x) to denote its divergence, defined as ∇ · g(x) := Σj=1..k ∂[g(x)]j / ∂[x]j, where [x]j denotes the j-th component of x. We know that ∫ ∇ · g(x) dx = 0 for every vector function g(x) such that g(∞) = 0. Given a matrix function w(x) = (w1(x), . . . , wl(x)) : Rk → Rk×l, where each wi(x), i = 1, . . . , l, is a k-dimensional differentiable vector function, its divergence is defined as ∇ · w(x) = (∇ · w1(x), . . . , ∇ · wl(x)). To prove Lemma 1, we need the following lemma, which specifies the dynamics of the generator joint distribution pg(x, z) and the encoder joint distribution pe(x, z), denoted by pθ(x, z) and qφ(x, z) here.\nLemma 2. Using the definitions and notation in Lemma 1, we have\n∇θpθ,β(x, z) = −∇xpθ,β(x, z)⊤gθ(x) − pθ,β(x, z) ∇ · gθ(x), (8)\n∇φqφ(x, z) = −∇zqφ(x, z)⊤eφ(z) − qφ(x, z) ∇ · eφ(z), (9)\n∇βpθ,β(x, z) = −∇xpθ,β(x, z)⊤f̃β(x) − ∇zpθ,β(x, z)⊤fβ(z) − pθ,β(x, z) ∇ · (f̃β(x), fβ(z)), (10)\nfor all data x and latent variables z, where gθ(Gθ(z, ε)) = ∇θGθ(z, ε), eφ(Eφ(x, ε)) = ∇φEφ(x, ε), fβ(Fβ(ε)) = ∇βFβ(ε), and f̃β(G(Fβ(ε))) = ∇βG(Fβ(ε)).\nProof of Lemma 2. We only prove (10), which is the part distinct from Shen et al. (2020).\nLet l be the dimension of the parameter β. To simplify notation, let Z = Fβ(ε), X = G(Z) ∈ Rd and Y = (X, Z) ∈ Rd+k, and let p be the probability density of Y. For each i = 1, . . . , l, let Δ = δei, where ei is an l-dimensional unit vector whose i-th component is one and all the others are zero, and δ is a small scalar. Let Z′ = Fβ+Δ(ε), X′ = G(Z′) and Y′ = (X′, Z′), so that Y′ is a random variable transformed from Y by Y′ = Y + (f̃β(X), fβ(Z)) Δ + o(δ), where (f̃β, fβ) denotes the stacked (d + k) × l matrix.\nLet p′ be the probability density of Y′. For an arbitrary y′ = (x′, z′) ∈ Rd+k, let y′ = y + (f̃β(x), fβ(z)) Δ + o(δ) with y = (x, z). Then we have\np′(y′) = p(y) |det(dy′/dy)|−1\n= p(y) |det(Id+k + ∇y[(f̃β(x), fβ(z)) Δ] + o(δ))|−1\n= p(y) (1 + Δ⊤ ∇ · (f̃β(x), fβ(z))⊤ + o(δ))−1\n= p(y) (1 − Δ⊤ ∇ · (f̃β(x), fβ(z))⊤ + o(δ))\n= p(y) − Δ⊤ p(y′) ∇ · (f̃β(x′), fβ(z′))⊤ + o(δ)\n= p(y′) − Δ⊤ (f̃β(x′), fβ(z′))⊤ (∇x′p(x′, z′), ∇z′p(x′, z′)) − Δ⊤ p(y′) ∇ · (f̃β(x′), fβ(z′))⊤ + o(δ).\nSince y′ is arbitrary, the above implies that\np′(x, z) = p(x, z) − Δ⊤ (f̃β(x), fβ(z))⊤ (∇xp(x, z), ∇zp(x, z)) − Δ⊤ p(x, z) ∇ · (f̃β(x), fβ(z))⊤ + o(δ)\nfor all x ∈ Rd, z ∈ Rk and i = 1, . . . , l, leading to (10) by taking δ → 0 and noting that p = pβ and p′ = pβ+Δ. Similarly we can obtain (8) and (9).\nProof of Lemma 1. Recall the objective DKL(q, p) = ∫ q(x, z) log(p(x, z)/q(x, z)) dx dz. Denote its integrand by ℓ(q, p).
Let ℓ′2(q, p) = ∂ℓ(q, p)/∂p. We have ∇βℓ(q(x, z), p(x, z)) = ℓ′2(q(x, z), p(x, z)) ∇βpθ,β(x, z), where ∇βpθ,β(x, z) is computed in Lemma 2. Besides, we have\n∇x · [ℓ′2(q, p) p(x, z) f̃β(x)] = ℓ′2(q, p) p(x, z) ∇ · f̃β(x) + ℓ′2(q, p) ∇xp(x, z) · f̃β(x) + ∇xℓ′2(q, p) · p(x, z) f̃β(x),\n∇z · [ℓ′2(q, p) p(x, z) fβ(z)] = ℓ′2(q, p) p(x, z) ∇ · fβ(z) + ℓ′2(q, p) ∇zp(x, z) · fβ(z) + ∇zℓ′2(q, p) · p(x, z) fβ(z).\nThus,\n∇βLgen = ∫ ∇βℓ(q(x, z), p(x, z)) dx dz = ∫ p(x, z) [∇xℓ′2(q, p)⊤ f̃β(x) + ∇zℓ′2(q, p)⊤ fβ(z)] dx dz,\nwhere we can compute ∇xℓ′2(q, p) = s(x, z) ∇xD(x, z) and ∇zℓ′2(q, p) = s(x, z) ∇zD(x, z). Hence\n∇βLgen = −E(x,z)∼p(x,z)[ s(x, z) (∇xD(x, z)⊤ f̃β(x) + ∇zD(x, z)⊤ fβ(z)) ] = −Eε[ s(x, z) (∇xD(x, z)⊤ ∇βG(Fβ(ε)) + ∇zD(x, z)⊤ ∇βFβ(ε)) | x=G(Fβ(ε)), z=Fβ(ε) ],\nwhere the second equality follows from reparametrization.\nLemma 3. For any a, b ∈ R (a < b), the set of continuous piece-wise linear functions P is dense in C[a, b] with the metric d(f, g) = supx∈[a,b] |f(x) − g(x)|. Here P is defined as P = ∪h∈{(b−a)/n | n∈N+} Ph, with\nPh = { k + Σi=0..(b−a)/h−1 wi (x − a − ih) 1(x ≥ a + ih) | wi, k ∈ R }.\nProof. Since [a, b] is compact, any function f ∈ C[a, b] is uniformly continuous, i.e., for every ε > 0 there exists δ > 0 such that |x − y| < δ implies |f(x) − f(y)| < ε/2. Let [a, b] = ∪n=0..N−1 [an, bn], and let gn(x) be the linear function determined by\nan = a + nh, bn = a + (n + 1)h, gn(an) = f(an), gn(bn) = f(bn), Nh = b − a.\nAssume that h < δ. For any x ∈ [an, bn], we have\n|f(x) − gn(x)| ≤ min{ |f(x) − f(an)| + |gn(x) − gn(an)|, |f(x) − f(bn)| + |gn(x) − gn(bn)| } ≤ |gn(an) − gn(bn)| + min{ |f(x) − f(an)|, |f(x) − f(bn)| } = |f(an) − f(bn)| + min{ |f(x) − f(an)|, |f(x) − f(bn)| } < ε.\nThus, supx∈[an,bn] |f(x) − gn(x)| < ε. We define g(x) = Σn=0..N−1 gn(x) 1(x ∈ [an, bn]), and it is clear that g ∈ Ph ⊂ P. Moreover, supx∈[a,b] |f(x) − g(x)| < ε. Therefore, P is dense in C[a, b] and Ph is ε-dense.\nAPPENDIX B LEARNING THE STRUCTURE\nAs mentioned in Section 4.1, our DEAR algorithm requires prior knowledge of a super-graph of the true graph over the underlying factors of interest. The experiments shown in the main text are all based on the assumption that the true graph is given. In this section we investigate the performance of the learned weighted adjacency matrix and present an ablation study on different extents of prior knowledge of the structure.\nB.1 GIVEN THE TRUE GRAPH\nFigure 5 shows the learned weighted adjacency matrices when the true binary structure is given, whose weights show sensible signs and scalings consistent with common knowledge. For example, smile and its effect mouth open are positively correlated. The corresponding element A03 of the weighted adjacency matrix in (a) turns out to be positive, which makes sense. Also, gender (the logit of male) and its effect make up are negatively correlated; accordingly, A13 in (b) turns out to be negative.\nB.2 GIVEN THE TRUE CAUSAL ORDERING\nConsider the Pendulum dataset, whose ground-truth structure is given in Figure 2(a). Consider the causal ordering pendulum angle, light angle, shadow position, shadow length, given which we start with a full graph whose elements are randomly initialized around 0, as shown in Figure 6(a). Figure 6 presents the adjacency matrices learned by DEAR at different training epochs, from which we see that it eventually obtains a learned structure that nearly coincides with the one learned given the true graph shown in Figure 5(c).
This experiment shows the potential of DEAR to incorporate structure learning methods to learn the latent causal structure from scratch, which will be explored in future research.\nAPPENDIX C IMPLEMENTATION DETAILS\nIn this section we state the details of the experimental setup and the network architectures used for all experiments.\nPreprocessing and hyperparameters. We pre-process the images by taking a center crop of 128 × 128 for CelebA and resizing all images in CelebA and Pendulum to the 64 × 64 resolution. We adopt Adam with β1 = 0, β2 = 0.999, and a learning rate of 1 × 10−4 for D, 5 × 10−5 for E, G and F, and 1 × 10−3 for the adjacency matrix A. We use a mini-batch size of 128. For adversarial training in Algorithm 1, we train D once on each mini-batch. The coefficient λ of the supervised regularizer is set to 5. We use the CE supervised loss for both CelebA, with binary observations of the underlying factors, and Pendulum, with bounded continuous observations. Note that the L2 loss works comparably to the CE loss on Pendulum. In downstream tasks, for BGMs with an encoder, we train a two-layer MLP classifier with 100 hidden nodes using Adam with a learning rate of 1 × 10−2 and a mini-batch size of 128. Models were trained for around 150 epochs on CelebA and 600 epochs on Pendulum on an NVIDIA RTX 2080 Ti.\nNetwork architectures. We follow the architectures used in Shen et al. (2020). Specifically, for such realistic data, we adopt the SAGAN (Zhang et al., 2019) architecture for D and G. The D network consists of three modules, as shown in Figure 7 and described in detail in Shen et al. (2020). Details for the networks G and Dx are given in Figure 7 and Table 3. The encoder architecture is ResNet50 (He et al., 2016) followed by a 4-layer MLP of size 1024.\nImplementation of the SCM. Recall the nonlinear SCM as the prior Z = f((I − A⊤)−1h(ε)) := Fβ(ε). We find Gaussians are expressive enough as unexplained noises, so we set h as the identity mapping. As mentioned in Section 4.1, we require the invertibility of f. We implement both linear and nonlinear versions. For a linear f, we use f(z) = Wz + b, where W and b are learnable weights and biases. Note that W is a diagonal matrix to model the element-wise transformation. Its inverse function can be easily computed by f−1(z) = W−1(z − b). For a nonlinear f, we use piece-wise linear functions defined by\nf(i)(z(i)) = w0(i) z(i) + Σt=1..Na wt(i) (z(i) − at) 1(z(i) ≥ at) + b(i),\nwhere ·(i) denotes the i-th dimension of a vector or vector-function, a0 < a1 < · · · < aNa are the points of division, and 1(·) is the indicator function. From the denseness shown in Lemma 3, the family of such piece-wise linear functions is expressive enough to model general element-wise nonlinear invertible transformations. (A minimal sketch of this piece-wise linear construction is given below.)\nExperimental details for baseline methods. We reproduce the S-VAEs, including S-VAE, S-β-VAE and S-TCVAE, using E and G with the same architecture as ours and adopt the same optimization algorithm for training. The coefficient for the independence regularizer is set to 4, since we notice that setting a larger independence regularizer hurts disentanglement in the correlated case. For the supervised regularizer, we use λ = 1000 for a balance of the generative model and supervision.
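As referenced above, the following is a minimal sketch (our illustration, not the authors' code) of the element-wise piece-wise linear f; the division points and initialization are placeholders, and invertibility additionally requires the cumulative slopes to remain positive.

```python
import torch
import torch.nn as nn

class PiecewiseLinear(nn.Module):
    """f(z)_i = w0_i * z_i + sum_t w_{t,i} * (z_i - a_t) * 1(z_i >= a_t) + b_i."""
    def __init__(self, dim, knots):
        super().__init__()
        self.register_buffer("a", torch.as_tensor(knots, dtype=torch.float32))  # (Na,)
        self.w0 = nn.Parameter(torch.ones(dim))
        self.w = nn.Parameter(0.01 * torch.randn(dim, len(knots)))
        self.b = nn.Parameter(torch.zeros(dim))

    def forward(self, z):                                   # z: (batch, dim)
        hinge = (z.unsqueeze(-1) - self.a).clamp(min=0.0)   # (batch, dim, Na)
        return self.w0 * z + (self.w * hinge).sum(-1) + self.b

# Prior sample: z = f((I - A^T)^{-1} eps), with A a (hypothetical) DAG weight matrix.
f = PiecewiseLinear(dim=4, knots=[-1.0, 0.0, 1.0])
A = torch.zeros(4, 4)
eps = torch.randn(8, 4)
z = f(torch.linalg.solve(torch.eye(4) - A.T, eps.T).T)
```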
The ERM ResNet is trained using the same optimizer with a learning rate of 1× 10−4.\nAPPENDIX D ADDITIONAL RESULTS OF CAUSAL CONTROLLABLE GENERATION\nIn this section we present more qualitative results of causal controllable generation on two datasets using DEAR and baseline methods, including S-VAEs (Locatello et al., 2020b) and CausalGAN (Kocaoglu et al., 2018). We consider three underlying structures on two datasets: Pendulum in Figure 2(a), CelebA-Smile in Figure 2(b), and CelebA-Attractive in Figure 2(c)." } ]
2020
null
SP:829267dac365b8cdd66c0188da58602135a0c4b9
[ "The paper defines simple differential operators at nodes in a graph (gradient, Laplacian) and uses them in the proposed graph convolutional layer. The claim is that simple operators will limit the representation power of the layer leading to better generalization. While this might be true at some level, it goes against the trends in deep learning which move away from such predefined constraints on the representation power of models. " ]
We present a novel graph convolutional layer that is fast, conceptually simple, and provides high accuracy with reduced overfitting. Based on pseudo-differential operators, our layer operates on graphs with relative position information available for each pair of connected nodes. We evaluate our method on a variety of supervised learning tasks, including superpixel image classification using the MNIST, CIFAR10, and CIFAR100 superpixel datasets, node correspondence using the FAUST dataset, and shape classification using the ModelNet10 dataset. The new layer outperforms multiple recent architectures on superpixel image classification tasks using the MNIST and CIFAR100 superpixel datasets and performs comparably with recent results on the CIFAR10 superpixel dataset. We measure test accuracy without bias to the test set by selecting the model with the best training accuracy. The new layer achieves a test error rate of 0.80% on the MNIST superpixel dataset, beating the closest reported rate of 0.95% by a relative margin of more than 15%. After dropping roughly 70% of the edge connections from the input by performing a Delaunay triangulation, our model still achieves a competitive error rate of 1.04%.
[]
[ { "authors": [ "James Atwood", "Don Towsley" ], "title": "Superpixel image classification with graph attention networks, 2020", "venue": "Diffusion-convolutional neural networks,", "year": 2015 }, { "authors": [ "Rianne van den Berg", "Thomas N Kipf", "Max Welling" ], "title": "Graph convolutional matrix completion", "venue": "arXiv preprint arXiv:1706.02263,", "year": 2017 }, { "authors": [ "Federica Bogo", "Javier Romero", "Matthew Loper", "Michael J Black" ], "title": "Faust: Dataset and evaluation for 3d mesh registration", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2014 }, { "authors": [ "Davide Boscaini", "Jonathan Masci", "Emanuele Rodol", "Michael M. Bronstein" ], "title": "Learning shape correspondence with anisotropic convolutional neural networks, 2016", "venue": null, "year": 2016 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral networks and locally connected networks on graphs", "venue": null, "year": 2013 }, { "authors": [ "A. Cheraghian", "L. Petersson" ], "title": "3dcapsule: Extending the capsule architecture to classify 3d point clouds", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2019 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Inderjit S Dhillon", "Yuqiang Guan", "Brian Kulis" ], "title": "Weighted graph cuts without eigenvectors a multilevel approach", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 1944 }, { "authors": [ "M. Dominguez", "R. Dhamdhere", "A. Petkar", "S. Jain", "S. Sah", "R. Ptucha" ], "title": "General-purpose deep point cloud feature extractor", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2018 }, { "authors": [ "Matthias Fey", "Jan E. Lenssen" ], "title": "Fast graph representation learning with PyTorch Geometric", "venue": "In ICLR Workshop on Representation Learning on Graphs and Manifolds,", "year": 2019 }, { "authors": [ "Matthias Fey", "Jan Eric Lenssen", "Frank Weichert", "Heinrich Mller" ], "title": "Splinecnn: Fast geometric deep learning with continuous b-spline kernels", "venue": null, "year": 2017 }, { "authors": [ "A. Garcia-Garcia", "F. Gomez-Donoso", "J. Garcia-Rodriguez", "S. Orts-Escolano", "M. Cazorla", "J. Azorin-Lopez" ], "title": "Pointnet: A 3d convolutional neural network for real-time object class recognition", "venue": "In 2016 International Joint Conference on Neural Networks (IJCNN),", "year": 2016 }, { "authors": [ "Justin Gilmer", "Samuel S. Schoenholz", "Patrick F. Riley", "Oriol Vinyals", "George E. 
Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "CoRR, abs/1704.01212,", "year": 2017 }, { "authors": [ "Shunwang Gong", "Lei Chen", "Michael Bronstein", "Stefanos Zafeiriou" ], "title": "Spiralnet++: A fast and highly efficient mesh convolution operator, 2019", "venue": null, "year": 2019 }, { "authors": [ "Lindsey Gray", "Thomas Klijnsma", "Shamik Ghosh" ], "title": "A dynamic reduction network for point clouds, 2020", "venue": null, "year": 2020 }, { "authors": [ "Pim de Haan", "Maurice Weiler", "Taco Cohen", "Max Welling" ], "title": "Gauge equivariant mesh cnns: Anisotropic convolutions on geometric graphs, 2020", "venue": null, "year": 2020 }, { "authors": [ "Mikael Henaff", "Joan Bruna", "Yann LeCun" ], "title": "Deep convolutional networks on graph-structured data", "venue": null, "year": 2015 }, { "authors": [ "D. Hong", "L. Gao", "J. Yao", "B. Zhang", "A. Plaza", "J. Chanussot" ], "title": "Graph convolutional networks for hyperspectral image classification", "venue": "IEEE Transactions on Geoscience and Remote Sensing,", "year": 2020 }, { "authors": [ "Chiyu Max Jiang", "Jingwei Huang", "Karthik Kashinath", "Prabhat", "Philip Marcus", "Matthias Niessner" ], "title": "Spherical CNNs on unstructured grids", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Asifullah Khan", "Anabia Sohail", "Umme Zahoora", "Aqsa Saeed Qureshi" ], "title": "A survey of the recent architectures of deep convolutional neural networks", "venue": "Artificial Intelligence Review,", "year": 2020 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks, 2016", "venue": null, "year": 2016 }, { "authors": [ "Boris Knyazev", "Xiao Lin", "Mohamed R Amer", "Graham W Taylor" ], "title": "Image classification with hierarchical multigraph networks", "venue": "arXiv preprint arXiv:1907.09000,", "year": 2019 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Ron Levie", "Federico Monti", "Xavier Bresson", "Michael M. Bronstein" ], "title": "Cayleynets: Graph convolutional neural networks with complex rational spectral filters, 2017", "venue": null, "year": 2017 }, { "authors": [ "Xinhai Liu", "Zhizhong Han", "Yu-Shen Liu", "Matthias Zwicker" ], "title": "Point2sequence: Learning the shape representation of 3d point clouds with an attention-based sequence to sequence network, 2018", "venue": null, "year": 2018 }, { "authors": [ "Claudio Mancinelli", "Marco Livesu", "Enrico Puppo" ], "title": "A comparison of methods for gradient field estimation on simplicial meshes", "venue": "Computers and Graphics,", "year": 2019 }, { "authors": [ "Jonathan Masci", "Davide Boscaini", "Michael Bronstein", "Pierre Vandergheynst" ], "title": "Geodesic convolutional neural networks on riemannian manifolds", "venue": "In Proceedings of the IEEE international conference on computer vision workshops,", "year": 2015 }, { "authors": [ "A. Micheli" ], "title": "Neural network for graphs: A contextual constructive approach", "venue": "IEEE Transactions on Neural Networks,", "year": 2009 }, { "authors": [ "Federico Monti", "Davide Boscaini", "Jonathan Masci", "Emanuele Rodola", "Jan Svoboda", "Michael M. 
Bronstein" ], "title": "Geometric deep learning on graphs and manifolds using mixture model cnns", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Mathias Niepert", "Mohamed Ahmed", "Konstantin Kutzkov" ], "title": "Learning convolutional neural networks for graphs, 2016", "venue": null, "year": 2016 }, { "authors": [ "Holger R. Roth", "Amal Farag", "Le Lu", "Evrim B. Turkbey", "Ronald M. Summers" ], "title": "Deep convolutional networks for pancreas segmentation in CT imaging", "venue": "Medical Imaging 2015: Image Processing,", "year": 2015 }, { "authors": [ "Martin Simonovsky", "Nikos Komodakis" ], "title": "Dynamic edge-conditioned filters in convolutional neural networks on graphs. CoRR, abs/1704.02901, 2017", "venue": null, "year": 2017 }, { "authors": [ "Zhiyu Sun", "Ethan Rooke", "Jerome Charton", "Yusen He", "Jia Lu", "Stephen Baek" ], "title": "Zernet: Convolutional neural networks on arbitrary surfaces via zernike local tangent space estimation, 2018", "venue": null, "year": 2018 }, { "authors": [ "John Tencer", "Kevin Potter" ], "title": "Enabling nonlinear manifold projection reduced-order models by extending convolutional neural networks to unstructured data", "venue": null, "year": 2020 }, { "authors": [ "Nitika Verma", "Edmond Boyer", "Jakob Verbeek" ], "title": "Feastnet: Feature-steered graph convolutions for 3d shape analysis, 2017", "venue": null, "year": 2017 }, { "authors": [ "Chu Wang", "Babak Samari", "Kaleem Siddiqi" ], "title": "Local spectral graph convolution for point set feature learning", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Convolutional neural networks have performed incredibly well on tasks such as image classification, segmentation, and object detection (Khan et al., 2020). While there have been diverse architectural design innovations leading to improved accuracies across these tasks, all of these tasks share the common property that they operate on structured Euclidean domain inputs. A growing body of research on how to transfer these successes into non-Euclidean domains, such as manifolds and graphs, has followed.\nWe focus on unstructured graphs which represent discretizations of an underlying metric space. These data types are ubiquitous in computational physics, faceted surface meshes, and (with superpixel conversion) images. Previous efforts to extend CNNs to this type of data have involved parameterized function approximations on localized neighborhoods, such as MoNet (Monti et al., 2017) and SplineCNN (Fey et al., 2017). These function approximations (Gaussian mixture models in the case of MoNet and B-spline kernels in the case of SpineCNN) are complex and relatively expensive to calculate compared to CNN kernels.\nInspired by earlier work in shape correspondence (Boscaini et al., 2016), image segmentation on the unit sphere (Jiang et al., 2019), and low-dimensional embeddings of computational physics data (Tencer & Potter, 2020) we seek to utilize parameterized differential operators (PDOs) to construct convolution kernels. In contrast to MoNet and SpineCNN, parameterized differential operators are cheap to compute and involve only elementary operations. Boscaini et al. (2016) used anisotropic diffusion kernels while Jiang et al. (2019) included gradient operators in addition to an isotropic diffusion operator. Tencer & Potter (2020) performed an ablation study of the differential operators used and demonstrated that the including the gradient operators is broadly beneficial, but that little is gained by including additional terms.\nPrior work (Jiang et al., 2019; Tencer & Potter, 2020) has used differential operators precomputed for specific meshes. This approach has two drawbacks: (1) precomputing operators is not practical\nfor datasets for which the connectivity graph varies between sample points, and (2) differential operators place restrictions on graph connectivity. Differential operators defined for mesh topologies rely on element connectivity information which is unavailable for more general graphs. Superpixel image datasets highlight both of these deficiencies. In contrast to these prior works, we do not precompute any operators and we do not directly use differential operators. Instead, we formulate pseudo-differential operators which are cheaply computed at run-time for a more general class of graphs.\nWhile our approach only applies to graphs with relative position information for each node, the set of graphs with the required positional information is large, encompassing nearly all physical systems as well as a significant number of other graphs, such as graph representations derived from image data.\nSince our method relies on computing approximate spatial derivatives of nodal features, it is also important that these nodal values represent a meaningfully changing field. This criteria is not necessarily met for the node correspondence task on the FAUST dataset or the shape classification task on the ModelNet10 dataset and a corresponding decrease in performance is observed. 
In contrast, nodal features are critical to superpixel image classification tasks, and our method is observed to perform well for these datasets.\nSuperpixel representations are popular for a wide range of tasks, particularly tasks in which large data quantities make the direct application of CNNs to the raw data impractical, such as hyperspectral imaging (Hong et al., 2020) and medical diagnostics (Roth et al., 2015). For these applications, the superpixel representation serves as a sort of context-aware lossy compression. Knyazev et al. (2019) compared GCNs applied to superpixel images to CNNs applied to low-resolution images with approximately the same information content. In those cases, the graph methods not only held their own, but pulled ahead of the CNN performance. While those datasets did not approach the limits of practical image size, they suggest that superpixel methods might handle high-resolution image data more efficiently, and they underscore the value of developing methods that perform well on superpixel datasets.\nOur method is especially well-suited for analyzing superpixel image representations in addition to being applicable to the datasets used by Jiang et al. (2019) and Tencer & Potter (2020) to demonstrate their PDO-based approaches. For regular meshes, such as the icosahedral spherical mesh used by Jiang et al. (2019), our pseudo-differential operators closely approximate the differential operators used in those works." }, { "heading": "1.1 OUR CONTRIBUTIONS", "text": "We created a novel layer architecture inspired by PDOs.\n• We improve upon the static matrix approach of Tencer & Potter (2020) with a dynamic method that enables support for variable graph forms and eliminates the need to precompute matrices.\n• Our method utilizes pseudo-differential operators in contrast to the differential operators used in prior works. Pseudo-differential operators are cheap to compute and are applicable to a broader class of graphs than differential operators.\n• Our novel mixing layer is conceptually simple and easy to code (integrating painlessly with existing graph libraries). (section 3.1)\n• The new approach remains accurate for both sparsely and densely connected graphs, including state-of-the-art results for the MNIST superpixel 75 dataset both with and without reduced edge connection input data. (section 4.1)\n• The new approach is faster than common approaches for equivalent numbers of features owing to the simpler mathematical functions involved. (section 4.1.2)" }, { "heading": "2 PROBLEM", "text": "Many real world datasets may be treated as attributed graphs with positional information provided for the nodes or relative positional information provided for the edges. In physics and engineering, high-fidelity simulation tools represent continuous spatial functions using unstructured spatial discretizations. In computer vision, triangle surface meshes and superpixel image representations are common. Tencer & Potter (2020) have recently used convolutional autoencoder architectures with PDOs to perform manifold learning on datasets from computational physics. Their approach required precomputing matrices corresponding to various differential operators on a sequence of progressively coarser meshes.
We sought to generalize their approach to support heterogeneous graphs without explicitly defined gradient operators and to avoid the need to precompute matrices or coarsened spatial discretizations.\nOur ideal solution would be fast and simple, scale well, make efficient use of information, and operate effectively on sparsely connected graphs. While there are a number of existing approaches that will work for arbitrary graph information, all have some form of limitation that makes them ill suited for our use cases. Whether it is ignoring graph connectivity information (Gray et al., 2020), making no use of position information (Kipf & Welling, 2016; Berg et al., 2017), or relying on complex formulations (Fey et al., 2017; Monti et al., 2017), each has some weakness that made its use less than ideal." }, { "heading": "3 METHOD", "text": "Our layer works by calculating several quantities that are analogous to the sets of differential operators used in Tencer & Potter (2020). For each node of a layer, each input channel has multiple values calculated (4 items for 2-dimensional and 5 for 3-dimensional graphs):\n• Value of the node in the prior layer (identity)\n• Average values at the 1-ring neighbors of the node\n• Average gradient components (i.e., ∂/∂x, ∂/∂y) of the value along each connected edge, weighted by inverse edge length\nThese items are then mixed by a neural network yielding the desired number of output channels." }, { "heading": "3.1 NOVEL LAYER", "text": "In contrast to previous methods utilizing PDOs, the method presented here generalizes to heterogeneous datasets, in which the number of nodes, the connectivity, and positions vary across samples. Additionally, by relaxing the definitions of the gradient operators, our implementation is applicable to overconnected graphs (such as superpixel image representations) rather than only meshes suitable for common PDE solution methods (finite element, finite volume, etc.). A consequence of this generalization is slightly more computational overhead from dynamically generating the required operators on the fly rather than precomputing them offline. However, as seen in Figure 1, this overhead does not result in slower training time relative to other methods.\nThe key components of Tencer & Potter (2020)'s approach were the sparse matrix mixing, pooling, and interpolation. In our implementation of the mixing layer, we used pytorch-geometric's MessagePassing base class (Fey & Lenssen, 2019). For pooling layers, we evaluated two of pytorch-geometric's clustering method implementations: voxel_grid and graclus (Dhillon et al., 2007; Simonovsky & Komodakis, 2017). The voxel_grid implementation gave the best accuracy on the MNIST superpixel dataset.\nThe MessagePassing class operates by calculating new values for each node i on layer (k) with information from its prior node value xi(k−1), prior values at connected nodes xj(k−1), and/or the edge attributes ei,j:\nxi(k) = γ(k)( xi(k−1), ⊕j∈N(i) φ(k)( xi(k−1), xj(k−1), ei,j ) )   (1)\nHere ⊕j∈N(i) is an aggregation method (such as mean) that aggregates φ(k) for each node j connected to node i in layer (k−1) into a specific number of values independent of the number of connections, and φ(k) and γ(k) are arbitrary functions.\nConvenient and physically motivated choices for φ(k) are derived from mesh differential operators, e.g., I, ∇, or ∆. The identity operator I simply passes the value forward. For the others, let f : Ω → R be a smooth scalar function for which only the values
f1, . . . , fn at the nodes are known, and fi corresponds to the value of f at node i. The Laplacian operator ∆ may be expressed as the difference between fi and the average value of fj, j ∈ N(i). For triangle meshes, the nodal gradient operator ∇ is often expressed as the average gradient in the adjacent facets, which is equivalent to a weighted sum over the connected edge gradients\n∇fi ≈ Σvj∈N(i) wi,j ∇f(ei,j),   (2)\nwith ∇f(ei,j) = ((fi − fj)/ri,j) êi,j. We denote the Euclidean distance between the two nodes as ri,j, and the unit vector oriented along the edge ei,j as êi,j. For a 2D mesh without intersecting edges, wi,j = Ai,j / Σj∈N(i) Ai,j, where Ai,j is the total area of the 2 facets connected to ei,j (Mancinelli et al., 2019). For an overconnected graph (like MNIST superpixel), these facet areas are not defined.\nIf we weight the contributions of each edge equally, our choice for φ(k) becomes\n( (xi(k−1) − xj(k−1)) rx,i,j ) / ri,j²,  ( (xi(k−1) − xj(k−1)) ry,i,j ) / ri,j²,  xj(k−1),   (3)\nwhere rx,i,j and ry,i,j are the differences in positions of nodes i and j in the x- and y-dimensions, respectively. We can easily extend these to 3 dimensions (which was required for the FAUST dataset) by adding a z term, in which case the parameter cost per channel goes up by only 25%. φ(k) returns a stack of these values for every input channel. The first 2 terms of equation 3 are the x- and y-components of the gradient. The 3rd term is the average of the neighboring nodes, which, when combined with the identity term xi(k−1) (which is concatenated after aggregation), results in the Laplacian. Given that the next step in the layer is to mix these components via a neural network (our γ(k) function), the Laplacian can be reconstructed by blending these terms if desirable.\nNote that none of these values require complex calculations, which contributes to the layer's superior computational performance (section 4.1.2). We hypothesize that by limiting the representational space for each node, such that it knows only about itself and the local gradient, the network is forced to find simpler representations that are more likely to generalize." }, { "heading": "4 EXPERIMENTS", "text": "We tested against 3 variants of MNIST superpixel 75, both CIFAR10 and CIFAR100 as superpixels, FAUST, and ModelNet10 (Monti et al., 2017; Krizhevsky et al., 2009; Bogo et al., 2014; Wu et al., 2015). For the remainder of this paper, MNIST superpixel 75 will be referred to as MNIST. In the absence of validation sets, we selected test results based on the best training accuracy in order to avoid biasing our model selection to the test set. For our models this is often comparable to, but not necessarily, the overall best test accuracy (Figure 1). While we considered using k-fold cross validation, we found that our method was effective in picking out generally applicable models without biasing to the test set, and it was both conceptually simpler and easier to implement.\nSplineCNN was taken from the examples in pytorch-geometric (Fey & Lenssen, 2019) and utilized early stopping to avoid overfitting. For fair comparison, its best overall test accuracy was chosen. For MNIST and CIFAR superpixel, we applied Gray et al. (2020) and Knyazev et al. (2019). We reported results using the test accuracies for the best training accuracy and note any deviations.\nThe graph attention convolution layer (GATConv) (Veličković et al., 2018) was taken from pytorch-geometric's implementation.
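Before turning to the individual benchmarks, the following minimal sketch (our reading of Section 3.1, not the authors' released code) shows how the mixing layer of equations (1) and (3) can be expressed with pytorch-geometric's MessagePassing class; all names and sizes are placeholders.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import MessagePassing

class PseudoDiffConv(MessagePassing):
    """Per edge, emit the gradient components of eq. (3) plus the neighbor value,
    aggregate by mean, and mix with the identity term via a linear layer."""
    def __init__(self, in_channels, out_channels, dim=2):
        super().__init__(aggr="mean")
        # identity + neighbor average + `dim` gradient components per channel
        self.mix = nn.Linear((2 + dim) * in_channels, out_channels)
        self.dim = dim

    def forward(self, x, edge_index, pos):
        out = self.propagate(edge_index, x=x, pos=pos)   # aggregated messages
        return self.mix(torch.cat([x, out], dim=-1))     # concat identity term

    def message(self, x_i, x_j, pos_i, pos_j):
        r = pos_i - pos_j                                # relative positions (E, dim)
        r2 = (r * r).sum(dim=-1, keepdim=True).clamp(min=1e-12)
        diff = x_i - x_j                                 # (E, C)
        grads = [diff * (r[:, d:d + 1] / r2) for d in range(self.dim)]
        return torch.cat(grads + [x_j], dim=-1)          # eq. (3) terms per edge

layer = PseudoDiffConv(in_channels=1, out_channels=32, dim=2)
```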
GATConv networks have recently been applied to the MNIST and CIFAR-10 superpixel benchmarks (Avelar et al., 2020). For both MNIST and CIFAR-10, the published GATConv results are inferior to the other comparison models. We tested a variety of head and channel hyperparameters against a version of the SplineCNN model with the spline convolutions replaced with GATConv, and we replicated their results for MNIST.\nThere is some variation in results across our model initializations, but the trend of high test performance following high training set accuracy was robust. This allowed a bias-free way of selecting performant models from multiple runs of the same hyperparameters.\nFor all training runs, our models had training accuracies on par with test accuracies. Even MNIST had training results at or below test accuracy with appropriate levels of edge dropout (section 4.1.1).\nAll models were trained with Adam optimization, using a learning rate of 0.0002 and cross entropy loss for our architecture and the published options for the comparison models unless otherwise noted. Learning rates were reduced by a factor of 10 whenever the training loss plateaued for 5 epochs, down to a limit of 1/1000th of the original learning rate." }, { "heading": "4.1 MNIST SUPERPIXEL 75", "text": "We tested against 3 variants of the MNIST dataset (Monti et al., 2017). The first variant used all of the edges present in the original graph, which we call the raw dataset. For the second, we used a variant of the hierarchical set used in Knyazev et al. (2019), with sets of approximately 75, 21, and 7 superpixels forming the feature hierarchy for each sample. The number of edges for each node was limited to its 32 closest by Euclidean distance, as we are not focused on optimizing performance for minimal input sizes. Lastly, our pruned version was obtained by discarding all of the edge data and applying Delaunay triangulation to the nodes, which reduced the number of edge connections by around 70% on average.\nOur architecture remained fairly consistent throughout testing. The model was composed of between 3 and 7 layers of downsampling modules followed by 2 fully connected layers with an exponential linear unit between the two. In the end, we selected 7 of our layers for all main results because this provided the best accuracy. Results for shallower networks are in Appendix A. Each downsampling module consisted of our introduced graph convolutional layer followed by a voxel grid pooling operation outputting a reduced set of nodes with a selectable count of features. Pooling is done with the voxel size halved at each step. The initial voxel size is set such that after the 7 downsampling operations, a 3 × 3 set of nodes is present prior to flattening for the fully connected layers. No normalization or regularization methods (e.g., batch normalization, dropout, edge dropout) were included in the downsampling layers. Standard dropout of 0.5 was applied prior to all the fully connected layers. When used, we applied edge dropout to the data prior to model input.\nOur architecture with the best performance on all the MNIST superpixel variants used 7 downsampling layers starting with 128 features, doubling each layer, until a maximum width of 512 features. Edges were dropped out on the input to prevent overfitting to the training set (section 4.1.1).\nOur architecture achieved a state-of-the-art test error of 0.80% against the raw MNIST dataset using our best training accuracy as a selector among 32 runs. The average results for ours and several competitors are shown in Table 1.
Our architecture achieved these results using an edge dropout rate of 0.45 and a learning rate of 0.002, although runs with edge dropout rates between 0.3 and 0.55 were competitive with or beat the state of the art in a large number of the training runs (section 4.1.1). This compares to the prior best reported value of 0.95% from Gray et al. (2020). We were able to reproduce their result, but only by selecting the best overall test accuracy out of 5 runs. For the pruned variant, when choosing the model based on best training accuracy, our model's test error outperforms results from Gray et al. (2020) with an error rate of 1.04%.\nFor the hierarchical variant, we note a slight drop in accuracy relative to the raw dataset. The hierarchical dataset adds additional nodes for various scales of abstraction (parent, grandparent, etc.), with each level being progressively coarser. Knyazev et al. (2019) explicitly treats each of these levels differently, recognizing whether an edge connects a child and parent, siblings, etc., as part of its multigraph convolution approach. None of the other methods differentiate between these edge relationships. In our case, we hypothesize that the extra information can cause issues with the Laplacian term, as it does not have a distance weighting. The extra nodes carry reduced-quality information, which we believe can confuse the networks when the connection type is ignored. For Fey et al. (2017), the extra information causes extreme overfitting to the training set." }, { "heading": "4.1.1 EDGE DROPOUT", "text": "To understand how reducing edge connectivity through pruning impacted accuracy, we trained against the raw MNIST dataset with varied rates of edge dropout applied to the input data. As shown in Figure 2a, our method beats the comparison models on the raw dataset for a range of rates.\nAt dropout rates above 0.6, orphaned nodes become much more likely and are a significant portion of the nodes by 0.8 dropout. These orphaned nodes negatively impact our performance, as the only data left to pass forward at each orphaned node is the identity function.\n1 Test accuracy selected from best training epoch. 2 Best overall test accuracy selected.\nGiven that position information is required by our method, an appropriate and effective edge network can be acquired using Delaunay triangulation for 2D graphs. Pruning the MNIST graphs in this way results in approximately a 70% reduction in edges. Using the pruned MNIST dataset, our code performs competitively with the prior reported state-of-the-art test accuracy (against models trained and evaluated using the raw superpixel dataset). When comparing test accuracy from the model selected by training accuracy, we outperform other models trained and tested using the raw dataset. We achieved an error rate of 1.04% using a 0.1 edge dropout applied on the pruned input graph, as shown in Table 1. Because of this, the performance degradation for overly sparse graphs (high edge dropout rates) should not be a practical limitation for our method.\nModerate levels of input edge dropout seem to provide a data augmentation effect for the raw MNIST dataset, leading to greater test accuracy and less overfitting (our training accuracies remain comparable to test when sufficient edge dropout is used). In all cases, our architectures performed better with some level of input edge dropout than with the original graph. In addition, reduced edge counts have a beneficial impact on training times (section 4.1.2).\nGray et al.
(2020) uses a novel method that ignores incoming edge information. Because of this, its accuracy is the same for all levels of edge dropout and pruning (within its normal variance). While this eliminates any accuracy penalty for overly sparse graphs, it also eliminates the data augmentation and performance benefits of input edge dropout." }, { "heading": "4.1.2 PERFORMANCE", "text": "Our network is faster per layer in terms of per-epoch training time. After adding layers to create a deeper, wider, and more performant network, it remains competitive in training time and superior when looking at test accuracy achieved per unit of training time, as shown in Figure 1. Surprisingly, adding extra width to the network did not worsen convergence time until a slight dip was observed going from 256 to 512 features wide: while each epoch takes longer, the network converges in fewer epochs.\nWe also show the impact of various hyperparameters on our network performance in Figure 2b. Of note, our pruned dataset running with 0.05 edge dropout has similar performance to the 0.8 edge dropout without pruning. The pruned dataset has around 70–75% fewer edge connections than the original, which accounts for the increase in speed." }, { "heading": "4.2 CIFAR100 SUPERPIXELS", "text": "We also tested against CIFAR10 and CIFAR100 (Krizhevsky et al., 2009) converted to superpixels. The CIFAR100 results are shown in Table 2a, while the CIFAR10 results are available in Appendix A. We achieved a best accuracy of 40.71% on the raw variant and 41.41% on the hierarchical variant.\nThe CIFAR100 superpixel dataset represents a significantly more challenging learning task than either standard (non-superpixel) CIFAR100 or MNIST. CIFAR100 is a set of 32 × 32 color images across 100 categories. Each category has 600 images, broken into 500 for training and 100 for test. The superpixel and hierarchical variants are generated by applying a SLIC transformation as described in Knyazev et al. (2019). Each image was constructed with approximately 150 superpixels, with node edges restricted to the 32 nearest neighbors by Euclidean distance. The nodes were constructed from levels of approximately 150, 75, 21, and 7 superpixels in the same manner as the hierarchical MNIST dataset (section 4.1). The CIFAR10 and CIFAR100 superpixel experiments used the same architecture as the MNIST experiments, with the exception of additional input channels for color superpixels and output channels for the larger number of classes in CIFAR100." }, { "heading": "4.3 FAUST", "text": "We also tested our method on a shape correspondence task using the FAUST (Bogo et al., 2014) dataset, which contains 100 meshes with 6,890 nodes each, depicting 10 scanned human bodies in 10 different poses. Each node corresponds to a particular part of each body, and the task is to identify which node corresponds to which body part. We used the standard 80/20 training/test split.\nA modified version of the architecture used in the MNIST experiments was used, with 3-D gradients, 8 layers, 16 initial features doubling each layer to a maximum of 128, and ending with 2 fully connected layers (dropout applied before each). Scaled exponential linear units were used as the activation function between layers. No flattening or pooling operations were used. The final output is a softmax with 6,890 channels per node. Training used a batch size of 4, dropout of 0.3, a learning rate of 0.01, and cross entropy loss.
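As a concrete illustration of the input-graph preprocessing used throughout Section 4.1 (Delaunay-based pruning and input edge dropout), the following is a minimal sketch; the function names are ours, not the authors', and it assumes a PyTorch Geometric version that provides dropout_adj.

```python
import torch
from scipy.spatial import Delaunay
from torch_geometric.utils import dropout_adj

def delaunay_edges(pos):
    """Discard the original edges and rebuild them from a Delaunay
    triangulation of the 2D node positions."""
    tri = Delaunay(pos.numpy())
    edges = set()
    for simplex in tri.simplices:                 # each triangle contributes 3 edges
        for a, b in ((0, 1), (1, 2), (0, 2)):
            i, j = int(simplex[a]), int(simplex[b])
            edges.add((i, j)); edges.add((j, i))  # keep both directions
    return torch.tensor(sorted(edges), dtype=torch.long).t()

def drop_input_edges(edge_index, p):
    """Randomly remove a fraction p of the edges from the input graph."""
    edge_index, _ = dropout_adj(edge_index, p=p)
    return edge_index
```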
For this task, input edge dropout caused a significant performance drop and was not used. Although no extensive hyperparameter tuning was performed, we achieved accuracy comparable with recent results, at 99.20%, as shown in Table 2b." }, { "heading": "4.4 MODELNET", "text": "We also test our method on the shape classification task using the ModelNet10 (Wu et al., 2015) dataset, which contains 4,899 3D CAD models of objects from 10 categories. Our method performs worse on this task, achieving a middling classification accuracy of 88.6% on the test set (Table 5; table footnote 3: test accuracy selected from best training epoch). We attribute this poor performance to two factors. First, the ModelNet10 dataset does not include nodal features. We attempted using the node positions and normal vectors as input features, but suspect that these were insufficient. Second, and perhaps more significantly, many of the meshes in the dataset contain large local variations in edge length. Consequently, our approximate derivatives computed via pseudo-differential operators devalue contributions from distant nodes, creating information bottlenecks in the network. This is, perhaps, not an insurmountable obstacle, but resolving these issues is beyond the scope of this work." }, { "heading": "5 RELATED WORK", "text": "Early approaches showed that CNNs could be used on non-Euclidean domains by introducing new intrinsic convolutional methods (Masci et al., 2015; Boscaini et al., 2016) that operate on input manifolds. More recent approaches like Monti et al. (2017) are capable of performing well on both manifolds and general graphs as input. Monti achieves this by creating pseudo-coordinates for either the vertices of a graph or points on a manifold, and then learning a kernel in that space.\nGraph convolutional neural networks form the basis for other approaches that have shown great results (Zhang et al., 2019). The literature has been split among methods based on spectral graph theory (Bruna et al., 2013; Henaff et al., 2015; Defferrard et al., 2016; Kipf & Welling, 2016; Levie et al., 2017; Monti et al., 2017; Wang et al., 2018; Tencer & Potter, 2020) and methods that operate with spatial filters (Micheli, 2009; Atwood & Towsley, 2015; Niepert et al., 2016; Fey et al., 2017; Gilmer et al., 2017; Gray et al., 2020; Veličković et al., 2018). Our method falls into the latter category of spatial approaches." }, { "heading": "6 CONCLUSION", "text": "We introduced a simple graph convolutional layer that outperformed every published result we are aware of for superpixel image classification on the MNIST and CIFAR100 superpixel datasets. We demonstrated faster performance and reduced overfitting tendencies with the MNIST dataset as well. In addition, we tested our new layer on the CIFAR10 superpixel image classification, FAUST shape correspondence, and ModelNet10 shape classification datasets. On CIFAR10 superpixel and FAUST we achieved accuracies comparable to recent results without significant hyperparameter tuning. However, with ModelNet10 our layer only demonstrated moderate performance.\nInput edge dropout often improved our accuracy, except at extreme values (section 4.1.1). Edge dropout provides a significant data augmentation effect even at moderate levels, as the network sees a combinatoric scale effect (relative to local node connections). For sparse graphs, this had a minimal impact but was significant for highly connected graphs.
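As a sketch of how this augmentation can be applied in practice (our own minimal utility, not code from the paper; it assumes a (2, E) COO edge list with each undirected edge stored once), input edge dropout reduces to random masking, and the combinatoric effect can be sanity-checked with `math.comb`:

```python
import math
import torch

def drop_edges(edge_index: torch.Tensor, p: float, training: bool = True) -> torch.Tensor:
    """Randomly remove a fraction p of the edges in a (2, E) COO edge list.
    Re-symmetrize afterwards if the downstream layer expects both directions."""
    if not training or p <= 0.0:
        return edge_index
    keep = torch.rand(edge_index.size(1)) >= p
    return edge_index[:, keep]

# Combinatoric augmentation intuition: dropping k of a node's d incident edges
# can be done in comb(d, k) distinct ways, each a different local neighborhood.
print(math.comb(10, 2))  # -> 45
```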
A node with 10 connections and 2 of them dropped has \binom{10}{2} = 45 different combinations. In reality, the stochastic nature of the dropout process means that the realized drops will not be limited to the \binom{10}{2} patterns in this case, but will span some combination of the various possibilities (3–4 orders of magnitude seems plausible for this scenario).\nThe performance drop witnessed for hierarchical data versus the raw data suggests another area for research. Our current implementation did not take advantage of the additional edge information available within the hierarchical dataset. This resulted in a small but meaningful reduction in accuracy on MNIST. The additional information introduced by Knyazev et al. (2019) seems particularly useful for large image cases where it would be advantageous to inform the network of the broader region. Our convolutional layer could be modified to incorporate this information.\nWhile we have performed some hyperparameter tuning, there is still a large degree of unexplored space. We believe that even for the nearly saturated MNIST dataset, there are still gains to be acquired (we had at least one approximately 10% reduction in relative error late in testing). We will look to follow up on these questions in future work." }, { "heading": "A APPENDIX", "text": "We show error rates for smaller-depth networks of our model against MNIST in Table 3. Results for layer depths 3 and 4 used considerably narrower models than the 7-layer version (initial width of 8). At 4 layers deep, our network was considerably faster per epoch than the 2- or 3-layer-deep SplineCNN (Fey et al., 2017) or Gray et al. (2020).\nAfter further experiments showed the clear win for deeper networks, we did not go back and attempt to train wide, shallow networks. It’s quite possible that wider, shallower networks would be able to perform comparably with recent results when properly tuned.\nThe CIFAR10 superpixel classification results are shown in Table 4. Of note, our method’s training accuracy stays considerably closer to test results – which is consistent with our experience on the other datasets. Training accuracies remaining at (or even below) test accuracies was common for our models when sufficient edge dropout was used. The > 10% difference here is considerably out of the norm when compared to MNIST.\nIn contrast to MNIST, the hierarchical variant for CIFAR10 offered a considerable performance improvement for our network. Gray et al. (2020) manages to beat our model’s accuracy on the raw CIFAR10 superpixel dataset, but we win out on the hierarchical variant. Note that we had not adapted the network to properly use the hierarchical information, so it is particularly surprising to see this result.\nFor the ModelNet10 results, we used a point cloud representation. Compared to other methods which used the same input data type, our method is competitive, but not exceptional. Minimal hyperparameter tuning was performed, so the results should be considered a lower bound on possible performance for this dataset." } ]
2020
null
SP:31ecf3efa2c7a0276d2f8fc761d358cfeed6e98d
[ "This paper proposes to couple a GAN, an inverse graphics network, and a differentiable renderer. The authors base their work on StyleGAN, and use the observation that a specific part of the latent code corresponds to camera view-point to rapidly annotate a large amount of synthetic images with approximate camera pose. They then use these images and rough annotations to train the inverse graphics network to provide 3D and texture data. The differentiable renderer is used to synthesize 2D images from 3D, which can be compared to the input for consistency. In a second step, the authors use the inferred 3D data to disentangle the latent space of StyleGAN to allow to use it as a controllable renderer." ]
Differentiable rendering has paved the way to training neural networks to perform “inverse graphics” tasks such as predicting 3D geometry from monocular photographs. To train high-performing models, most of the current approaches rely on multi-view imagery, which is not readily available in practice. Recent Generative Adversarial Networks (GANs) that synthesize images, in contrast, seem to acquire 3D knowledge implicitly during training: object viewpoints can be manipulated by simply manipulating the latent codes. However, these latent codes often lack further physical interpretation and thus GANs cannot easily be inverted to perform explicit 3D reasoning. In this paper, we aim to extract and disentangle 3D knowledge learned by generative models by utilizing differentiable renderers. Key to our approach is to exploit GANs as a multi-view data generator to train an inverse graphics network using an off-the-shelf differentiable renderer, and the trained inverse graphics network as a teacher to disentangle the GAN’s latent code into interpretable 3D properties. The entire architecture is trained iteratively using cycle consistency losses. We show that our approach significantly outperforms state-of-the-art inverse graphics networks trained on existing datasets, both quantitatively and via user studies. We further showcase the disentangled GAN as a controllable 3D “neural renderer”, complementing traditional graphics renderers.
[ { "affiliations": [], "name": "DIFFERENTIABLE RENDERING" }, { "affiliations": [], "name": "NEURAL RENDERING" }, { "affiliations": [], "name": "Yuxuan Zhang" }, { "affiliations": [], "name": "Wenzheng Chen" }, { "affiliations": [], "name": "Huan Ling" }, { "affiliations": [], "name": "Jun Gao" }, { "affiliations": [], "name": "Yinan Zhang" }, { "affiliations": [], "name": "Antonio Torralba" }, { "affiliations": [], "name": "Sanja Fidler" } ]
[ { "authors": [ "Rameen Abdal", "Yipeng Qin", "Peter Wonka" ], "title": "Image2stylegan: How to embed images into the stylegan latent space", "venue": "CoRR, abs/1904.03189,", "year": 2019 }, { "authors": [ "Angel X Chang", "Thomas Funkhouser", "Leonidas Guibas", "Pat Hanrahan", "Qixing Huang", "Zimo Li", "Silvio Savarese", "Manolis Savva", "Shuran Song", "Hao Su" ], "title": "Shapenet: An information-rich 3d model repository", "venue": "arXiv preprint arXiv:1512.03012,", "year": 2015 }, { "authors": [ "Wenzheng Chen", "Jun Gao", "Huan Ling", "Edward Smith", "Jaakko Lehtinen", "Alec Jacobson", "Sanja Fidler" ], "title": "Learning to predict 3d objects with an interpolation-based differentiable renderer", "venue": "In Advances In Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "Infogan: Interpretable representation learning by information maximizing generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Christopher B Choy", "Danfei Xu", "JunYoung Gwak", "Kevin Chen", "Silvio Savarese" ], "title": "3d-r2n2: A unified approach for single and multi-view 3d object reconstruction", "venue": null, "year": 2016 }, { "authors": [ "Alexey Dosovitskiy", "Jost Tobias Springenberg", "Maxim Tatarchenko", "Thomas Brox" ], "title": "Learning to generate chairs, tables and cars with convolutional networks", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2016 }, { "authors": [ "Jun Gao", "Wenzheng Chen", "Tommy Xiang", "Clement Fuji Tsang", "Alec Jacobson", "Morgan McGuire", "Sanja Fidler" ], "title": "Learning deformable tetrahedral meshes for 3d reconstruction", "venue": "In Advances In Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Shubham Goel", "Angjoo Kanazawa", "Jitendra Malik" ], "title": "Shape and viewpoints without keypoints", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "Advances in Neural Information Processing Systems", "year": 2014 }, { "authors": [ "Thibault Groueix", "Matthew Fisher", "Vladimir G. Kim", "Bryan Russell", "Mathieu Aubry" ], "title": "AtlasNet: A PapierMâché Approach to Learning 3D Surface Generation", "venue": "In Proceedings IEEE Conf. 
on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Erik Härkönen", "Aaron Hertzmann", "Jaakko Lehtinen", "Sylvain Paris" ], "title": "Ganspace: Discovering interpretable gan controls", "venue": "arXiv preprint arXiv:2004.02546,", "year": 2020 }, { "authors": [ "Paul Henderson", "Vittorio Ferrari" ], "title": "Learning to generate and reconstruct 3d meshes with only 2d supervision", "venue": "arXiv preprint arXiv:1807.09259,", "year": 2018 }, { "authors": [ "Krishna Murthy J", "Edward Smith", "Jean-Francois Lafleche", "Clement Fuji Tsang", "Artem Rozantsev", "Wenzheng Chen", "Tommy Xiang", "Rev Lebaredian", "Sanja Fidler" ], "title": "Kaolin: A pytorch library for accelerating 3d deep learning research", "venue": null, "year": 1911 }, { "authors": [ "Angjoo Kanazawa", "Shubham Tulsiani", "Alexei A Efros", "Jitendra Malik" ], "title": "Learning category-specific mesh reconstruction from image collections", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Levent Karacan", "Zeynep Akata", "Aykut Erdem", "Erkut Erdem" ], "title": "Learning to generate images of outdoor scenes from attributes and semantic layouts", "venue": "arXiv preprint arXiv:1612.00215,", "year": 2016 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Tero Karras", "Samuli Laine", "Miika Aittala", "Janne Hellsten", "Jaakko Lehtinen", "Timo Aila" ], "title": "Analyzing and improving the image quality of StyleGAN", "venue": "CoRR, abs/1912.04958,", "year": 2019 }, { "authors": [ "Hiroharu Kato", "Tatsuya Harada" ], "title": "Self-supervised learning of 3d objects from natural images, 2019", "venue": null, "year": 2019 }, { "authors": [ "Hiroharu Kato", "Yoshitaka Ushiku", "Tatsuya Harada" ], "title": "Neural 3d mesh renderer", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Diederik P. 
Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Wonkwang Lee", "Donggyun Kim", "Seunghoon Hong", "Honglak Lee" ], "title": "High-fidelity synthesis with disentangled representation", "venue": "arXiv preprint arXiv:2001.04296,", "year": 2020 }, { "authors": [ "Daiqing Li", "Junlin Yang", "Karsten Kreis", "Antonio Torralba", "Sanja Fidler" ], "title": "Semantic segmentation with generative models: Semi-supervised learning and strong out-of-domain generalization", "venue": null, "year": 2021 }, { "authors": [ "Tzu-Mao Li", "Miika Aittala", "Frédo Durand", "Jaakko Lehtinen" ], "title": "Differentiable monte carlo ray tracing through edge sampling", "venue": "In SIGGRAPH Asia", "year": 2018 }, { "authors": [ "Xueting Li", "Sifei Liu", "Kihwan Kim", "Shalini De Mello", "Varun Jampani", "Ming-Hsuan Yang", "Jan Kautz" ], "title": "Self-supervised single-view 3d reconstruction via semantic consistency", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Zinan Lin", "Kiran Koshy Thekumparampil", "Giulia Fanti", "Sewoong Oh" ], "title": "Infogan-cr: Disentangling generative adversarial networks with contrastive regularizers", "venue": null, "year": 1906 }, { "authors": [ "Hsueh-Ti Derek Liu", "Michael Tao", "Chun-Liang Li", "Derek Nowrouzezahrai", "Alec Jacobson" ], "title": "Beyond pixel norm-balls: Parametric adversaries using an analytically differentiable renderer", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Shichen Liu", "Tianye Li", "Weikai Chen", "Hao Li" ], "title": "Soft rasterizer: A differentiable renderer for image-based 3d reasoning", "venue": null, "year": 2019 }, { "authors": [ "Matthew M. Loper", "Michael J. Black" ], "title": "Opendr: An approximate differentiable renderer", "venue": "Computer Vision - ECCV 2014 - 13th European Conference,", "year": 2014 }, { "authors": [ "Lars Mescheder", "Michael Oechsle", "Michael Niemeyer", "Sebastian Nowozin", "Andreas Geiger" ], "title": "Occupancy networks: Learning 3d reconstruction in function space", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Jeong Joon Park", "Peter Florence", "Julian Straub", "Richard Newcombe", "Steven Lovegrove" ], "title": "Deepsdf: Learning continuous signed distance functions for shape representation", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Guim Perarnau", "Joost Van De Weijer", "Bogdan Raducanu", "Jose M Álvarez" ], "title": "Invertible conditional gans for image editing", "venue": "arXiv preprint arXiv:1611.06355,", "year": 2016 }, { "authors": [ "Felix Petersen", "Amit H. 
Bermano", "Oliver Deussen", "Daniel Cohen-Or" ], "title": "Pix2vex: Image-to-geometry reconstruction using a smooth differentiable renderer", "venue": "CoRR, abs/1903.11149,", "year": 2019 }, { "authors": [ "Scott Reed", "Zeynep Akata", "Xinchen Yan", "Lajanugen Logeswaran", "Bernt Schiele", "Honglak Lee" ], "title": "Generative adversarial text to image synthesis", "venue": "arXiv preprint arXiv:1605.05396,", "year": 2016 }, { "authors": [ "Yujun Shen", "Ceyuan Yang", "Xiaoou Tang", "Bolei Zhou" ], "title": "Interfacegan: Interpreting the disentangled face representation learned by gans", "venue": "arXiv preprint arXiv:2005.09635,", "year": 2020 }, { "authors": [ "Vincent Sitzmann", "Michael Zollhöfer", "Gordon Wetzstein" ], "title": "Scene representation networks: Continuous 3d-structure-aware neural scene representations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Ayush Tewari", "Mohamed Elgharib", "Gaurav Bharaj", "Florian Bernard", "Hans-Peter Seidel", "Patrick Pérez", "Michael Zöllhofer", "Christian Theobalt" ], "title": "Stylerig: Rigging stylegan for 3d control over portrait images, cvpr 2020", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE,", "year": 2020 }, { "authors": [ "Antonio Torralba", "Bryan C Russell", "Jenny Yuen" ], "title": "Labelme: Online image annotation and applications", "venue": "Proceedings of the IEEE,", "year": 2010 }, { "authors": [ "Julien Valentin", "Cem Keskin", "Pavel Pidlypenskyi", "Ameesh Makadia", "Avneesh Sud", "Sofien Bouaziz" ], "title": "Tensorflow graphics: Computer graphics meets deep learning", "venue": null, "year": 2019 }, { "authors": [ "G. Van Horn", "S. Branson", "R. Farrell", "S. Haber", "J. Barry", "P. Ipeirotis", "P. Perona", "S. Belongie" ], "title": "Building a bird recognition app and large scale dataset with citizen scientists: The fine print in fine-grained dataset collection", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "Nanyang Wang", "Yinda Zhang", "Zhuwen Li", "Yanwei Fu", "Wei Liu", "Yu-Gang Jiang" ], "title": "Pixel2mesh: Generating 3d mesh models from single rgb images", "venue": null, "year": 2018 }, { "authors": [ "Weiyue Wang", "Xu Qiangeng", "Duygu Ceylan", "Radomir Mech", "Ulrich Neumann" ], "title": "Disn: Deep implicit surface network for high-quality single-view 3d reconstruction", "venue": null, "year": 1905 }, { "authors": [ "P. Welinder", "S. Branson", "T. Mita", "C. Wah", "F. Schroff", "S. Belongie", "P. 
Perona" ], "title": "Caltech-UCSD Birds 200", "venue": "Technical Report CNS-TR-2010-001, California Institute of Technology,", "year": 2010 }, { "authors": [ "Shangzhe Wu", "Christian Rupprecht", "Andrea Vedaldi" ], "title": "Unsupervised learning of probably symmetric deformable 3d objects from images in the wild", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Yu Xiang", "Roozbeh Mottaghi", "Silvio Savarese" ], "title": "Beyond pascal: A benchmark for 3d object detection in the wild", "venue": "In IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2014 }, { "authors": [ "Shunyu Yao", "Tzu Ming Hsu", "Jun-Yan Zhu", "Jiajun Wu", "Antonio Torralba", "Bill Freeman", "Josh Tenenbaum" ], "title": "3d-aware scene manipulation via inverse graphics", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Yuxuan Zhang", "Huan Ling", "Jun Gao", "Kangxue Yin", "Jean-Francois Lafleche", "Adela Barriuso", "Antonio Torralba", "Sanja Fidler" ], "title": "Datasetgan: Efficient labeled data factory with minimal human", "venue": null, "year": 2021 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Jun-Yan Zhu", "Zhoutong Zhang", "Chengkai Zhang", "Jiajun Wu", "Antonio Torralba", "Josh Tenenbaum", "Bill Freeman" ], "title": "Visual object networks: Image generation with disentangled 3d representations", "venue": "In Advances in neural information processing systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The ability to infer 3D properties such as geometry, texture, material, and light from photographs is key in many domains such as AR/VR, robotics, architecture, and computer vision. Interest in this problem has been explosive, particularly in the past few years, as evidenced by a large body of published works and several released 3D libraries (TensorflowGraphics by Valentin et al. (2019), Kaolin by J. et al. (2019), PyTorch3D by Ravi et al. (2020)).\nThe process of going from images to 3D is often called “inverse graphics”, since the problem is inverse to the process of rendering in graphics in which a 3D scene is projected onto an image by taking into account the geometry and material properties of objects, and light sources present in the scene. Most work on inverse graphics assumes that 3D labels are available during training (Wang et al., 2018; Mescheder et al., 2019; Groueix et al., 2018; Wang et al., 2019; Choy et al., 2016), and trains a neural network to predict these labels. To ensure high quality 3D ground-truth, synthetic datasets such as ShapeNet (Chang et al., 2015) are typically used. However, models trained on synthetic datasets often struggle on real photographs due to the domain gap with synthetic imagery.\nTo circumvent these issues, recent work has explored an alternative way to train inverse graphics networks that sidesteps the need for 3D ground-truth during training. The main idea is to make ∗indicates equal contribution.\ngraphics renderers differentiable which allows one to infer 3D properties directly from images using gradient based optimization, Kato et al. (2018); Liu et al. (2019b); Li et al. (2018); Chen et al. (2019). These methods employ a neural network to predict geometry, texture and light from images, by minimizing the difference between the input image with the image rendered from these properties. While impressive results have been obtained in Liu et al. (2019b); Sitzmann et al. (2019); Liu et al. (2019a); Henderson & Ferrari (2018); Chen et al. (2019); Yao et al. (2018); Kanazawa et al. (2018), most of these works still require some form of implicit 3D supervision such as multi-view images of the same object with known cameras. Thus, most results have been reported on the synthetic ShapeNet dataset, or the large-scale CUB (Welinder et al., 2010) bird dataset annotated with keypoints from which cameras can be accurately computed using structure-from-motion techniques.\nOn the other hand, generative models of images appear to learn 3D information implicitly, where several works have shown that manipulating the latent code can produce images of the same scene from a different viewpoint (Karras et al., 2019a). However, the learned latent space typically lacks physical interpretation and is usually not disentangled, where properties such as the 3D shape and color of the object often cannot be manipulated independently.\nIn this paper, we aim to extract and disentangle 3D knowledge learned by generative models by utilizing differentiable graphics renderers. We exploit a GAN, specifically StyleGAN (Karras et al., 2019a), as a generator of multi-view imagery to train an inverse graphics neural network using a differentiable renderer. In turn, we use the inverse graphics network to inform StyleGAN about the image formation process through the knowledge from graphics, effectively disentangling the GAN’s latent space. 
We connect StyleGAN and the inverse graphics network into a single architecture which we iteratively train using cycle-consistency losses. We demonstrate that our approach significantly outperforms inverse graphics networks trained on existing datasets, and showcase controllable 3D generation and manipulation of imagery using the disentangled generative model." }, { "heading": "2 RELATED WORK", "text": "3D from 2D: Reconstructing 3D objects from 2D images is one of the mainstream problems in 3D computer vision. We here restrict our review to single-image 3D reconstruction, which is the domain of our work. Most of the existing approaches train neural networks to predict 3D shapes from images by utilizing 3D labels during training (Wang et al., 2018; Mescheder et al., 2019; Choy et al., 2016; Park et al., 2019). However, the need for 3D training data limits these methods to the use of synthetic datasets. When tested on real imagery, there is a noticeable performance gap.\nNewer works propose to differentiate through the traditional rendering process in the training loop of neural networks (Loper & Black (2014); Kato et al. (2018); Liu et al. (2019b); Chen et al. (2019); Petersen et al. (2019); Gao et al. (2020)). Differentiable renderers allow one to infer 3D from 2D images without requiring 3D ground truth. However, in order to make these methods work in practice, several additional losses are utilized in learning, such as the multi-view consistency loss, whereby the cameras are assumed known. Impressive reconstruction results have been obtained on the synthetic ShapeNet dataset. While CMR by Kanazawa et al. (2018) and DIB-R by Chen et al. (2019) show real-image 3D reconstructions on the CUB and Pascal3D (Xiang et al., 2014) datasets, they rely on manually annotated keypoints, while still failing to produce accurate results.\nA few recent works (Wu et al. (2020); Li et al. (2020); Goel et al. (2020); Kato & Harada (2019)) explore 3D reconstruction from 2D images in a completely unsupervised fashion. They recover both 3D shapes and camera viewpoints from 2D images by minimizing the difference between original and re-projected images with additional unsupervised constraints, e.g., semantic information (Li et al. (2020)), symmetry (Wu et al. (2020)), a GAN loss (Kato & Harada (2019)) or a viewpoint distribution (Goel et al. (2020)). Their reconstruction is typically limited to 2.5D (Wu et al. (2020)), and produces lower quality results than when additional supervision is used (Goel et al. (2020); Li et al. (2020); Kato & Harada (2019)). In contrast, we utilize GANs to generate multi-view realistic datasets that can be annotated extremely efficiently, which leads to accurate 3D results. Furthermore, our model achieves disentanglement in GANs and turns them into interpretable 3D neural renderers.\nNeural Rendering with GANs: GANs (Goodfellow et al., 2014; Karras et al., 2019a) can be regarded as neural renderers, as they take a latent code as input and “render” an image. However, the latent code is sampled from a predefined prior and lacks interpretability. Several works generate images with conditions: a semantic mask (Zhu et al., 2017), a scene layout (Karacan et al., 2016), or a caption (Reed et al., 2016), and manipulate the generated images by modifying the input condition. Despite tremendous progress in this direction, there is little work on generating images through an interpretable 3D physics process. Dosovitskiy et al. (2016) synthesizes images conditioned on object style, viewpoint, and color.
Most relevant work to ours is Zhu et al. (2018), which utilizes a learnt 3D geometry prior and generates images with a given viewpoint and texture code. We differ in three important ways. First, we do not require a 3D dataset to train the 3D prior. Second, the texture in our model has 3D physical meaning, while Zhu et al. (2018) still samples from a prior. Third, we further control the background, while Zhu et al. (2018) synthesizes objects onto a white background.\nDisentangling GANs: Learning disentangled representations has been widely explored (Lee et al. (2020); Lin et al. (2019); Perarnau et al. (2016)). A representative work is InfoGAN (Chen et al., 2016), which tries to maximize the mutual information between the prior and the generated image distribution. However, the disentangled code often still lacks physical interpretability. Tewari et al. (2020) transfers face rigging information from an existing model to control face attribute disentanglement in the StyleGAN latent space. Shen et al. (2020) aims to find the latent space vectors that correspond to meaningful edits, while Härkönen et al. (2020) exploits PCA to disentangle the latent space. Parallel to our work, Zhang et al. (2021); Li et al. (2021) attempt to interpret the semantic meaning of the StyleGAN latent space. In our work, we disentangle the latent space with knowledge from graphics." }, { "heading": "3 OUR APPROACH", "text": "We start by providing an overview of our approach (Fig. 1), and describe the individual components in more detail in the following sections. Our approach marries two types of renderers: a GAN-based neural “renderer” and a differentiable graphics renderer. Specifically, we leverage the fact that the recent state-of-the-art GAN architecture StyleGAN by Karras et al. (2019a;b) learns to produce highly realistic images of objects and allows for reliable control over the camera. We manually select a few camera views with a rough viewpoint annotation, and use StyleGAN to generate a large number of examples per view, which we explain in Sec. 3.1. In Sec. 3.2, we exploit this dataset to train an inverse graphics network utilizing a state-of-the-art differentiable renderer, DIB-R by Chen et al. (2019) in our work, with a small modification that allows it to deal with noisy cameras during training. In Sec. 3.3, we employ the trained inverse graphics network to disentangle StyleGAN’s latent code and turn StyleGAN into a 3D neural renderer, allowing for control over explicit 3D properties. We fine-tune the entire architecture, leading to significantly improved results." }, { "heading": "3.1 STYLEGAN AS SYNTHETIC DATA GENERATOR", "text": "We first aim to utilize StyleGAN to generate multi-view imagery. StyleGAN is a 16-layer neural network that maps a latent code z ∈ Z drawn from a normal distribution into a realistic image. The code z is first mapped to an intermediate latent code w ∈ W, which is transformed to w∗ =
We empirically find that the latent code w∗v := (w ∗ 1 , w ∗ 2 , w ∗ 3 , w ∗ 4) in the first 4 layers controls camera viewpoints. That is, if we sample a new code w∗v but keep the remaining dimensions of w ∗ fixed (which we call the conten code), we generate images of the same object depicted in a different viewpoint. Examples are shown in Fig. 2.\nWe further observe that a sampled codew∗v in fact represents a fixed camera viewpoint. That is, if we keep w∗v fixed but sample the remaining dimensions of w\n∗, StyleGAN produces imagery of different objects in the same camera viewpoint. This is shown in columns in Fig. 2. Notice how aligned the objects are in each of the viewpoints. This makes StyleGAN a multi-view data generator!\n“StyleGAN” multi-view dataset: We manually select several views, which cover all the common viewpoints of an object ranging from 0-360 in azimuth and roughly 0-30 in elevation. We pay attention to choosing viewpoints in which the objects look most consistent. Since inverse graphics works require camera pose information, we annotate the chosen viewpoint codes with a rough absolute camera pose. To be specific, we classify each viewpoint code into one of 12 azimuth angles, uniformly sampled along 360 deg. We assign each code a fixed elevation (0◦) and camera distance. These camera poses provide a very coarse annotation of the actual pose – the annotation serves as the initialization of the camera which we will optimize during training. This allows us to annotate all views (and thus the entire dataset) in only 1 minute – making annotation effort neglible. For each viewpoint, we sample a large number of content codes to synthesize different objects in these views. Fig. 2 shows 2 cars, and a horse and a bird. Appendix provides more examples.\nSince DIB-R also utilizes segmentation masks during training, we further apply MaskRCNN by He et al. (2017) to get instance segmentation in our generated dataset. As StyleGAN sometimes generates unrealistic images or images with multiple objects, we filter out “bad” images which have more than one instance, or small masks (less than 10% of the whole image area)." }, { "heading": "3.2 TRAINING AN INVERSE GRAPHICS NEURAL NETWORK", "text": "Following CMR by Kanazawa et al. (2018), and DIB-R by Chen et al. (2019), we aim to train a 3D prediction network f , parameterized by θ, to infer 3D shapes (represented as meshes) along with textures from images. Let IV denote an image in viewpoint V from our StyleGAN dataset, andM its corresponding object mask. The inverse graphics network makes a prediction as follows: {S, T} = fθ(IV ), where S denotes the predicted shape, and T a texture map. Shape S is deformed from a sphere as in Chen et al. (2019). While DIB-R also supports prediction of lighting, we empirically found its performance is weak for realistic imagery and we thus omit lighting estimation in our work.\nTo train the network, we adopt DIB-R as the differentiable graphics renderer that takes {S, T} and V as input and produces a rendered image I ′V = r(S, T, V ) along with a rendered maskM\n′. Following DIB-R, the loss function then takes the following form:\nL(I, S, T, V ; θ) =λcolLcol(I, I ′) + λpercptLpecept(I, I ′) + LIOU(M,M ′)\n+ λsmLsm(S) + λlapLlap(S) + λmovLmov(S) (1)\nHere, Lcol is the standard L1 image reconstruction loss defined in the RGB color space while Lpercpt is the perceptual loss that helps the predicted texture look more realistic. 
Note that rendered images do not have a background, so L_col and L_percept are calculated by utilizing the mask. L_IOU computes the intersection-over-union between the ground-truth mask and the rendered mask. Regularization losses such as the Laplacian loss L_lap and flatten loss L_sm are commonly used to ensure that the shape is well behaved. Finally, L_mov regularizes the shape deformation to be uniform and small.\nSince we also have access to multi-view images for each object, we include a multi-view consistency loss. In particular, our loss per object k is:\nL_k(\theta) = \sum_{i,j,\, i \neq j} \left( L(I_{V_i^k}, S_k, T_k, V_i^k; \theta) + L(I_{V_j^k}, S_k, T_k, V_j^k; \theta) \right), \text{ where } \{S_k, T_k, L_k\} = f_\theta(I_{V_i^k})   (2)\n(Figure 3 diagram: camera and mesh are mapped by MLPs, texture and background by CNN encoders, softly combined via s in a disentangling module, and passed to the 3D neural renderer to produce the StyleGAN latent code.)\nFigure 3: A mapping network maps camera, shape, texture and background into a disentangled code that is passed to StyleGAN for “rendering”. We refer to this network as StyleGAN-R.\nWhile more views provide more constraints, empirically, two views have proven sufficient. We randomly sample view pairs (i, j) for efficiency.\nWe use the above loss functions to jointly train the neural network f and optimize the viewpoint cameras V (which were fixed in Chen et al. (2019)). We assume that different images generated from the same w∗v correspond to the same viewpoint V. Optimizing the camera jointly with the weights of the network allows us to effectively deal with noisy initial camera annotations." }, { "heading": "3.3 DISENTANGLING STYLEGAN WITH THE INVERSE GRAPHICS MODEL", "text": "The inverse graphics model allows us to infer a 3D mesh and texture from a given image. We now utilize these 3D properties to disentangle StyleGAN’s latent space, and turn StyleGAN into a fully controllable 3D neural renderer, which we refer to as StyleGAN-R. Note that StyleGAN in fact synthesizes more than just an object: it also produces the background, i.e., the entire scene. Ideally, we want control over the background as well, allowing the neural renderer to render 3D objects into desired scenes. To get the background from a given image, we simply mask out the object.\nWe propose to learn a mapping network to map the viewpoint, shape (mesh), texture and background into StyleGAN’s latent code.
Since StyleGAN may not be completely disentangled, we further fine-tune the entire StyleGAN model while keeping the inverse graphics network fixed.\nMapping Network: Our mapping network, visualized in Figure 3, maps the viewpoint to the first 4 layers and maps the shape, texture and background to the last 12 layers of W∗. For simplicity, we denote the first 4 layers as W∗_V and the last 12 layers as W∗_STB, where W∗_V ∈ R^2048 and W∗_STB ∈ R^3008. Specifically, the mapping networks g_v for viewpoint V and g_s for shape S are separate MLPs, while g_t for texture T and g_b for background B are CNNs:\nz^{view} = g_v(V; \theta_v), \quad z^{shape} = g_s(S; \theta_s), \quad z^{txt} = g_t(T; \theta_t), \quad z^{bck} = g_b(B; \theta_b),   (3)\nwhere z^{view} ∈ R^2048, z^{shape}, z^{txt}, z^{bck} ∈ R^3008 and θ_v, θ_s, θ_t, θ_b are network parameters. We softly combine the shape, texture and background codes into the final latent code as follows:\n\tilde{w}_{mtb} = s^m \odot z^{shape} + s^t \odot z^{txt} + s^b \odot z^{bck},   (4)\nwhere ⊙ denotes the element-wise product, and s^m, s^t, s^b ∈ R^3008 are shared across all the samples. To achieve disentanglement, we want each dimension of the final code to be explained by only one property (shape, texture or background). We thus normalize each dimension of s using a softmax.\nIn practice, we found that mapping V to a high-dimensional code is challenging, since our dataset only contains a limited number of views and V is limited to azimuth, elevation and scale. We thus map V to a subset of W∗_V, where we empirically choose 144 of the 2048 dimensions with the highest correlation with the annotated viewpoints. Thus, z^{view} ∈ R^144 in our case.\nTraining Scheme: We train the mapping network and fine-tune StyleGAN in two separate stages. We first freeze StyleGAN’s weights and train the mapping network only. This warms up the mapping network to output reasonable latent codes for StyleGAN. We then fine-tune both StyleGAN and the mapping network to better disentangle the different attributes. We provide details next.\nIn the warm-up stage, we sample viewpoint codes w∗v among the chosen viewpoints, and sample the remaining dimensions of w∗ ∈ W∗. We minimize the L2 difference between the mapped code w̃ and StyleGAN’s code w∗. To encourage disentanglement in the latent space, we penalize the entropy of each dimension i of s. Our overall loss function for the mapping network is:\nL_{mapnet}(\theta_v, \theta_s, \theta_t, \theta_b) = \|\tilde{w} - w^*\|_2 - \sum_i \sum_{k \in \{m,t,b\}} s_i^k \log(s_i^k).   (5)\nBy training the mapping network, we find that view, shape and texture can be disentangled in the original StyleGAN model, but the background remains entangled. We thus fine-tune the model to get a better disentanglement. To fine-tune the StyleGAN network, we incorporate a cycle consistency loss. In particular, by feeding a sampled shape, texture and background to StyleGAN, we obtain a synthesized image. We encourage consistency between the original sampled properties and the shape, texture and background predicted from the StyleGAN-synthesized image via the inverse graphics network. We further feed the same background B with two different {S, T} pairs to generate two images I1 and I2. We then encourage the re-synthesized backgrounds B̄1 and B̄2 to be similar. This loss tries to disentangle the background from the foreground object. During training, we find that imposing the consistency loss on B in image space results in blurry images, thus we constrain it in the code space.
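A condensed sketch of this mapping network is given below; the code dimensions follow the numbers above, while the input sizes (including the sphere-template vertex count), MLP depths and CNN details are illustrative assumptions, since they are not specified here.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out, d_hid=512):
    return nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU(), nn.Linear(d_hid, d_out))

def cnn_encoder(d_out):
    # Illustrative image encoder for the texture map / background image.
    return nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, d_out))

class MappingNetwork(nn.Module):
    """Eqs. (3)-(4): V -> 144 selected dims of W*_V; (S, T, B) -> W*_STB."""
    def __init__(self, n_verts=642, d_stb=3008, d_view=144):
        super().__init__()
        self.g_v = mlp(3, d_view)            # V = (azimuth, elevation, scale)
        self.g_s = mlp(3 * n_verts, d_stb)   # flattened mesh vertex positions
        self.g_t = cnn_encoder(d_stb)
        self.g_b = cnn_encoder(d_stb)
        self.s_logits = nn.Parameter(torch.zeros(3, d_stb))  # shared across samples

    def forward(self, V, S, T, B):
        z_view = self.g_v(V)
        z = torch.stack([self.g_s(S.flatten(1)), self.g_t(T), self.g_b(B)], dim=1)
        s = self.s_logits.softmax(dim=0)     # each dim explained by one property
        w_stb = (s.unsqueeze(0) * z).sum(dim=1)              # Eq. (4)
        return z_view, w_stb

    def entropy_term(self):
        # The -sum s log s penalty of Eq. (5), added to the L2 code-matching loss.
        s = self.s_logits.softmax(dim=0)
        return -(s * s.clamp_min(1e-8).log()).sum()
```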
Our fine-tuning loss takes the following form:\nL_{stylegan}(\theta_{gan}) = \|S - \bar{S}\|_2 + \|T - \bar{T}\|_2 + \|g_b(B) - g_b(\bar{B})\|_2 + \|g_b(\bar{B}_1) - g_b(\bar{B}_2)\|_2   (6)" }, { "heading": "4 EXPERIMENTS", "text": "In this section, we showcase our approach on inverse graphics tasks (3D image reconstruction), as well as on the tasks of 3D neural rendering and 3D image manipulation.\nImage Datasets for training StyleGAN: We use three category-specific StyleGAN models, one representing a rigid object class, and two representing articulated (and thus more challenging) classes. We use the official car and horse models from the StyleGAN2 (Karras et al., 2019b) repository, which are trained on LSUN Car and LSUN Horse with 5.7M and 2M images, respectively. We also train a bird model on the NABirds (Van Horn et al., 2015) dataset, which contains 48k images.\nOur “StyleGAN” Dataset: We first randomly sample 6000 cars, 1000 horses and 1000 birds with diverse shapes, textures, and backgrounds from StyleGAN, each synthesized in the selected viewpoints. After filtering out images with bad masks as described in Sec. 3, 55,429 car, 16,392 horse and 7,948 bird images remain in our dataset, which is significantly larger than the Pascal3D car dataset (Xiang et al., 2014) (4,175 car images). Note that nothing prevents us from synthesizing a significantly larger amount of data, but in practice, this amount turned out to be sufficient to train good models. We provide more examples in the Appendix." }, { "heading": "4.1 3D RECONSTRUCTION RESULTS", "text": "Training Details: Our DIB-R-based inverse graphics model was trained with Adam (Kingma & Ba (2015)), with a learning rate of 1e-4. We set λ_IOU, λ_col, λ_lap, λ_sm and λ_mov to 3, 20, 5, 5, and 2.5, respectively. We first train the model with the L_col loss for 3K iterations, and then fine-tune the model by adding L_percept to make the texture more realistic. We set λ_percept to 0.5. The model converges in 200K iterations with batch size 16. Training takes around 120 hours on four V100 GPUs.\nResults: We show 3D reconstruction results in Fig. 4. Notice the quality of the predicted shapes and textures, and the diversity of the 3D car shapes we obtain. Our method also works well on the more challenging (articulated) classes, e.g., horse and bird. We provide additional examples in the Appendix.\n(Figure 5 layout – rows: Pascal3D, Ours; columns: Input, Prediction, Texture, Multiple Rendered Views of Prediction.)\nFigure 5: Comparison on Pascal3D test set: We compare inverse graphics networks trained on Pascal3D and our StyleGAN dataset. Notice the considerably higher quality of prediction when training on the StyleGAN dataset.\n(Figure 6 layout – rows: Full, w.o M.V.; columns: Input, Prediction, Texture, Prediction rendered in Multiple Views.)\nFigure 6: Ablation Study: We ablate the use of the multi-view consistency loss. Both texture and shape are worse without this loss, especially in the invisible parts (rows 2, 5, denoted by “w.o M.V.” – no multi-view consistency used during training), showcasing the importance of our StyleGAN multi-view dataset.\nQualitative Comparison: To showcase our approach, we compare our inverse graphics network trained on our StyleGAN dataset with exactly the same model trained on the Pascal3D car dataset. The Pascal3D dataset has annotated keypoints, which we utilize to train the baseline model, termed the Pascal3D-model. We show a qualitative comparison on the Pascal3D test set in Fig. 5. Note that the images from the Pascal3D dataset are different from those our StyleGAN-model was trained on.
Although the Pascal3D-model’s prediction is visually good in the input image view, rendered predictions in other views are of noticeably lower quality than ours, which demonstrates that we recover 3D geometry and texture better than the baseline.\nQuantitative Comparison: We evaluate the two networks in Table 1 for the car class. We report the estimated annotation time in Table 1 (a) to showcase the efficiency of our StyleGAN dataset. It takes 3-5 minutes to annotate keypoints for one object, which we empirically verified. Thus, labeling Pascal3D required around 200-350 hours, while ours takes only 1 minute to annotate a 10 times larger dataset. In Table 1 (b), we evaluate shape prediction quality by the re-projected 2D IOU score. Our model outperforms the Pascal3D-model on the StyleGAN test set, while the Pascal3D-model is better on the Pascal test set. This is not surprising, since there is a domain gap between the two datasets and thus each one performs best on its own test set. Note that this metric only evaluates the quality of the prediction in the input view and thus does not reflect the actual quality of the predicted 3D shape/texture.\nTo analyze the quality of the 3D prediction, we conduct an AMT user study on the Pascal3D test set, which contains 220 images. We provide users with the input image and predictions rendered in 6 views (shown in Fig. 5, right) for both models. We ask them to choose the model with a more realistic shape and texture prediction that matches the input object. We provide details of the study in the Appendix. We report results in Table 1 (c). Users show a significant preference for our results over the baseline, which confirms the quality of our 3D estimation.\nAblation study: In Fig. 6 we ablate the importance of using multiple views in our dataset, i.e., by encouraging the multi-view consistency loss during training. We compare predictions from inverse graphics networks trained with and without this loss, with significant differences in quality.\n(Figure 10 layout – rows: Car 1, Car 2, Car 3.)\nFigure 10: 3D Manipulation: We sample 3 cars in column 1. We replace the shape of all cars with the shape of Car 1 (red box) in the 2nd column. We transfer the texture of Car 2 (green box) to the other cars (3rd col). In the last column, we paste the background of Car 3 (cyan box) onto the other cars. Examples indicated with boxes are unchanged. Zoom in to see details.\n(Figure 11 layout – rows: Car 1, Car 2; columns: Input, Neural R., Shape Swap, Texture Swap, Bck. Swap.)\nFigure 11: Real Image Manipulation: Given input images (1st col), we predict 3D properties and use our StyleGAN-R to render them back (2nd col). We swap out shape, texture & background in cols 3-5." }, { "heading": "4.2 DUAL RENDERERS", "text": "Training Details: We train StyleGAN-R using Adam with a learning rate of 1e-5 and batch size 16. The warm-up stage takes 700 iterations, and we perform joint fine-tuning for another 2500 iterations.\nGiven the input image, we first predict the mesh and texture using the trained inverse graphics model, and then feed these 3D properties into StyleGAN-R to generate a new image. For comparison, we feed the same 3D properties to the DIB-R graphics renderer (which is the OpenGL renderer). Results are provided in Fig. 7. Note that DIB-R can only render the predicted object, while StyleGAN-R also has the ability to render the object into a desired background. We find that StyleGAN-R produces relatively consistent images compared to the input image. Shape and texture are well preserved, while only the background has a slight content shift."
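For completeness, the re-projected 2D IoU used in the quantitative comparisons above (Table 1, and again in Appendix D) is plain mask intersection-over-union; a reference implementation (a standard definition, not code from the paper):

```python
import numpy as np

def mask_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union between two binary (H, W) masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return float(np.logical_and(a, b).sum() / union)
```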
}, { "heading": "4.3 3D IMAGE MANIPULATION WITH STYLEGAN-R", "text": "We test our approach in manipulating StyleGAN-synthesized images from our test set and real images. Specifically, given an input image, we predict 3D properties using the inverse graphics network, and extract background by masking out the object with Mask-RCNN. We then manipulate and feed these properties to StyleGAN-R to synthesize new views.\nControlling Viewpoints: We first freeze shape, texture and background, and change the camera viewpoint. Example is shown in Fig. 9. We obtain meaningful results, particularly for shape and texture. For comparison, an alternative way that has been explored in literature is to directly optimize the GAN’s latent code (in our case the original StyleGAN’s code) via an L2 image reconstruction loss. Results are shown in the last three columns in Fig. 8. As also observed in Abdal et al. (2019), this approach fails to generate plausible images, showcasing the importance of the mapping network and fine-tuning the entire architecture with 3D inverse graphics network in the loop.\nControlling Shape, Texture and Background: We further aim to manipulate 3D properties, while keeping the camera viewpoint fixed. In the second column of Fig 10, we replace the shapes of all cars to one chosen shape (red box) and perform neural rendering using StyleGAN-R. We successfully swap the shape of the car while maintaining other properties. We are able to modify tiny parts of the car, such as trunk and headlights. We do the same experiment but swapping texture and background in the third and forth column of Fig 10. We notice that swapping textures also slightly modifies the background, pointing that further improvements are possible in disentangling the two.\nReal Image Editing: As shown in Fig. 11, our framework also works well when provided with real images, since StyleGAN’s images, which we use in training, are quite realistic." }, { "heading": "4.4 LIMITATIONS", "text": "While recovering faithful 3D gemetry and texture, our model fails to predict correct lighting. Real images and StyleGAN-generated images contain advanced lighting effects such as reflection, transparency and shadows, and our spherical harmonic lighting model is incapable in dealing with it successfully. We also only partly succeed at disentangling the background, which one can see by noticing slight changes in background in Fig. 7, Fig. 10 and Fig. 11. Predicting faithful shapes for out-of-distribution objects as discussed in Appendix is also a significant challenge. We leave improvements to future work." }, { "heading": "5 CONCLUSION", "text": "In this paper, we introduced a new powerful architecture that links two renderers: a state-of-the-art image synthesis network and a differentiable graphics renderer. The image synthesis network generates training data for an inverse graphics network. In turn, the inverse graphics network teaches the synthesis network about the physical 3D controls. We showcased our approach to obtain significantly higher quality 3D reconstruction results while requiring 10,000× less annotation effort than standard datasets. We also provided 3D neural rendering and image manipulation results demonstrating the effectiveness of our approach." }, { "heading": "A OVERVIEW", "text": "In the Appendix, we first show feature visualization of StyleGAN layers in Sec. B. We then provide a detailed explanation of our StyleGAN dataset creation in Sec. C, including examples of the generated images and selected viewpoints. 
Next, we provide a systematic analysis of our camera initialization method in Sec. D. Finally, we show additional results on the 3D inverse graphics task in Sec. E, additional details of the user study in Sec. F, further examples of StyleGAN disentanglement in Sec. G, and ablation studies and a discussion of limitations in Sec. H and Sec. K, respectively." }, { "heading": "B STYLEGAN LAYERS VISUALIZATION", "text": "The official StyleGAN code repository provides models of different object categories at different resolutions. Here we take the 512 × 384 car model as an example. This model contains 16 layers, where every two consecutive layers form a block. Each block has a different number of channels. In the last block, the model produces a 32-channel feature map at a 512 × 384 resolution. Finally, a learned RGB transformation function is applied to convert the feature map into an RGB image.\nWe visualize the feature map for each block via the learned RGB transformation function. Specifically, for the feature map in each block, of size h × w × c, we first sum along the feature dimension, forming an h × w × 1 tensor. We then repeat the feature 32 times and generate a new h × w × 32 feature map. This allows us to keep the information from all the channels and directly apply the RGB transformation function of the last block to convert it into an RGB image.\nAs shown in Fig. A, we find that blocks 1 and 2 do not exhibit interpretable structure, while the car shape starts to appear in blocks 3-5. We observe that there is a rough car contour in block 4, which becomes clearer in block 5. From blocks 6 to 8, the car’s shape becomes increasingly fine and the background scene also appears. This supports some of our findings, i.e., the viewpoint is controlled in blocks 1 and 2 (the first 4 layers), while shape, texture, and background exist in the last 12 layers.\n(Figure A panels: Block 1 through Block 8, Generated Image.)\nFigure A: Layer Visualization for Each Block: Notice that the car contour starts to appear in block 4 and higher. This supports some of our findings that the early blocks control viewpoint (and other global properties), while shape, texture and background are controlled in the higher layers." }, { "heading": "C OUR “STYLEGAN” DATASET", "text": "We visualize all of the selected viewpoints in our dataset in Fig. B. Our car training dataset contains 39 viewpoints. For the horse and bird datasets, we choose 22 and 8 views, respectively. We find that these views are sufficient to learn accurate 3D inverse graphics networks. We could not find views that would depict the object from a higher camera, i.e., a viewpoint from which the roof of the car or the back of the horse would be more clearly visible. This is mainly due to the original dataset on which StyleGAN was trained, which lacked such views. This leads to challenges in training inverse graphics networks to accurately predict the top of the objects.\n(Figure B panels: Car Viewpoints, Horse Viewpoints, Bird Viewpoints.)\nFigure B: All Viewpoints: We show an example of a car, a bird and a horse synthesized in all of our chosen viewpoints. While shape and texture are not perfectly consistent across views, they are sufficiently accurate to enable training accurate inverse graphics networks in our downstream tasks. Horses and birds are especially challenging due to articulation. One can notice small changes in articulation across viewpoints.
Dealing with articulated objects is subject to future work.\nNotice the high consistency of both the car shape and texture, as well as the background scene, across the different viewpoints. Note that for articulated objects such as the horse and bird classes, StyleGAN does not perfectly preserve object articulation in different viewpoints, which leads to challenges in training high-accuracy models using the multi-view consistency loss. We leave further investigation of articulated objects to future work.\nWe further show examples from our StyleGAN-generated dataset in Fig. C. Our dataset contains objects with various shapes, textures and viewpoints. In particular, in the first six rows, one can notice diverse variants of car types (standard car, SUV, sports car, antique car, etc.). We find that StyleGAN can also produce rare car shapes like trucks, but with a lower probability.\nFigure C: Dataset Overview: We synthesize multi-view datasets for three classes: car, horse, and bird. Our datasets contain objects with various shapes, textures and viewpoints. Notice the consistency of object pose in each column (for each class). Challenges include the fact that for all of these objects StyleGAN has not learned to synthesize views that overlook the object from above, due to the photographer bias in the original dataset that StyleGAN was trained on." }, { "heading": "D CAMERA INITIALIZATION", "text": "Inverse graphics tasks require camera pose information during training, which is challenging to acquire for real imagery. Pose is generally obtained by annotating keypoints for each object and running structure-from-motion (SFM) techniques (Welinder et al., 2010; Xiang et al., 2014) to compute camera parameters. However, keypoint annotation is quite time-consuming – requiring roughly 3–5 minutes per object, which we verify in practice using the LabelMe interface (Torralba et al., 2010). In our work, we utilize StyleGAN to significantly reduce the annotation effort, since samples with the same w∗v share the same viewpoint. Therefore, we only need to assign a few selected w∗v to camera poses. In particular, we assign poses into several bins, which we show is sufficient for training inverse graphics networks when, along with the network parameters, the cameras get jointly optimized during training using these bins as initialization.\nSpecifically, we assign poses into 39, 22 and 8 bins for the car, horse and bird classes, respectively. This allows us to annotate all the views (and thus the entire dataset) in only 1 minute. We do acknowledge additional time spent selecting good views out of several candidates.\nWe annotate each view with a rough absolute camera pose (which we further optimize during training). To be specific, we first select 12 azimuth angles: [0°, 30°, 60°, 90°, 120°, 150°, 180°, 210°, 240°, 270°, 300°, 330°]. Given a StyleGAN viewpoint, we manually classify which azimuth angle it is closest to and assign it the corresponding label with a fixed elevation (0°) and camera distance.\nTo demonstrate the effectiveness of our camera initialization, we make a comparison with another inverse graphics network trained with a more accurate camera initialization. Such an initialization is done by manually annotating object keypoints in each of the selected views (w∗v) of a single car example, which takes about 3-4 hours (around 200 minutes, 39 views).
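For reference, the bin-based initialization described above amounts to only a few lines; a sketch (the camera distance value and the exact look-at convention are illustrative assumptions):

```python
import math

AZIMUTH_BINS = list(range(0, 360, 30))  # 12 bins: 0°, 30°, ..., 330°

def init_camera(azimuth_deg, elevation_deg=0.0, distance=2.5):
    """Coarse look-at camera for one annotated view bin; refined during training."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    eye = (distance * math.cos(el) * math.sin(az),
           distance * math.sin(el),
           distance * math.cos(el) * math.cos(az))
    return {"eye": eye, "at": (0.0, 0.0, 0.0), "up": (0.0, 1.0, 0.0)}
```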
To demonstrate the effectiveness of our camera initialization, we make a comparison with another inverse graphics network trained with a more accurate camera initialization. Such an initialization is done by manually annotating object keypoints in each of the selected views (w∗v) of a single car example, which takes about 3-4 hours (around 200 minutes, 39 views). Note that this is still a significantly lower annotation effort compared to the 200-350 hours required to annotate keypoints for every single object in the Pascal3D dataset. We then compute the camera parameters using SfM. We refer to the two inverse graphics networks trained with the different camera initializations as view-model and keypoint-model, respectively.\nWe visualize our two different annotation types in Fig. D. We show the annotated bins at the top. We annotated keypoints for the (synthesized) car example in the first image row, based on which we compute the accurate viewpoint using SfM. To showcase how well aligned the objects are for the same viewpoint code, we visualize the annotated keypoints on all other synthesized car examples. Note that we do not assume that these keypoints are accurate for these cars (only the implied viewpoint).\nWe quantitatively evaluate the two initialization methods in Table A. We first compare the annotation and training times. While it takes the same amount of time to train, the view-model saves on annotation time. The performance of the view-model and keypoint-model is comparable, with almost the same 2D IOU re-projection score on the StyleGAN test set. Moreover, during training the two camera systems converge to nearly the same positions. We evaluate this by converting all the views into quaternions and comparing the difference between the rotation axes and rotation angles. Among all views, the average difference of the rotation axes is only 1.43◦ and of the rotation angles 0.42◦. The maximum difference of the rotation axes is only 2.95◦ and of the rotation angles 1.11◦.
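A minimal sketch of this comparison, assuming unit quaternions in (w, x, y, z) order (the ordering convention is our assumption):

```python
import numpy as np

def quaternion_difference(q1: np.ndarray, q2: np.ndarray):
    """Compare two unit quaternions (w, x, y, z).

    Returns the angle between their rotation axes and the difference
    of their rotation angles, both in degrees.
    """
    def axis_angle(q):
        w, xyz = q[0], q[1:]
        angle = 2.0 * np.arccos(np.clip(w, -1.0, 1.0))
        norm = np.linalg.norm(xyz)
        axis = xyz / norm if norm > 1e-8 else np.array([0.0, 0.0, 1.0])
        return axis, angle

    a1, ang1 = axis_angle(q1)
    a2, ang2 = axis_angle(q2)
    axis_diff = np.degrees(np.arccos(np.clip(np.dot(a1, a2), -1.0, 1.0)))
    angle_diff = np.degrees(abs(ang1 - ang2))
    return axis_diff, angle_diff
```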
We further qualitatively compare the two methods in Fig. E, showing that they perform very similarly. Both qualitative and quantitative comparisons demonstrate that view-camera initialization is sufficient for training accurate inverse graphics networks and that no additional annotation is required. This demonstrates a scalable way of creating multi-view datasets with StyleGAN, with roughly a minute of annotation time per class." }, { "heading": "E 3D INFERENCE", "text": "We here present additional 3D prediction results and compare our model, which is trained on our StyleGAN-generated dataset (StyleGAN-model), with the one trained on the Pascal3D dataset (Xiang et al., 2014) (PASCAL-model). We qualitatively compare the two models on the Pascal3D test set in Fig. F and on web imagery in Fig. G. Our StyleGAN-model produces better shape and texture predictions in all the testing datasets, which is particularly noticeable when looking at different rendered views of the prediction. We also present additional 3D prediction results on horses and birds in Fig. H.\n[Figure D column headers: Azimuth=0◦, 30◦, 30◦, 180◦, 210◦, 270◦]\nFigure D: We show examples of cars synthesized in chosen viewpoints (columns) along with annotations. The top row shows the pose bin annotation, while the images show the annotated keypoints. We annotated keypoints for the car example in the first image row, based on which we compute the accurate camera parameters using SfM. To showcase how well aligned the objects are for the same viewpoint latent code, we visualize the annotated keypoints on all other synthesized car examples. Note that we do not assume that these keypoints are accurate for these cars (only the implied viewpoint). Annotating pose bins took 1 min for the car class, while keypoint annotation took 3-4 hours, both types of annotation thus being quite efficient. We empirically find that pose bin annotation is sufficient for training accurate inverse graphics networks (when optimizing camera parameters during training in addition to optimizing the network parameters).\n[Figure E rows: view-Init., keypoint-Init.; columns: Input, Pred., Multiple Views for the Predicted Shape and Texture]\nFigure E: Comparison of Different Camera Initializations: The first row shows predictions from keypoint-Initialization (cameras computed by running SFM on annotated keypoints) and the second row shows results obtained by training with view-Initialization (cameras are coarsely annotated into 12 view bins). Notice how close the two predictions are, indicating that coarse viewpoint annotation is sufficient for training accurate inverse graphics networks. Coarse viewpoint annotation can be done in 1 minute.\nAnnotation Type | Annotation Time | Training Time | 2D IOU\nkeypoint | 3-4h | 60h | 0.953\nview | 1min | 60h | 0.952\nQuaternion | Mean | Max\nqxyz | 1.43◦ | 2.95◦\nqw | 0.42◦ | 1.11◦\n(a) Time & Performance (b) Camera Difference after Training\nTable A: Comparison of Different Camera Initializations: The first table shows the annotation time required for the StyleGAN dataset, and the training times of the view-model and keypoint-model on the dataset with the respective annotations (binned viewpoints or cameras computed with SFM from annotated keypoints). The view-model requires significantly less annotation time, and its final performance is comparable to the keypoint-model. The second table shows the difference of the camera parameters after training both methods (which optimize cameras during training). They converge to very similar camera positions. This shows that coarse view annotation, along with camera optimization during training, is sufficient for training high-accuracy inverse graphics networks.\n[Figure F rows alternate: Pascal3D, Ours; columns: Input, Pred., Multiple Views]\nFigure F: Comparison on PASCAL3D imagery: We compare the PASCAL-model with the StyleGAN-model on the PASCAL3D test set. While the predictions from both models are visually good in the corresponding image view, the predictions from the StyleGAN-model have much better shapes and textures, as observed in other views.\n[Figure G rows alternate: Pascal3D, Ours; columns: Input, Pred., Multiple Views]\nFigure G: Comparison on Images from the Web: We compare the PASCAL-model with our StyleGAN-model on images downloaded from the web. While the predictions from both models are visually good in the corresponding image view, the predictions from the StyleGAN-model have much better shapes and textures, as observed in other views.\n[Figure H columns: Input, Pred., Multiple Views for the predicted shape and texture]\nFigure H: 3D Reconstruction Results for Car, Horse and Bird Classes: We show car, horse and bird examples tested on images from the StyleGAN dataset test sets. Notice that the model struggles a little in reconstructing the top of the back of the horse, since such views are lacking in training." }, { "heading": "F USER STUDY", "text": "We provide user study details in this section. We implement our user interface, visualized in Fig. I, on Amazon Mechanical Turk. We show the input image and predictions rendered in 6 views, such that users can better judge the quality of the 3D reconstruction. We show results for both our inverse graphics network (trained on the StyleGAN dataset) and the one trained on the Pascal3D dataset.
We show shape reconstruction and textured models separately, such that users can judge the quality of both shape and texture more easily. We randomize the order of ours vs. the baseline in each HIT to avoid any bias. We ask users to choose the results that produce more realistic and representative shape, texture and overall quality with respect to the input image. We separate the judgement of quality into these three categories to disentangle the effects of 3D reconstruction from texture prediction. We also provide “no preference” options in case of ties. Our instructions emphasize that the more “representative” results of the input should be selected, to avoid users being biased by good-looking predictions that are not consistent with the input (e.g., in the case of overfit networks).\nWe evaluate the two networks on all 220 images from the Pascal3D test set (which are “in-domain” for the Pascal3D-trained network). For each image we ask three users to perform the evaluation, which results in 660 votes in total. We report the average of all votes as our final metric. We further report an annotator agreement analysis in Table B. For shape, texture, and overall evaluation, there are 88.2%, 89.2%, and 87.2% of cases, respectively, where at least two out of three users choose the same option.\n(a) 3D Quality Study:\n| Overall | Shape | Texture\nOurs | 57.5% | 61.6% | 56.3%\nPascal3D-model | 25.9% | 26.4% | 32.8%\nNo Preference | 16.6% | 11.9% | 10.8%\n(b) Annotator Agreement:\n| Overall | Shape | Texture\nAll Agree | 26.1% | 29.6% | 27.1%\nTwo Agree | 61.1% | 58.6% | 62.1%\nNo Agreement | 12.8% | 11.8% | 10.8%\nTable B: User study results: (a): Quality of 3D estimation (shape, texture and overall). (b): Annotator agreement analysis. “No agreement” stands for the case where all three annotators choose different options.
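A small sketch of how the agreement buckets in Table B can be computed from the raw votes; the vote encoding is our own assumption:

```python
from collections import Counter

def agreement_buckets(votes_per_image):
    """Classify each image's three votes into the Table B agreement categories.

    votes_per_image: list of 3-tuples, each entry one of
                     'ours', 'pascal3d', 'no_preference'.
    """
    counts = Counter()
    for votes in votes_per_image:
        n_distinct = len(set(votes))
        if n_distinct == 1:
            counts['all_agree'] += 1
        elif n_distinct == 2:
            counts['two_agree'] += 1
        else:
            counts['no_agreement'] += 1
    total = len(votes_per_image)
    return {k: 100.0 * v / total for k, v in counts.items()}
```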
" }, { "heading": "G STYLEGAN-R DISENTANGLEMENT", "text": "Given an input image, we infer 3D properties of an object (shape, texture, background) using our inverse graphics network, but can also map these properties back to the latent code and use our StyleGAN-R to synthesize a new image. We show the results in Fig. J. Similar to Fig. 9 in the main paper, we show the DIB-R-rendered predictions and StyleGAN-R's neural-rendering predictions, and manipulate their viewpoints in rows (1, 4) and (2, 5). We further show “neural rendering” results from the original StyleGAN in rows (3, 6), where we only learn the mapping network but keep StyleGAN's weights fixed. We find that fine-tuning is necessary and that StyleGAN-R produces more consistent shape, texture and background.\n[Figure J rows: DIB-R, StyleGAN-R, StyleGAN; columns: Input, Pred., Multiple Views]\nFigure J: Dual Rendering: Given the input image, we show the DIB-R-rendered predictions in rows (1, 4) and StyleGAN-R's results in rows (2, 5). We further show the neural rendering results from the original StyleGAN model, where we only learn the mapping network but keep the StyleGAN weights fixed. Clearly, after fine-tuning, StyleGAN-R produces more consistent results.\n[Figure K columns: Input, w. Light, w.o. Light]\nFigure K: Light Prediction: Given the input image, we show rendering results (using the OpenGL renderer used in DIB-R) with light (columns 2, 5) and results with just textures (columns 3, 6). We find that the two results are quite similar, which indicates that we did not learn a good predictor for lighting. Moreover, we find that higher-order lighting effects, such as reflections and high-specular light, are merged into the texture, as shown in the second row. We aim to resolve this limitation in future work.\nReal Image Editing: We show additional real-image editing examples in Fig. L. With our StyleGAN-R, we can easily change the car's size, azimuth and elevation and synthesize a new image, while preserving the shape and texture of the car with a consistent background." }, { "heading": "H ABLATION STUDIES", "text": "We find that the multi-view consistency and perceptual losses play an important role in training, as shown in Fig. P. The multi-view consistency loss helps to train a more accurate inverse graphics network in terms of shape, while the perceptual loss helps to keep the texture more realistic.\n[Figure L panel titles: Input, StyleGAN-R, Manipulate Scales / Manipulate Azimuths / Manipulate Elevations]\nFigure L: Real Image Editing. Given an input image (column 1), we use our inverse graphics network to predict the 3D properties and apply StyleGAN-R to re-render these (columns 2, 3). We manipulate the car size/scale (rows 1-3), azimuth (rows 4-6) and elevation (rows 7-9)." }, { "heading": "I STYLEGAN MANIPULATION", "text": "We show that our method for manipulating StyleGAN generalizes to other classes, as illustrated in the StyleGAN-R manipulation results for the bird class in Fig. M and Fig. N." }, { "heading": "J FAILURE CASES", "text": "We find that our inverse graphics network fails on out-of-distribution images/shapes, as shown in Fig. O. For example, the reconstruction results for the Batmobile and Flintstone cars are not representative of the input cars. We anticipate that this issue can be addressed by augmenting the dataset on which StyleGAN is trained with more diverse objects. Part of the issue is also caused by GANs not capturing the tails of the distribution well, which is an active area of research.\n[Figure M rows: Scale, Azimuth, Elevation]\nFigure M: Bird Camera Controller: We manipulate the azimuth, scale and elevation parameters with StyleGAN-R to synthesize images in new viewpoints while keeping the content code fixed.\n[Figure N rows: Car 1, Car 2, Car 3; columns: Sampled Cars, Shape Swap, Texture Swap, Background Swap]\nFigure N: Bird 3D Manipulation: We sample 3 birds in column 1. We replace the shape of all birds with the shape of Bird 1 (red box) in the 2nd column. We transfer the texture of Bird 2 (green box) to the other birds (3rd column). In the last column, we paste the background of Bird 3 (cyan box) onto the other birds. Examples indicated with boxes are unchanged." }, { "heading": "K LIMITATIONS", "text": "Our simple spherical harmonics lighting model fails to separate light from textures. We show several examples in Fig. K. We leave this issue for future work.\n[Figure O columns: Input, Multiple Views for the predicted shape and texture]\nFigure O: 3D Reconstruction Failure Cases: We show examples of failure cases for car, bird and horse. Our method tends to fail to produce relevant shapes for objects with out-of-distribution shapes (or textures).\n[Figure P rows: Full, w.o M.V., w.o P.; columns: Input, Pred. Texture, Pred. Multiple Views]\nFigure P: Ablation Study: We ablate the use of the multi-view consistency and perceptual losses by showing results of 3D predictions. Clearly, the texture becomes worse in the invisible part if we remove the multi-view consistency loss (rows 2, 5, denoted by “w.o M. V.”, meaning that no multi-view consistency loss was used during training), showcasing the importance of our StyleGAN multi-view dataset.
Moreover, the textures become quite smooth and lose details if we do not use the perceptual loss (rows 3, 6, denoted by “w.o P.”, meaning that no perceptual loss was used during training)." } ]
2021
null
SP:66e2413a5a20b51378742286d20985a11776a782
[ "This paper proposes a contrastive autoencoder approach that only requires small data to perform a multi-label classification on the long-tail problem. They introduce a matching network to compare text and label embeddings and calculate the probabilities of the label given the input. The proposed idea is very straightforward by combining a matching network with contrastive learning to give broader signals. The goal of this work is to enable zero-shot and few-shot learning with very few resources as a more sustainable approach to machine learning applications. " ]
For natural language processing (NLP) ‘text-to-text’ tasks, prevailing approaches heavily rely on pretraining large self-supervised models on massive external data sources. However, this methodology is being critiqued for: exceptional compute and pretraining data requirements; diminishing returns on both large and small datasets; and evaluation settings that overestimate performance differences. The core belief behind current methodology, coined ‘the bitter lesson’ by R. Sutton, is that ‘compute scale-up beats data and compute-efficient algorithms’, neglecting that progress in compute hardware scale-up is based almost entirely on the miniaturisation of resource consumption. We thus approach pretraining from a miniaturisation perspective, so as not to require massive external data sources and models, and to avoid translations from continuous input embeddings to discrete labels. To minimise favourable evaluation, we examine learning on a challenging long-tailed, low-resource, multi-label text classification dataset with noisy, highly sparse labels and many rare concepts. To this end, we propose using a ‘dataset-internal’, self-supervised contrastive autoencoding approach for pretraining that enables marked improvements in zero-shot, few-shot and supervised learning performance; even under a challenging, otherwise avoided, low-resource scenario, without defaulting to large-scale external datasets as support training signals. Crucially, we find evidence that zero and few-shot learning markedly benefit from adding more ‘dataset-internal’, self-supervised training signals, e.g. when increasing self-supervised learning signals via large external sources is infeasible.
[]
[ { "authors": [ "Trapit Bansal", "Rishikesh Jha", "Tsendsuren Munkhdalai", "Andrew McCallum" ], "title": "Self-supervised meta-learning for few-shot natural language classification", "venue": "tasks. CoRR,", "year": 2020 }, { "authors": [ "Yoshua Bengio", "Pascal Lamblin", "Dan Popovici", "Hugo Larochelle" ], "title": "Greedy layer-wise training of deep networks", "venue": "In Advances in NeurIPS,", "year": 2006 }, { "authors": [ "Piotr Bojanowski", "Edouard Grave", "Armand Joulin", "Tomas Mikolov" ], "title": "Enriching word vectors with subword information. TACL, 2017", "venue": "URL https://transacl.org/ojs/index. php/tacl/article/view/999", "year": 2017 }, { "authors": [ "Radford", "Ilya Sutskever", "Dario Amodei" ], "title": "Language models are few-shot learners", "venue": "CoRR, abs/2005.14165,", "year": 2020 }, { "authors": [ "Wei-Cheng Chang", "Hsiang-Fu Yu", "Kai Zhong", "Yiming Yang", "Inderjit Dhillon" ], "title": "X-bert: extreme multi-label text classification with using bidirectional encoder representations from transformers", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey E. Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Jesse Davis", "Mark Goadrich" ], "title": "The relationship between precision-recall and ROC curves", "venue": "In Machine Learning, Proceedings of the Twenty-Third International Conference (ICML", "year": 2006 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "venue": "In Proceedings of NAACLHLT. Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Jesse Dodge", "Gabriel Ilharco", "Roy Schwartz", "Ali Farhadi", "Hannaneh Hajishirzi", "Noah A. Smith" ], "title": "Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping", "venue": "CoRR, abs/2002.06305,", "year": 2020 }, { "authors": [ "Alberto Fernández", "Salvador Garcı́a", "Mikel Galar", "Ronaldo C. Prati", "Bartosz Krawczyk", "Francisco Herrera" ], "title": "Learning from Imbalanced Data Sets. Springer, 2018", "venue": "ISBN 978-3319-98073-7", "year": 2018 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of AISTATS. JMLR,", "year": 2010 }, { "authors": [ "Sara Hooker" ], "title": "The hardware lottery, 2020", "venue": "URL https://arxiv.org/abs/2009.06489", "year": 2009 }, { "authors": [ "Sara Hooker", "Aaron Courville", "Gregory Clark", "Yann Dauphin", "Andrea Frome" ], "title": "What do compressed deep neural networks forget?, 2020a", "venue": null, "year": 2020 }, { "authors": [ "Sara Hooker", "Nyalleng Moorosi", "Gregory Clark", "Samy Bengio", "Emily Denton" ], "title": "Characterising bias in compressed models, 2020b", "venue": null, "year": 2020 }, { "authors": [ "Dirk Hovy", "Shannon L. 
Spruit" ], "title": "The social impact of natural language processing", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "year": 2016 }, { "authors": [ "Huajie Jiang", "Ruiping Wang", "Shiguang Shan", "Xilin Chen" ], "title": "Transferable contrastive network for generalized zero-shot learning", "venue": "IEEE/CVF International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Yiding Jiang", "Behnam Neyshabur", "Hossein Mobahi", "Dilip Krishnan", "Samy Bengio" ], "title": "Fantastic generalization measures and where to find them", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Tal Linzen" ], "title": "How can we accelerate progress towards human-like linguistic generalization", "venue": "In Proceedings of ACL,", "year": 2020 }, { "authors": [ "Jingzhou Liu", "Wei-Cheng Chang", "Yuexin Wu", "Yiming Yang" ], "title": "Deep learning for extreme multilabel text classification", "venue": "In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2017 }, { "authors": [ "Liangchen Luo", "Yuanhao Xiong", "Yan Liu", "Xu Sun" ], "title": "Adaptive gradient methods with dynamic bound of learning rate", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zhuang Ma", "Michael Collins" ], "title": "Noise contrastive estimation and negative sampling for conditional models: Consistency and statistical efficiency", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Zongyang Ma", "Aixin Sun", "Quan Yuan", "Gao Cong" ], "title": "Tagging Your Tweets: A Probabilistic Modeling of Hashtag Annotation in Twitter", "venue": "CIKM, pp. 999–1008", "year": 2014 }, { "authors": [ "Tom McCoy", "Ellie Pavlick", "Tal Linzen" ], "title": "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Gábor Melis", "Tomás Kociský", "Phil Blunsom" ], "title": "Mogrifier LSTM", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer sentinel mixture models", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Margaret Mitchell", "Dylan Baker", "Nyalleng Moorosi", "Emily Denton", "Ben Hutchinson", "Alex Hanna", "Timnit Gebru", "Jamie Morgenstern" ], "title": "Diversity and inclusion metrics in subset selection", "venue": "In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society,", "year": 2020 }, { "authors": [ "Kevin Musgrave", "Serge J. Belongie", "Ser-Nam Lim" ], "title": "A metric learning reality", "venue": "check. CoRR,", "year": 2020 }, { "authors": [ "Preetum Nakkiran", "Gal Kaplun", "Yamini Bansal", "Tristan Yang", "Boaz Barak", "Ilya Sutskever" ], "title": "Deep double descent: Where bigger models and more data hurt", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Nikolaos Pappas", "James Henderson" ], "title": "GILE: A generalized input-label embedding for text classification", "venue": "Trans. Assoc. Comput. Linguistics,", "year": 2019 }, { "authors": [ "Barbara Plank", "Nils Rethmeier. 
Morty" ], "title": "Unsupervised learning of task-specialized word embeddings by autoencoding", "venue": "In RepL4NLP@ACL,", "year": 2019 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J. Liu" ], "title": "Exploring the limits of transfer learning with a unified text-totext transformer", "venue": "Journal of Machine Learning Research,", "year": 2020 }, { "authors": [ "Brian Rappert", "Michael J. Selgelid" ], "title": "On the Dual Uses of Science and Ethics: Principles, Practices, and Prospects", "venue": "URL http://www.jstor. org/stable/j.ctt5hgz15", "year": 2013 }, { "authors": [ "Mark Riedl" ], "title": "Ai democratization in the era of gpt-3", "venue": "The Gradient,", "year": 2020 }, { "authors": [ "Anna Rogers", "Olga Kovaleva", "Anna Rumshisky" ], "title": "A primer in bertology: What we know about how bert", "venue": null, "year": 2020 }, { "authors": [ "Timo Schick", "Hinrich Schütze" ], "title": "It’s not just size that matters: Small language models are also few-shot learners. CoRR, abs/2009.07118, 2020a", "venue": null, "year": 2009 }, { "authors": [ "Timo Schick", "Hinrich Schütze" ], "title": "Rare words: A major problem for contextualized embeddings and how to fix it by attentive mimicking", "venue": "In Proceedings of AAAI. AAAI Press,", "year": 2020 }, { "authors": [ "Oğuz Necip Şerbetci", "Sebastian Möller", "Roland Roller", "Nils Rethmeier" ], "title": "Efficare: Better prognostic models via resource-efficient health embeddings", "venue": "In AMIA Annual Symposium. PubMed,", "year": 2020 }, { "authors": [ "Anna Lowenhaupt Tsing", "Heather Anne Swanson", "Elaine Gan", "Nils" ], "title": "Bubandt. Arts of living on a damaged planet: ghosts of the Anthropocene ; Arts of living on a damaged planet: monsters of the Anthropocene", "venue": null, "year": 2017 }, { "authors": [ "Oriol Vinyals", "Charles Blundell", "Tim Lillicrap", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Matching networks for one shot learning", "venue": "In NIPS,", "year": 2016 }, { "authors": [ "Chenguang Wang", "Zihao Ye", "Aston Zhang", "Zheng Zhang", "Alexander J. Smola" ], "title": "Transformer on a diet", "venue": "CoRR, abs/2002.06170,", "year": 2020 }, { "authors": [ "Sinong Wang", "Madian Khabsa", "Hao Ma" ], "title": "To pretrain or not to pretrain: Examining the benefits of pretraining on resource rich tasks", "venue": "CoRR, abs/2006.08671,", "year": 2020 }, { "authors": [ "Sinong Wang", "Madian Khabsa", "Hao Ma" ], "title": "To pretrain or not to pretrain: Examining the benefits of pretraining on resource rich tasks, 2020c", "venue": null, "year": 2020 }, { "authors": [ "Zeerak Waseem", "Smarika Lulz", "Joachim Bingel", "Isabelle Augenstein" ], "title": "Disembodied machine learning: On the illusion of objectivity in nlp. 
anonymous preprint under review, 2020", "venue": null, "year": 2020 }, { "authors": [ "Dani Yogatama", "Cyprien de Masson d’Autume", "Jerome Connor", "Tomás Kociský", "Mike Chrzanowski", "Lingpeng Kong", "Angeliki Lazaridou", "Wang Ling", "Lei Yu", "Chris Dyer", "Phil Blunsom" ], "title": "Learning and evaluating general linguistic intelligence", "venue": "URL http: //arxiv.org/abs/1901.11373", "year": 2019 }, { "authors": [ "Honglun Zhang", "Liqiang Xiao", "Wenqing Chen", "Yongkun Wang", "Yaohui Jin" ], "title": "Multi-task label embedding for text classification", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Honglun Zhang", "Liqiang Xiao", "Wenqing Chen", "Yongkun Wang", "Yaohui Jin" ], "title": "Multi-task label embedding for text classification", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The current prevailing approach to supervised and few-shot learning is to use self-supervised pretraining on large-scale ‘task-external’ data and then fine-tune on end-task labels. Recent studies have found that, thus far, this way of pretraining fails in low-resource settings (Yogatama et al., 2019; Şerbetci et al., 2020) and that reported performance improvements are caused in part by evaluation setups that are designed in line with the paradigm that “massive resources are pivotal” to improving language understanding (Linzen, 2020; Schick & Schütze, 2020a; Dodge et al., 2020; Brown et al., 2020) or computer vision (Chen et al., 2020). Despite these critiques, the underlying goal of better initialisation of layer weights is a core requirement of successful learning with neural networks, where self-supervised layer-wise pretraining (Bengio et al., 2006) was replaced by better layer initialisation (Glorot & Bengio, 2010), which was in turn replaced by pretraining on growing amounts of external data (Bojanowski et al., 2017; Devlin et al., 2019; Chen et al., 2020; Brown et al., 2020) – i.e. FastText, BERT, SIMCLR and GPT-3. The latter three approaches require massive compute and data resources, but enable marked learning improvements in few-shot (SIMCLR, GPT3) or zero-shot (GPT-3) scenarios compared to models that have several orders of magnitude fewer parameters. There are efforts to reduce model size requirements for few and zero-shot adaptation by orders of magnitude (Schick & Schütze, 2020a;b; Plank & Rethmeier, 2019), with some being increasingly beneficial in scenarios with low input data (X), label resources (Y ), and rare events in X,Y . Crucially, such approaches do not simply rely on more data, but on creating better initialised input features X . In contrast, approaches like SIMCLR or BERT (Chen et al., 2020; Devlin et al., 2019) use self-supervision via contrastive learning and input masking on large-scale datasets to create broader learning signals than supervision provides. SIMCLR is based on a metric learning approach called contrastive self-supervision – i.e. learning to distinguish (dis-)similar inputs using\ngenerated, but weak supervision tasks. However, as Musgrave et al. (2020) find, “when evaluating old vs. recent metric learning approaches, while controlling for data and model size, newer methods only marginally improve over the classic contrastive formulation”. Remarkably, Bansal et al. (2020) recently showed that adding broader self-supervision rather than increasing data size during large-scale pretraining can substantially boost few-shot performance.\nOur central question is whether increased (broader) pretraining self-supervision also boosts few and zero-shot performance using only small-scale, ‘task-internal’ data, instead of resorting to largescale pretraining on orders of magnitude more ‘task-external’ data – i.e. Do we really need large datasets for pretraining or just more (broader) self-supervised learning signals? To broaden small data self-supervision, we propose a contrastive self-supervised objective based on labelembedding prediction, where labels are expressed as word embeddings to learn their matching with an input text embedding. For contrastive learning, our method samples positive and negative word input tokens X for self-supervised pretraining, zero and few-shot learning; and positive and negative classes Y for few-shot to fully supervised fine-tuning. 
Thus, we propose a model architecture that unifies training from labels Y and inputs X. To increase evaluation robustness, we compare models of the same parameter and data sizes, as suggested by Musgrave et al. (2020), and evaluate on a challenging learning problem, as suggested by Linzen (2020); Hooker (2020). Namely, we evaluate on a challenging low-resource, long-tailed, noisy multi-label data setting, where information is always limited, since the long tail grows with data size and because its modeling requires the majority of parameters (Hooker et al., 2020b). For robust evaluation, we use a typical training, development, test setup and first establish a solid, supervised baseline for many-class multi-label classification that is optimised with a set of generalisation techniques proposed by Jiang et al. (2020). For evaluation in supervised, few and zero-shot learning scenarios, we analyse and propose evaluation metric choices which are meaningful across all scenarios for broader performance comparisons.\nContributions: 1 We provide a straight-forward method for self-supervised contrastive label-embedding prediction and 2 evaluate it in a challenging, noisy long-tail, low-resource multi-label text prediction scenario. 3 We show that small-scale ‘data-internal’ pretraining (on 8-80MB of text) not only improves supervised performance, but also strongly boosts few and zero-shot learning by increasing self-supervision amounts for small data, rather than increasing data amounts via the standard large-scale external data pretraining approach." }, { "heading": "2 RELATED WORK", "text": "Large to Web-scale data pretraining is at the core of state-of-the-art methods in computer vision (Chen et al., 2020) and language processing (Rogers et al., 2020; Brown et al., 2020). However, challenges and disadvantages are increasingly being discussed: (i) a requirement of large-scale external text data resources (Yogatama et al., 2019; Schick & Schütze, 2020a), (ii) an inability to pretrain recent architectures on small-scale data (Liu et al., 2020; Melis et al., 2020; Şerbetci et al., 2020), (iii) calls for more challenging evaluation tasks (Linzen, 2020; McCoy et al., 2019) and (iv) diminishing returns of pretraining on large supervised datasets (Wang et al., 2020b). To address issue (iii), challenging evaluations on long-tail prediction (Chang et al., 2019), few-shot learning (Schick & Schütze, 2020a), or zero-shot learning (Brown et al., 2020) were recently shown to benefit from self-supervised pretraining, but, to date, require massive, ‘task-external’ pretraining datasets. Remarkably, Bansal et al. (2020) showed that for large ‘data-external’ pretraining, using more self-supervision, not more data, also boosts few-shot performance. This finding inspired us to collect evidence towards a core question: “Do we need massive data (signals) or just more (diverse) self-supervised learning signals for pretraining?”. We collect evidence by posing three research questions and propose solutions that require designing approaches for issues (i-iii) as follows. One, to address issue (i), “can increasing self-supervision signals during ‘data-internal’ pretraining on small data, i.e. without large-scale ‘data-external’ pretraining, boost few and zero-shot performance”? Two, to address issue (ii), “what pretraining objectives and models do we choose that work without large training data”?
Three, to address issue (iii), “within what challenging learning scenario should we evaluate while incorporating the now standard “any NLP task as a ‘text-to-text’ problem” paradigm (Raffel et al., 2020)”?\nFortunately, existing techniques can be extended to address these issues. For example, supervised label-embedding prediction (pre-)training enables few and zero-shot learning of subsequent (unseen) supervised tasks. However, this requires the first (pre-training) task to be supervised, unlike recent large-scale self-supervised pretraining methods. Large-scale, self-supervised pretraining and label embeddings can be combined (Chang et al., 2019) to fine-tune externally pretrained BERT models via label-embedding prediction to boost long-tail task performance. However, BERT's contextualized word embeddings did not work as label embeddings, so ELMo's word embeddings had to be used (their Section 3.2), further increasing resource requirements. Even worse, when comparing language model pretraining on small text corpora, Transformers (Wang et al., 2020a) largely underperform CNNs and LSTMs (Merity et al., 2017). Fortunately, Liu et al. (2017) established that label-embedding prediction CNNs boost long-tail prediction, even without modern self-supervision or large ‘task-external’ data pretraining. Further, Pappas & Henderson (2019); Zhang et al. (2018b) used supervised text label-embedding (pre-)training and investigated transfer to subsequent supervised tasks, though not under long-tail evaluation. Here, label embeddings are average word embeddings over label description words – i.e. label descriptions are required. The former added noise contrastive estimation (NCE) (Ma & Collins, 2018) via negative sampling of labels to zero-shot predict rare, unseen classes after supervised pretraining on seen classes. Later, Jiang et al. (2019) adapted the same idea for zero-shot image classification via supervised pretraining on pairs of ‘source’ images and ‘source’ text label descriptions. They reduced overfitting by additionally pretraining on pairs of ‘source’ images and the most similar ‘zero-shot target class’ text descriptions – though this is not technically zero-shot learning, because sensible target label text descriptions have to be provided, which, when unknown (zero-shot), again leads to the long-tail issue. All these approaches are loosely based on Matching Networks by Vinyals et al. (2016) and add various training objectives.\nWe thus combine the advantages of self-supervised pretraining for large data with supervised label-embedding prediction for smaller data to propose a contrastive self-supervised pretraining via label-embedding prediction usable for small-data pretraining. We extend the supervised label-embedding baseline method by Zhang et al. (2018b) and add four important changes. First, we combine label and word embedding look-up tables into one table, as this pushes input words and label(-words) to remain in a shared vector space during training, when predicting dense label(-word) embeddings from dense input word embeddings. This ‘dense-to-dense’ prediction of words to label(-words) follows the current “any NLP task as a ‘text-to-text’ prediction” paradigm (Raffel et al., 2020), but avoids constant dense-to-sparse translation into label distributions via a compute-intensive softmax. Second, we use a noise contrastive estimation (NCE) objective (Ma & Collins, 2018), replacing softmax normalization with negative sampling of (supervision) labels.
Combining NCE and label embeddings allows predicting arbitrarily large class sets (long tails) and unseen classes. While Pappas & Henderson (2019) used NCE for supervised label pretraining, we add self-supervised pseudo-label (word) pretraining. Because labels and input words occupy the same vector space, we can use pseudo-labels (words) for self-supervised pretraining by sampling positive words from the current text instance, and negative words from adjacent text instances within a mini-batch. Third, we choose to sample from within a batch to reduce reliance (training bias) on knowing or expecting future and past word or label distribution statistics for the whole dataset, since in a zero-shot evaluation scenario unseen label and input word statistics are unknown. This also adds subsequent learning flexibility, because no statistics-collection preprocessing is required. Fourth, we add k-max pooling as in the CNN long-tail research by Liu et al. (2017), because it helps during zero-shot learning.\nSuch label-embedding based self-supervised pretraining has multiple advantages. It does not require large or external resources as in (i). Its small ‘data-internal’ self-supervised word pseudo-label pretraining addresses issue (ii) and enables unsupervised zero-shot learning. It also markedly boosts few-shot performance without requiring task-external supervised annotations as in (i) or supervised embedding transfer as in Pappas & Henderson (2019); Zhang et al. (2018b); Jiang et al. (2019). Since label embeddings are a common long-tail prediction technique, which addresses issue (iii), this makes our approach suitable for low-resource, long-tail learning without task-external labels or large-scale annotated datasets. Finally, label-embedding NCE training allows for (dense) ‘text-to-text’ training, making it applicable to a variety of tasks. We demonstrate the benefits of such a self-supervised pretraining method and model for self-supervised zero-shot learning (input X-efficiency, §6.4) and few-shot learning (label Y-efficiency, §6.3).\nFigure 1 (caption, beginning truncated): ...coded by the same word embedding layer $E$ (1), where labels have word IDs for lookup. The text embeddings are then encoded by a sequence encoder $T$ (2), while $c$ labels are encoded by a label encoder $L$ (3). Each text has multiple labels, so the text encoding $t_i$ is repeated for, and concatenated with, each label encoding $l^\circ_{i,l}$. The resulting batch of ‘text-embedding, label-embedding’ pairs $[[t_i, l^\circ_{i,1}], \dots, [t_i, l^\circ_{i,c}]]$ (4) is fed into a ‘matcher’ classifier (5) that trains a binary cross entropy loss (6) on multiple (pseudo-)label (mis-)matches $\{0, 1\}$ for each text instance $t_i$, resulting in a noise contrastive estimation (NCE) objective. Words like ‘measuring’ provide self-supervised pseudo-labels (left). Positive and negative (pseudo-)labels are sampled from their own or other instances in a mini-batch. Unlike Zhang et al. (2018a), we use a CNN for (2), negative sampling and self-supervision." }, { "heading": "3 SELF-SUPERVISED, CONTRASTIVE DENSE-TO-DENSE TEXT PREDICTION", "text": "In this section, we propose to use label-embeddings, previously used for supervised learning only (Pappas & Henderson, 2019; Zhang et al., 2018b), and exploit them for self-supervised contrastive pretraining on small-scale data. This enables contrastive self-supervised pretraining somewhat similar to methods used for large-scale models.
However, we only use small-scale ‘task-internal’ data for pretraining, which requires orders of magnitude less data and compute than large-scale, ‘task-external’ pretraining approaches. Most NLP models translate back and forth between discrete words and continuous token embeddings, often using a softmax computation that is limited to predicting classes known at training time. To ease learning from small data, our first core idea is that text input words $w_i \in x$ and labels $w^\circ_{i,l}$ should be mapped into the same word representation space, i.e. drawn from a shared embedding look-up table $E$, to replace dense-to-sparse translations with embedding-to-embedding matching. This turns NLP from a discrete ‘text-to-text’ task, as proposed in Raffel et al. (2020), into a ‘dense(text)-to-dense(text)’ task. We thus replace learning instance labels $y_i$ by their corpus-internally pretrained FastText or randomly initialised word embeddings $l^\circ_i \in L$, while others (Pappas & Henderson, 2019) use text descriptions to form label embeddings as the vector average over description word embeddings. As a result, pretraining word embeddings also pretrains (favourably initialises) label embeddings. Unknown labels (words), in turn, can be inferred via methods like FastText subword embeddings (Bojanowski et al., 2017).\nAs outlined visually, left to right in Fig. 1, learning multi-label classification then becomes a contrastive learning problem of matching the word-sequence embedding $t_i$ of text $i$ (2) with its $c$ label (word-sequence) embeddings $l^\circ_i = \{l^\circ_{i,1}, \dots, l^\circ_{i,c}\}$ (3), by feeding $c$ text-vs-label combinations $[[t_i, l^\circ_{i,1}], \dots, [t_i, l^\circ_{i,c}]]$ (4) to a binary classifier $M$ (5) for matching. This means that instead of predicting $c$ classes at once, we predict a batch of $c$ single-class, binary classifications using binary cross entropy (6), where $c$ need not be constant across instances $i$. The details of steps (1) to (6) are as follows. To train a binary classifier, we need both positive and negative labels. Thus, for each text instance $w_i = \{w_a, \dots, w_z\}$ we want to classify, we need $g$ positive labels $w^+_i = \{w^+_1, \dots, w^+_g\} \in \mathbb{R}^g$ and $b$ negative labels $w^-_i = \{w^-_1, \dots, w^-_b\} \in \mathbb{R}^b$ to form a label selection vector $w^\circ_i = \{w^+ \oplus w^-\} \in \mathbb{R}^{g+b}$. To indicate positive and negative labels, we also need a $g$-sized vector of ones $\mathbf{1} \in \mathbb{R}^g$ and a $b$-sized zero vector $\mathbf{0} \in \mathbb{R}^b$, to get a class indicator $I_i = \{\mathbf{1} \oplus \mathbf{0}\} \in \mathbb{R}^{c=g+b}$. Both the text (word) indices $w_i$ and the label indices $w^\circ_i$ are passed through a shared ‘word-or-label embedding’ look-up table $E$ (1), after which they are passed through their respective encoder networks – $T$ as text-sequence encoder, $L$ as label encoder. Thus, the text encoder produces a (single) text embedding vector $t_i = T(E(w_i))$ per text instance $i$ (2). The label encoder produces $c = g + b$ label embedding vectors ($l^\circ_i$) that form a label-embedding matrix $L_i = [l^+_1, \dots, l^+_g, l^-_1, \dots, l^-_b] \leftarrow L(E(w^\circ_i))$ (3). As text encoder $T$ we use a (CNN → max-k-pooling → ReLU) sub-network, while the label encoder $L$ is simply an (average-pool) operation, since a single label ($w^\circ_{i,j}$), e.g. ‘multi’-‘label’, can consist of multiple words. To compare how similar the text embedding $t_i$ is to each label embedding $l^\circ_{i,j}$, we repeat $t_i$ $c$ times and combine text and label embeddings to get a text-vs-label-embedding matrix $M_i = [[l^+_{i,1}, t_i], \dots, [l^-_{i,c}, t_i]]$ (4), which is passed into the matcher network $M$ (5) to produce a batch of $c$ probabilities $p_i = \{\sigma(M(M_i)_1), \dots, \sigma(M(M_i)_c)\}$ (6). As the optimisation loss, we use the binary cross entropy (BCE) between $p_i$ and $I_i$, i.e. $-\frac{1}{c}\sum_{l=1}^{c}\left[I_{i,l} \cdot \log(p_{i,l}) + (1 - I_{i,l}) \cdot \log(1 - p_{i,l})\right]$. Summing the BCE over positive and negative (pseudo-)labels is referred to as noise contrastive estimation, as used in representation learning methods across fields (Ma & Collins, 2018).
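The following is a minimal PyTorch sketch of steps (1)-(6); the layer widths follow the contrastive configuration reported in App. A.1 (100 filters per filter size, max-k = 3, and a 1024-300-1 matcher), but it is our illustrative reconstruction, not the authors' released code:

```python
import torch
import torch.nn as nn

class ContrastiveMatcher(nn.Module):
    """Steps (1)-(6): shared embedding E, text encoder T, label encoder L, matcher M."""

    def __init__(self, vocab_size, emb_dim=300, n_filters=100, k=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)          # (1) shared word/label table E
        self.convs = nn.ModuleList(                             # (2) CNN text encoder T
            [nn.Conv1d(emb_dim, n_filters, ks, padding=ks // 2) for ks in (1, 2, 3)])
        self.k = k                                              # max-k pooling
        text_dim = 3 * n_filters * k
        self.matcher = nn.Sequential(                           # (5) matcher M, A.1 layer sizes
            nn.Dropout(0.2), nn.Linear(text_dim + emb_dim, 1024), nn.ReLU(),
            nn.Dropout(0.1), nn.Linear(1024, 300), nn.ReLU(),
            nn.Linear(300, 1))

    def encode_text(self, words):                               # words: (B, seq) word IDs
        e = self.embed(words).transpose(1, 2)                   # (B, emb, seq)
        # ReLU and max-k pooling commute here, since ReLU is monotonic.
        pooled = [torch.topk(torch.relu(conv(e)), self.k, dim=2).values
                  for conv in self.convs]
        return torch.cat([p.flatten(1) for p in pooled], dim=1)  # t_i: (B, text_dim)

    def forward(self, words, label_words):
        # label_words: (B, c, label_len) word IDs of c (pseudo-)labels per text.
        t = self.encode_text(words)
        l = self.embed(label_words).mean(dim=2)                 # (3) avg-pool label encoder L
        t_rep = t.unsqueeze(1).expand(-1, l.size(1), -1)        # (4) repeat t_i for each label
        return self.matcher(torch.cat([t_rep, l], dim=2)).squeeze(-1)  # (B, c) match logits

# (6) NCE objective: BCE over the positive/negative (pseudo-)label indicators I_i;
# BCEWithLogitsLoss folds the sigmoid into the loss for numerical stability.
loss_fn = nn.BCEWithLogitsLoss()
```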
Via pseudo-label embedding pretraining, a model can predict supervised labels absent prior supervision. This exploits transfer learning from both inputs and labels, using the matcher as a learned similarity function. Positive labels $w^+_i$ can be supervision labels. Negative labels $w^-_i$ can be sampled from the positive labels of other instances $w^+_j$ in the same batch, which avoids needing to know the label set beforehand. Since labels are words, we can sample positive words from the current text instance and negative words from other text instances to get pseudo-labels. Sampling pseudo-labels provides a straightforward contrastive, partial autoencoding mechanism usable as self-supervision in pretraining or as a zero-shot learner. Because both real and pseudo labels are sampled words, the model does not need to distinguish between them. Instead, learning is controlled by an out-of-model sampling routine for real supervision and pseudo self-supervision labels. This leads to a second core idea: once inputs $X$ and outputs $Y$ are well initialised, the model $\Theta$ can also be better initialised by pretraining via self-supervision. As a result, we can learn supervised, few and zero-shot tasks in a unified manner.
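A sketch of such an out-of-model sampling routine; the per-instance sample counts and the overlap filter are our illustrative assumptions:

```python
import random

def sample_pseudo_labels(batch_texts, g=2, b=2, seed=None):
    """In-batch (pseudo-)label sampling, kept outside the model.

    For each tokenised text: g positive pseudo-labels (words from the text
    itself), b negatives (words from other texts in the mini-batch), and the
    {1, 0} match indicators I_i. No corpus-level statistics are needed.
    """
    rng = random.Random(seed)
    samples = []
    for i, text in enumerate(batch_texts):
        positives = rng.sample(text, min(g, len(text)))
        # Negatives come from adjacent instances; excluding words that also
        # occur in the current text is our own safeguard against false negatives.
        candidates = [w for j, other in enumerate(batch_texts) if j != i
                      for w in other if w not in text]
        negatives = rng.sample(candidates, min(b, len(candidates)))
        indicators = [1.0] * len(positives) + [0.0] * len(negatives)
        samples.append((positives + negatives, indicators))
    return samples

# Example with two tokenised questions from one mini-batch:
print(sample_pseudo_labels([["measuring", "variance", "of", "an", "estimator"],
                            ["bayesian", "priors", "for", "regression"]], seed=0))
```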
" }, { "heading": "4 SMALL, LONG-TAILED DENSE-TO-DENSE TEXT (LABEL) PREDICTION", "text": "Since it is our goal to research better zero and few-shot learning approaches for small ‘text-to-text’ pretraining models, we choose a small multi-label question tag prediction dataset as a test bed. We use the “Questions from Cross Validated”1 dataset, where machine learning concepts are tagged per question. This dataset fulfills three requirements: it is small-scale, long-tailed, and entails solving a challenging, noisy ‘text-to-text’ prediction task. There is currently no published baseline for this task. The classes (tags) and input words are highly long-tailed (imbalanced). The first 20% of labels occur in only 7 ‘head’ classes. Tags are highly sparse – at most 4 out of 1315 tags are labelled per question. Word embeddings are pretrained with FastText – details in App. A.4. We use the labelled-questions part of the dataset, which has 85k questions and 244k labels. What makes this problem particularly challenging is that 80% of the least frequent labels are distributed over 99.5% of classes, as an extreme long tail. The label density (% of active labels per question) is only 0.22%, or ≈ 2.8/1305 possible classes per instance. For a realistic evaluation setting, we split the dataset diachronically, using the 80% earliest documents for training, the next 10% for development, and the last 10% for testing.\nWhy not large external pretraining? Real-world, long-tailed datasets are always dominated by a low-learning-resource problem for most classes. This makes two things obvious: (A) that model learning cannot simply be solved by using massive datasets, as the long-tail problem grows as well; (B) that studying self-supervised pretraining on challenging, but smaller, long-tailed datasets such as this one is useful for assessing an approach's ability to learn from complex, real-world data. Massive data pretraining masks and thus prevents studying these effects. We thus evaluate the effects of self-supervision in a noisy low-resource setup, also as a response to recent critiques of the evaluation metrics used to assess Web-scale learning (Linzen, 2020; Yogatama et al., 2019). As McCoy et al. (2019) show, these evaluation setups are solvable by large-scale pattern overfitting, which, they find, leads to a ‘Clever Hans effect‘, rather than real task progress.\n1https://www.kaggle.com/stackoverflow/statsquestions" }, { "heading": "5 EXPERIMENTAL SETUP AND METRICS", "text": "We want to analyse the benefits of self-supervision for (a) fully supervised, (b) few and (c) zero-shot learning in a noisy low-resource, long-tailed, multi-label classification setting. In this section, we describe suitable evaluation metrics, then discuss results in the next section.\nLong-tail evaluation metrics and challenges: Long-tail, multi-label classification is challenging to evaluate. Many classification metrics are unsuitable for evaluating long-tailed datasets. They either: (i) misrepresent performance under class imbalance; (ii) do not scale to many classes; or (iii) are only meaningful if the desirable number of classes per instance is known (multi-label classification). Regarding problem (i), ROC AUC is known to overestimate performance under imbalance (Davis & Goadrich, 2006; Fernández et al., 2018), e.g. ROC AUC test scores were upwards of .98 for most of our models. Regarding problem (ii), measures such as the F-score require a discretisation threshold search for imbalanced prediction problems, i.e. searching for the optimal threshold per class (on a development set), which becomes computationally infeasible. Simply using a 0.5 probability threshold drives model selection towards balanced prediction, mismatching the long-tail problem. Metrics like precision@k handle problems (i-ii), but require knowledge of k, i.e. problem (iii): these metrics can only compare a chosen number of labels k, and cannot handle cases where the correct number of labels per instance varies or is unknown (label distribution shift). To more reliably measure performance under imbalance (i), to avoid unscalable class decision thresholding (ii), and to not optimise models for a set number of labels k per instance (iii), we use the average precision ($AP$) score. It is defined as $AP = \sum_n (R_n - R_{n-1}) P_n$, where $P_n$ and $R_n$ are the precision and recall at the $n$-th threshold. $AP$ measures classifier performance over all decision thresholds, is computationally cheaper than threshold search, and allows for a dynamic number of labels per instance. This latter property makes this task especially hard. A model has to learn when to predict a label, at what rarity, and how many such labels to predict for each instance. We also report the macro-averaged Brier score (BS) over all classes, as a scalable, compute-efficient measure of classifier calibration. Though more accurate measures exist, computing them is more involved and they require additional evaluation labour when optimising a specific supervised dataset, which is not our goal. For both measures, we use their popular scikit-learn implementations2.\n2https://scikit-learn.org/stable/modules/model_evaluation.html\nA challenging task, even for humans: On this dataset it is hard to guess how many labels to tag per question and how specific they should be, especially without domain knowledge. Out of the different weighting schemes for average precision, we choose $AP_{micro}$ and $AP_{macro}$, as they are the most pessimistic (hardest to increase) measures, to reduce optimistic evaluation. This choice is motivated by the goal of this work, which is not simply to push end-task performance, but to use supervised learning scores as a proxy to evaluate the effects of pretraining on zero-shot learning, as well as the data-efficiency and speed of supervised and few-shot learning.
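Both metric choices map directly onto scikit-learn; a sketch with toy data for illustration:

```python
import numpy as np
from sklearn.metrics import average_precision_score, brier_score_loss

# y_true: (n_instances, n_classes) binary indicators; y_prob: predicted probabilities.
y_true = np.array([[1, 0, 0], [0, 1, 1]])
y_prob = np.array([[0.8, 0.1, 0.2], [0.3, 0.7, 0.6]])

ap_micro = average_precision_score(y_true, y_prob, average='micro')
ap_macro = average_precision_score(y_true, y_prob, average='macro')
# Macro-averaged Brier score: mean squared error of per-class probabilities.
brier_macro = np.mean([brier_score_loss(y_true[:, c], y_prob[:, c])
                       for c in range(y_true.shape[1])])
print(ap_micro, ap_macro, brier_macro)
```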
" }, { "heading": "6 RESULTS", "text": "In this section, we first analyse a normal and a strong supervised baseline to minimise favourable comparison against the subsequently evaluated label-embedding and self-supervision enhanced approaches. Finally, we analyse the benefits of ‘dataset-internal’ pretraining for few-shot learning, and how the amount of pretraining learning signal and model size affect zero-shot learning. Test scores are reported according to the best dev set average precision score $AP_{micro}$ over all classes." }, { "heading": "6.1 BASELINE MODEL RESULTS", "text": "In this section, we establish baseline results (BASE) for a non-learning majority-class baseline (ZeroR), a common (‘weak’) CNN baseline trained with binary cross entropy, and a solid CNN baseline optimised using a set of generalisation techniques proposed by Jiang et al. (2020). The ZeroR classifier is useful for establishing a baseline performance under class imbalance – e.g. if a class is present in only 10% of instances, then 90% accuracy is achieved by simply always predicting zero – i.e. the majority class. When doing so on our long-tailed task, where the class majority is always zero, we get an $AP_{micro}$ and $AP_{macro}$ of 0.2%, since out of the 1315 classes, maximally four classes are active per instance. Importantly, this tells us that: (a) simply learning to predict zeros cannot score well under this metric and (b) this problem setting is challenging. Next, we evaluate both a weak and an optimised baseline (WB, OB). When using a very small CNN as the baseline (WB), with max pooling over 10 filters at filter sizes 1-3 that feed into a one-layer classifier, we achieved 33.75% $AP_{micro}$ on the test set – after only tuning the learning rate. When tuning this baseline for parameters known to increase generalisation, using a set of such methods suggested by Jiang et al. (2020), we get a more solid test score of 45.01 $AP_{micro}$ and 22.81 $AP_{macro}$. The macro result tells us that not all classes perform equally well. Upon closer inspection, we find that model performance worsens with increasing class rarity, as expected. While establishing a solid baseline, we find expected limitations of model width, max-k pooling and dropout scale-up, and a confirmation that controlled experiment comparisons that only change one variable at a time do not suffice to find better hyperparameter configurations. For example, when widening lower-layer components and observing a decrease in performance, higher layers should also be made wider to accommodate the additional feature information from lower layers – which is consistent with findings in Nakkiran et al. (2020). A more detailed breakdown of this analysis can be found in Tab. 2 in App. A. We explore a considerable number of hyperparameter configurations in an effort to compute a solid baseline. This allows for more robust insights and helps to speed up optimisation of the self-supervised models.
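For concreteness, a minimal PyTorch sketch of the weak baseline (WB) as described: 10 filters at each of the filter sizes 1-3, max pooling, and a one-layer classifier trained with multi-label BCE over the 1315 tags; the embedding dimension and vocabulary size are our assumptions:

```python
import torch
import torch.nn as nn

class WeakBaseline(nn.Module):
    """WB: a very small CNN with 10 filters at sizes 1-3, max pooling,
    and a one-layer classifier over all tags."""

    def __init__(self, vocab_size, n_classes, emb_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, 10, ks) for ks in (1, 2, 3)])
        self.clf = nn.Linear(3 * 10, n_classes)      # one-layer classifier

    def forward(self, words):                        # words: (B, seq)
        e = self.embed(words).transpose(1, 2)        # (B, emb, seq)
        pooled = [torch.relu(conv(e)).max(dim=2).values for conv in self.convs]
        return self.clf(torch.cat(pooled, dim=1))    # (B, n_classes) logits

model = WeakBaseline(vocab_size=50_000, n_classes=1315)  # vocab size assumed
loss_fn = nn.BCEWithLogitsLoss()                         # standard multi-label BCE
```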
" }, { "heading": "6.2 100% SUPERVISION (SL+SSLSCR) AS REFERENCE (*) FOR FEW AND ZERO-SHOT (0%)", "text": "Tab. 1 shows both: models trained FROM SCRATCH (s), and models that are first PRETRAINED (p) using self-supervised word pseudo-labels from text inputs and afterwards fine-tuned (f) on supervision labels. To fit the supervised end-task (tag prediction), both fine-tuning and training from scratch can either: (4) only fit supervision labels (SL) or (3) jointly fit supervised labels and self-supervised word pseudo-labels (S(+S)L), as described in §3. However, before analysing results, we define a controlled experiment setup using a fixed, but shared, hyperparameter setting ‘(*) SL+SSLscr’ as a reference (*). Since SL+SSLscr is the most basic model learning setup that uses both self-supervision and supervision, we use its optimal hyperparameters ‘(*) SL+SSLscr’ as a fixed reference configuration for most subsequent learning setups, as indicated by the ‘params like (*)’ marker. This ensures a more controlled comparison of the effects of pretraining vs. training from scratch, and robust insights on how to design self-supervision during end-task fitting and pretraining. The (*) reference will hence be used for most few and zero-shot settings. When comparing PRETRAINED models with models trained FROM SCRATCH, we see that under comparable hyperparameters, without setting-specific parameter tuning, all four learning setups perform similarly, within 1 percentage point (%p) of each other. We also see that the PRETRAINED (5) model, which uses self-supervision during both pretraining and fine-tuning, performs best. Training FROM SCRATCH using self+supervision SL+SSLscr somewhat hurts performance compared to using supervision alone in SLscr. Test scores are reported for the best dev set $AP_{micro}$ scores." }, { "heading": "6.3 FEW-SHOT: PRETRAIN FOR BETTER LONG-TAIL, LOW-RESOURCE, FEW-SHOT LEARNING", "text": "In this section, we present evidence that even in a data-limited, long-tailed setting, self-supervised ‘data-internal’ pretraining: (a) increases the few-shot learning performance of subsequent fine-tuning, while (b) improving learning speed and stability. This demonstrates that small-data pretraining has similar benefits as large-scale pretraining (Brown et al., 2020; Schick & Schütze, 2020a).\n[Fig. 2 panel titles: ‘Few-shot label efficiency for supervised training from scratch’ (left) and ‘Few-shot label efficiency during joint self+supervised fine-tuning’ (right), with label fractions 1, 0.75, 0.5, 0.25, 0.1]\nIn Fig. 2, when using the (*) reference model from Tab. 1, we now compare training from scratch (4) as before (pretraining off, left) with pretraining via self-supervised word pseudo-labels and then fine-tuning on the supervised training labels (5) of the end-task (pretraining on). Note that our model architecture (Fig. 1) does not distinguish between self-supervised and supervised labels, which means that during self-supervised pretraining, we sample as many word pseudo-labels as real labels during supervised fine-tuning (or when supervising from scratch).
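The two training regimes compared here reduce to the following sketch, reusing the `ContrastiveMatcher` and sampling routine from the sketches in §3 (the driver function and its batch format are ours):

```python
def pretrain_then_finetune(model, optimizer, loss_fn,
                           pseudo_batches, supervised_batches, label_fraction=0.1):
    """Stage 1: self-supervised pretraining on word pseudo-labels.
    Stage 2: few-shot fine-tuning on a fraction of the supervised tag labels.
    Each batch is a (words, label_word_ids, indicator_targets) tensor triple."""
    n_kept = int(label_fraction * len(supervised_batches))
    for stage_batches in (pseudo_batches, supervised_batches[:n_kept]):
        for words, labels, indicators in stage_batches:
            optimizer.zero_grad()
            loss = loss_fn(model(words, labels), indicators)
            loss.backward()
            optimizer.step()
```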
1, where pretrained models (SSLpre→SLfin, SSLpre→SL+SSLfin) achieve ≈ .38/.18 APmicro|macro test compared to only ≈ .30/.13 APmicro|macro test for models trained from scratch (compare 7-10). This means that, when using only 10% supervised labels, pretrained models still retain 38.25/48.20, or roughly 80%, of their fully supervised performance. This provides evidence to answer the underlying question: “Do we really need more data for pretraining or can we simply increase self-supervision?”. Very recent work by Bansal et al. (2020) has investigated this question for large-scale, self-supervised pretraining, where they showed that increasing self-supervision to create “a richer learning signal” benefits few-shot performance of large models. Our results demonstrate that this is also the case for small-scale, non-Transformer pretrained models, even under a much more challenging long-tailed learning setting than Bansal et al. (2020) examined. However, to better understand the benefits of using more self-supervised training signals and their relation to model size, we examine the zero-shot performance of our pretraining approach in regards to label (signal) amount, network width and zero-shot X data-efficiency (low-resource zero-shot performance) – i.e. zero-shot performance when pretraining on fractions of inputs X to forcibly limit self-supervision." }, { "heading": "6.4 ZERO-SHOT: MORE IS BETTER; FOR ‘LOW-RESOURCE’ ZERO-SHOT, PRETRAIN LONGER", "text": "In this experiment, we study how the number of self-supervised labels (signal) and the model width used for self-supervised pretraining affect zero-shot performance on the end-task test set. We show results in both Fig. 3 (11, 12) and Tab. 1 (ZERO-SHOT). In Fig. 3, we see that when using the reference hyperparameter configuration ((*) in Tab. 1), pretraining gets the lowest zero-shot performance. When increasing the number of self-supervised word pseudo-labels from 150 to 500, the model performs better (middle curve), while not using more parameters – so increasing self-supervision signals is beneficial. When additionally tripling the network’s sequence and label encoder width and doubling the label match classifier size, zero-shot performance increases even more (top curve). This indicates that for zero-shot learning performance from pretraining, both the amount of training signals and model size have a significant impact. While increased model size has been linked to increased zero-shot performance of Web-scale pretrained models like GPT-3 (Brown et al., 2020), the influence of signal amount on zero-shot learning is much less well understood, because large-scale pretraining research often increases training data size when changing self-supervision, as outlined by Liu et al. (2020). Finally, in Fig. 3 we see that when pretraining our model for zero-shot prediction on only portions (100%, 75%, 50%, 25% and 10%) of the training text inputs X, i.e. an increasingly low-resource zero-shot setting, we still converge towards comparable full zero-shot performance (if we had not stopped early). However, each reduction in training size multiplies the required training time – when using the same number of self-labels. This provides a promising insight into self-supervised pretraining on small datasets, which, if designed appropriately, can be used to pretrain well-initialised models for supervised fine-tuning and few-shot learning from very small text sizes.
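To make the self-supervision signal used throughout this section concrete, below is a minimal sketch of word pseudo-label sampling, assuming a simple noise-contrastive scheme in which words from the input text act as positive labels and random vocabulary words as negatives; the function and parameter names are illustrative, not the authors' exact implementation.

```python
# Hypothetical sketch of noise-contrastive word pseudo-label sampling:
# input words become positive labels, random vocabulary words negatives.
import random

def sample_word_pseudo_labels(tokens, vocab, n_pos=2, n_neg=2):
    positives = random.sample(tokens, min(n_pos, len(tokens)))
    negatives = random.sample([w for w in vocab if w not in set(tokens)], n_neg)
    # (input, pseudo-label, match?) triples for a binary label-match classifier
    return ([(tokens, w, 1) for w in positives] +
            [(tokens, w, 0) for w in negatives])

pairs = sample_word_pseudo_labels(
    ["gradient", "descent", "converges"],
    vocab=["gradient", "descent", "converges", "kernel", "bayesian", "dropout"])
```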
}, { "heading": "7 CONCLUSION", "text": "We showed that label-embedding prediction, modified for self-supervised pretraining on a challenging long-tail, low-resource dataset substantially improves low-resource few and zero-shot performance. We find that increased self-supervision, in place of increased data size or resorting to largescale pretraining, strongly boosts few and zero-shot performance, even in challenging settings. In future, we envision that the proposed methods could be applied in scenarios where little in-domain (pre-)training data is available, e.g. in medicine (Şerbetci et al., 2020), and where new labels rapidly emerge at test time, e.g. for hashtag prediction (Ma et al., 2014). The code and data splits will be published on https://github.com." }, { "heading": "A APPENDIX", "text": "A.1 A BASELINE TUNED USING GENERALISATION TECHNIQUES\nFor the baseline we found optimal hyperparameters to be: lr=0.0075, filter-sizes={1: 57, 2: 29, 3: 14}, clf=one layer classifier, ’conf’:[{’do’:.2}] , max-k pooling=3, bs=1536, tune embedding=True, optimizer=ADAM with pytorch defaults. Increasing the filter size, classifier size or depth or using more k decreased dev set performance due to increased overfitting. In general the standard multilabel BCE loss overfit much more quickly than the contrastive methods described in §3. The contrastive model only differs it was able to use more filters {1: 100, 2: 100, 3: 100}, where using only {1: 20, 2: 20, 3: 20} loses 1.5 %p of performance, and that its optimal lr = 0.0005, while the batch size shrinks to 1024 due to increased memory requirements of label matching. This contrastive models optimal matcher classifier is deeper, due to the increased task complexity – four layer classifier, ’conf’: [{’do’: 0.2}, {’out dim’: 1024, ’do’: 0.1}, {’out dim’: 300, ’do’: None}, {’out dim’: 1, ’do’: None}]}.\nA.2 LABEL-EMBEDDINGS, PRETRAINING EFFECTS ON THE LONG-TAIL\nIn this section we analyse the effects of using supervised label-embeddings and self-supervised pretraining with words as pseudo-labels. Plotting the average precision of 1305 would be unreadable.\nInstead, we sort classes from frequent to rare and assign them to one of five 20% class frequency buckets, such that each bucket has the same amount of positive labels (label occurrences) in it. As seen in Fig. 4, this means that the head 0 − 20% bucket (left, blue) has very few, frequent classes, while tail buckets 20 − 40% . . . 80 − 100% have increasingly more classes (right, orange, red) that also become increasingly more rare. We bucket classes this to balance label frequency between buckets to make them directly comparable.\nlong-tail performance: BCE vs label-embedding (LE) and self-supervised pretraining (PT)\nLabel-embedding increase long-tail performance: In Fig. 5 we can see that the optimized baseline (2) from Tab. 1 performs much worse than models that use only the supervised label-embeddings (LE) and methods that also use self-supervised pretraining (PT) via noise contrastive sampling of input words as pseudo labels. We also see that regarding end-task performance on the tag predition task, training from scratch (LE, pink ×, ) performs only slightly worse than fine-tuning after selfsupervised pretraining (purple ×, ). However, we also see that increasing the model size during\nself-supervised pretraining (5.XL) boost performance on the long-tail, especially with increasingly tailed or rare (60-100%) classes. Previously, we saw in Fig. 
3 that the same “larger net, 3.3x labels” model (5.XL) increased zero-shot performance over the default parameters (*), which demonstrates that improved self-supervised zero-shot performance translates into better supervised end-task fine-tuning performance. We also found that increasing the size of non-pretrained (trained from scratch) models did not improve end-task performance, despite hyper-parameter tuning.

This leaves us with two insights for modeling. First, for small-scale, ‘data-internal’, self-supervised pretraining, a larger pretraining model increases long-tail performance, whereas Hooker et al. (2020a) found that compressing larger models ‘forgets’ long-tail performance first – both experiments provide evidence that model capacity and long-tail performance are tied. This seems to be the case even for small-scale self-supervised pretraining, i.e. it demonstrates that despite training on small data, we still need increased self-supervision signals and model size to capture long-tail information, which could explain why large-scale pretraining and models perform so well, rather than simply assuming them to be overparameterized. Second, this larger pretraining model has an end-task APmicro score of 49%, which is only 0.8 percentage points better than the pretrained model (5) at 48.2% with default parameters (*), despite showing promising improvements on long-tail classes. Together with the zero and few-shot insights, this underlines that optimizing for learning insights and analyses other than supervised performance summary metrics can lead to a broader understanding of neural learning processes and modeling effects.

The head is learned first (in early epochs), pretraining learns the tail much faster: In Fig. 6 we compare early epoch training with late (optimal) epoch test scores per class bucket. We see that all models learn the head classes during the first epochs (- - dashed line). Methods (5, 5.XL) that use label-embedding (LE) and self-supervised pretraining (PT) start learning the long-tail during the first epoch, while the BCE multi-label baseline (2) does not start learning the long-tail until epoch 10, and even then at a much lower performance than the pretrained label-embedding methods. Finally, we see that pretraining a larger model with more self-supervised pretraining signal (5.XL or “larger net, 3.3x labels” in Fig. 3) increases performance on the long-tail, even during the first epochs.

[Figure 6: long-tail performance – early epochs vs. best (by APmicro dev) late epoch.]

Self-supervised label-embedding pretraining boosts long-tail performance: We thus conclude that self-supervised pretraining helps us learn the long-tail better, and faster – i.e. even after a single epoch of supervised training. This is a useful property in supervised learning scenarios where data or computation cycles are limited. We note that our pretraining and fine-tuning do not require learning rate scheduling or normalization layers like BERT or RoBERTa (Devlin et al., 2019; Wang et al., 2020c).

Miscellaneous result / an open future evaluation problem: Finally, since average precision (AP) has no explicit notion of over- and under-predictions, we plotted over- and under-predictions per class and over all classes. In standard classification, i.e. discrete, evaluation we would have over-predictions as false positives and under-predictions as false negatives. Using continuous measures such as AP has computational advantages and does not limit evaluation to a single threshold like F1 or accuracy do.
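For concreteness, here is a minimal sketch (assuming scikit-learn's average_precision_score, per the evaluation reference footnoted earlier) of why the constant ZeroR predictor from §6.1 collapses to the label prevalence under micro-averaged AP; the sizes are illustrative.

```python
# Minimal sketch: micro-averaged AP of a constant "always zero" predictor
# on sparse multi-label data equals the label prevalence.
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
n_instances, n_classes = 1000, 1315            # sizes as in the task (illustrative)
Y = np.zeros((n_instances, n_classes), dtype=int)
for row in Y:                                  # at most four active labels per instance
    row[rng.choice(n_classes, size=rng.integers(1, 5), replace=False)] = 1

zero_r_scores = np.zeros(Y.shape)              # constant ZeroR-style scores
ap_micro = average_precision_score(Y, zero_r_scores, average="micro")
print(f"APmicro of ZeroR: {ap_micro:.4f} (label prevalence: {Y.mean():.4f})")
```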
However, AP has no notion of over- and under-predictions, which may have a significant impact, especially regarding long-tail issues. In practice, plotting over- and under-predictions per class (and overall) was only mildly informative. Due to the high label sparsity, we saw close to zero over-predictions on average, but all-class average under-predictions were much harder to reduce (optimize). This observation is a reflection of the high label sparsity, i.e. at most 0.3% of labels are active per instance, combined with a long-tail distribution – i.e. many rare events. Under this combination, prediction behaviour is hard to analyse in a meaningful, concise fashion, because per-class plots become large and hard to fit and interpret in a paper, and all-class averages do not reveal class dynamics. We include these observations because we found them instructive to outline the challenges of evaluating long-tail learning, and do not include the per-class plots because they would not be readable. We are however happy to discuss them upon request.

A.3 FEW-SHOT: SCRATCH, PRETRAINED, ADDITIONAL SELF+SUPERVISED SCENARIOS

Few-shot challenges: Few-shot learning exacerbates the long-tail problem. For 10% few-shot learning, we train on 6800 instances, so many classes will be unseen at training time. We will publish both the parsed data splits and a cleaned code version on Github to encourage experimenting with and extending to other low-resource ‘dense-to-dense’ self-supervision methods, additional evaluation metrics and datasets.

Few-shot, with and without self-supervision – as pretraining or for joint self+supervised fine-tuning: Fig. 7 shows in more detail that the pretrained model (bottom) learns better, and that joint self+supervised end-task training (scratch or fine-tuned) makes no difference.

A.4 TEXT PREPROCESSING DETAILS

We decompose tags such as ‘p-value’ as ‘p’ and ‘value’ and split latex equations into command words, as they would otherwise create many long, unique tokens. 10 tag words are not in the input vocabulary and thus we randomly initialise their embeddings. Though we never used this information, we parsed the text and title and annotated them with ‘html-like’ title, paragraph and sentence delimiters. The dataset is ordered and annotated by time. Dev and test set are therefore future data compared to the training data, which results in a non-stationary problem, though we never determined to what extent.

A.5 POTENTIAL ETHICAL CONSIDERATIONS

In this section, we outline potential impacts of our work for machine learning practice, as well as its possible environmental, societal, health and privacy implications. As with any technology, there is the dilemma of dual use (Rappert & Selgelid, 2013). Below, we briefly discuss beneficial and potentially detrimental impacts of this work as we can foresee them (Hovy & Spruit, 2016; Brundage et al., 2018).

The main goal of our research is to reduce the hardware and compute requirements of current pretraining methodology for language representations, especially for challenging low-resource, long-tail problems. Due to the reduction in compute requirements, our methods may help reduce carbon impact and the exhaustion of precious resources like rare metals compared to large-scale pretraining. Consuming less energy and mining fewer resources for hardware production has major impacts on the environment (Tsing et al., 2017).
Thus, as a research community we should take action not to let AI methods become a race for precious metal hardware, due to its devastating effects on our shared environment. Further, small-scale pretraining could make access to modern NLP methods easier for machine learning researchers and practitioners who have fewer hardware resource privileges than are required for state-of-the-art solutions, or whose language of research does not allow for easy access to Web-scale text collections. This may become even more important as socio-economic factors are likely to play a fundamental role in the future democratisation of and fair access to AI technology (Riedl, 2020) for economics, health and other key decision-making areas. This is especially important as large-scale hardware resources increasingly lead to research and economic inequalities, as described by Hooker (2020); Riedl (2020). Another important advantage of researching more data-efficient methods is that using as little data as needed is a requirement of the GDPR regulations for ‘privacy by design’.3 This principle is in direct conflict with the current self-supervised pretraining approaches, which are predominantly studied by parties who have access to both massive data collections and compute resources.

3https://en.wikipedia.org/wiki/Privacy_by_design

Furthermore, there may be potential implications in better learning of underrepresented and rare events from small or very limited data collections (Mitchell et al., 2020). When we increase self-supervision during pretraining, i.e. when pretraining on more diverse learning signals than direct supervision can provide, we see a substantial increase in few-shot (low-data) performance, which, upon inspection, turns out to be caused by better retention of rare-event performance than direct supervision could provide – see Fig. 2. However, we did not yet study whether this pretraining reduces or increases unwanted data biases (Waseem et al., 2020), though typical analyses of gender and racial biases may be hard on the current dataset of machine learning questions. Note that we did not choose this dataset to solve a specific application task, but only as a proxy to study the effects of small-scale pretraining on challenging data.

Better small-scale pretraining could benefit areas like medicine, where large-scale pretraining is not as effective or fails for a lack of external data resources (Şerbetci et al., 2020). Due to the usage duality of research in general, research into more resource-efficient learning could also cause privacy concerns by enabling easier surveillance, and improved advertisement recommendation can have unforeseen political, but also economic and even environmental impacts, as the goal of advertisement is increased resource consumption.

Thus, a general approach to furthering beneficial usage over detrimental applications of dual-use technology should involve applying ethics principles at every step of reuse of the discussed methods, to support transparent use, public verification and auditing, and to protect vulnerable groups from harmful applications." } ]
2020
null
SP:37c923f6c8e655da32a295e69856cf7d7eff9618
[ "In this paper, the authors explore alternatives to the standard token-based routing in sparsely-gated MoE models for multilingual NMT. This exploration is motivated by the need for efficient inference in MoE models, for which token-based routing is a limitation. The alternative is task-based routing, where examples for a task are assigned to the same experts. This allows efficient device placement and request dispatch at inference time. The paper compares with the approaches as well as hybrid approaches where different parts of the network use different routing mechanisms. The results show that task level routing is comparable to token-level routing with the added benefit of inference efficiency. Performing task-based routing on the decoder side only gives better better translation quality, at the cost of inference efficiently. An analysis of routing decision in token-based routing justifies the design choices. " ]
Sparsely-Gated Mixture-of-Experts (MoE) has been a successful approach for scaling multilingual translation models to billions of parameters without a proportional increase in training computation. These models, however, are prohibitively large for serving deployment and there is no easy way to extract a sub-network to decode for a particular language pair. This work proposes improved strategies to route MoE models by tasks instead of tokens, thus enabling separation of network structures at decoding time while enjoying the benefits of scale and task sharing at training time. We compare routing strategies at multiple levels (token, sentence, task) in both the encoder and the decoder, and conduct extensive experiments on two benchmarks: the public WMT dataset of 30 language pairs and an in-house web-scale dataset of 200 language pairs. On WMT, with a Transformer base model with 32 experts, our task-level MoE outperforms the best performing token-level MoE model by +1.0 BLEU on average over all language pairs. When scaling up to a Transformer big model with 128 experts on the large-scale massively multilingual benchmark, our task-level MoE is competitive with token-level MoE while being able to reduce the decoder model size by a factor of 32.34 and increase peak throughput by 2.6 times at inference.
[]
[ { "authors": [ "Yonghui Wu" ], "title": "Massively multilingual neural machine translation in the wild: Findings and challenges, 2019", "venue": null, "year": 2019 }, { "authors": [ "Timothy T Baldwin", "J Kevin Ford" ], "title": "Transfer of training: A review and directions for future research", "venue": "Personnel psychology,", "year": 1988 }, { "authors": [ "Ankur Bapna", "Naveen Arivazhagan", "Orhan Firat" ], "title": "Simple, scalable adaptation for neural machine translation", "venue": "arXiv preprint arXiv:1909.08478,", "year": 2019 }, { "authors": [ "Emmanuel Bengio", "Pierre-Luc Bacon", "Joelle Pineau", "Doina Precup" ], "title": "Conditional computation in neural networks for faster models", "venue": "arXiv preprint arXiv:1511.06297,", "year": 2015 }, { "authors": [ "Nikolay Bogoychev", "Rico Sennrich" ], "title": "Domain, translationese and noise in synthetic data for neural machine translation, 2019", "venue": null, "year": 2019 }, { "authors": [ "Tom B Brown", "Benjamin Mann", "Nick Ryder", "Melanie Subbiah", "Jared Kaplan", "Prafulla Dhariwal", "Arvind Neelakantan", "Pranav Shyam", "Girish Sastry", "Amanda Askell" ], "title": "Language models are few-shot learners", "venue": null, "year": 2005 }, { "authors": [ "Rich Caruana" ], "title": "Multitask learning", "venue": "Machine learning,", "year": 1997 }, { "authors": [ "Yu Cheng", "Duo Wang", "Pan Zhou", "Tao Zhang" ], "title": "A survey of model compression and acceleration for deep neural networks", "venue": "arXiv preprint arXiv:1710.09282,", "year": 2017 }, { "authors": [ "Kevin Clark", "Minh-Thang Luong", "Urvashi Khandelwal", "Christopher D Manning", "Quoc V Le" ], "title": "Bam! born-again multi-task networks for natural language understanding", "venue": null, "year": 1907 }, { "authors": [ "Ronan Collobert", "Jason Weston" ], "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2008 }, { "authors": [ "Maha Elbayad", "Jiatao Gu", "Edouard Grave", "Michael Auli" ], "title": "Depth-adaptive transformer", "venue": "arXiv preprint arXiv:1910.10073,", "year": 2019 }, { "authors": [ "Angela Fan", "Shruti Bhosale", "Holger Schwenk", "Zhiyi Ma", "Ahmed El-Kishky", "Siddharth Goyal", "Mandeep Baines", "Onur Celebi", "Guillaume Wenzek", "Vishrav Chaudhary" ], "title": "Beyond english-centric multilingual machine translation", "venue": null, "year": 2010 }, { "authors": [ "Markus Freitag", "Isaac Caswell", "Scott Roy" ], "title": "APE at scale and its implications on MT evaluation biases", "venue": "In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers),", "year": 2019 }, { "authors": [ "Jiatao Gu", "Hany Hassan", "Jacob Devlin", "Victor OK Li" ], "title": "Universal neural machine translation for extremely low resource languages", "venue": "arXiv preprint arXiv:1802.05368,", "year": 2018 }, { "authors": [ "Geoffrey Hinton", "Oriol Vinyals", "Jeff Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "arXiv preprint arXiv:1503.02531,", "year": 2015 }, { "authors": [ "Chris Hokamp", "John Glover", "Demian Gholipour" ], "title": "Evaluating the supervised and zero-shot performance of multi-lingual translation models", "venue": "arXiv preprint arXiv:1906.09675,", "year": 2019 }, { "authors": [ "Yanping Huang", "Youlong Cheng", "Ankur Bapna", "Orhan Firat", "Dehao Chen", "Mia Chen", "HyoukJoong Lee", "Jiquan Ngiam", "Quoc V 
Le", "Yonghui Wu" ], "title": "Gpipe: Efficient training of giant neural networks using pipeline parallelism", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Melvin Johnson", "Mike Schuster", "Quoc V Le", "Maxim Krikun", "Yonghui Wu", "Zhifeng Chen", "Nikhil Thorat", "Fernanda Viégas", "Martin Wattenberg", "Greg Corrado" ], "title": "Google’s multilingual neural machine translation system: Enabling zero-shot translation", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Jungo Kasai", "Nikolaos Pappas", "Hao Peng", "James Cross", "Noah A Smith" ], "title": "Deep encoder, shallow decoder: Reevaluating the speed-quality tradeoff in machine translation", "venue": "arXiv preprint arXiv:2006.10369,", "year": 2020 }, { "authors": [ "Yoon Kim", "Alexander M. Rush" ], "title": "Sequence-level knowledge distillation", "venue": "In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing,", "year": 2016 }, { "authors": [ "Taku Kudo", "John Richardson" ], "title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "venue": "arXiv preprint arXiv:1808.06226,", "year": 2018 }, { "authors": [ "Sneha Reddy Kudugunta", "Ankur Bapna", "Isaac Caswell", "Naveen Arivazhagan", "Orhan Firat" ], "title": "Investigating multilingual nmt representations at scale", "venue": null, "year": 1909 }, { "authors": [ "Dmitry Lepikhin", "HyoukJoong Lee", "Yuanzhong Xu", "Dehao Chen", "Orhan Firat", "Yanping Huang", "Maxim Krikun", "Noam Shazeer", "Zhifeng Chen" ], "title": "Gshard: Scaling giant models with conditional computation and automatic sharding", "venue": null, "year": 2006 }, { "authors": [ "Jiaqi Ma", "Zhe Zhao", "Xinyang Yi", "Jilin Chen", "Lichan Hong", "Ed H Chi" ], "title": "Modeling task relationships in multi-task learning with multi-gate mixture-of-experts", "venue": "In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,", "year": 2018 }, { "authors": [ "Jiaqi Ma", "Zhe Zhao", "Jilin Chen", "Ang Li", "Lichan Hong", "Ed H Chi" ], "title": "Snr: Sub-network routing for flexible parameter sharing in multi-task learning", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Robert Östling", "Jörg Tiedemann" ], "title": "Continuous multilinguality with language vectors", "venue": "arXiv preprint arXiv:1612.07486,", "year": 2016 }, { "authors": [ "Matt Post" ], "title": "A call for clarity in reporting BLEU scores", "venue": "In Proceedings of the Third Conference on Machine Translation: Research Papers,", "year": 2018 }, { "authors": [ "Colin Raffel", "Noam Shazeer", "Adam Roberts", "Katherine Lee", "Sharan Narang", "Michael Matena", "Yanqi Zhou", "Wei Li", "Peter J Liu" ], "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "venue": "arXiv preprint arXiv:1910.10683,", "year": 2019 }, { "authors": [ "Maksim Riabinin", "Anton Gusev" ], "title": "Learning@ home: Crowdsourced training of large neural networks using decentralized mixture-of-experts", "venue": "arXiv preprint arXiv:2002.04013,", "year": 2020 }, { "authors": [ "Sebastian Ruder", "Joachim Bingel", "Isabelle Augenstein", "Anders Søgaard" ], "title": "Latent multi-task architecture learning", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Noam 
Shazeer", "Mitchell Stern" ], "title": "Adafactor: Adaptive learning rates with sublinear memory cost", "venue": "arXiv preprint arXiv:1804.04235,", "year": 2018 }, { "authors": [ "Noam Shazeer", "Azalia Mirhoseini", "Krzysztof Maziarz", "Andy Davis", "Quoc Le", "Geoffrey Hinton", "Jeff Dean" ], "title": "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer", "venue": "arXiv preprint arXiv:1701.06538,", "year": 2017 }, { "authors": [ "Aditya Siddhant", "Ankur Bapna", "Yuan Cao", "Orhan Firat", "Mia Chen", "Sneha Kudugunta", "Naveen Arivazhagan", "Yonghui Wu" ], "title": "Leveraging monolingual data with self-supervision for multilingual neural machine translation", "venue": null, "year": 2005 }, { "authors": [ "Xu Tan", "Yi Ren", "Di He", "Tao Qin", "Zhou Zhao", "Tie-Yan Liu" ], "title": "Multilingual neural machine translation with knowledge distillation", "venue": null, "year": 1902 }, { "authors": [ "Jörg Tiedemann" ], "title": "Emerging language spaces learned from massively multilingual corpora", "venue": "arXiv preprint arXiv:1802.00273,", "year": 2018 }, { "authors": [ "Jakob Uszkoreit", "Jay M Ponte", "Ashok C Popat", "Moshe Dubiner" ], "title": "Large scale parallel document mining for machine translation", "venue": "In Proceedings of the 23rd International Conference on Computational Linguistics,", "year": 2010 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Yining Wang", "Jiajun Zhang", "Feifei Zhai", "Jingfang Xu", "Chengqing Zong" ], "title": "Three strategies to improve one-to-many multilingual translation", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Shijie Wu", "Mark Dredze" ], "title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of bert", "venue": "arXiv preprint arXiv:1904.09077,", "year": 2019 }, { "authors": [ "Brandon Yang", "Gabriel Bender", "Quoc V Le", "Jiquan Ngiam" ], "title": "Condconv: Conditionally parameterized convolutions for efficient inference", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Scaling up neural network models has recently received great attention, given the significant quality improvements in a variety of areas such as natural language understanding (Raffel et al., 2019; Brown et al., 2020) and multilingual machine translation (Huang et al., 2019; Lepikhin et al., 2020).\nWhile training massive models on large amounts of data can almost guarantee improved quality, there are two factors affecting their practicality and applicability: (1) training efficiency and (2) inference efficiency. Large dense models are often prohibitively compute-intensive to train, with some models requiring TFlops-days of compute (Brown et al., 2020). A recent line of work has proposed sparsely-gated Mixture-of-Experts (MoE) layers as an efficient alternative to dense models (Shazeer et al., 2017; Lepikhin et al., 2020; Riabinin & Gusev, 2020) in order to address training efficiency limitations. In a vanilla sparsely-gated MoE model each token of the input sequence activates a different subset of the experts, hence the computation cost per token becomes only proportional to the size of the activated sub-network. However, they fail to meet requirements on inference efficiency.\nConsider a long sequence where each token of the sequence activates a disjoint subset of available experts. From a practical standpoint, the inference trace of the full sequence spans several experts independently for every token, resulting in an independent pathway for each token. Although this is a desired property adding flexibility to the model and increasing its capacity, it becomes prohibitive for inference for the following reasons: The model parameters in these large models are beyond the memory limit of a single accelerator, and require model parallelism to shard them across a cluster of devices during inference. For models with MoE Layers, the input token would be dynamically routed to different experts allocated to different devices. This further adds communication cost across devices to the overall serving cost. Moreover, due to the sequential nature of the autoregressive decoding (Kasai et al., 2020; Chen et al., 2018), the added communication cost from model parallel decoders gets multiplied by the number of decoding steps. To add to this, serving MoE models efficiently requires batching a large number of input tokens together, otherwise only a subset of the MoE network will be activated leading to device under-utilization.\nIn this work, we study the inference efficiency of sparsely gated MoE models while taking into account the characteristics of the intended application, Multilingual Neural Machine Translation (MNMT). MNMT is an inherently multi-task learning problem, aimed at building a single neural network for translating multiple language pairs simultaneously. In a MNMT model, the extent to which parameters are shared across languages determines the magnitude of positive transfer (Baldwin & Ford, 1988) and conversely task interference due to the capacity bottleneck (Arivazhagan et al., 2019). In an ideal scenario, we would want to efficiently train a single large MNMT model maximizing transfer while expanding the capacity bottleneck; at the same time, we would like to enjoy the benefits of sparsely activated sub-networks per-task at inference time, i.e. extracting out a sub-network from the model to decode for a particular language pair to actualize inference efficiency.\nWe propose routing algorithms for MoE models with affordable serving costs. 
While vanilla MoEs route each sub-word token in the input to its preferred experts, we explore alternative routing strategies that leverage global task level information to route all tokens corresponding to a particular task collectively to the same set of experts. While this strategy could be perceived to be restrictive for parameter sharing across tasks, we empirically demonstrate that routing based on task boundaries performs better when applied to MNMT. During training, we mix the inputs from different tasks in the same batch in order to learn the routing network and encourage positive transfer among the tasks. During inference, we decode different tasks separately and only load the subset of experts associated with the corresponding task.

We compare our method with multilingual baselines and find that we achieve significant gains on two benchmarks: a multilingual WMT task with comparable inference cost (+3.59 BLEU), described in Section 4, and a large internal dataset (+3.6 BLEU; Section 4.3.2). We see that the gains are comparable with conventional position-wise Mixture-of-Experts models while utilizing decoders with only a fraction (6.25% and 1.56%) of their serving cost. We discuss the trade-offs of these different methods in Section 3.2. In Section 4.3.4, we analyze the routing decisions made in MoE models and motivate our method." }, { "heading": "2 SCALING TRANSFORMERS WITH MIXTURE-OF-EXPERTS", "text": "The Transformer (Vaswani et al., 2017) architecture is a popular model used for neural machine translation and other natural language understanding problems. In sequence-to-sequence problems (of which neural machine translation is one example), the model consists of a separate encoder and decoder, each of which contains multiple Transformer layers. For further details on Transformers, we refer the reader to the original paper (Vaswani et al., 2017).

We use the Mixture-of-Experts Transformer models used by Lepikhin et al. (2020), where the MoE layers for the Transformers consist of $E$ feed-forward networks $(\mathrm{FFN}_1, \ldots, \mathrm{FFN}_E)$:

$$\mathrm{FFN}_e(x_s) = wo_e \cdot \mathrm{ReLU}(wi_e \cdot x_s)$$
$$y_s = \sum_{e=1}^{E} \mathcal{G}_{s,e} \cdot \mathrm{FFN}_e(x_s)$$

Here, $x_s$ is the input token at position $s$ to the MoE layer and each $\mathrm{FFN}_e$ is a two-layer neural network using a ReLU activation function. $wi_e$ and $wo_e$ are the input and output projection weights of the $e$-th expert. Finally, $\mathcal{G}_{s,E}$ is a vector computed by the gating network. For each token, most entries of this vector are zero, with only the selected experts receiving positive values. We use this vector to route the token to a select few experts. The entries chosen from $\mathcal{G}_{s,E}$ determine how much each expert contributes to the final output $y_s$. Note that in this work we choose the top-2 weighted experts for each token, to be comparable with prior work.

The gating network $\mathcal{G}_{s,E}$ must be considered carefully for efficiency purposes: (1) the utilization of experts must be balanced and (2) the function must be efficient to implement at scale. For a more thorough discussion of the MoE Transformer, we direct the reader to Lepikhin et al. (2020).
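To make the layer concrete, the following is a minimal PyTorch sketch of the sparsely-gated MoE feed-forward layer above with top-2 routing; the softmax gate parameterization and the per-expert loop are simplifying assumptions, and GShard's load-balancing auxiliary loss and expert-capacity constraints are omitted.

```python
# A simplified sketch of a top-2 sparsely-gated MoE FFN layer (not the
# sharded GShard implementation): each token is dispatched to its two
# highest-weighted experts and their outputs are combined by gate weight.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model, d_ff, num_experts, k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(num_experts))
        self.gate = nn.Linear(d_model, num_experts)  # token -> expert logits
        self.k = k

    def forward(self, x):                        # x: (num_tokens, d_model)
        probs = F.softmax(self.gate(x), dim=-1)  # gating vector G_{s,E}
        weights, indices = probs.topk(self.k, dim=-1)
        y = torch.zeros_like(x)
        for slot in range(self.k):               # combine top-k expert outputs
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e
                if mask.any():
                    y[mask] += weights[mask, slot, None] * expert(x[mask])
        return y
```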
It is known that multi-\nlingual models learn different overlapping representations depending on the language - this is true for both dense (Wu & Dredze, 2019; Tiedemann, 2018; Tan et al., 2019; Östling & Tiedemann, 2016; Kudugunta et al., 2019) and sparse models (Section 4.3.4). Therefore we propose changing the routing algorithm GATE(xs) of MoEs to choose different experts using more natural separations." }, { "heading": "3.1 ROUTING STRATEGIES", "text": "Given the sequential nature of the multilingual machine translation task, the routing decisions can be made at three different granularities, from bottom up (i) token-level, (ii) sentence-level and (iii) task-level, as detailed below.\n• Token-level Routing: This is the baseline discussed in Section 2 where each token is routed independently. • Sentence-level Routing: Each sequence (sentence), and all tokens that form the sequence,\nare routed to the same expert. We change the routing algorithm to select experts by sentence representation, calculated by taking the average token representations in a given sentence. • Task-level Routing: We select experts by task boundaries as opposed to making input-\nlevel decisions. In the context of MNMT, these task boundaries can either be defined by the target language (French-to-English and German-to-English are the same task) or the language pair (French-to-English and German-to-English are different tasks).\nGs,E = GATE( 1\nS S∑ s=1 xs) (Sentence-level routing) (1)\nGs,E = GATE(task ids) (Task-level routing) (2)\nWe further illustrate the difference in Figure 1, in token-based MoE models (Figure 1a), tokens from each example are routed to different experts, whereas in task-level MoE models (Figure 1b), tokens may be routed to the same expert based on task." }, { "heading": "3.2 INFERENCE IMPLICATIONS OF ROUTING STRATEGIES", "text": "While the MoE models discussed in (Shazeer et al., 2017; Lepikhin et al., 2020) train quickly relative to the number of parameters in terms of the wall-clock time, they are expensive to serve.\nConsider a MoE with 512 experts and 50B parameters (Lepikhin et al., 2020). When employing token-level routing, each token can be independently routed to a different set of experts during inference. Given that the entire model is too large to load into memory on a single accelerator, the two potential solutions to utilize this model for inference are: (i) Loading experts dynamically from host to device depending on routing decisions, or (ii) Utilizing model-parallelism over multiple accelerators for serving. While the first solution incurs heavy host-device communication costs, the second introduces significantly inter-device communication overhead.\nAnother practical approach to serve a large MoE would require model compression via quantization, pruning or distillation (Cheng et al., 2017). While the first two strategies haven’t been explored in the context of conditional computation, distillation (Hinton et al., 2015; Kim & Rush, 2016) has been found to introduce undesirable artifacts into the student model (Freitag et al., 2019; Bogoychev & Sennrich, 2019) in the context of NMT. On the other hand, if we limit the number of experts available to every task in the model to a small fraction of the total available capacity, it is possible to extract task-specific models for serving, alleviating the need for complex serving strategies or compression. 
Since decoding time complexity for auto-regressive seq2seq models is dominated by the decoder (Kasai et al., 2020), we can also pursue a hybrid strategy where the encoder utilizes more expensive routing strategies while the decoder of the model utilizes simpler and more efficient routing.

We do note, however, that MoE models that route purely by task boundaries are slower to train due to load balancing considerations. All examples in the input batch belonging to the same task would route to the same set of experts, possibly leading to some experts bearing a significant amount of the load. Balancing between these inference and training time trade-offs merits further exploration.

Summarizing the effective decoding cost of the MoE models utilizing different routing strategies:

• Token/Sentence-level routing: The routing decisions are made dynamically. Assuming each token/sentence makes disjoint choices, the server needs to load all E experts.
• Task-level routing: Tokens corresponding to each input sentence are routed to the same experts statically. The server only needs to pre-load K experts (assuming top-K routing)." }, { "heading": "4 EXPERIMENTS", "text": "We compare routing strategies at multiple levels in both the encoder and the decoder by conducting extensive experiments on two benchmarks: the public WMT dataset with 30 language pairs (Section 4.1) and an in-house web-scale dataset with 200 language pairs (Section 4.3)." }, { "heading": "4.1 SETUP FOR WMT EXPERIMENTS", "text": "For our experiments, we use parallel training and evaluation data from the WMT corpus and adopt the setup used by Siddhant et al. (2020) with 15 languages, to and from English. Full training data details may be found in Table 2 in the Appendix. The amount of data ranges from more than 60 million sentence pairs in the en-cs translation direction to roughly 150k sentence pairs for en-gu.

We use a temperature-based data sampling strategy to train our models, similar to the strategy used to train the multilingual models in Arivazhagan et al. (2019): if $p_L$ is the probability that a sentence in the corpus belongs to language pair $L$, we sample from a distribution where the probability of sampling from $L$ is proportional to $p_L^{1/T}$. All the experiments in this paper are performed on a model trained with a sampling temperature $T = 5$.
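As a concrete aside, here is a minimal sketch of this temperature-based sampling (T = 5), where `pair_counts` is an assumed mapping from each language pair to its number of training sentence pairs; the numbers below are the illustrative dataset extremes mentioned above.

```python
# Temperature-based sampling: raising the empirical shares p_L to the
# power 1/T flattens the distribution, up-weighting low-resource pairs.
import numpy as np

def sampling_probs(pair_counts, T=5.0):
    pairs = list(pair_counts)
    p = np.array([pair_counts[k] for k in pairs], dtype=float)
    p /= p.sum()                  # p_L: share of language pair L in the corpus
    q = p ** (1.0 / T)
    return dict(zip(pairs, q / q.sum()))

print(sampling_probs({"en-cs": 60_000_000, "en-gu": 150_000}))
```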
We use the 142M Transformer Base (Vaswani et al., 2017) architecture (or enhanced versions of it with MoE layers) for all of our experiments with WMT. Our models are optimized using Adafactor (Shazeer & Stern, 2018) with momentum factorization and a per-parameter norm clipping threshold of 1.0. We followed a learning rate of 3.0, with 40K warm-up steps for the schedule, which is decayed with the inverse square root of the number of training steps after warm-up. BLEU scores presented in this paper are calculated using SacreBLEU (Post, 2018) on the WMT test sets.

Multilingual baseline: We train a Transformer Base model and a Transformer Big on this dataset as our multilingual dense baselines. We share all parameters across language pairs, including the softmax layer and input/output word embeddings. We use a 64k token Sentence Piece vocabulary (Kudo & Richardson, 2018). The vocabulary is shared on both the encoder and decoder side. Each sentence pair has a <2xx> token pre-pended to the source sentence to indicate the target language, following Johnson et al. (2017).

Mixture of Experts Models: For MoE models, we replace the feed forward network (FFN) of alternate layers of the Transformer with a set of identical FFN experts, as depicted in Figure 1a.

For brevity, we provide aggregate BLEU scores in Section 4.2. We provide the full individual BLEU scores in Appendix A.3, along with bilingual baselines. In addition, we provide the number of parameters for different components of our models in Appendix A.4." }, { "heading": "4.2 COMPARISON OF DIFFERENT ROUTING STRATEGIES ON WMT", "text": "We compare the token-level, sentence-level and task-level routing strategies discussed in Section 3 at identical network size (32 experts, 533M parameters). The results are presented in Table 1. In general, we find that all types of task-level routing perform better than token-level routing. We see that using sentence representations to route examples (Sentence-level MoE - 32 experts) performs much worse, so we do not conduct further experiments on this setting.

When we use Task MoE on both the encoder and the decoder (Task-level MoE - 32 experts: Target/Target), we see consistent gains across the board. To investigate this further, we trained a model that has (a) Token MoE on the encoder and Task MoE on the decoder (Task-level MoE - 32 experts: Token/Target or Token/Language Pair) and (b) Task MoE on the encoder and Token MoE on the decoder (Task-level MoE - 32 experts: Target/Token or Language Pair/Token). In Table 1 we see that using strategy (a) works the best, whether we choose to route by the target language or the language pair.

Overall we find that using Task MoE only on the decoder (Task-level MoE - 32 experts: Token/Target) works the best, with gains of 1 BLEU over Token MoE. These gains are consistent across xx2en language pairs, en2xx language pairs, high resource languages (more than 1 million sentence pairs), low resource languages and the 2 zero-shot pairs.

While the MoE models considered outperform bilingual and multilingual Transformer-Base baselines with comparable inference cost, they are slightly outperformed by the multilingual Transformer-Big by 0.2 BLEU on average. Note that the Transformer-Big incurs a much higher decoding cost: we measured that our task-level MoE achieves 8.4x (338k vs. 40.3k tokens/sec) higher peak decoding throughput. However, we reiterate that the motivation behind scaling sparsely is to increase capacity with little overhead while remaining competitive with dense models - for example, while it is feasible to train a 473M parameter model (with 8x inference cost), training a much larger dense model of, say, 13B parameters (the size of our scaled-up MoE model) is prohibitively slow and expensive." }, { "heading": "4.3 SCALING UP TO MASSIVELY MULTILINGUAL, MASSIVE MT (M4)", "text": "We now scale our results up to a larger internal dataset with over 200 language pairs, while also scaling the number of parameters to beyond 10 billion weights. In addition, we look more closely at the gating decisions made by these sparse models and discuss their implications." }, { "heading": "4.3.1 EXPERIMENTAL SETUP", "text": "Data: We use an in-house training corpus generated by crawling and extracting parallel sentences from the web (Uszkoreit et al., 2010). This dataset has 204 direct language pairs (102 languages to and from English), with a total of 25 billion sentence pairs. This dataset covers a diverse range of domains and languages, and is quite noisy.
There is also a heavy imbalance when it comes to the number of examples available per language pair, ranging between 10^4 and 10^9 sentence pairs. In order to record gating decisions while controlling for semantics, we created a multi-way aligned evaluation set containing nearly 3k sentence pairs for all languages.1

1Each sentence in our evaluation set is semantically identical across all other languages.

Model: We use the 473M Transformer Big (Vaswani et al., 2017) architecture (or modified versions of it in the case of sparse models) as described by Chen et al. (2018) for this set of experiments. Similar to Section 4.1, we (1) share all parameters across language pairs including the softmax layer and input/output word embeddings, (2) pre-pend a <2xx> token to the source sentence to indicate the target language and (3) use a Sentence Piece Model (Kudo & Richardson, 2018) with a 64k token vocabulary shared on both the encoder and decoder side. We followed the training procedure and architecture of Lepikhin et al. (2020).2" }, { "heading": "4.3.2 RESULTS", "text": "We compare Task-level MoEs and Token-level MoEs to their bilingual and multilingual baselines in Figure 2. We train 128 expert MoE models with routing in these settings: (1) routing by token on both the encoder and decoder, (2) routing by token on the encoder and by target language on the decoder and (3) routing by token on the encoder and by language pair on the decoder.

We find that these scaled up sparse models perform better than their dense baselines, with hybrid task-level routing performing slightly better on En-Xx language pairs and pure token-level routing performing slightly better on Xx-En language pairs. We hypothesize that for the Xx-En tasks, not explicitly dividing expert parameters by tasks on the decoder results in better transfer, thus explaining the better performance of token-level routing. This suggests that a hybrid strategy that partially restricts access to experts based on task boundaries, while still permitting routing by tokens, might provide the right balance between efficiency and quality.

We also note that while both forms of routing have 13B parameters (6.5B on the decoder) at train time, task-level routing on the decoder uses only 200M decoder parameters at inference time, in addition to the practical considerations discussed in Section 3.1. We provide aggregate BLEU scores in Appendix A.6 and parameter count breakdowns in Appendix A.5." }, { "heading": "4.3.3 COMPARISON OF THROUGHPUT ON MASSIVE MODELS", "text": "We further compare Task-level MoEs with Token-level MoEs in terms of throughput across different batch sizes in Figure 4. We measure this by decoding the WMT14 English-German test set with our TaskMoE model and with the baseline TokenMoE model on 128 Cloud TPU V3 cores. We find that our Task-MoE model has 2.6 times higher peak throughput while using 32.34 times fewer decoder parameters (201M vs. 6.5B). Moreover, our Task-MoE model has minimal communication overhead compared to decoding with Token-MoE (0.2% versus 36% of step time).

We measured that the inference time of the token-based MoE model is dominated by the decoder, with the decoders taking 49x more time per step than the encoders. Therefore, the inference cost of task-level routing on the decoder only is roughly equivalent to that of task-level routing on both the encoder and decoder." }, { "heading": "4.3.4 A CLOSER LOOK AT ROUTING DECISIONS", "text": "Now, we analyze the routing decisions made in token-level MoE models to further motivate our investigation.
We take a token-level MoE model trained on the massively multilingual dataset and decode this model on the multiway test sets, while logging the routing decisions for every token. We plot the top expert distributions of several tasks with different scripts and language families in Figure 3. For clarity, and because these two groups of languages behave differently in a multilingual setting, we split the gating decisions into those for Xx-En and En-Xx language pairs.

2As opposed to displaying BLEU scores for each language pair, we place the baselines on the x-axis at zero and report the ∆BLEU trendline of each model we consider. In order to set these bilingual baselines, we train Neural Machine Translation models for each language pair (e.g. a single model for German-to-English), tuned depending on the available training data for that given language pair. We tuned batch size and different values of regularization methods (e.g. dropout) in a Transformer-Big or Transformer-Base layout, for high- or low-resourced languages respectively.

In the encoder (Figure 3a), tokens from all tasks (Xx-En) seem to prefer the same small set of experts slightly over the others. On the other hand, in the decoder (Figure 3b) each task seems to have a slight preference for a few experts over the others. Moreover, the set of experts appears to be similar for related languages. For example, English-Spanish and English-Catalan (two Romance Languages) have similar expert distributions and so do English-Russian and English-Ukrainian (two
(2017)), an optimal strategy for sharing parameters and possibly having languages-specific parameters would maximize transfer while minimizing interference Hokamp et al. (2019). Strategies involve allocating language specific hidden states, attention modules, decoders or additional specialized layers (Hokamp et al. (2019); Wang et al. (2018); Gu et al. (2018); Bapna et al. (2019)). In addition some strategies involve grouping parameters by language group Fan et al. (2020); Tan et al. (2019). Compared to these works, our approach to parameter sharing is designed to scale models without impacting inference efficiency (as opposed to simply adding language-specific capacity) while still enjoying the benefits of scaling." }, { "heading": "6 CONCLUSIONS", "text": "In this work we discussed more inference friendly algorithms for routing tokens in Sparse Mixtureof-Experts models by making use of task boundaries. We empirically demonstrated that this new algorithm performs as well as, or better than, conventional token-based routing algorithms on two different datasets: the multilingual WMT setup covering 30 language pairs and a large internal dataset covering 200 language pairs. We discussed the trade-offs of these methods in terms of traintime and serving considerations. In addition, we looked more closely at large MoE models and how their gating decisions differ by task.\nWe conclude by highlighting that the algorithms that are more inference friendly while retaining the training speed advantages of Mixture-of-Experts models are a promising direction for future exploration, motivating research on inference efficiency for large models." } ]
null
null
SP:54c5295194b84ff9c33532c6c556558575e42419
[ "In this paper, the authors introduce the bidirectional self-normalizing neural networks (BSNN) that preserve the norms in both forward and backward passes. To serve such purpose, a new class of activation functions, GPN, is proposed, which can be obtained via the affine transform of existing activation functions like tanh and SELU. The authors prove that under orthogonal weights and GPN, the norm for both forward and backward passes can be well preserved. Besides, the conclusions are also supported by experiments on synthetic data and real-world data like MNIST and CIFAR 10." ]
The problem of exploding and vanishing gradients has been a long-standing obstacle that hinders the effective training of neural networks. Despite various tricks and techniques that have been employed to alleviate the problem in practice, satisfactory theories or provable solutions are still lacking. In this paper, we address the problem from the perspective of high-dimensional probability theory. We provide a rigorous result that shows, under mild conditions, how the exploding/vanishing gradient problem disappears with high probability if the neural networks have sufficient width. Our main idea is to constrain both forward and backward signal propagation in a nonlinear neural network through a new class of activation functions, namely Gaussian-Poincaré normalized functions, and orthogonal weight matrices. Experiments on both synthetic and real-world data validate our theory and confirm its effectiveness on very deep neural networks when applied in practice.
[ { "affiliations": [], "name": "BIDIRECTIONALLY SELF-NORMALIZING" } ]
[ { "authors": [ "Tom Alberts", "Davar Khoshnevisan" ], "title": "Calculus on Gauss Space: An Introduction to Gaussian Analysis", "venue": null, "year": 2018 }, { "authors": [ "Sergey G Bobkov" ], "title": "On concentration of distributions of random weighted sums", "venue": "Annals of Probability,", "year": 2003 }, { "authors": [ "Andrew Brock", "Theodore Lim", "James M Ritchie", "Nick Weston" ], "title": "Neural photo editing with introspective adversarial networks", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "John Dawkins" ], "title": "Normalized vector of gaussian variables is uniformly distributed on the sphere", "venue": null, "year": 2016 }, { "authors": [ "John Duchi", "Elad Hazan", "Yoram Singer" ], "title": "Adaptive subgradient methods for online learning and stochastic optimization", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Alex Graves", "Abdel-rahman Mohamed", "Geoffrey Hinton" ], "title": "Speech recognition with deep recurrent neural networks", "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing,", "year": 2013 }, { "authors": [ "Sepp Hochreiter" ], "title": "Untersuchungen zu dynamischen neuronalen netzen", "venue": "Diploma, Technische Universität München,", "year": 1991 }, { "authors": [ "Sergey Ioffe" ], "title": "Batch renormalization: Towards reducing minibatch dependence in batch-normalized models", "venue": "Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Günter Klambauer", "Thomas Unterthiner", "Andreas Mayr", "Sepp Hochreiter" ], "title": "Self-normalizing neural networks", "venue": "Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Elizabeth S Meckes" ], "title": "The Random Matrix Theory of the Classical Compact Groups", "venue": null, "year": 2019 }, { "authors": [ "Dmytro Mishkin", "Jiri Matas" ], "title": "All you need is a good init", "venue": "International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Razvan Pascanu", "Tomas Mikolov", "Yoshua Bengio" ], "title": "On the difficulty of training recurrent neural networks", "venue": "International Conference on Machine Learning,", "year": 2013 }, { "authors": [ "Jeffrey Pennington", "Samuel Schoenholz", "Surya Ganguli" ], "title": "Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice", "venue": "Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "George Philipp", "Dawn Song", "Jaime G Carbonell" ], "title": "The exploding gradient problem 
demystified: definition, prevalence, impact, origin, tradeoffs, and solutions", "venue": "arXiv preprint arXiv:1712.05577,", "year": 2018 }, { "authors": [ "Ben Poole", "Subhaneil Lahiri", "Maithra Raghu", "Jascha Sohl-Dickstein", "Surya Ganguli" ], "title": "Exponential expressivity in deep neural networks through transient chaos", "venue": "Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Andrew M Saxe", "James L McClelland", "Surya Ganguli" ], "title": "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Samuel S Schoenholz", "Justin Gilmer", "Surya Ganguli", "Jascha Sohl-Dickstein" ], "title": "Deep information propagation", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton" ], "title": "Mastering the game of go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural Networks for Machine Learning,", "year": 2012 }, { "authors": [ "Roman Vershynin" ], "title": "High-Dimensional Probability: An Introduction with Applications in Data Science", "venue": null, "year": 2018 }, { "authors": [ "Greg Yang", "Jeffrey Pennington", "Vinay Rao", "Jascha Sohl-Dickstein", "Samuel S Schoenholz" ], "title": "A mean field theory of batch normalization", "venue": "International Conference on Learning Representations,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural networks have brought unprecedented performance in various artificial intelligence tasks (Graves et al., 2013; Krizhevsky et al., 2012; Silver et al., 2017). However, despite decades of research, training neural networks is still mostly guided by empirical observations and successful training often requires various heuristics and extensive hyperparameter tuning. It is therefore desirable to understand the cause of the difficulty in neural network training and to propose theoretically sound solutions.\nA major difficulty is the gradient exploding/vanishing problem (Glorot & Bengio, 2010; Hochreiter, 1991; Pascanu et al., 2013; Philipp et al., 2018). That is, the norm of the gradient in each layer is either growing or shrinking at an exponential rate as the gradient signal is propagated from the top layer to bottom layer. For deep neural networks, this problem might cause numerical overflow and make the optimization problem intrinsically difficult, as the gradient in each layer has vastly different magnitude and therefore the optimization landscape becomes pathological. One might attempt to solve the problem by simply normalizing the gradient in each layer. Indeed, the adaptive gradient optimization methods (Duchi et al., 2011; Kingma & Ba, 2015; Tieleman & Hinton, 2012) implement this idea and have been widely used in practice. However, one might also wonder if there is a solution more intrinsic to deep neural networks, whose internal structure if well-exploited would lead to further advances.\nTo enable the trainability of deep neural networks, batch normalization (Ioffe & Szegedy, 2015) was proposed in recent years and achieved widespread empirical success. Batch normalization is a differentiable operation which normalizes its inputs based on mini-batch statistics and inserted between the linear and nonlinear layers. It is reported that batch normalization can accelerate neural network training significantly (Ioffe & Szegedy, 2015). However, batch normalization does not solve the gradient exploding/vanishing problem (Philipp et al., 2018). Indeed it is proved that batch normalization can actually worsen the problem (Yang et al., 2019). Besides, batch normalization requires separate training and testing phases and might be ineffective when the mini-batch size is small (Ioffe, 2017). The shortcomings of batch normalization motivate us to search for a more principled and generic approach to solve the gradient exploding/vanishing problem.\nAlternatively, self-normalizing neural networks (Klambauer et al., 2017) and dynamical isometry theory (Pennington et al., 2017) were proposed to combat gradient exploding/vanishing problem. In self-normalizing neural networks, the output of each network unit is constrained to have zero mean and unit variance. Based on this motivation, a new activation function, scaled exponential linear unit (SELU), was devised. In dynamical isometry theory, all singular values of the input-output Jacobian matrix are constrained to be close to one at initialization. This amounts to initializing the functionality of a network to be close to an orthogonal matrix. While the two theories dispense batch normalization, it is shown that SELU still suffers from exploding/vanishing gradient problem and dynamical isometry restricts the functionality of the network to be close to linear (pseudolinearity) (Philipp et al., 2018).\nIn this paper, we follow the above line of research to investigate neural network trainability. 
Our contributions are three-fold: First, we introduce bidirectionally self-normalizing neural networks (BSNNs), which consist of orthogonal weight matrices and a new class of activation functions which we call Gaussian-Poincaré normalized (GPN) functions. We show that many common activation functions can be easily transformed into their respective GPN versions. Second, we rigorously prove that the gradient exploding/vanishing problem disappears with high probability in BSNNs if the width of each layer is sufficiently large. Third, with experiments on synthetic and real-world data, we confirm that BSNNs solve the gradient vanishing/exploding problem to a large extent while maintaining nonlinear functionality." }, { "heading": "2 THEORY", "text": "In this section, we introduce the bidirectionally self-normalizing neural network (BSNN) formally and analyze its properties. All the proofs of our results are left to the Appendix.
To simplify the analysis, we define a neural network in a restricted sense as follows: Definition 1 (Neural Network). A neural network is a function from R^n to R^n composed of layer-wise operations for l = 1, . . . , L as
h^(l) = W^(l) x^(l), x^(l+1) = φ(h^(l)), (1)
where W^(l) ∈ R^{n×n}, φ : R → R is a differentiable function applied element-wise to a vector, x^(1) is the input and x^(L+1) is the output.
Under this definition, φ is called the activation function, {W^(l)}_{l=1}^L are called the parameters, n is called the width and L is called the depth. Superscript (l) denotes the l-th layer of a neural network. The above formulation is similar to (Pennington et al., 2017), but we omit the bias term in (1) for simplicity as it plays no role in our analysis.
Let E be the objective function of {W^(l)}_{l=1}^L and D^(l) be a diagonal matrix with diagonal elements D^(l)_{ii} = φ′(h^(l)_i), where φ′ denotes the derivative of φ. Now, the error signal is back-propagated via
d^(L) = D^(L) ∂E/∂x^(L+1), d^(l) = D^(l) (W^(l+1))^T d^(l+1), (2)
and the gradient of the weight matrix for layer l can be computed as
∂E/∂W^(l) = d^(l) (x^(l))^T. (3)
To solve the gradient exploding/vanishing problem, we constrain the forward signal x^(l) and the backward signal d^(l) in order to constrain the norm of the gradient. This leads to the following definition and proposition. Definition 2 (Bidirectional Self-Normalization). A neural network is bidirectionally self-normalizing if
‖x^(1)‖_2 = ‖x^(2)‖_2 = ... = ‖x^(L)‖_2, (4)
‖d^(1)‖_2 = ‖d^(2)‖_2 = ... = ‖d^(L)‖_2. (5)
Proposition 1. If a neural network is bidirectionally self-normalizing, then
‖∂E/∂W^(1)‖_F = ‖∂E/∂W^(2)‖_F = ... = ‖∂E/∂W^(L)‖_F. (6)
In the rest of this section, we derive the conditions under which bidirectional self-normalization is achievable for a neural network." }, { "heading": "2.1 CONSTRAINTS ON WEIGHT MATRICES", "text": "We constrain the weight matrices to be orthogonal since multiplication by an orthogonal matrix preserves the norm of a vector. For linear networks, this alone guarantees bidirectional self-normalization, and its further benefits are discussed in (Saxe et al., 2014). Even for nonlinear neural networks, orthogonal constraints are shown to improve the trainability with proper scaling (Mishkin & Matas, 2016; Pennington et al., 2017)." }, { "heading": "2.2 CONSTRAINTS ON ACTIVATION FUNCTIONS", "text": "To achieve bidirectional self-normalization for a nonlinear network, it is not enough to constrain only the weight matrices, as the following sketch illustrates.
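Here is a quick numerical check of this point (our sketch, not the paper's code): the orthogonal linear step preserves the norm exactly, but an unnormalized nonlinearity such as plain ReLU still rescales the forward signal at every layer, so the norm collapses with depth.

```python
# Our sketch: orthogonal weights make each linear step exactly norm-preserving,
# but a non-GPN activation (plain ReLU) still shrinks the forward signal.
import numpy as np
from scipy.stats import ortho_group

n, L = 500, 50
x = np.random.default_rng(0).standard_normal(n)

for _ in range(L):
    W = ortho_group.rvs(n)       # Haar-uniform orthogonal matrix: ||Wx|| == ||x||
    x = np.maximum(W @ x, 0.0)   # ReLU halves E[x_i^2], shrinking ||x|| by sqrt(2)

print(np.linalg.norm(x) / np.sqrt(n))  # ~(1/sqrt(2))^L, i.e., numerically zero
```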
We also need to constrain the activation function in such a way that both forward and backward signals are normalized. To this end, we propose the following constraint, which captures the relationship between a function and its derivative. Definition 3 (Gaussian-Poincaré Normalization). Function φ : R → R is Gaussian-Poincaré normalized if it is differentiable and
E_{x∼N(0,1)}[φ(x)^2] = E_{x∼N(0,1)}[φ′(x)^2] = 1. (7)
The definition is inspired by the following theorem, which shows the fundamental relationship between a function and its derivative under Gaussian measure. Theorem 1 (Gaussian-Poincaré Inequality (Bogachev, 1998)). If function φ : R → R is differentiable with bounded E_{x∼N(0,1)}[φ(x)^2] and E_{x∼N(0,1)}[φ′(x)^2], then
Var_{x∼N(0,1)}[φ(x)] ≤ E_{x∼N(0,1)}[φ′(x)^2]. (8)
Note that there is an implicit assumption that the input is approximately Gaussian for a Gaussian-Poincaré normalized (GPN) function. Even though this is standard in the literature (Klambauer et al., 2017; Pennington et al., 2017; Schoenholz et al., 2017), we will rigorously prove that this assumption is valid when orthogonal weight matrices are used in Equation 1. Next, we state a property of GPN functions. Proposition 2. Function φ : R → R is Gaussian-Poincaré normalized and E_{x∼N(0,1)}[φ(x)] = 0 if and only if φ(x) = x or φ(x) = −x.
This result indicates that any nonlinear function with zero mean under the Gaussian distribution (e.g., Tanh and SELU) is not GPN. Now we show that a large class of activation functions can be converted into their respective GPN versions using an affine transformation. Proposition 3. For any differentiable function φ : R → R with non-zero and bounded E_{x∼N(0,1)}[φ(x)^2] and E_{x∼N(0,1)}[φ′(x)^2], there exist two constants a and b such that aφ(x) + b is Gaussian-Poincaré normalized.
To obtain a and b, one can use a numerical procedure to compute the values of E_{x∼N(0,1)}[φ′(x)^2], E_{x∼N(0,1)}[φ(x)^2] and E_{x∼N(0,1)}[φ(x)] and then solve the quadratic equations
E_{x∼N(0,1)}[a^2 φ′(x)^2] = 1, (9)
E_{x∼N(0,1)}[(aφ(x) + b)^2] = 1. (10)
We computed a and b (not unique) for several common activation functions with their default hyperparameters¹ and the results are listed in Table 1. Note that ReLU, LeakyReLU and SELU are not differentiable at x = 0, but they can be regarded as approximations of their smooth counterparts. We ignore this point and evaluate the integrals for x ∈ (−∞, 0) ∪ (0, ∞). With the orthogonal constraint on the weight matrices and the Gaussian-Poincaré normalization of the activation function, we prove that bidirectional self-normalization is achievable with high probability under mild conditions in the next subsection.
¹We use α = 0.01 for LeakyReLU, α = 1 for ELU and φ(x) = x/(1 + exp(−1.702x)) for GELU.
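As a concrete illustration of this numerical procedure, the following sketch (ours; plain Monte Carlo estimation is an assumption here, any quadrature over the Gaussian measure works equally well) recovers the constants a and b following the construction in the proof of Proposition 3; its outputs can be checked against Table 1.

```python
# Our sketch of the numerical procedure behind Table 1: find a, b such that
# a*phi(x) + b is GPN, i.e. E[(a*phi+b)^2] = E[(a*phi')^2] = 1 for x ~ N(0,1).
import numpy as np

def gpn_constants(phi, dphi, num_samples=1_000_000, seed=0):
    x = np.random.default_rng(seed).standard_normal(num_samples)
    y, dy = phi(x), dphi(x)
    mean, second = y.mean(), (y ** 2).mean()   # E[phi(x)], E[phi(x)^2]
    deriv2 = (dy ** 2).mean()                  # E[phi'(x)^2]
    # Following the proof of Proposition 3: choose c with
    # Var[phi] + (E[phi] + c)^2 = E[phi'^2], then a = E[phi'^2]^(-1/2), b = a*c.
    var = second - mean ** 2
    c = -mean + np.sqrt(deriv2 - var)          # nonnegative radicand by Theorem 1
    a = deriv2 ** -0.5
    return a, a * c

# Example: GPN constants for Tanh (compare with Table 1).
print(gpn_constants(np.tanh, lambda x: 1.0 - np.tanh(x) ** 2))
```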
" }, { "heading": "2.3 NORM-PRESERVATION THEOREMS", "text": "Bidirectional self-normalization may not be achievable precisely in general unless the neural network is a linear one. Therefore, we investigate the properties of neural networks in a probabilistic framework. Random matrix theory and high-dimensional probability theory allow us to characterize the behavior of a large class of neural networks by its mean behavior, which is significantly simpler to analyze. Hence, we study neural networks with random weights, whose properties may shed light on the trainability of neural networks in practice.
First, we need a probabilistic version of the vector norm constraint. Definition 4 (Thin-Shell Concentration). Random vector x ∈ R^n is thin-shell concentrated if for any ε > 0
P{ |(1/n)‖x‖_2^2 − 1| ≥ ε } → 0 (11)
as n → ∞.
The definition is modified from the one in (Bobkov, 2003). Examples of thin-shell concentrated distributions include the standard multivariate Gaussian and any distribution on the n-dimensional sphere of radius √n.
Assumptions. To prove the main results, i.e., the norm-preservation theorems, we require the following assumptions:
1. Random vector x ∈ R^n is thin-shell concentrated.
2. Random orthogonal matrix W = (w_1, w_2, ..., w_n)^T is uniformly distributed.
3. Function φ : R → R is Gaussian-Poincaré normalized.
4. Function φ and its derivative are Lipschitz continuous.
The above assumptions are not restrictive. For Assumption 1, one can always normalize the input vectors of a neural network. For Assumption 2, the orthogonal constraint or its relaxation has already been employed in neural network training (Brock et al., 2017). Note that, in Assumption 2, uniformly distributed means that W is distributed under the Haar measure, which is the unique rotation-invariant probability measure on the orthogonal matrix group. We refer the reader to (Meckes, 2019) for details. Furthermore, all the activation functions or their smooth counterparts listed in Table 1 satisfy Assumptions 3 and 4.
With the above assumptions, we can prove the following norm-preservation theorems. Theorem 2 (Forward Norm-Preservation). Random vector
(φ(w_1^T x), φ(w_2^T x), ..., φ(w_n^T x)) (12)
is thin-shell concentrated.
This result shows that the transformation (an orthogonal matrix followed by a GPN activation function) preserves the norm of its input with high probability. Since the output is thin-shell concentrated, it serves as the input for the next layer and so on. Hence, the forward pass can preserve the norm of its input in each layer along the forward path when n is sufficiently large. Theorem 3 (Backward Norm-Preservation). Let D be the diagonal matrix whose diagonal elements are D_{ii} = φ′(w_i^T x) and y ∈ R^n be a fixed vector with bounded ‖y‖_∞. Then for any ε > 0
P{ (1/n)|‖Dy‖_2^2 − ‖y‖_2^2| ≥ ε } → 0 (13)
as n → ∞.
This result shows that the multiplication by D preserves the norm of its input with high probability. Since the orthogonal matrix W also preserves the norm of its input, when the gradient error signal is propagated backwards as in (2), the norm is preserved in each layer along the backward path when n is sufficiently large.
Hence, combining Theorems 2 and 3, we have proved that bidirectional self-normalization is achievable with high probability if the neural network is wide enough and the conditions in the Assumptions are satisfied. Then, by Proposition 1, the gradient exploding/vanishing problem disappears with high probability.
Sketch of the proofs. The proofs of Theorems 2 and 3 are mainly based on a phenomenon in high-dimensional probability theory, concentration of measure. We refer the reader to (Vershynin, 2018) for an introduction to the subject. Briefly, it can be shown that for some high-dimensional probability distributions, most mass is concentrated around a certain range. For example, while most mass of a low-dimensional standard multivariate Gaussian distribution is concentrated around the center, most mass of a high-dimensional standard multivariate Gaussian distribution is concentrated around a thin shell. Furthermore, random variables transformed by Lipschitz functions are also concentrated around certain values. Using this phenomenon, it can be shown that the rows {w_i} of a random orthogonal matrix W in high dimension are approximately independent random unit vectors, and the inner product w_i^T x for a thin-shell concentrated vector x can be shown to be approximately Gaussian. Then, from the assumptions that φ is GPN and that φ and φ′ are Lipschitz continuous, the proofs follow. Each of these steps is rigorously proved in the Appendix.
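Both theorems are also easy to probe numerically. The following sketch (ours, with the GPN constants of Tanh estimated on the fly by Monte Carlo) checks the forward and backward statements for a single wide layer; both printed ratios should be close to one.

```python
# Our numerical probe of Theorems 2 and 3 for one wide layer: a Haar-uniform
# orthogonal W plus a GPN-normalized Tanh.
import numpy as np
from scipy.stats import ortho_group

rng = np.random.default_rng(0)
n = 2000

# GPN-normalize tanh on the fly (Monte Carlo version of Eqs. 9-10).
g = rng.standard_normal(1_000_000)
d2 = ((1.0 - np.tanh(g) ** 2) ** 2).mean()   # E[phi'(x)^2]
a = d2 ** -0.5
b = a * np.sqrt(d2 - np.tanh(g).var())       # E[tanh] = 0, so c = sqrt(d2 - Var)
phi = lambda h: a * np.tanh(h) + b
dphi = lambda h: a * (1.0 - np.tanh(h) ** 2)

x = rng.standard_normal(n)       # thin-shell concentrated input (Assumption 1)
h = ortho_group.rvs(n) @ x       # the w_i^T x for Haar-uniform orthogonal rows w_i

print((phi(h) ** 2).mean())      # Theorem 2: (1/n) * sum_i phi(w_i^T x)^2 -> 1
y = rng.standard_normal(n)
print((dphi(h) ** 2 * y ** 2).sum() / (y ** 2).sum())  # Theorem 3: ||Dy||^2/||y||^2 -> 1
```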
" }, { "heading": "3 EXPERIMENTS", "text": "We verify our theory on both synthetic and real-world data. More experimental results can be found in the Appendix. In short, while very deep neural networks with non-GPN activations show vanishing/exploding gradients, GPN versions show stable gradients and improved trainability on both synthetic and real data. Furthermore, compared to dynamical isometry theory, BSNNs do not exhibit pseudo-linearity and maintain nonlinear functionality." }, { "heading": "3.1 SYNTHETIC DATA", "text": "We create synthetic data to test the norm-preservation properties of the neural networks. The input x^(1) consists of 500 data points of random standard Gaussian vectors of dimension 500. The gradient error ∂E/∂x^(L+1) is also a random standard Gaussian vector of dimension 500. All the neural networks have depth 200. All the weight matrices are random orthogonal matrices uniformly generated. No training is performed.
In Figure 1, we show the norms of the inputs and gradients of the neural networks of width 500. From the results, we can see that with GPN, the gradient exploding/vanishing problem is eliminated to a large extent. The neural network with the Tanh activation function does not show the gradient exploding/vanishing problem either. However, ‖x^(l)‖ is close to zero for large l and each layer is close to a linear one since Tanh(x) ≈ x when x ≈ 0 (pseudo-linearity), for which dynamical isometry is achieved.
One might wonder if bidirectional self-normalization has the same effect as dynamical isometry in solving the gradient exploding/vanishing problem, that is, to make the neural network close to an orthogonal matrix. To answer this question, we show the histogram of φ′(h^(l)_i) in Figure 2. If the functionality of a neural network is close to an orthogonal matrix, since the weight matrices are orthogonal, then the values of φ′(h^(l)_i) would concentrate around one (Figure 2 (a)), which is not the case for BSNNs (Figure 2 (b)). This shows that BSNNs do not suffer from the gradient vanishing/explosion problem while exhibiting nonlinear functionality.
In Figure 7 in the Appendix, we show the gradient norm of BSNNs with varying width. There we note that as the width increases, the norm of the gradient in each layer of the neural network becomes more equalized, as predicted by our theory.
(Figure caption: ‖∂E/∂W^(l)‖_F is the Frobenius norm of the gradient of the weight matrix in the l-th layer.)" }, { "heading": "3.2 REAL-WORLD DATA", "text": "We run experiments on the real-world image datasets MNIST and CIFAR-10. The neural networks have width 500 and depth 200 (plus one unconstrained layer at the bottom and one at the top to fit the dimensionality of the input and output). We use stochastic gradient descent with momentum 0.5, mini-batch size 64 and learning rate 0.0001. The training is run for 50 epochs for MNIST and 100 epochs for CIFAR-10. We do not use data augmentation. Since it is computationally expensive to enforce the orthogonality constraint, we simply constrain each row of the weight matrix to have l2 norm one as a relaxation of orthogonality, via the parametrization W = (v_1/‖v_1‖_2, v_2/‖v_2‖_2, ..., v_n/‖v_n‖_2)^T, and optimize V = (v_1, v_2, ..., v_n)^T as an unconstrained problem.
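This relaxation is a one-line layer in practice; here is a minimal PyTorch-style sketch (ours, with illustrative initialization, not the paper's released code) of such a row-normalized linear layer.

```python
# Our sketch of the relaxed-orthogonality layer from Section 3.2: each row of W
# is the l2-normalized row of an unconstrained matrix V, so optimization runs on V.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RowNormalizedLinear(nn.Module):
    def __init__(self, n):
        super().__init__()
        self.V = nn.Parameter(torch.randn(n, n) / n ** 0.5)  # unconstrained weights

    def forward(self, x):
        W = F.normalize(self.V, p=2, dim=1)  # every row of W has unit l2 norm
        return x @ W.t()

layer = RowNormalizedLinear(500)
out = layer(torch.randn(64, 500))  # gradients flow to V; W stays row-normalized
```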
We summarize the results in Table 2. We can see that, for the activation functions ReLU, LeakyReLU and GELU, the neural networks are not trainable. But once these functions are GPN, the neural network can be trained. GPN activation functions consistently outperform their unnormalized counterparts in terms of trainability, as the training accuracy increases, though not necessarily in terms of generalization ability. We show the test accuracy during training in Figure 3, from which we can see that training is accelerated when SELU is GPN. ReLU, LeakyReLU and GELU, if not GPN, are completely untrainable due to vanishing gradients (see Appendix).
We observe that batch normalization leads to gradient explosion when combined with any of the activation functions. This confirms the claim of (Philipp et al., 2018) and (Yang et al., 2019) that batch normalization does not solve the gradient exploding/vanishing problem. On the other hand, without batch normalization, the neural network with any GPN activation function has a stable gradient magnitude throughout training (see Appendix). This indicates that BSNNs can dispense with batch normalization and therefore avoid its shortcomings." }, { "heading": "4 RELATED WORK", "text": "We compare our theory to the most relevant theories in the literature. A key distinguishing feature of our theory is that we provide rigorous proofs of the conditions under which the exploding/vanishing problem disappears. To the best of our knowledge, this is the first time that the problem is provably solved for nonlinear neural networks." }, { "heading": "4.1 SELF-NORMALIZING NEURAL NETWORKS", "text": "Self-normalizing neural networks enforce zero mean and unit variance for the output of each unit with the SELU activation function (Klambauer et al., 2017). However, as pointed out in (Philipp et al., 2018) and confirmed in our experiments, only constraining forward signal propagation does not solve the gradient exploding/vanishing problem, since the norm of the backward signal can still grow or shrink. The signal propagation in both directions needs to be constrained, as in our theory." }, { "heading": "4.2 DEEP SIGNAL PROPAGATION", "text": "Our theory is developed from the deep signal propagation theory (Poole et al., 2016; Schoenholz et al., 2017). Both theories require E_{x∼N(0,1)}[φ′(x)^2] = 1. However, ours also requires the quantity E_{x∼N(0,1)}[φ(x)^2] to be one, while in (Poole et al., 2016; Schoenholz et al., 2017) it can be an arbitrary positive number. We emphasize that it is desirable to enforce E_{x∼N(0,1)}[φ(x)^2] = 1 to avoid trivial solutions. For example, if φ(x) = Tanh(εx) with ε ≈ 0, then φ(x) ≈ εx and the neural network becomes essentially a linear one, for which depth is unnecessary (pseudo-linearity (Philipp et al., 2018)). This is observed in Figure 1 (a). Moreover, in (Poole et al., 2016; Schoenholz et al., 2017) the signal propagation analysis is done based on random weights under an i.i.d. Gaussian distribution, whereas we proved how one can solve the gradient vanishing/exploding problem assuming the weight matrices are orthogonal and uniformly distributed under the Haar measure."
}, { "heading": "4.3 DYNAMICAL ISOMETRY", "text": "Dynamical isometry theory (Pennington et al., 2017) enforces the Jacobian matrix of the inputoutput function of a neural network to have all singular values close to one. Since the weight matrices are constrained to be orthogonal, it is equivalent to enforce each D(l) in (2) to be close to the identity matrix, which implies the functionality of neural network at initialization is close to an orthogonal matrix (pseudo-linearity). This indeed enables trainability since linear neural networks with orthogonal weight matrices do not suffer from the gradient exploding/vanishing problem. As neural networks need to learn a nonlinear input-output functionality to solve certain tasks, during training the weights of a neural network are unconstrained so that the neural network would move to a nonlinear region where the gradient exploding/vanishing problem might return. In our theory, although the orthogonality of weight matrices is also required, we approach the problem from a different perspective. We do not encourage the linearity at initialization. The neural network can be initialized to be nonlinear and stay nonlinear during the training even when the weights are constrained." }, { "heading": "5 CONCLUSION", "text": "In this paper, we have introduced bidirectionally self-normalizing neural network (BSNN) which constrains both forward and backward signal propagation using a new class of Gaussian-Poincaré normalized activation functions and orthogonal weight matrices. BSNNs are not restrictive in the sense that many commonly used activation functions can be Gaussian-Poincaré normalized. We have rigorously proved that gradient vanishing/exploding problem disappears in BSNNs with high probability under mild conditions. Experiments on synthetic and real-world data confirm the validity of our theory and demonstrate that BSNNs have excellent trainability without batch normalization. Currently, the theoretical analysis is limited to same width, fully-connected neural networks. Future work includes extending our theory to more sophisticated networks such as convolutional architectures as well as investigating the generalization capabilities of BSNNs." }, { "heading": "APPENDIX A PROOFS", "text": "Proposition 1. If a neural network is bidirectionally self-normalizing, then∥∥∥ ∂E ∂W(1) ∥∥∥ F = ∥∥∥ ∂E ∂W(2) ∥∥∥ F = ... = ∥∥∥ ∂E ∂W(L) ∥∥∥ F . (14)\nProof. For each l, we have∥∥∥ ∂E ∂W(l) ∥∥∥ F = √ trace ( ∂E ∂W(l) ( ∂E ∂W(l) )T) (15)\n= √ trace(d(l)(x(l))Tx(l)(d(l))T ) (16)\n= √ trace((x(l))Tx(l)(d(l))Td(l)) (17)\n= √ (x(l))Tx(l) √ (d(l))Td(l) (18)\n= ‖x(l)‖2‖d(l)‖2. (19) By the definition of bidirectional self-normalization, we have ‖ ∂E\n∂W(1) ‖F = ... = ‖ ∂E∂W(L) ‖F .\nProposition 2. Function φ : R→ R is Gaussian-Poincaré normalized and Ex∼N (0,1)[φ(x)] = 0 if and only if φ(x) = x or φ(x) = −x.\nProof. Since Ex∼N (0,1)[φ(x)2] <∞ and Ex∼N (0,1)[φ′(x)2] <∞, φ(x) and φ′(x) can be expanded in terms of Hermite polynomials. Let the Hermite polynomial of degree k be\nHk(x) = (−1)k√ k! exp( x2 2 ) dk dxk exp(−x 2 2 ) (20)\nand due to H ′k(x) = √ kHk−1(x), we have\nφ(x) = ∞∑ k=0 akHk(x), (21)\nφ′(x) = ∞∑ k=1 √ kakHk−1(x). (22)\nSince Ex∼N (0,1)[φ(x)] = 0, we have a0 = Ex∼N (0,1)[H0(x)φ(x)] (23)\n= Ex∼N (0,1)[φ(x)] (24) = 0. (25)\nSince Ex∼N (0,1)[φ(x)2] = Ex∼N (0,1)[φ′(x)2] = 1 (26)\nand Hermite polynomials are orthonormal, we have Ex∼N (0,1)[φ(x)2] = ∞∑ k=1 a2k = Ex∼N (0,1)[φ′(x)2] = ∞∑ k=1 ka2k = 1. 
Therefore, we have
Σ_{k=1}^∞ k a_k^2 − Σ_{k=1}^∞ a_k^2 = 0, (28)
that is,
Σ_{k=2}^∞ (k − 1) a_k^2 = 0. (29)
Since each term in Σ_{k=2}^∞ (k − 1) a_k^2 is nonnegative, the only solution is a_k = 0 for k ≥ 2. And since E_{x∼N(0,1)}[φ(x)^2] = a_1^2 = 1, we have a_1 = ±1. Hence, φ(x) = ±H_1(x) = ±x.
Proposition 3. For any differentiable function φ : R → R with non-zero and bounded E_{x∼N(0,1)}[φ(x)^2] and E_{x∼N(0,1)}[φ′(x)^2], there exist two constants a and b such that aφ(x) + b is Gaussian-Poincaré normalized.
Proof. Let ϕ(x) = φ(x) + c. Then let
ψ(c) = E_{x∼N(0,1)}[ϕ(x)^2] − E_{x∼N(0,1)}[φ′(x)^2] (30)
= Var_{x∼N(0,1)}[ϕ(x)] + (E_{x∼N(0,1)}[ϕ(x)])^2 − E_{x∼N(0,1)}[φ′(x)^2] (31)
= Var_{x∼N(0,1)}[φ(x)] + (E_{x∼N(0,1)}[φ(x)] + c)^2 − E_{x∼N(0,1)}[φ′(x)^2]. (32)
Therefore, ψ(c) is a quadratic function of c. We also have ψ(c) > 0 as c → ∞ and ψ(−E_{x∼N(0,1)}[φ(x)]) ≤ 0 due to the Gaussian-Poincaré inequality. Hence, there exists c for which ψ(c) = 0, such that E_{x∼N(0,1)}[(φ(x) + c)^2] = E_{x∼N(0,1)}[φ′(x)^2]. Let a = (E_{x∼N(0,1)}[φ′(x)^2])^{−1/2} and b = ac; then we have E_{x∼N(0,1)}[(aφ(x) + b)^2] = E_{x∼N(0,1)}[(aφ′(x))^2] = 1.
The proof is largely due to (Eldredge, 2020), with minor modifications here.
Assumptions.
1. Random vector x ∈ R^n is thin-shell concentrated.
2. Random orthogonal matrix W = (w_1, w_2, ..., w_n)^T is uniformly distributed.
3. Function φ : R → R is Gaussian-Poincaré normalized.
4. Function φ and its derivative are Lipschitz continuous.
Theorem 2 (Forward Norm-Preservation). Random vector
(φ(w_1^T x), φ(w_2^T x), ..., φ(w_n^T x)) (33)
is thin-shell concentrated.
Theorem 3 (Backward Norm-Preservation). Let D be the diagonal matrix whose diagonal elements are D_{ii} = φ′(w_i^T x) and y ∈ R^n be a fixed vector with bounded ‖y‖_∞. Then for any ε > 0
P{ (1/n)|‖Dy‖_2^2 − ‖y‖_2^2| ≥ ε } → 0 (34)
as n → ∞.
Notations. S^{n−1} = {x ∈ R^n : ‖x‖_2 = 1}. O(n) is the orthogonal matrix group of size n. 1{·} denotes the indicator function. 0_n denotes the zero vector of dimension n. I_n denotes the identity matrix of size n × n.
Lemma 1. If random variable x ∼ N(0, 1) and function f : R → R is Lipschitz continuous, then the random variable f(x) is sub-gaussian.
Proof. Due to the Gaussian concentration theorem (Theorem 5.2.2 in (Vershynin, 2018)), we have
‖f(x) − E[f(x)]‖_{ψ2} ≤ CK, (35)
where ‖·‖_{ψ2} denotes the sub-gaussian norm, C is a constant and K is the Lipschitz constant of f. This implies that f(x) − E[f(x)] is sub-gaussian (Proposition 2.5.2 in (Vershynin, 2018)). Therefore f(x) is sub-gaussian (Lemma 2.6.8 in (Vershynin, 2018)).
Lemma 2. Let x = (x_1, x_2, ..., x_n) ∈ R^n be a random vector whose coordinates x_i are independent and sub-gaussian with E[x_i^2] = 1. Let y = (y_1, y_2, ..., y_n) ∈ R^n be a fixed vector with bounded ‖y‖_∞. Then for any ε > 0
P{ (1/n)|Σ_i x_i^2 y_i^2 − Σ_i y_i^2| ≥ ε } → 0 (36)
as n → ∞.
Proof. Since y_i x_i is sub-gaussian, y_i^2 x_i^2 is sub-exponential (Lemma 2.7.6 in (Vershynin, 2018)). Since E[y_i^2 x_i^2] = y_i^2 E[x_i^2] = y_i^2, y_i^2 x_i^2 − y_i^2 is sub-exponential with zero mean (Exercise 2.7.10 in (Vershynin, 2018)). Applying Bernstein's inequality (Corollary 2.8.3 in (Vershynin, 2018)), we prove the lemma.
Lemma 3. Let z ∼ N(0_n, I_n). Then for any 0 < δ < 1
P{ z ∈ R^n : (1 − δ)√n ≤ ‖z‖_2 ≤ (1 + δ)√n } ≥ 1 − 2 exp(−nδ^2). (37)
See (Alberts & Khoshnevisan, 2018) (Theorem 1.2) for a proof.
Lemma 4. Let z ∼ N(0_n, I_n). Then z/‖z‖_2 is uniformly distributed on S^{n−1}.
See (Dawkins, 2016) for a proof.
Lemma 5. Let z = (z_1, z_2, ..., z_n) ∼ N(0_n, I_n), let a = (a_1, a_2, ..., a_n) be a fixed vector with bounded ‖a‖_∞, and let f : R → R be a continuous function.
Then for any ε > 0
P{ (1/n)|Σ_i a_i f((√n/‖z‖_2) z_i) − Σ_i a_i f(z_i)| > ε } → 0 (38)
as n → ∞.
Proof. Since
(1/n)|Σ_i a_i f((√n/‖z‖_2) z_i) − Σ_i a_i f(z_i)| ≤ (1/n) Σ_i |a_i| · |f((√n/‖z‖_2) z_i) − f(z_i)|, (39)
if, as n → ∞,
P{ (1/n) Σ_i |a_i| · |f((√n/‖z‖_2) z_i) − f(z_i)| > ε } → 0, (40)
then
P{ (1/n)|Σ_i a_i f((√n/‖z‖_2) z_i) − Σ_i a_i f(z_i)| > ε } → 0. (41)
For 0 < δ < 1, let
A = { z ∈ R^n : (1/n) Σ_i |a_i| · |f((√n/‖z‖_2) z_i) − f(z_i)| > ε }, (42)
U_δ = { z ∈ R^n : (1 − δ)√n ≤ ‖z‖_2 ≤ (1 + δ)√n }. (43)
Then
P{ (1/n) Σ_i |a_i| · |f((√n/‖z‖_2) z_i) − f(z_i)| > ε } = ∫_{R^n} 1{z∈A} dz (44)
= ∫_{R^n\U_δ} 1{z∈A} dz + ∫_{U_δ} 1{z∈A} dz. (45)
Let δ = n^{−1/4}. From Lemma 3, we have, as n → ∞,
∫_{R^n\U_δ} 1{z∈A} dz ≤ ∫_{R^n\U_δ} dz = 1 − P{z ∈ U_δ} ≤ 2 exp(−nδ^2) → 0. (46)
For z ∈ U_δ and δ = n^{−1/4}, we have ‖z‖_2 → √n, (√n/‖z‖_2) z_i → z_i and therefore f((√n/‖z‖_2) z_i) → f(z_i) as n → ∞. Hence, ∫_{U_δ} 1{z∈A} dz → 0 as n → ∞.
Lemma 6. Let random matrix W be uniformly distributed on O(n), let random vector θ be uniformly distributed on S^{n−1}, and let random vector x ∈ R^n be thin-shell concentrated. Then Wx → √n θ as n → ∞.
Proof. Let y ∈ R^n be any vector with ‖y‖_2 = √n and e = (√n, 0, ..., 0) ∈ R^n. Since W is uniformly distributed, Wy has the same distribution as We. We is √n times the first column of W, which is equivalent to the random vector √n θ. Since x is thin-shell concentrated, x → (√n/‖x‖_2) x = y and therefore Wx → √n θ as n → ∞.
Proof of Theorem 2. Let z = (z_1, z_2, ..., z_n) ∼ N(0, I). Due to Lemma 1, the random variable φ(z_i) is sub-gaussian. Since φ is Gaussian-Poincaré normalized, E_{z_i∼N(0,1)}[φ(z_i)^2] = 1. Applying Lemma 2 with each y_i = 1, we have for any ε > 0
P{ |(1/n) Σ_i φ(z_i)^2 − 1| ≥ ε } → 0 (47)
as n → ∞. Due to Lemmas 4 and 5 (with each a_i = 1), for the random vector θ = (θ_1, θ_2, ..., θ_n) uniformly distributed on S^{n−1}, we have
P{ |(1/n) Σ_i φ(√n θ_i)^2 − (1/n) Σ_i φ(z_i)^2| ≥ ε } → 0 (48)
and therefore
P{ |(1/n) Σ_i φ(√n θ_i)^2 − 1| ≥ ε } → 0 (49)
as n → ∞. Then from Lemma 6, we have Wx → √n θ and therefore
P{ |(1/n) Σ_i φ(w_i^T x)^2 − 1| ≥ ε } → 0 (50)
as n → ∞.
Proof of Theorem 3. Let z = (z_1, z_2, ..., z_n) be the standard multivariate Gaussian random vector. Due to Lemma 1, the random variable φ′(z_i) is sub-gaussian. Since φ is Gaussian-Poincaré normalized, E_{z_i∼N(0,1)}[φ′(z_i)^2] = 1. Applying Lemma 2, we have
P{ (1/n)|Σ_i y_i^2 φ′(z_i)^2 − Σ_i y_i^2| ≥ ε } → 0 (51)
as n → ∞. Due to Lemmas 4 and 5 (with each a_i = y_i^2), for the random vector θ = (θ_1, θ_2, ..., θ_n) uniformly distributed on S^{n−1}, we have
P{ |(1/n) Σ_i y_i^2 φ′(√n θ_i)^2 − (1/n) Σ_i y_i^2 φ′(z_i)^2| ≥ ε } → 0 (52)
and therefore
P{ (1/n)|Σ_i y_i^2 φ′(√n θ_i)^2 − Σ_i y_i^2| ≥ ε } → 0 (53)
as n → ∞. Then from Lemma 6, we have Wx → √n θ and therefore
P{ (1/n)|Σ_i y_i^2 φ′(w_i^T x)^2 − Σ_i y_i^2| ≥ ε } → 0 (54)
as n → ∞." }, { "heading": "APPENDIX B ADDITIONAL EXPERIMENTS", "text": "Due to the space limitation, we only showed the experiments with the Tanh and SELU activation functions in the main text. In this section, we show the experiments with ReLU, LeakyReLU, ELU and SELU. Additionally, we also measure the gradient exploding/vanishing during training on the real-world data.
B.1 SYNTHETIC DATA
We show the figures of the experimental results in addition to the ones in the main text.
(Figure captions: ‖∂E/∂W^(l)‖_F is the Frobenius norm of the gradient of the weight matrix in the l-th layer. For the varying-width figure, the width ranges from 100 to 1500 and the error bars show standard deviation.)
B.2 REAL-WORLD DATA
In Figure 8, we show the test accuracy during training on MNIST.
In Figure 9, we show the test accuracy during training on CIFAR-10.
In Figures 10, 11, 12 and 13, we show a measure of gradient exploding/vanishing during training for different activation functions. The measure is defined as the ratio of the maximum gradient norm to the minimum gradient norm across layers. Since we use the parametrization
W = (v_1/‖v_1‖_2, v_2/‖v_2‖_2, ..., v_n/‖v_n‖_2)^T (55)
with V = (v_1, v_2, ..., v_n)^T, the gradient norm ratio is defined on the unconstrained weights V, that is,
max_l ‖∂E/∂V^(l)‖_F / min_l ‖∂E/∂V^(l)‖_F. (56)
Note that for ReLU, LeakyReLU and GELU, the gradient vanishes during training in some experiments and therefore the corresponding plots are empty. From the figures, we can see that batch normalization leads to gradient explosion, especially at the early stage of training. On the other hand, without batch normalization, the gradient is stable throughout training for GPN activation functions.
(Figure caption: gradient norm ratio on ‖∂E/∂V^(l)‖_F; the gradient vanishes (‖∂E/∂V^(l)‖_F ≈ 0) for ReLU and LeakyReLU during training and hence the plots are empty.)
(Figure caption: gradient norm ratio on ‖∂E/∂V^(l)‖_F; the gradient vanishes (‖∂E/∂V^(l)‖_F ≈ 0) for GELU and GELU-BN during training and hence the plots are empty.)
(Figure caption: gradient norm ratio on ‖∂E/∂V^(l)‖_F; the gradient vanishes (‖∂E/∂V^(l)‖_F ≈ 0) for ReLU and LeakyReLU during training and hence the plots are empty.)
(Figure caption: gradient norm ratio on ‖∂E/∂V^(l)‖_F; the gradient vanishes (‖∂E/∂V^(l)‖_F ≈ 0) for GELU, GELU-BN and GELU-GPN-BN during training and hence the plots are empty. For GELU-GPN-BN, both gradient exploding and gradient vanishing are observed.)" } ]
2020
null
SP:a8e0b9e55e9a0648ba1c64cf0edb8f09c9a38109
[ "The paper is well-written, easy to follow and clear. However, the novelty and main contribution of the paper is not clear. The authors used a scoring model to score the composition of each segment, as well as the probability of having a specific label for the segment. The BERT language model is used in the paper to encode the input sequence. The training part is a more like a supervised training and a dynamic programming (DP) approach is used for inference. It is not clear how DP contributes to the success of the model, as the scores for segments are derived during the training (which seems most of the success is coming from the labeled data (i.e. supervised training) and BERT encoding). One other thing about formatting and citing references, some of the references are published in conference proceedings, not sure why authors cited their arxiv version." ]
In this work, we present Lexical Unit Analysis (LUA), a framework for general sequence segmentation tasks. Given a natural language sentence, LUA scores all the valid segmentation candidates and utilizes dynamic programming (DP) to extract the maximum scoring one. LUA enjoys a number of appealing properties, such as inherently guaranteeing the predicted segmentation to be valid and facilitating globally optimal training and inference. Besides, the practical time complexity of LUA can be reduced to linear time, which is very efficient. We have conducted extensive experiments on 5 tasks, including syntactic chunking, named entity recognition (NER), slot filling, Chinese word segmentation, and Chinese part-of-speech (POS) tagging, across 15 datasets. Our models have achieved state-of-the-art performance on 13 of them. The results also show that the F1 score of identifying long-length segments is notably improved.
[]
[ { "authors": [ "Alan Akbik", "Duncan Blythe", "Roland Vollgraf" ], "title": "Contextual string embeddings for sequence labeling", "venue": "In Proceedings of the 27th International Conference on Computational Linguistics,", "year": 2018 }, { "authors": [ "Hui Chen", "Zijia Lin", "Guiguang Ding", "Jianguang Lou", "Yusen Zhang", "Borje Karlsson" ], "title": "Grn: Gated relation network to enhance convolutional neural network for named entity recognition", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Qian Chen", "Zhu Zhuo", "Wen Wang" ], "title": "Bert for joint intent classification and slot filling", "venue": "arXiv preprint arXiv:1902.10909,", "year": 2019 }, { "authors": [ "Alice Coucke", "Alaa Saade", "Adrien Ball", "Théodore Bluche", "Alexandre Caulier", "David Leroy", "Clément Doumouro", "Thibault Gisselbrecht", "Francesco Caltagirone", "Thibaut Lavril" ], "title": "Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces", "venue": "arXiv preprint arXiv:1805.10190,", "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Timothy Dozat", "Christopher D Manning" ], "title": "Deep biaffine attention for neural dependency parsing", "venue": "arXiv preprint arXiv:1611.01734,", "year": 2016 }, { "authors": [ "Thomas Emerson" ], "title": "The second international chinese word segmentation bakeoff", "venue": "In Proceedings of the fourth SIGHAN workshop on Chinese language Processing,", "year": 2005 }, { "authors": [ "Chih-Wen Goo", "Guang Gao", "Yun-Kai Hsu", "Chih-Li Huo", "Tsung-Chieh Chen", "Keng-Wei Hsu", "Yun-Nung Chen" ], "title": "Slot-gated modeling for joint slot filling and intent prediction", "venue": "In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers),", "year": 2018 }, { "authors": [ "Charles T Hemphill", "John J Godfrey", "George R Doddington" ], "title": "The atis spoken language systems pilot corpus", "venue": "In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley,", "year": 1990 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Weipeng Huang", "Xingyi Cheng", "Kunlong Chen", "Taifeng Wang", "Wei Chu" ], "title": "Toward fast and accurate neural chinese word segmentation with multi-criteria learning", "venue": null, "year": 1903 }, { "authors": [ "Zhiheng Huang", "Wei Xu", "Kai Yu" ], "title": "Bidirectional lstm-crf models for sequence tagging", "venue": "arXiv preprint arXiv:1508.01991,", "year": 2015 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "John Lafferty", "Andrew McCallum", "Fernando CN Pereira" ], "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", "venue": null, "year": 2001 }, { "authors": [ "Guillaume Lample", "Miguel Ballesteros", "Sandeep Subramanian", "Kazuya Kawakami", "Chris Dyer" ], "title": "Neural architectures for named entity recognition", "venue": "arXiv preprint arXiv:1603.01360,", 
"year": 2016 }, { "authors": [ "Kenton Lee", "Luheng He", "Luke Zettlemoyer" ], "title": "Higher-order coreference resolution with coarseto-fine inference", "venue": "arXiv preprint arXiv:1804.05392,", "year": 2018 }, { "authors": [ "Xiaoya Li", "Jingrong Feng", "Yuxian Meng", "Qinghong Han", "Fei Wu", "Jiwei Li" ], "title": "A unified mrc framework for named entity recognition", "venue": null, "year": 1910 }, { "authors": [ "Tianyu Liu", "Jin-Ge Yao", "Chin-Yew Lin" ], "title": "Towards improving neural named entity recognition with gazetteers", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Yijin Liu", "Fandong Meng", "Jinchao Zhang", "Jinan Xu", "Yufeng Chen", "Jie Zhou" ], "title": "Gcdt: A global context enhanced deep transition architecture for sequence labeling", "venue": "arXiv preprint arXiv:1906.02437,", "year": 2019 }, { "authors": [ "Yijin Liu", "Fandong Meng", "Jinchao Zhang", "Jie Zhou", "Yufeng Chen", "Jinan Xu" ], "title": "Cm-net: A novel collaborative memory network for spoken language understanding", "venue": "arXiv preprint arXiv:1909.06937,", "year": 2019 }, { "authors": [ "Ying Luo", "Fengshun Xiao", "Hai Zhao" ], "title": "Hierarchical contextualized representation for named entity recognition", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Ji Ma", "Kuzman Ganchev", "David Weiss" ], "title": "State-of-the-art chinese word segmentation with bilstms", "venue": "arXiv preprint arXiv:1808.06511,", "year": 2018 }, { "authors": [ "Xuezhe Ma", "Eduard Hovy" ], "title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf", "venue": "arXiv preprint arXiv:1603.01354,", "year": 2016 }, { "authors": [ "Yuxian Meng", "Wei Wu", "Fei Wang", "Xiaoya Li", "Ping Nie", "Fan Yin", "Muyu Li", "Qinghong Han", "Xiaofei Sun", "Jiwei Li" ], "title": "Glyce: Glyph-vectors for chinese character representations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Grégoire Mesnil", "Yann Dauphin", "Kaisheng Yao", "Yoshua Bengio", "Li Deng", "Dilek Hakkani-Tur", "Xiaodong He", "Larry Heck", "Gokhan Tur", "Dong Yu" ], "title": "Using recurrent neural networks for slot filling in spoken language understanding", "venue": "IEEE/ACM Transactions on Audio, Speech, and Language Processing,", "year": 2014 }, { "authors": [ "Joakim Nivre", "Marie-Catherine De Marneffe", "Filip Ginter", "Yoav Goldberg", "Jan Hajic", "Christopher D Manning", "Ryan McDonald", "Slav Petrov", "Sampo Pyysalo", "Natalia Silveira" ], "title": "Universal dependencies v1: A multilingual treebank collection", "venue": "In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16),", "year": 2016 }, { "authors": [ "Nanyun Peng" ], "title": "Jointly Learning Representations for Low-Resource Information Extraction", "venue": "PhD thesis,", "year": 2017 }, { "authors": [ "Sameer Pradhan", "Alessandro Moschitti", "Nianwen Xue", "Hwee Tou Ng", "Anders Björkelund", "Olga Uryupina", "Yuchen Zhang", "Zhi Zhong" ], "title": "Towards robust linguistic analysis using ontonotes", "venue": "In Proceedings of the Seventeenth Conference on Computational Natural Language Learning,", "year": 2013 }, { "authors": [ "Tao Qian", "Yue Zhang", "Meishan Zhang", "Yafeng Ren", "Donghong Ji" ], "title": "A transition-based model for joint segmentation, pos-tagging and normalization", "venue": "In Proceedings of the 2015 Conference on Empirical Methods in Natural Language 
Processing,", "year": 2015 }, { "authors": [ "Erik F Sang", "Sabine Buchholz" ], "title": "Introduction to the conll-2000 shared task: Chunking", "venue": "arXiv preprint cs/0009008,", "year": 2000 }, { "authors": [ "Erik F Sang", "Fien De Meulder" ], "title": "Introduction to the conll-2003 shared task: Languageindependent named entity recognition", "venue": "arXiv preprint cs/0306050,", "year": 2003 }, { "authors": [ "Sunita Sarawagi", "William W Cohen" ], "title": "Semi-markov conditional random fields for information extraction", "venue": "In Advances in neural information processing systems,", "year": 2005 }, { "authors": [ "Sebastian Schuster", "Sonal Gupta", "Rushin Shah", "Mike Lewis" ], "title": "Cross-lingual transfer learning for multilingual task oriented dialog", "venue": "arXiv preprint arXiv:1810.13327,", "year": 2018 }, { "authors": [ "Yan Shao", "Christian Hardmeier", "Jörg Tiedemann", "Joakim Nivre" ], "title": "Character-based joint segmentation and pos tagging for chinese using bidirectional rnn-crf", "venue": "arXiv preprint arXiv:1704.01314,", "year": 2017 }, { "authors": [ "Aditya Siddhant", "Anuj Goyal", "Angeliki Metallinou" ], "title": "Unsupervised transfer learning for spoken language understanding in intelligent agents", "venue": "In Proceedings of the AAAI conference on artificial intelligence,", "year": 2019 }, { "authors": [ "Mitchell Stern", "Jacob Andreas", "Dan Klein" ], "title": "A minimal span-based neural constituency parser", "venue": "arXiv preprint arXiv:1705.03919,", "year": 2017 }, { "authors": [ "Shaolei Wang", "Wanxiang Che", "Yue Zhang", "Meishan Zhang", "Ting Liu" ], "title": "Transition-based disfluency detection using lstms", "venue": "In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing,", "year": 2017 }, { "authors": [ "Wenhui Wang", "Baobao Chang" ], "title": "Graph-based dependency parsing with bidirectional lstm", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2016 }, { "authors": [ "Naiwen Xue", "Fei Xia", "Fu-Dong Chiou", "Marta Palmer" ], "title": "The penn chinese treebank: Phrase structure annotation of a large corpus", "venue": "Natural language engineering,", "year": 2005 }, { "authors": [ "Nianwen Xue" ], "title": "Chinese word segmentation as character tagging", "venue": "In International Journal of Computational Linguistics & Chinese Language Processing, Volume 8, Number", "year": 2003 }, { "authors": [ "Jie Yang", "Yue Zhang", "Fei Dong" ], "title": "Neural word segmentation with rich pretraining", "venue": "arXiv preprint arXiv:1704.08960,", "year": 2017 }, { "authors": [ "Zhi-Xiu Ye", "Zhen-Hua Ling" ], "title": "Hybrid semi-markov crf for neural sequence labeling", "venue": "arXiv preprint arXiv:1805.03838,", "year": 2018 }, { "authors": [ "Juntao Yu", "Bernd Bohnet", "Massimo Poesio" ], "title": "Named entity recognition as dependency parsing", "venue": "arXiv preprint arXiv:2005.07150,", "year": 2020 }, { "authors": [ "Feifei Zhai", "Saloni Potdar", "Bing Xiang", "Bowen Zhou" ], "title": "Neural models for sequence chunking", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Meishan Zhang", "Yue Zhang", "Guohong Fu" ], "title": "Transition-based neural word segmentation", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Sequence segmentation is essentially the process of partitioning a sequence of fine-grained lexical units into a sequence of coarse-grained ones. In some scenarios, each composed unit is assigned a categorical label. For example, Chinese word segmentation splits a character sequence into a word sequence (Xue, 2003). Syntactic chunking segments a word sequence into a sequence of labeled groups of words (i.e., constituents) (Sang & Buchholz, 2000).\nThere are currently two mainstream approaches to sequence segmentation. The most common is to regard it as a sequence labeling problem by using IOB tagging scheme (Mesnil et al., 2014; Ma & Hovy, 2016; Liu et al., 2019b; Chen et al., 2019a; Luo et al., 2020). A representative work is Bidirectional LSTM-CRF (Huang et al., 2015), which adopts LSTM (Hochreiter & Schmidhuber, 1997) to read an input sentence and CRF (Lafferty et al., 2001) to decode the label sequence. This type of method is very effective, providing tons of state-of-the-art performances. However, it is vulnerable to producing invalid labels, for instance, “O, I-tag, I-tag”. This problem is very severe in low resource settings (Peng et al., 2017). In experiments (see section 4.6), we also find that it performs poorly in recognizing long-length segments.\nRecently, there is a growing interest in span-based models (Zhai et al., 2017; Li et al., 2019; Yu et al., 2020). They treat a span rather than a token as the basic unit for labeling. Li et al. (2019) cast named entity recognition (NER) to a machine reading comprehension (MRC) task, where entities are extracted as retrieving answer spans. Yu et al. (2020) rank all the spans in terms of the scores predicted by a bi-affine model (Dozat & Manning, 2016). In NER, span-based models have significantly outperformed their sequence labeling based counterparts. While these methods circumvent the use of IOB tagging scheme, they still rely on post-processing rules to guarantee the extracted span set to be valid. Moreover, since span-based models are locally normalized at span level, they potentially suffer from the label bias problem (Lafferty et al., 2001).\nThis paper seeks to provide a new framework which infers the segmentation of a unit sequence by directly selecting from all valid segmentation candidates, instead of manipulating tokens or spans. To this end, we propose Lexical Unit Analysis (LUA) in this paper. LUA assigns a score to every valid segmentation candidate and leverages dynamic programming (DP) (Bellman, 1966) to search for the maximum scoring one. The score of a segmentation is computed by using the scores of its all segments. Besides, we adopt neural networks to score every segment of the input sentence.\nThe purpose of using DP is to solve the intractability of extracting the maximum scoring segmentation candidate by brute-force search. The time complexity of LUA is quadratic time, yet it can be optimized to linear time in practice by performing parallel matrix computations. For training criterion, we incur a hinge loss between the ground truth and the predictions. We also extend LUA to unlabeled segmentation and capturing label correlations.\nFigure 1 illustrates the comparison between previous methods and the proposed LUA. Prior models at token level and span level are vulnerable to generating invalid predictions, and hence rely on heuristic rules to fix them. 
For example, in the middle part of Figure 1, the spans of two inferred named entities, [Word Cup]MISC and [Cup]MISC, conflict, which is mitigated by comparing the predicted scores. LUA scores all possible segmentation candidates and uses DP to extract the maximum scoring one. In this way, our models guarantee the predictions to be valid. Moreover, the globality of DP addresses the label bias problem.
Extensive experiments are conducted on syntactic chunking, NER, slot filling, Chinese word segmentation, and Chinese part-of-speech (POS) tagging across 15 datasets. We have obtained new state-of-the-art results on 13 of them and performed competitively on the others. In particular, we observe that LUA is expert at identifying long-length segments." }, { "heading": "2 METHODOLOGY", "text": "We denote an input sequence (i.e., fine-grained lexical units) as x = [x_1, x_2, · · · , x_n], where n is the sequence length. An output sequence (i.e., coarse-grained lexical units) is represented as the segmentation y = [y_1, y_2, · · · , y_m], with each segment y_k being a triple (i_k, j_k, t_k). m denotes its length. (i_k, j_k) specifies a span that corresponds to the phrase x_{i_k,j_k} = [x_{i_k}, x_{i_k+1}, · · · , x_{j_k}]. t_k is a label from the label space L. We define a segmentation candidate to be valid if its segments are non-overlapping and fully cover the input sequence.
A case extracted from the CoNLL-2003 dataset (Sang & De Meulder, 2003):
x = [[SOS], Sangthai, Glory, 22/11/96, 3000, Singapore]
y = [(1, 1, O), (2, 3, MISC), (4, 4, O), (5, 5, O), (6, 6, LOC)].
The start-of-sentence symbol [SOS] is added in the pre-processing stage." }, { "heading": "2.1 MODEL: SCORING SEGMENTATION CANDIDATES", "text": "We denote Y as the universal set that contains all valid segmentation candidates. Given one of its members y ∈ Y, we compute the score f(y) as
f(y) = Σ_{(i,j,t)∈y} ( s^c_{i,j} + s^l_{i,j,t} ), (1)
where s^c_{i,j} is the composition score that estimates the feasibility of merging several fine-grained units [x_i, x_{i+1}, · · · , x_j] into a coarse-grained unit, and s^l_{i,j,t} is the label score that measures how likely the label of this segment is t. Both scores are obtained by a scoring model.
Scoring Model. A scoring model scores all possible segments (i, j, t) for an input sentence x. Firstly, we get the representation for each fine-grained unit. Following prior works (Li et al., 2019; Luo et al., 2020; Yu et al., 2020), we adopt BERT (Devlin et al., 2018), a powerful pre-trained language model, as the sentence encoder. Specifically, we have
[h^w_1, h^w_2, · · · , h^w_n] = BERT(x). (2)
Then, we compute the representation for a coarse-grained unit x_{i,j}, 1 ≤ i ≤ j ≤ n, as
h^p_{i,j} = h^w_i ⊕ h^w_j ⊕ (h^w_i − h^w_j) ⊕ (h^w_i ⊙ h^w_j), (3)
where ⊕ is vector concatenation and ⊙ is the element-wise product. Eventually, we employ two non-linear feedforward networks to score a segment (i, j, t):
s^c_{i,j} = (v^c)^T tanh(W^c h^p_{i,j}), s^l_{i,j,t} = (v^l_t)^T tanh(W^l h^p_{i,j}), (4)
where v^c, W^c, v^l_t, t ∈ L, and W^l are all learnable parameters. Besides, the scoring model used here can be flexibly replaced by any regression method.
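To make Equations 2-4 concrete, here is a condensed PyTorch-style sketch (ours; the encoder wrapper, projection size, and label count are illustrative assumptions, not the authors' exact configuration):

```python
# Our condensed sketch of the segment scorer in Eqs. 2-4.
import torch
import torch.nn as nn

class SegmentScorer(nn.Module):
    def __init__(self, encoder, hidden=768, num_labels=5, proj=256):
        super().__init__()
        self.encoder = encoder                   # e.g., a BERT-base encoder
        self.Wc = nn.Linear(4 * hidden, proj)    # W^c
        self.vc = nn.Linear(proj, 1, bias=False)           # v^c
        self.Wl = nn.Linear(4 * hidden, proj)    # W^l
        self.vl = nn.Linear(proj, num_labels, bias=False)  # one v^l_t per label t

    def forward(self, x):
        h = self.encoder(x)                      # (n, hidden), Eq. 2
        n = h.size(0)
        hi = h.unsqueeze(1).expand(n, n, -1)     # h_i broadcast over j
        hj = h.unsqueeze(0).expand(n, n, -1)     # h_j broadcast over i
        span = torch.cat([hi, hj, hi - hj, hi * hj], dim=-1)   # h^p_{i,j}, Eq. 3
        s_c = self.vc(torch.tanh(self.Wc(span))).squeeze(-1)   # (n, n), Eq. 4
        s_l = self.vl(torch.tanh(self.Wl(span)))               # (n, n, |L|), Eq. 4
        return s_c, s_l
```

Only the upper-triangular entries with i ≤ j correspond to real spans; the decoder sketched in the next subsection never reads the others.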
" }, { "heading": "2.2 INFERENCE VIA DYNAMIC PROGRAMMING", "text": "The prediction of the maximum scoring segmentation candidate can be formulated as
ŷ = argmax_{y∈Y} f(y). (5)
Because the size of the search space |Y| increases exponentially with respect to the sequence length n, brute-force search to solve Equation 5 is computationally infeasible. LUA uses DP to address this issue, which is facilitated by the decomposable nature of Equation 1.
DP is a well-known optimization method which solves a complicated problem by breaking it down into simpler sub-problems in a recursive manner. The relation between the value of the larger problem and the values of its sub-problems is called the Bellman equation.
Sub-problem. In the context of LUA, the sub-problems of segmenting an input unit sequence x are segmenting its prefixes x_{1,i}, 1 ≤ i ≤ n. We define g_i as the maximum segmentation score of the prefix x_{1,i}. Under this scheme, we have max_{y∈Y} f(y) = g_n.
The Bellman Equation. The relationship between segmenting a sequence x_{1,i}, i > 1, and segmenting its prefixes x_{1,i−j}, 1 ≤ j ≤ i − 1, is built by the last segments (i − j + 1, i, t):
g_i = max_{1≤j≤i−1} ( g_{i−j} + (s^c_{i−j+1,i} + max_{t∈L} s^l_{i−j+1,i,t}) ). (6)
In practice, to reduce the time complexity of the above equation, the last term is computed beforehand as s^L_{i,j} = max_{t∈L} s^l_{i,j,t}, 1 ≤ i ≤ j ≤ n. Hence, Equation 6 is reformulated as
g_i = max_{1≤j≤i−1} ( g_{i−j} + (s^c_{i−j+1,i} + s^L_{i−j+1,i}) ). (7)
The base case is the first token x_{1,1} = [[SOS]]. We get its score g_1 as s^c_{1,1} + s^L_{1,1}.
Algorithm 1: Inference via Dynamic Programming (DP)
Input: Composition score s^c_{i,j} and label score s^l_{i,j,t} for every possible segment (i, j, t).
Output: The maximum scoring segmentation candidate ŷ and its score f(ŷ).
1 Set two n × n matrices, s^L and b^c, for computing the maximum scoring labels.
2 Set two n-length vectors, g and b^g, for computing the maximum scoring segmentation.
3 for 1 ≤ i ≤ j ≤ n do
4 Compute the maximum label score for each span (i, j): s^L_{i,j} = max_{t∈L} s^l_{i,j,t}.
5 Record the backtracking index: b^c_{i,j} = argmax_{t∈L} s^l_{i,j,t}.
6 Initialize the value of the base case x_{1,1}: g_1 = s^c_{1,1} + s^L_{1,1}.
7 for i ∈ [2, 3, · · · , n] do
8 Compute the value of the prefix x_{1,i}: g_i = max_{1≤j≤i−1} ( g_{i−j} + (s^c_{i−j+1,i} + s^L_{i−j+1,i}) ).
9 Record the backtracking index: b^g_i = argmax_{1≤j≤i−1} ( g_{i−j} + (s^c_{i−j+1,i} + s^L_{i−j+1,i}) ).
10 Get the maximum scoring candidate ŷ by backtracing the tables b^g and b^c.
11 Get the maximum segmentation score: f(ŷ) = g_n.
Algorithm 1 shows how DP is applied in inference. Firstly, we set two matrices and two vectors to store the solutions to the sub-problems (1st to 2nd lines). Secondly, we get the maximum label scores for all the spans (3rd to 5th lines). Then, we initialize the trivial case g_1 and recursively calculate the values for the prefixes x_{1,i}, i > 1 (6th to 9th lines). Finally, we get the predicted segmentation ŷ and its score f(ŷ) (10th to 11th lines).
The time complexity of Algorithm 1 is O(n^2). By performing the max operation of Equation 7 in parallel on GPU, it can be optimized to only O(n), which is highly efficient. Besides, DP, as the backbone of the proposed model, is non-parametric. The trainable parameters only exist in the scoring model part. These show that LUA is a very light-weight algorithm.
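For concreteness, here is a direct sequential Python rendering of Algorithm 1 (our sketch, using 0-based inclusive span indices); dropping the label table yields the unlabeled variant of Section 3.

```python
# Our sequential, O(n^2) rendering of Algorithm 1: exact DP over all valid
# segmentations. s_c is an (n, n) array of composition scores and s_l an
# (n, n, |L|) array of label scores, with 0-based inclusive span indices.
import numpy as np

def lua_decode(s_c, s_l):
    n = s_l.shape[0]
    s_L = s_l.max(axis=-1)               # lines 3-5: best label score per span
    b_c = s_l.argmax(axis=-1)            # backtracking index for labels
    g = np.full(n, -np.inf)
    b_g = np.zeros(n, dtype=int)
    g[0] = s_c[0, 0] + s_L[0, 0]         # line 6: base case, the [SOS] token
    for i in range(1, n):                # lines 7-9: Bellman recursion, Eq. 7
        scores = [g[k - 1] + s_c[k, i] + s_L[k, i] for k in range(1, i + 1)]
        best = int(np.argmax(scores))
        g[i], b_g[i] = scores[best], best + 1   # b_g[i] = start of last segment
    segments, i = [], n - 1              # lines 10-11: back-trace the tables
    while i >= 0:
        k = b_g[i] if i > 0 else 0
        segments.append((k, i, int(b_c[k, i])))
        i = k - 1
    return segments[::-1], float(g[n - 1])
```

On GPU, the list comprehension inside the loop collapses to a single vectorized max, which gives the O(n) practical behavior described above.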
{ "heading": "3 EXTENSIONS OF LUA", "text": "We propose two extensions of LUA that generalize it to different scenarios.\nUnlabeled Segmentation. In some tasks (e.g., Chinese word segmentation), the segments are unlabeled. Under this scheme, Equations 1 and 7 are reformulated as\n$$f(y) = \sum_{(i,j) \in y} s^c_{i,j}, \qquad g_i = \max_{1 \le j \le i-1} \left( g_{i-j} + s^c_{i-j+1,i} \right). \quad (9)$$\nCapturing Label Correlations. In some tasks (e.g., syntactic chunking), the labels of segments are strongly correlated. To incorporate this information, we redefine $f(y)$ as\n$$f(y) = \sum_{1 \le k \le m} \left( s^c_{i_k,j_k} + s^l_{i_k,j_k,t_k} \right) + \sum_{1 \le k \le m} s^d_{t_{k-q+1}, t_{k-q+2}, \cdots, t_k}. \quad (10)$$\nThe score $s^d_{t_{k-q+1}, t_{k-q+2}, \cdots, t_k}$ models the label dependencies among $q$ successive segments, $y_{k-q+1}, \cdots, y_k$. In practice, we find that $q = 2$ balances efficiency and effectiveness well, and thus parameterize a learnable matrix $\mathbf{W}^d \in \mathbb{R}^{|L| \times |L|}$ to implement it. The Bellman equation corresponding to the above scoring function is\n$$g_{i,t} = \max_{1 \le j \le i-1} \left( \max_{t' \in L} \left( g_{i-j,t'} + s^d_{t',t} \right) + \left( s^c_{i-j+1,i} + s^l_{i-j+1,i,t} \right) \right), \quad (11)$$\nwhere $g_{i,t}$ is the maximum score of labeling the last segment of the prefix $x_{1,i}$ with $t$. For initialization, we set the value of $g_{1,\mathrm{O}}$ to 0 and the others to $-\infty$. By performing the inner loops of the two max operations in parallel, the practical time complexity for computing $g_{i,t}$, $1 \le i \le n$, $t \in L$, is also $O(n)$; a sketch of this label-aware recursion is given at the end of this section. Ultimately, the segmentation score $f(\hat{y})$ is obtained by $\max_{t \in L} g_{n,t}$. This extension further improves the results on syntactic chunking and Chinese POS tagging, as both tasks have rich sequential features among the labels of segments." },
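A minimal NumPy sketch of the recursion in Equation 11 with $q = 2$, extending the earlier decoder. The argument `o_idx`, the index of the O label used for the [SOS] base case, and all other names are illustrative assumptions.

```python
import numpy as np

def lua_score_label_corr(s_c, s_l, s_d, o_idx):
    """Best labeled-segmentation score under Equation 11 (q = 2).

    s_c[i, j]   : composition score of span (i, j), 0-indexed inclusive.
    s_l[i, j, t]: label score of span (i, j) under label t.
    s_d[t_, t]  : transition score W^d between adjacent segment labels.
    o_idx       : index of the 'O' label, giving the base case g_{1,O} = 0.
    """
    n, _, num_labels = s_l.shape
    g = np.full((n, num_labels), -np.inf)
    g[0, o_idx] = 0.0
    for i in range(1, n):
        for l in range(1, i + 1):          # length of the candidate last segment
            start, prev = i - l + 1, i - l
            # inner max over the previous segment's label t'
            best_prev = (g[prev][:, None] + s_d).max(axis=0)
            g[i] = np.maximum(g[i], best_prev + s_c[start, i] + s_l[start, i])
    return g[n - 1].max()                  # f(y_hat) = max_t g_{n,t}
```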
}, { "heading": "4.3 CHINESE POS TAGGING", "text": "Chinese POS tagging jointly segments a Chinese character sequence and assigns a POS tag to each segmented unit. We use Chinese Treebank 5.0 (CTB5), CTB6, Chinese Treebank 9.0 (CTB9) (Xue\net al., 2005), and the Chinese section of Universal Dependencies 1.4 (UD1) (Nivre et al., 2016). CTB5 is comprised of newswire data. CTB9 consists of source texts in various genres, which cover CTB5. we convert the texts in UD1 from traditional Chinese into simplified Chinese. We follow the same train/dev/test split for above datasets as in Shao et al. (2017).\nTable 2 shows the experiment results. The performances of all baselines are reported from Meng et al. (2019). Our model LUA w/ Label Correlations has yielded new state-of-the-art results on all the datasets: it improves the F1 scores by 1.35% on CTB5, 1.22% on CTB6, 0.8% on CTB9, and 0.94% on UD1. Moreover, the basic LUA without capturing the label correlations also outperforms the strongest baseline, Glyce + BERT, by 0.18% on CTB5 and 0.07% on CTB9. All these facts further verify the effectiveness of LUA and its extension." }, { "heading": "4.4 SYNTACTIC CHUNKING AND NER", "text": "Syntactic chunking aims to find phrases related to syntatic category for a sentence. We use CoNLL2000 dataset (Sang & Buchholz, 2000), which defines 11 syntactic chunk types (NP, VP, PP, etc.) and follow the standard splittings of training and test datasets as previous work. NER locates the named entities mentioned in unstructured text and meanwhile classifies them into predefined categories. We use CoNLL-2003 dataset (Sang & De Meulder, 2003) and OntoNotes 5.0 dataset (Pradhan et al., 2013). CoNLL-2003 dataset consists of 22137 sentences totally and is split into 14987, 3466, and 3684 sentences for the training set, development set, and test set, respectively. It is tagged with four linguistic entity types (PER, LOC, ORG, MISC). OntoNotes 5.0 dataset contains 76714 sentences from a wide variety of sources (e.g., magazine and newswire). It includes 18 types of named entity, which consists of 11 types (Person, Organization, etc.) and 7 values (Date, Percent, etc.). We follow the same format and partition as in Li et al. (2019); Luo et al. (2020); Yu et al. (2020). In order to fairly compare with previous reported results, we convert the predicted segments into IOB format and utilize conlleval script1 to compute the F1 score at test time.\nTable 3 shows the results. Most of baselines are directly taken from Akbik et al. (2018); Li et al. (2019); Luo et al. (2020); Yu et al. (2020). Besides, following Luo et al. (2020), we rerun the source code2 of GCDT and report its result on CoNLL-2000 with standard evaluation method. Generally, our proposed models LUA w/o Label Correlations yield competitive performance over state-of-theart models on both Chunking and NER tasks. Specifically, regarding to the NER task, on CoNLL2003 dataset our model LUA outperforms several strong baselines including Flair Embedding, and it is comparable to the state-of-the-art model (i.e., BERT-Biaffine Model). In particular, on OntoNotes dataset, LUA outperforms it by 0.79% points and establishes a new state-of-the-art result. Regarding to the Chunking task, LUA advances the best model (GCDT) and the improvements are further enlarged to 0.42% points by LUA w/ Label Correlations." }, { "heading": "4.5 SLOT FILLING", "text": "Slot filling, as an important task in spoken language understanding (SLU), extracts semantic constituents from an utterance. 
We use the ATIS dataset (Hemphill et al., 1990), the SNIPS dataset (Coucke et al., 2018), and the MTOD dataset (Schuster et al., 2018). The ATIS dataset consists of audio recordings of people making flight reservations. The training set contains 4,478 utterances and the test set contains 893 utterances. The SNIPS dataset is collected by the Snips personal voice assistant. The training set contains 13,084 utterances and the test set contains 700 utterances. The MTOD dataset has three domains, including Alarm, Reminder, and Weather. We use the English part of the MTOD dataset, where the training, development, and test sets respectively contain 30,521, 4,181, and 8,621 utterances. We follow the same partition of the above datasets as in Goo et al. (2018); Schuster et al. (2018).\n1 https://www.clips.uantwerpen.be/conll2000/chunking/conlleval.txt. 2 https://github.com/Adaxry/GCDT.\nTable 4 summarizes the experiment results for slot filling. On ATIS and SNIPS, we take the results of all baselines as reported in Liu et al. (2019c) for comparison. On MTOD, we rerun the open-source toolkits Slot-gated SLU3 and Joint BERT4. As all previous approaches jointly model slot filling and intent detection (a classification task in SLU), we follow them and augment LUA with intent detection for a fair comparison. As shown in Table 4, the augmented LUA surpasses all baselines and obtains state-of-the-art results on the three datasets: it increases the F1 scores by around 0.05% on ATIS and SNIPS, and delivers a substantial gain of 1.11% on MTOD. It is worth mentioning that LUA even outperforms the strong baseline Joint BERT by margins of 0.18% and 0.21% on ATIS and SNIPS without modeling intent detection." }, { "heading": "4.6 LONG-LENGTH SEGMENT IDENTIFICATION", "text": "Since LUA does not resort to the IOB tagging scheme, it should be more accurate in recognizing long-length segments than prior methods. To verify this intuition, we evaluate different models on segments of different lengths. This study is conducted on the OntoNotes 5.0 dataset. Two strong models are adopted as the baselines: one is the best sequence labeling model (i.e., HCR) and the other is the best span-based model (i.e., the BERT-Biaffine Model). Both baselines are reproduced by rerunning their open-source code, biaffine-ner5 and Hire-NER6.\nThe results are shown in Table 5. On the one hand, both LUA and the Biaffine Model obtain much higher scores in extracting long-length entities than HCR. For example, LUA outperforms HCR w/ BERT by almost twofold on the range 12-24. On the other hand, LUA achieves even better results than the BERT-Biaffine Model. For instance, the F1 score improvements of LUA over it are 10.11% on the range 8-11 and 41.23% on the range 12-24." }, { "heading": "4.7 RUNNING TIME ANALYSIS", "text": "Table 6 shows the running time comparison among different models. The middle two columns are the time complexities of decoding a label sequence. The last column is the time cost of one epoch in training. We set the batch size to 16 and run all the models on a single GPU.\n3 https://github.com/MiuLab/SlotGated-SLU. 4 https://github.com/monologg/JointBERT. 5 https://github.com/juntaoy/biaffine-ner. 6 https://github.com/cslydia/Hire-NER.\nThe results indicate that the performance gains of our models do not come at a serious cost in efficiency. For example, with the same practical time complexity, BERT + CRF is slower than the proposed LUA by 15.01% and than LUA w/ Label Correlations by 5.30%."
}, { "heading": "5 RELATED WORK", "text": "Sequence segmentation aims to partition a fine-grained unit sequence into multiple labeled coarsegrained units. Traditionally, there are two types of methods. The most common is to cast it into a sequence labeling task (Mesnil et al., 2014; Ma & Hovy, 2016; Chen et al., 2019a) by using IOB tagging scheme. This method is simple and effective, providing a number of state-of-the-art results. Akbik et al. (2018) present Flair Embeddings that pretrain character embedding in a large corpus and directly use it, instead of word representation, to encode a sentence. Liu et al. (2019b) introduce GCDT that deepens the state transition path at each position in a sentence, and further assigns each word with global representation. Luo et al. (2020) use hierarchical contextualized representations to incorporate both sentence-level and document-level information. Nevertheless, these models are vulnerable to producing invalid labels and perform poorly in identifying longlength segments. This problem is very severe in low-resource setting. Ye & Ling (2018); Liu et al. (2019a) adopt Semi-Markov CRF (Sarawagi & Cohen, 2005) that improves CRF at phrase level. However, the computation of CRF loss is costly in practice and the potential to model the label dependencies among segments is limited. An alternative approach that is less studied uses a transition-based system to incrementally segment and label an input sequence (Zhang et al., 2016; Lample et al., 2016). For instance, Qian et al. (2015) present a transition-based model for joint word segmentation, POS tagging, and text normalization. Wang et al. (2017) employ a transitionbased model to disfluency detection task, which helps capture non-local chunk-level features. These models have many advantages like theoretically lower time complexity and labeling the extracted mentions at span level. However, to our best knowledge, no recent transition-based models surpass their sequence labeling based counterparts.\nMore recently, there is a surge of interests in span-based models. They treat a segment, instead of a fine-grained token, as the basic unit for labeling. For example, Li et al. (2019) regard NER as a MRC task, where entities are recognized as retrieving answer spans. Since these methods are locally normalized at span level rather than sequence level, they potentially suffer from the label bias problem. Additionally, they rely on rules to ensure the extracted span set to be valid. Spanbased methods also emerge in other fields of NLP. In dependency parsing, Wang & Chang (2016) propose a LSTM-based sentence segment embedding method named LSTM-Minus. Stern et al. (2017) integrate LSTM-minus feature into constituent parsing models. In coreference resolution, Lee et al. (2018) consider all spans in a document as the potential mentions and learn distributions over all the possible antecedents for each other." }, { "heading": "6 CONCLUSION", "text": "This work proposes a novel LUA for general sequence segmentation tasks. LUA directly scores all the valid segmentation candidates and uses dynamic programming to extract the maximum scoring one. Compared with previous models, LUA naturally guarantees the predicted segmentation to be valid and circumvents the label bias problem. Extensive studies are conducted on 5 tasks across 15 datasets. We have achieved the state-of-the-art performances on 13 of them. Importantly, the F1 score of identifying long-length segments is significantly improved." } ]
2020
null
SP:258fe1091f7ce89bff79cd8377ee5faad84e9315
[ "This work proposes a new learner bridging the gap between connectionists and classicists in the task of Raven’s Progressive Matrices (RPM). It relies on a CNN to extract visual features and then uses an algebraic abstract reasoning module to infer the operators of an RPM instance, which allows applying the inferred operator on the RPM instance to predict potential solutions according to various attributes. The most likely solution according to the ensemble of the attributes is then selected as an answer." ]
Is intelligence realized by connectionist or classicist approaches? While connectionist approaches have achieved superhuman performance, there has been growing evidence that such task-specific superiority is particularly fragile in systematic generalization. This observation lies at the center of the debate (Fodor et al., 1988; Fodor & McLaughlin, 1990) between connectionists and classicists, wherein the latter continually advocate an algebraic treatment in cognitive architectures. In this work, we follow the classicists' call and propose a hybrid approach to improve systematic generalization in reasoning. Specifically, we showcase a prototype with algebraic representations for the abstract spatial-temporal reasoning task of Raven's Progressive Matrices (RPM) and present the ALgebra-Aware Neuro-Semi-Symbolic (ALANS) learner. The ALANS learner is motivated by abstract algebra and representation theory. It consists of a neural visual perception frontend and an algebraic abstract reasoning backend: the frontend summarizes the visual information from object-based representations, while the backend transforms it into an algebraic structure and induces the hidden operator on the fly. The induced operator is later executed to predict the answer's representation, and the choice most similar to the prediction is selected as the solution. Extensive experiments show that by incorporating an algebraic treatment, the ALANS learner outperforms various pure connectionist models in domains requiring systematic generalization. We further show that the learned algebraic representation can be decoded by isomorphism and used to generate an answer.
[]
[ { "authors": [ "Dzmitry Bahdanau", "Shikhar Murty", "Michael Noukhovitch", "Thien Huu Nguyen", "Harm de Vries", "Aaron Courville" ], "title": "Systematic generalization: What is required and can it be learned", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Jonathan F Bard" ], "title": "Practical bilevel optimization: algorithms and applications, volume 30", "venue": "Springer Science & Business Media,", "year": 2013 }, { "authors": [ "Patricia A Carpenter", "Marcel A Just", "Peter Shell" ], "title": "What one intelligence test measures: a theoretical account of the processing in the raven progressive matrices test", "venue": "Psychological Review,", "year": 1990 }, { "authors": [ "François Chollet" ], "title": "The measure of intelligence", "venue": "arXiv preprint arXiv:1911.01547,", "year": 2019 }, { "authors": [ "William W Cohen" ], "title": "Tensorlog: A differentiable deductive database", "venue": "arXiv preprint arXiv:1605.06523,", "year": 2016 }, { "authors": [ "Benoı̂t Colson", "Patrice Marcotte", "Gilles Savard" ], "title": "An overview of bilevel optimization", "venue": "Annals of operations research,", "year": 2007 }, { "authors": [ "Liya Ding" ], "title": "Neural prolog-the concepts, construction and mechanism", "venue": "IEEE International Conference on Systems, Man and Cybernetics. Intelligent Systems for the 21st Century,", "year": 1995 }, { "authors": [ "Richard Evans", "Edward Grefenstette" ], "title": "Learning explanatory rules from noisy data", "venue": "Journal of Artificial Intelligence Research,", "year": 2018 }, { "authors": [ "Jerry Fodor", "Brian P McLaughlin" ], "title": "Connectionism and the problem of systematicity: Why smolensky’s solution doesn’t", "venue": "work. 
Cognition,", "year": 1990 }, { "authors": [ "Jerry A Fodor" ], "title": "The language of thought, volume 5", "venue": "Harvard university press,", "year": 1975 }, { "authors": [ "Jerry A Fodor", "Zenon W Pylyshyn" ], "title": "Connectionism and cognitive architecture: A critical analysis", "venue": null, "year": 1988 }, { "authors": [ "Manoel VM França", "Gerson Zaverucha", "Artur S d’Avila Garcez" ], "title": "Fast relational learning using bottom clause propositionalization with artificial neural networks", "venue": "Machine learning,", "year": 2014 }, { "authors": [ "Artur S Avila Garcez", "Gerson Zaverucha" ], "title": "The connectionist inductive learning and logic programming system", "venue": "Applied Intelligence,", "year": 1999 }, { "authors": [ "Artur S d’Avila Garcez", "Krysia B Broda", "Dov M Gabbay" ], "title": "Neural-symbolic learning systems: foundations and applications", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Chi Han", "Jiayuan Mao", "Chuang Gan", "Josh Tenenbaum", "Jiajun Wu" ], "title": "Visual concept-metaconcept learning", "venue": "In Proceedings of Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Bernard A Hausmann", "Oystein Ore" ], "title": "Theory of quasi-groups", "venue": "American Journal of Mathematics,", "year": 1937 }, { "authors": [ "Thomas Little Heath" ], "title": "The thirteen books of Euclid’s Elements", "venue": "Courier Corporation,", "year": 1956 }, { "authors": [ "Felix Hill", "Adam Santoro", "David GT Barrett", "Ari S Morcos", "Timothy Lillicrap" ], "title": "Learning to make analogies by contrasting abstract relational structure", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "S Hiolldobler" ], "title": "A structured connectionist unification algorithm", "venue": "In Proceedings of the National Conference of the American Association on Artificial Intelligence,", "year": 1990 }, { "authors": [ "Douglas R Hofstadter" ], "title": "Fluid concepts and creative analogies: Computer models of the fundamental mechanisms of thought", "venue": "Basic books,", "year": 1995 }, { "authors": [ "Sheng Hu", "Yuqing Ma", "Xianglong Liu", "Yanlu Wei", "Shihao Bai" ], "title": "Hierarchical rule induction network for abstract visual reasoning", "venue": "arXiv preprint arXiv:2002.06838,", "year": 2020 }, { "authors": [ "James E Humphreys" ], "title": "Introduction to Lie algebras and representation theory, volume 9", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Susanne M Jaeggi", "Martin Buschkuehl", "John Jonides", "Walter J Perrig" ], "title": "Improving fluid intelligence with training on working memory", "venue": "Proceedings of the National Academy of Sciences,", "year": 2008 }, { "authors": [ "Ken Kansky", "Tom Silver", "David A Mély", "Mohamed Eldawy", "Miguel Lázaro-Gredilla", "Xinghua Lou", "Nimrod Dorfman", "Szymon Sidor", "Scott Phoenix", "Dileep George" ], "title": "Schema networks: Zeroshot transfer with a generative causal model of intuitive physics", "venue": "In Proceedings of International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint 
arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Ekaterina Komendantskaya" ], "title": "Unification neural networks: unification by error-correction learning", "venue": "Logic Journal of the IGPL,", "year": 2011 }, { "authors": [ "Brenden Lake", "Marco Baroni" ], "title": "Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks", "venue": "In Proceedings of International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Peter Lancaster" ], "title": "Explicit solutions of linear matrix equations", "venue": "SIAM review,", "year": 1970 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Daniel R Little", "Stephan Lewandowsky", "Thomas L Griffiths" ], "title": "A bayesian model of rule induction in raven’s progressive matrices", "venue": "In Proceedings of the Annual Meeting of the Cognitive Science Society (CogSci),", "year": 2012 }, { "authors": [ "Andrew Lovett", "Kenneth Forbus" ], "title": "Modeling visual problem solving as analogical reasoning", "venue": "Psychological Review,", "year": 2017 }, { "authors": [ "Andrew Lovett", "Emmett Tomai", "Kenneth Forbus", "Jeffrey Usher" ], "title": "Solving geometric analogy problems through two-stage analogical mapping", "venue": "Cognitive Science,", "year": 2009 }, { "authors": [ "Andrew Lovett", "Kenneth Forbus", "Jeffrey Usher" ], "title": "A structure-mapping model of raven’s progressive matrices", "venue": "In Proceedings of the Annual Meeting of the Cognitive Science Society (CogSci),", "year": 2010 }, { "authors": [ "Penelope Maddy" ], "title": "Believing the axioms", "venue": "i. 
The Journal of Symbolic Logic,", "year": 1988 }, { "authors": [ "Robin Manhaeve", "Sebastijan Dumancic", "Angelika Kimmig", "Thomas Demeester", "Luc De Raedt" ], "title": "Deepproblog: Neural probabilistic logic programming", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Jiayuan Mao", "Chuang Gan", "Pushmeet Kohli", "Joshua B Tenenbaum", "Jiajun Wu" ], "title": "The neurosymbolic concept learner: Interpreting scenes, words, and sentences from natural supervision", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Gary Marcus" ], "title": "The algebraic mind", "venue": null, "year": 2001 }, { "authors": [ "Gary Marcus" ], "title": "The next decade in ai: four steps towards robust artificial intelligence", "venue": "arXiv preprint arXiv:2002.06177,", "year": 2020 }, { "authors": [ "John McCarthy" ], "title": "Programs with common sense", "venue": "RLE and MIT computation center,", "year": 1960 }, { "authors": [ "Keith McGreggor", "Ashok Goel" ], "title": "Confident reasoning on raven’s progressive matrices tests", "venue": "In Proceedings of AAAI Conference on Artificial Intelligence (AAAI),", "year": 2014 }, { "authors": [ "Keith McGreggor", "Maithilee Kunda", "Ashok Goel" ], "title": "Fractals and ravens", "venue": "Artificial Intelligence,", "year": 2014 }, { "authors": [ "Can Serif Mekik", "Ron Sun", "David Yun Dai" ], "title": "Similarity-based reasoning, raven’s matrices, and general intelligence", "venue": "In Proceedings of International Joint Conference on Artificial Intelligence (IJCAI),", "year": 2018 }, { "authors": [ "Allen Newell" ], "title": "Physical symbol systems", "venue": "Cognitive science,", "year": 1980 }, { "authors": [ "James C Raven" ], "title": "Mental tests used in genetic studies: The performance of related individuals on tests mainly educative and mainly reproductive", "venue": null, "year": 1936 }, { "authors": [ "John C Raven", "John Hugh Court" ], "title": "Raven’s progressive matrices and vocabulary scales", "venue": null, "year": 1998 }, { "authors": [ "Tim Rocktäschel", "Sebastian Riedel" ], "title": "End-to-end differentiable proving", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Adam Santoro", "David Raposo", "David G Barrett", "Mateusz Malinowski", "Razvan Pascanu", "Peter Battaglia", "Timothy Lillicrap" ], "title": "A simple neural network module for relational reasoning", "venue": "In Proceedings of Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "Adam Santoro", "Felix Hill", "David Barrett", "Ari Morcos", "Timothy Lillicrap" ], "title": "Measuring abstract reasoning in neural networks", "venue": "In Proceedings of International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Luciano Serafini", "Artur d’Avila Garcez" ], "title": "Logic tensor networks: Deep learning and logical reasoning from data and knowledge", "venue": "arXiv preprint arXiv:1606.04422,", "year": 2016 }, { "authors": [ "Lokendra Shastri" ], "title": "Neurally motivated constraints on the working memory capacity of a production system for parallel processing: Implications of a connectionist model based on temporal synchrony", "venue": "In Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society: July,", "year": 1992 }, { "authors": [ "Jude W Shavlik", "Geoffrey G Towell" ], "title": "An approach to combining explanation-based and 
neural learning algorithms", "venue": "In Applications Of Learning And Planning Methods,", "year": 1991 }, { "authors": [ "Snejana Shegheva", "Ashok Goel" ], "title": "The structural affinity method for solving the raven’s progressive matrices test for intelligence", "venue": "In Proceedings of AAAI Conference on Artificial Intelligence (AAAI),", "year": 2018 }, { "authors": [ "Gustav Sourek", "Vojtech Aschenbrenner", "Filip Zelezny", "Ondrej Kuzelka" ], "title": "Lifted relational neural networks", "venue": "arXiv preprint arXiv:1508.05128,", "year": 2015 }, { "authors": [ "Charles Spearman" ], "title": "The nature of “intelligence” and the principles of cognition", "venue": null, "year": 1923 }, { "authors": [ "Charles Spearman" ], "title": "The abilities of man, volume 6", "venue": "Macmillan New York,", "year": 1927 }, { "authors": [ "Xander Steenbrugge", "Sam Leroux", "Tim Verbelen", "Bart Dhoedt" ], "title": "Improving generalization for abstract reasoning tasks using disentangled feature representations", "venue": "arXiv preprint arXiv:1811.04784,", "year": 2018 }, { "authors": [ "Geoffrey G Towell", "Jude W Shavlik" ], "title": "Knowledge-based artificial neural networks", "venue": "Artificial intelligence,", "year": 1994 }, { "authors": [ "Duo Wang", "Mateja Jamnik", "Pietro Lio" ], "title": "Abstract diagrammatic reasoning with multiplex graph networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Ke Wang", "Zhendong Su" ], "title": "Automatic generation of raven’s progressive matrices", "venue": "In Proceedings of International Joint Conference on Artificial Intelligence (IJCAI),", "year": 2015 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Terry Winograd" ], "title": "Procedures as a representation for data in a computer program for understanding natural language", "venue": "Technical report, MASSACHUSETTS INST OF TECH CAMBRIDGE PROJECT MAC,", "year": 1971 }, { "authors": [ "Ludwig Wittgenstein" ], "title": "Philosophical investigations", "venue": "Philosophische Untersuchungen. 
Macmillan,", "year": 1953 }, { "authors": [ "Jiajun Wu", "Joshua B Tenenbaum", "Pushmeet Kohli" ], "title": "Neural scene de-rendering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Yuhuai Wu", "Honghua Dong", "Roger Grosse", "Jimmy Ba" ], "title": "The scattering compositional learner: Discovering objects, attributes, relationships in analogical reasoning", "venue": "arXiv preprint arXiv:2007.04212,", "year": 2020 }, { "authors": [ "Kexin Yi", "Jiajun Wu", "Chuang Gan", "Antonio Torralba", "Pushmeet Kohli", "Josh Tenenbaum" ], "title": "Neuralsymbolic vqa: Disentangling reasoning from vision and language understanding", "venue": "In Proceedings of Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Kexin Yi", "Chuang Gan", "Yunzhu Li", "Pushmeet Kohli", "Jiajun Wu", "Antonio Torralba", "Joshua Tenenbaum" ], "title": "Clevrer: Collision events for video representation and reasoning", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Chi Zhang", "Feng Gao", "Baoxiong Jia", "Yixin Zhu", "Song-Chun Zhu" ], "title": "Raven: A dataset for relational and analogical visual reasoning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Chi Zhang", "Baoxiong Jia", "Feng Gao", "Yixin Zhu", "Hongjing Lu", "Song-Chun Zhu" ], "title": "Learning perceptual inference by contrasting", "venue": "In Proceedings of Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Kecheng Zheng", "Zheng-Jun Zha", "Wei Wei" ], "title": "Abstract reasoning with distracting features", "venue": "In Proceedings of Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Song-Chun Zhu", "David Mumford" ], "title": "A stochastic grammar of images", "venue": "Foundations and Trends® in Computer Graphics and Vision,", "year": 2007 }, { "authors": [ "Zhang" ], "title": "2019a) and Hu et al. (2020), there are four operators: Constant, Progression, Arithmetic, and Distribute of Three. Progression is parameterized by its step size ( ̆1{2)", "venue": "B INSTANCES OF OPERATORS", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "“Thought is in fact a kind of Algebra.” —William James (James, 1891)\nImagine you are given two alphabetical sequences of “c, b, a” and “d, c, b”, and asked to fill in the missing element in “e, d, ?”. In nearly no time will one realize the answer to be c. However, more surprising for human learning is that, effortlessly and instantaneously, we can “freely generalize” (Marcus, 2001) the solution to any partial consecutive ordered sequences. While believed to be innate in early development for human infants (Marcus et al., 1999), such systematic generalizability has constantly been missing and proven to be particularly challenging in existing connectionist models (Lake & Baroni, 2018; Bahdanau et al., 2019). In fact, such an ability to entertain a given thought and semantically related contents strongly implies an abstract algebra-like treatment (Fodor et al., 1988); in literature, it is referred to as the “language of thought” (Fodor, 1975), “physical symbol system” (Newell, 1980), and “algebraic mind” (Marcus, 2001). However, in stark contrast, existing connectionist models tend only to capture statistical correlation (Lake & Baroni, 2018; Kansky et al., 2017; Chollet, 2019), rather than providing any account for a structural inductive bias where systematic algebra can be carried out to facilitate generalization.\nThis contrast instinctively raises a question—what constitutes such an algebraic inductive bias? We argue that the foundation of the modeling counterpart to the algebraic treatment in early human development (Marcus, 2001; Marcus et al., 1999) lies in algebraic computations set up on mathematical axioms, a form of formalized human intuition and the starting point of modern mathematical reasoning (Heath et al., 1956; Maddy, 1988). Of particular importance to the basic building blocks of algebra is the Peano Axiom (Peano, 1889). In the Peano Axiom, the essential components of algebra, the algebraic set and corresponding operators over it, are governed by three statements: (1) the existence of at least one element in the field to study (“zero” element), (2) a successor function that is recursively applied to all elements and can, therefore, span the entire field, and (3) the principle of\nmathematical induction. Building on such a fundamental axiom, we begin to form the notion of an algebraic set and induce the operator along with it to construct an algebraic structure. We hypothesize that such a treatment of algebraic computations set up on fundamental axioms is essential for a model’s systematic generalizability, the lack of which will only make it sub-optimal.\nTo demonstrate the benefits of such an algebraic treatment in systematic generalization, we showcase a prototype for Raven’s Progressive Matrices (RPM) (Raven, 1936; Raven & Court, 1998), an exemplar task for abstract spatial-temporal reasoning (Santoro et al., 2018; Zhang et al., 2019a). In this task, an agent is given an incomplete 3ˆ3 matrix consisting of eight context panels with the last one missing, and asked to pick one answer from a set of eight choices that best completes the matrix. Human’s reasoning capability of solving this abstract reasoning task has been commonly regarded as an indicator of “general intelligence” (Carpenter et al., 1990) and “fluid intelligence” (Spearman, 1923; 1927; Hofstadter, 1995; Jaeggi et al., 2008). 
In spite of the task being one that ideally requires abstraction, algebraization, induction, and generalization (Raven, 1936; Raven & Court, 1998; Carpenter et al., 1990), recent endeavors unanimously propose pure connectionist models that attempt to circumvent such intrinsic cognitive requirements (Santoro et al., 2018; Zhang et al., 2019a;b; Wang et al., 2020; Zheng et al., 2019; Hu et al., 2020; Wu et al., 2020). However, these methods’ inefficiency is also evident in systematic generalization; they struggle to extrapolate to domains beyond training, as pointed out in Santoro et al. (2018) and Zhang et al. (2019b) and shown later in this paper.\nTo address the issue, we introduce the ALgebra-Aware Neuro-Semi-Symbolic (ALANS2) learner. At a high level, the ALANS2 learner is embedded in a general neuro-symbolic architecture (Yi et al., 2018; Mao et al., 2019; Han et al., 2019; Yi et al., 2020) but has on-the-fly operator learnability and is hence semi-symbolic. Specifically, it consists of a neural visual perception frontend and an algebraic abstract reasoning backend. For each RPM instance, the neural visual perception frontend first slides a window over each panel to obtain the object-based representations (Kansky et al., 2017; Wu et al., 2017) for every object. A belief inference engine later aggregates all object-based representations in each panel to produce the probabilistic belief state. The algebraic abstract reasoning backend then takes the belief states of the eight context panels, treats them as snapshots on an algebraic structure, lifts them into a matrix-based algebraic representation built on the Peano Axiom and the representation theory (Humphreys, 2012), and induces the hidden operator in the algebraic structure by solving an inner optimization (Colson et al., 2007; Bard, 2013). The algebraic representation for the answer is predicted by executing the induced operator: its corresponding set element is decoded by the isomorphism established in the representation theory, and the final answer is selected as the one most similar to the prediction.\nThe ALANS2 learner enjoys several benefits in abstract reasoning with an algebraic treatment:\n1. Unlike previous monolithic models, the ALANS2 learner offers a more interpretable account of the entire abstract reasoning process: the neural visual perception frontend extracts object-based representations and produces belief states of panels by explicit probability inference, whereas the algebraic abstract reasoning backend induces the hidden operator in the algebraic structure. The corresponding representation for the final answer is obtained by executing the induced operator, and the choice panel with minimum distance is selected. This process much resembles the top-down bottom-up strategy in human reasoning: humans reason by inducing the hidden relation, executing it to generate a feasible solution in mind, and choosing the most similar answer available (Carpenter et al., 1990). Such a strategy is missing in recent literature (Santoro et al., 2018; Zhang et al., 2019a;b; Wang et al., 2020; Zheng et al., 2019; Hu et al., 2020; Wu et al., 2020). 2. While keeping the semantic interpretability and end-to-end trainability of existing neuro-symbolic frameworks (Yi et al., 2018; Mao et al., 2019; Han et al., 2019; Yi et al., 2020), ALANS2 is what we call semi-symbolic in the sense that the symbolic operator can be learned and concluded on-the-fly without manual definition for every one of them.
Such an inductive ability also enables a greater extent of the desired generalizability. 3. By decoding the predicted representation in the algebraic structure, we can also generate an answer that satisfies the hidden relation in the context.\nThis work makes three major contributions: (1) We propose the ALANS2 learner. Compared to existing monolithic models, the ALANS2 learner adopts a neuro-semi-symbolic design, where the problem-solving process is decomposed into neural visual perception and algebraic abstract reasoning. (2) To demonstrate the efficacy of incorporating an algebraic treatment in abstract spatial-temporal reasoning, we show the superior systematic generalization ability of the proposed ALANS2 learner in various extrapolatory RPM domains. (3) We present analyses into both neural visual perception and algebraic abstract reasoning. We also show the generative potential of ALANS2." }, { "heading": "2 RELATED WORK", "text": "Quest for Symbolized Manipulation The idea to treat thinking as a mental language can be dated back to Augustine (Augustine, 1876; Wittgenstein, 1953). Since the 1970s, this school of thought has undergone a dramatic revival as the quest for a symbolized manipulation in cognitive modeling, such as the “language of thought” (Fodor, 1975), “physical symbol system” (Newell, 1980), and “algebraic mind” (Marcus, 2001). In this line of study, the connectionists’ task-specific superiority and inability to generalize beyond training (Kansky et al., 2017; Chollet, 2019; Santoro et al., 2018; Zhang et al., 2019a) have been hypothetically linked to a lack of such symbolized algebraic manipulation (Lake & Baroni, 2018; Chollet, 2019; Marcus, 2020). With evidence that an algebraic treatment adopted in early human development (Marcus et al., 1999) can potentially address the issue (Bahdanau et al., 2019; Mao et al., 2019; Marcus, 2020), classicist (Fodor et al., 1988) approaches for generalizable reasoning used in programs (McCarthy, 1960) and blocks world (Winograd, 1971) have been resurrected. As a hybrid approach to bridge connectionist and classicist, recent developments lead to neuro-symbolic architectures. In particular, Yi et al. (2018) demonstrate a neuro-symbolic prototype for visual question answering, where a perception module and a language parsing module are separately trained, and the predefined logic operators associated with language tokens are chained to process the visual information. Mao et al. (2019) soften the predefined operators to afford end-to-end training with only question answers. Han et al. (2019) and Yi et al. (2020) use the hybrid architecture for metaconcept learning and temporal causal learning, respectively. ALANS2 follows the classicist’s call but adopts a neuro-semi-symbolic architecture: it is end-to-end trainable, as opposed to Yi et al. (2018; 2020), and the operator can be learned and concluded on-the-fly without manual specification (Yi et al., 2018; Mao et al., 2019; Han et al., 2019; Yi et al., 2020).\nAbstract Visual Reasoning Recent works by Santoro et al. (2018) and Zhang et al. (2019a) aroused the community’s interest in abstract visual reasoning, where the task of Raven’s Progressive Matrices (RPM) is introduced as such a measure for intelligent agents. Initially proposed as an intelligence quotient test for humans (Raven, 1936; Raven & Court, 1998), RPM is believed to be strongly correlated with human’s general intelligence (Carpenter et al., 1990) and fluid intelligence (Spearman, 1923; 1927; Hofstadter, 1995; Jaeggi et al., 2008).
Early RPM-solving systems employ symbolic representations based on hand-designed features and assume access to the underlying logics (Carpenter et al., 1990; Lovett et al., 2009; 2010; Lovett & Forbus, 2017). Another stream of research on RPM recruits similarity-based metrics to select the most similar answer from the choices (Little et al., 2012; McGreggor & Goel, 2014; McGreggor et al., 2014; Mekik et al., 2018; Shegheva & Goel, 2018). However, their hand-defined visual features are unable to handle uncertainty from imperfect perception, and directly assuming access to the logic operations simplifies the problem. Recently proposed data-driven approaches arise from the availability of large datasets: Santoro et al. (2018) extend a pedagogical RPM generation method (Wang & Su, 2015), whereas Zhang et al. (2019a) use a stochastic image grammar (Zhu et al., 2007) and introduce structural annotations in it, which Hu et al. (2020) further refine to avoid shortcut solutions by statistics in candidate panels. Despite the fact that RPM intrinsically requires one to perform abstraction, algebraization, induction, and generalization, existing methods bypass such cognitive requirements using a single feedforward pass in connectionist models: Santoro et al. (2018) use a relational module (Santoro et al., 2017), Steenbrugge et al. (2018) augment it with a VAE (Kingma & Welling, 2013), Zhang et al. (2019a) assemble a dynamic tree, Hill et al. (2019) arrange the data in a contrasting manner, Zhang et al. (2019b) propose a contrast module, Zheng et al. (2019) formulate it in a student-teacher setting, Wang et al. (2020) build a multiplex graph network, Hu et al. (2020) aggregate features from a hierarchical decomposition, and Wu et al. (2020) apply a scattering transformation to learn objects, attributes, and relations. In contrast, ALANS2 attempts to fulfill the cognitive requirements in a neuro-semi-symbolic framework: the perception frontend abstracts out visual information, and the reasoning backend induces the hidden operator in an algebraic structure.\n3 THE ALANS2 LEARNER\nIn this section, we introduce the ALANS2 learner for the RPM problem. In each RPM instance, an agent is given an incomplete $3 \times 3$ panel matrix with the last entry missing and asked to induce the operator hidden in the matrix and choose from eight choice panels one that follows it. Formally, let the answer variable be denoted as $y$, the context panels as $\{I_{o,i}\}_{i=1}^{8}$, and choice panels as $\{I_{c,i}\}_{i=1}^{8}$. Then the problem can be formulated as estimating $P(y \mid \{I_{o,i}\}_{i=1}^{8}, \{I_{c,i}\}_{i=1}^{8})$. According to the common design (Santoro et al., 2018; Zhang et al., 2019a; Carpenter et al., 1990), there is one operator that governs each panel attribute. Hence, by assuming independence among attributes, we propose to factorize the probability as\n$$P(y = n \mid \{I_{o,i}\}_{i=1}^{8}, \{I_{c,i}\}_{i=1}^{8}) \propto \prod_{a} \sum_{T^a} P(y^a = n \mid T^a, \{I_{o,i}\}_{i=1}^{8}, \{I_{c,i}\}_{i=1}^{8}) \, P(T^a \mid \{I_{o,i}\}_{i=1}^{8}), \quad (1)$$\nwhere $y^a$ denotes the answer selection based only on attribute $a$ and $T^a$ the operator on $a$.\nOverview As shown in Fig. 1, the ALANS2 learner decomposes the process into perception and reasoning: the neural visual perception frontend extracts the belief states from each of the sixteen panels, whereas the algebraic abstract reasoning backend views an instance as an example in an abstract algebra structure, transforms belief states into algebraic representations by representation theory, induces the hidden operators, and executes the operators to predict the representation of the answer.
Therefore, in Eq. (1), the operator distribution is modeled by the fitness of an operator and the answer distribution by the distance between the predicted representation and that of a candidate." }, { "heading": "3.1 NEURAL VISUAL PERCEPTION", "text": "The neural visual perception frontend consists of an object CNN and a belief inference engine. It is responsible for extracting the belief states for each of the sixteen (context and choice) panels.\nObject CNN For each panel, we use a sliding window to traverse the spatial domain of the image and feed each image region into an object CNN. The CNN has four branches, producing for each region its object attribute distributions, including objectiveness (if the region contains an object), type, size, and color. Distributions of type, size, and color are conditioned on an object's existence.\nBelief Inference Engine The belief inference engine summarizes the panel attribute distributions (over position, number, type, size, and color) by marginalizing out all object attribute distributions (over objectiveness, type, size, and color). As an example, the distribution of the panel attribute of Number can be computed as such: for $N$ image regions and their predicted objectiveness,\n$$P(\mathrm{Number} = k) = \sum_{R^o} \prod_{j=1}^{N} P(r^o_j = R^o_j), \quad (2)$$\nwhere $P(r^o_j)$ denotes the $j$th region's estimated objectiveness distribution, and $R^o$ is a binary sequence of length $N$ that sums to $k$. All panel attribute distributions compose the belief state of a panel. In the following, we denote the belief state as $b$ and the distribution of an attribute $a$ as $P(b^a)$." },
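As a concrete illustration of the marginalization in Eq. (2) (and its Type analogue in Appendix D), here is a minimal NumPy sketch. The Poisson-binomial dynamic program is our own rewriting of the exponential sum over binary sequences, and all names are illustrative choices rather than the authors' code.

```python
import numpy as np

def number_distribution(p_obj: np.ndarray) -> np.ndarray:
    """Distribution of the panel attribute Number, cf. Eq. (2).

    p_obj[j] is the j-th window's probability of containing an object.
    The sum over all length-N binary sequences summing to k is computed
    with a Poisson-binomial DP instead of explicit enumeration.
    Returns P of length N+1 with P[k] = P(Number = k).
    """
    N = len(p_obj)
    P = np.zeros(N + 1)
    P[0] = 1.0
    for j in range(N):
        # adding window j: either it contains an object or it does not
        P[1:j + 2] = P[1:j + 2] * (1 - p_obj[j]) + P[0:j + 1] * p_obj[j]
        P[0] *= 1 - p_obj[j]
    return P

def type_distribution(p_obj: np.ndarray, p_type: np.ndarray) -> np.ndarray:
    """P(Type = t), cf. Eq. (S22): every present object takes type t.

    p_type[j, t] is window j's type distribution given an object exists.
    Expanding the per-window product recovers the sum over R^o exactly.
    """
    factors = (1 - p_obj)[:, None] + p_obj[:, None] * p_type  # shape (N, T)
    return factors.prod(axis=0)
```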
Specifically, we instantiate the zero element as a learnable matrix M0 and the successor function as the matrix-matrix product parameterized by M . In an attribute-specific manner, the representation of an attribute taking the kth value is pMaqkMa0 . For operators, we consider them to live in a learnable matrix group of a corresponding dimension, such that the action of an operator on a set can be represented as matrix multiplication. Such algebraic representations establish an isomorphism between the matrix space and the abstract algebraic structure: abstract elements on the algebraic structure have a bijective mapping to/from the matrix space, and inducing the abstract relation can be reduced to solving for a matrix operator. See Fig. 2 for a graphical illustration of the isomorphism.\nOperator Induction Operator induction concerns about finding a concrete operator in the abstract algebraic structure. By the property of closure, we formulate it as an inner-level regularized linear regression problem: a binary operator T ab in a magma example for attribute a minimizes\nargmin T\n`ab pT q “ ÿ\ni\nE “ }Mpbao,iqTMpbao,i`1q ´Mpbao,i`2q}2F ‰ ` λab }T }2F , (3)\nwhere under visual uncertainty, we take the expectation with respect to the distributions in the belief states of context panels P pbao,iq in the first two rows, and denote its algebraic representation as Mpbao,iq. For unary operators, one operand can be treated as constant and absorbed into T . Note that Eq. (3) admits a closed-form solution (see Appendix for details). Therefore, the operator can be learned and adapted for different instances of binary relations and concluded on-the-fly. Such a design also simplifies the recent neuro-symbolic approaches, where every single symbol operator needs to be hand-defined (Yi et al., 2018; Mao et al., 2019; Han et al., 2019; Yi et al., 2020). Instead, we only specify an inner-level optimization framework and allow symbolic operators to be quickly induced based on the neural observations, while keeping the semantic interpretability in the neurosymbolic methods. Therefore, we term such a design semi-symbolic.\nThe operator probability in Eq. (1) is then modeled by each operator type’s fitness, e.g., for binary, P pT a “ T ab | tIo,iu8i“1q9 expp´`ab pT ab qq. (4)\nOperator Execution To predict the algebraic representation of the answer, we solve another innerlevel optimization similar to Eq. (3), but now treating the representation of the answer as a variable:\nyMab “ argmin M `ab pMq “ Er}Mpbao,7qT ab Mpbao,8q ´M}2F s, (5) where the expectation is taken with respect to context panels in the last row. The optimization also admits a closed-form solution (see Appendix for details), which corresponds to the execution of the induced operator in Eq. (3).\nThe predicted representation is decoded probabilistically as the predicted belief state of the solution, P p pba “ k | T aq9 expp´}yMa ´ pMaqkMa0 }2F q. (6)\nAnswer Selection Based on Eqs. (1) and (4), estimating the answer distribution is now boiled down to estimating the conditional answer distributions for each attribute. Here, we propose to model it based on the Jensen–Shannon Divergence (JSD) of the predicted belief state and that of a choice,\nP pya “ n | T a, tIo,iu8i“1, tIc,iu8i“1q9 expp´DJSDpP p pba | T aq}P pbac,nqqq. (7) Discussion The algebraic abstract reasoning module offers a computational and interpretable counterpart to human-like reasoning in RPM (Carpenter et al., 1990). 
Discussion The algebraic abstract reasoning module offers a computational and interpretable counterpart to human-like reasoning in RPM (Carpenter et al., 1990). Specifically, the induction component resembles fluid intelligence, where one quickly induces the hidden operator by observing the context panels. The execution component synthesizes an image by executing the induced operator, and the choice most similar to the image is selected as the answer. We also note that by decoding the predicted representation in Eq. (6), a solution can be generated: by sequentially selecting the most probable operator and the most probable attribute value, a rendering engine can directly render the solution. The reasoning backend also enables end-to-end training: by integrating the belief states from neural perception, the module conducts both induction and execution in a soft manner, such that the gradients can be back-propagated and the learner jointly trained." }, { "heading": "3.3 LEARNING OBJECTIVE", "text": "We train the entire ALANS2 learner by minimizing the cross-entropy loss between the estimated answer distribution and the ground-truth selection, i.e.,\n$$\min_{\theta, \{M^a_0\}, \{M^a\}} \ell\left( P(y \mid \{I_{o,i}\}_{i=1}^{8}, \{I_{c,i}\}_{i=1}^{8}), \, y_\star \right), \quad (8)$$\nwhere $\ell(\cdot)$ denotes the cross-entropy loss, $y_\star$ the ground-truth selection, $\theta$ the parameters of the object CNN, and $\{M^a_0\}$ and $\{M^a\}$ the zero elements and the successor functions for element encodings, respectively. Note that notations are simplified by making the dependency on parameters implicit.\nHowever, we notice in practice that with only the cross-entropy loss on the ground-truth selection, the ALANS2 learner experiences difficulty in convergence. Without proper guidance, the object CNN does not produce meaningful object-based representations. Therefore, following the discussion in Santoro et al. (2018); Zhang et al. (2019a); Wang et al. (2020), we augment training with an auxiliary loss on the distribution of the operator, i.e.,\n$$\min_{\theta, \{M^a_0\}, \{M^a\}} \ell\left( P(y \mid \{I_{o,i}\}_{i=1}^{8}, \{I_{c,i}\}_{i=1}^{8}), \, y_\star \right) + \sum_{a} \lambda^a \, \ell\left( P(T^a \mid \{I_{o,i}\}_{i=1}^{8}), \, y^a_\star \right), \quad (9)$$\nwhere $y^a_\star$ denotes the ground-truth operator selection for attribute $a$, and $\lambda^a$ balances the trade-off." },
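A minimal PyTorch-style sketch of the augmented objective in Eq. (9). The tensor shapes and the attribute-keyed dictionaries are illustrative assumptions; the operator scores can be taken as the (unnormalized) negative fitness values of Eq. (4).

```python
import torch.nn.functional as F

def alans_loss(answer_logits, y_star, op_logits, op_star, lambdas):
    """Eq. (9): answer cross-entropy plus per-attribute operator auxiliaries.

    answer_logits: (B, 8) scores over the eight candidate panels.
    y_star       : (B,) ground-truth choice indices.
    op_logits    : dict attr -> (B, num_op_types) operator scores, e.g.,
                   the negative losses of Eq. (4) before normalization.
    op_star      : dict attr -> (B,) ground-truth operator indices.
    lambdas      : dict attr -> trade-off weight lambda^a.
    """
    loss = F.cross_entropy(answer_logits, y_star)
    for a in op_logits:
        loss = loss + lambdas[a] * F.cross_entropy(op_logits[a], op_star[a])
    return loss
```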
Grounding the principles into learning abstract relations in RPM, we fix the configuration to be 3ˆ 3Grid and generate the following data splits for evaluation (see Appendix for details):\n• Systematicity: the training set contains only a subset of instances for each type of relation, while the test set all other relation instances.\n• Productivity: as the binary relation results from a recursive application of the unary relation, the training set contains only unary relations, whereas the test set only binary relations. • Localism: the training and testing sets in the productivity split are swapped to study localism.\nWe follow Zhang et al. (2019a) to generate 10, 000 instances for each split and assign 6 folds for training, 2 folds for validation, and 2 folds for testing.\nExperimental Setup We evaluate the systematic generalizability of the proposed ALANS2 learner on the above three splits, and compare the ALANS2 learner with other baselines, including ResNet, ResNet+DRT (Zhang et al., 2019a), WReN (Santoro et al., 2018), CoPINet (Zhang et al., 2019b), MXGNet (Wang et al., 2020), LEN (Zheng et al., 2019), HriNet (Hu et al., 2020), and SCL (Wu et al., 2020). We use either official or public implementations that reproduce the original results.\nSystematic Generalization Table 1 shows the performance of various models on systematic generalization, i.e., systematicity, productivity, and localism. Compared to results reported in Santoro et al. (2018); Zhang et al. (2019a;b); Wang et al. (2020); Zheng et al. (2019); Hu et al. (2020); Wu et al. (2020), all pure connectionist models experience a devastating performance drop when it comes to the critical cognitive requirements on systematic generalization, indicating that pure connectionist models fail to perform abstraction, algebraization, induction, or generalization needed in solving the abstract reasoning task; instead, they seem to only take a shortcut to bypass them. In particular, MXGNet (Wang et al., 2020)’s superiority is diminishing in systematic generalization. Despite of learning with structural annotations, ResNet+DRT (Zhang et al., 2019a) does not fare better than its base model. The recently proposed HriNet (Hu et al., 2020) slightly improves on ResNet in this aspect, with LEN (Zheng et al., 2019) being only marginally better. WReN (Santoro et al., 2018), on the other hand, shows oscillating performance across the three regimes. Evaluated under systematic generation, SCL (Wu et al., 2020) and CoPINet (Zhang et al., 2019b) also far deviate from their “superior performance”. These observations suggest that pure connectionist models highly likely learn from variation in visual appearance rather than the algebra underlying the problem.\nEmbedded in a neural-semi-symbolic framework, the proposed ALANS2 learner improves on systematic generalization by a large margin. With an algebra-aware design, the model is considerably stable across different principles of systematic generalization. The algebraic representations learned in relations of either a constituent or a recursive composition naturally support productivity and localism, while semi-symbolic inner optimization further allows various instances of an operator type to be induced from the algebraic representations and boosts systematicity. The importance of the algebraic representations is made more significant in the ablation study: ALANS2-Ind, with algebraic representation replaced by independent encodings and the algebraic isomorphism broken, shows inferior performance. 
The ALANS2 learner also enables diagnostic tests into its jointly learned perception module and reasoning module, in contrast to the black-box-like connectionist counterparts.
Analysis into Perception and Reasoning The neural-semi-symbolic design allows analyses into both perception and reasoning. To evaluate the neural perception module and the algebraic reasoning module, we extract region-based object attribute annotations from the dataset generation methods (Zhang et al., 2019a; Hu et al., 2020) and categorize all relations into three types, i.e., unary, binary, and ternary, respectively.
Table 2 shows the perception module's performance on the test sets in the three regimes of systematic generalization. We note that in order to achieve the results shown in Table 1, ALANS2 learns to construct the concept of objectiveness perfectly. The model also shows fairly high prediction accuracy on the attributes of type and size. However, on the texture-related concept of color, ALANS2 fails to develop a reliable notion of it. Despite that, the general prediction accuracy of the perception module is still surprisingly good, considering that the perception module is only jointly learned with ground-truth annotations on answer selections. The relatively lower accuracy on color could be attributed to its larger space compared to other attributes.
Table 3 lists the reasoning module's performance during testing for the three aspects. Note that on position, the unary operator (shifting) and the binary operator (set arithmetic) do not systematically imply each other. Hence, we do not count them as probes into productivity and localism. In general, we notice that the better the perception accuracy on one attribute, the better the performance on reasoning. However, we also note that despite the relatively accurate perception of objectiveness, type, and size, near-perfect reasoning is never guaranteed. This deficiency is due to the perception uncertainty handled by expectation in Eq. (3): in spite of correctness when we take argmax, marginalizing by expectation will unavoidably introduce noise into the reasoning process. Therefore, an ideal reasoning module requires the perception frontend to be not only correct but also certain. Computationally, one can sample from the perception module and optimize Eq. (9) using REINFORCE (Williams, 1992). However, the credit assignment problem and variance in gradient estimation will further complicate training.
Generative Potential Compared to existing discriminative-only RPM-solving methods, the proposed ALANS2 learner is unique in its generative potential. As mentioned above, the final panel attribute can be decoded by sequentially selecting the most probable hidden operator and the attribute value. When equipped with a rendering engine, a solution can be generated. Here, we use the rendering program released by Zhang et al. (2019a) to demonstrate this generative potential of the proposed ALANS2 learner. Fig. 3 shows examples where the solutions are generated by ALANS2. Such a generative ability is a computational counterpart to human reasoning: ALANS2 selects from the pool of candidates the one most similar to a synthesized image, which resembles humans' combined top-down and bottom-up reasoning."
}, { "heading": "5 CONCLUSION", "text": "In this work, we propose the ALgebra-Aware Neuro-Semi-Symbolic (ALANS2) learner, echoing a normative theory in the connectionist-classicist debate that an algebraic treatment in a cognitive architecture should improve a model’s systematic generalization ability. In experiments, we show that with such an algebraic treatment, the neuro-semi-symbolic learner achieves superior performance in three RPM domains reflective of systematic generalization." }, { "heading": "D MARGINALIZATION FOR OTHER ATTRIBUTES", "text": "For the attribute of position, we denote its value as Ro, a binary vector of length N , with each entry corresponding to one of the N windows. Then\nP pPosition “ Roq “ N ź\nj“1 P proj “ Roj q, (S21)\nwhere P proj q denotes the jth region’s estimated objectiveness distribution returned by a CNN as in the main text.\nFor the attribute of type, the panel attribute of type being k is evaluated as\nP pType “ kq “ ÿ\nRo\n¨\n˝\nź\nj,Roj“1 P prtj “ kq\n˛\n‚P pPosition “ Roq, (S22)\nwhere P prtjq denotes the jth region’s estimated type distribution returned by a CNN. The computation for size and color is exactly the same as type, except that we use the region’s estimated size and color distribution returned by a CNN." }, { "heading": "E RELATED WORK ON NEURAL THEOREM PROVING", "text": "Combining neural architectures with symbolic reasoning has a long history in the field of theorem proving (Garcez et al., 2012), with early works dated back to propositional rules (Shavlik & Towell, 1991; Towell & Shavlik, 1994; Garcez & Zaverucha, 1999). Later works extend the propositional rules to first-order inference (Shastri, 1992; Ding, 1995; França et al., 2014; Sourek et al., 2015; Cohen, 2016). More recent works include the Logic Tensor Networks (Serafini & Garcez, 2016) and the NTP model (Rocktäschel & Riedel, 2017). The former grounds first-order logics and supports function terms, while the latter is constructed from Prolog’s backward chaining and is related to Komendantskaya (2011); Hiolldobler (1990) but supports function-free terms. DeepProbLog (Manhaeve et al., 2018) further improves on NTP by focusing on tight interactions between a neural component and subsymbolic representation and parameter learning for both the neural and the logic components. Evans & Grefenstette (2018) introduces a differentiable rule induction process, though not integrating the neural and symbolic components. Our work is related to the stream of work on neural theorem proving. However, we formulate the relation induction process as continuous optimization rather than logical induction." }, { "heading": "F MORE ON NEURAL VISUAL PERCEPTION", "text": "• Why not train a CNN to predict the position and number of objects? The CNN is trained to predict the type, size, color, and object existence in a window. The object existence in windows is marginalized to be a Number distribution and Position distribution. This is a light-weight method for object detection. Nevertheless, it is also possible to use a Fast-RCNN like method to predict object positions (this implies number) directly. However, in this way, the framework loses the probabilistic interpretation (the object proposal branch is currently still deterministic), and we cannot perform end-to-end learning.\n• How does the CNN predict the presence of an object, its type, size, and color given that it is not trained to do that? 
For each window, the CNN outputs 4 softmaxed vectors, corresponding to the probability distributions of object existence, object type, object size, and object color. The spaces for these attributes are pre-defined. The CNN's weights are then jointly trained in the framework. Such a design follows recent neuro-symbolic methods (Mao et al., 2019; Han et al., 2019) that also rely on implicitly trained representations. In short, we assign semantics to the implicitly trained representation (probability distributions for attributes), perform marginalization and reasoning as if they were ground-truth attribute distributions, and jointly train using only the problem's target label." } ]
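To make the marginalization of Eqs. (S21)–(S22) concrete, here is a minimal sketch that operates directly on the per-window softmax outputs just described; exact enumeration over the occupancy vector R^o is exponential in N, so this is only practical for small N, and the variable names are our own.

```python
import itertools
import numpy as np

def p_position(Ro, p_obj):
    # Eq. (S21): probability of a binary occupancy vector Ro, given the
    # per-window objectiveness probabilities p_obj[j] = P(r_j^o = 1).
    return np.prod([p_obj[j] if Ro[j] == 1 else 1.0 - p_obj[j]
                    for j in range(len(p_obj))])

def p_type(k, p_obj, p_type_dist):
    # Eq. (S22): probability that the panel attribute Type equals k,
    # marginalizing over all occupancy patterns. p_type_dist[j, k] is the
    # j-th window's estimated type distribution; size and color are analogous.
    total = 0.0
    for Ro in itertools.product([0, 1], repeat=len(p_obj)):
        inner = np.prod([p_type_dist[j, k] for j in range(len(p_obj)) if Ro[j] == 1])
        total += inner * p_position(Ro, p_obj)
    return total

# Toy check with N = 3 windows and 2 type values.
p_obj = np.array([0.9, 0.2, 0.7])
p_type_dist = np.array([[0.8, 0.2], [0.5, 0.5], [0.6, 0.4]])
print(p_type(0, p_obj, p_type_dist))
```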
2020
null
SP:a50de9e3cf34fd189763ee172fcff026cbc679dc
[ "This paper proposes a two-stage summarization system where a document is provided along with (optionally) keywords or a prompt. This supplemental information helps to guide the summarization and possibly make it more user-specific. The keywords and prompt can also be guessed automatically by a BERT-base model, which seems to improve automatic metrics on CNN/daily mail." ]
Current summarization systems yield generic summaries that are disconnected from users’ preferences and expectations. To address this limitation, we present CTRLsum, a novel framework for controllable summarization. Our approach enables users to control multiple aspects of generated summaries by interacting with the summarization system through textual input in the form of a set of keywords or descriptive prompts. Using a single unified model, CTRLsum is able to achieve a broad scope of summary manipulation at inference time without requiring additional human annotations or pre-defining a set of control aspects during training. We quantitatively demonstrate the effectiveness of our approach on three domains of summarization datasets and five control aspects: 1) entity-centric and 2) length-controllable summarization, 3) contribution summarization on scientific papers, 4) invention purpose summarization on patent filings, and 5) question-guided summarization on news articles in a reading comprehension setting. Moreover, when used in a standard, uncontrolled summarization setting, CTRLsum achieves state-of-the-art results on the CNN/DailyMail dataset.1
[]
[ { "authors": [ "ciński" ], "title": "2019) that require answers to be found in the summary", "venue": null, "year": 2019 }, { "authors": [ "Saito" ], "title": "2020a) use the number of word prototypes to control", "venue": null, "year": 2020 }, { "authors": [ "Gehrmann" ], "title": "2018) utilize copying words at test time to mask copying operations in a sum", "venue": null, "year": 2018 }, { "authors": [ "Li" ], "title": "2020b) use keywords as extra input to improve", "venue": null, "year": 2020 }, { "authors": [ "Wuebker" ], "title": "2016) and also to demonstrate the multi-task ability present in large pretrained", "venue": null, "year": 2016 }, { "authors": [ "Fan" ], "title": "2018). Then we compute Success Rate, the fraction", "venue": null, "year": 2018 }, { "authors": [ "Fan" ], "title": "EntityCode) for reference point. We note that their numbers come", "venue": null, "year": 2018 }, { "authors": [ "Fabbri" ], "title": "Uncontrolled Summarization", "venue": "We follow (Grusky et al.,", "year": 2020 }, { "authors": [ "Tom B Brown", "Benjamin Mann", "Nick Ryder", "Melanie Subbiah", "Jared Kaplan", "Prafulla Dhariwal", "Arvind Neelakantan", "Pranav Shyam", "Girish Sastry", "Amanda Askell" ], "title": "Language models are few-shot learners", "venue": null, "year": 2005 }, { "authors": [ "Jianpeng Cheng", "Mirella Lapata" ], "title": "Neural summarization by extracting sentences and words", "venue": "In Proceedings of ACL,", "year": 2016 }, { "authors": [ "Arman Cohan", "Franck Dernoncourt", "Doo Soon Kim", "Trung Bui", "Seokhwan Kim", "Walter Chang", "Nazli Goharian" ], "title": "A discourse-aware attention model for abstractive summarization of long documents", "venue": "In Proceedings of NAACL (Short Papers),", "year": 2018 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Alexander R Fabbri", "Wojciech Kryściński", "Bryan McCann", "Caiming Xiong", "Richard Socher", "Dragomir Radev" ], "title": "Summeval: Re-evaluating summarization evaluation", "venue": "arXiv preprint arXiv:2007.12626,", "year": 2020 }, { "authors": [ "Angela Fan", "David Grangier", "Michael Auli" ], "title": "Controllable abstractive summarization", "venue": "In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation,", "year": 2018 }, { "authors": [ "Zhenxin Fu", "Xiaoye Tan", "Nanyun Peng", "Dongyan Zhao", "Rui Yan" ], "title": "Style transfer in text: Exploration and evaluation", "venue": null, "year": 2018 }, { "authors": [ "Jonas Gehring", "Michael Auli", "David Grangier", "Denis Yarats", "Yann N Dauphin" ], "title": "Convolutional sequence to sequence learning", "venue": "In Proceedings of ICML,", "year": 2017 }, { "authors": [ "Sebastian Gehrmann", "Yuntian Deng", "Alexander M Rush" ], "title": "Bottom-up abstractive summarization", "venue": "In Proceedings of EMNLP,", "year": 2018 }, { "authors": [ "Dan Gillick", "Yang Liu" ], "title": "Non-expert evaluation of summarization systems is risky", "venue": "In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk,", "year": 2010 }, { "authors": [ "Max Grusky", "Mor Naaman", "Yoav Artzi" ], "title": "Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies", "venue": "In NAACL,", "year": 2018 }, { "authors": [ "Kelvin Guu", "Tatsunori B Hashimoto", "Yonatan Oren", "Percy Liang" ], "title": "Generating sentences by editing", "venue": "prototypes. 
Transactions of the Association for Computational Linguistics,", "year": 2018 }, { "authors": [ "Junxian He", "Taylor Berg-Kirkpatrick", "Graham Neubig" ], "title": "Learning sparse prototypes for text generation", "venue": "In Proceedings of NeurIPS,", "year": 2020 }, { "authors": [ "Junxian He", "Xinyi Wang", "Graham Neubig", "Taylor Berg-Kirkpatrick" ], "title": "A probabilistic formulation of unsupervised text style transfer", "venue": "In Proceedings of ICLR,", "year": 2020 }, { "authors": [ "Karl Moritz Hermann", "Tomas Kocisky", "Edward Grefenstette", "Lasse Espeholt", "Will Kay", "Mustafa Suleyman", "Phil Blunsom" ], "title": "Teaching machines to read and comprehend", "venue": "In Proceedings of NeurIPS,", "year": 2015 }, { "authors": [ "Chris Hokamp", "Qun Liu" ], "title": "Lexically constrained decoding for sequence generation using grid beam search", "venue": "In Proceedings of ACL,", "year": 2017 }, { "authors": [ "Zhiting Hu", "Zichao Yang", "Xiaodan Liang", "Ruslan Salakhutdinov", "Eric P Xing" ], "title": "Toward controlled generation of text", "venue": "In Proceedings of ICML,", "year": 2017 }, { "authors": [ "Qiuyuan Huang", "Zhe Gan", "Asli Celikyilmaz", "Dapeng Wu", "Jianfeng Wang", "Xiaodong He" ], "title": "Hierarchically structured reinforcement learning for topically coherent visual story generation", "venue": "In Proceedings of AAAI,", "year": 2019 }, { "authors": [ "Mandar Joshi", "Danqi Chen", "Yinhan Liu", "Daniel S Weld", "Luke Zettlemoyer", "Omer Levy" ], "title": "Spanbert: Improving pre-training by representing and predicting spans", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2020 }, { "authors": [ "Nitish Shirish Keskar", "Bryan McCann", "Lav R Varshney", "Caiming Xiong", "Richard Socher" ], "title": "Ctrl: A conditional transformer language model for controllable generation", "venue": null, "year": 1909 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Proceedings of ICLR,", "year": 2015 }, { "authors": [ "Rebecca Knowles", "Philipp Koehn" ], "title": "Neural interactive translation prediction", "venue": "In Proceedings of the Association for Machine Translation in the Americas,", "year": 2016 }, { "authors": [ "Wojciech Kryściński", "Nitish Shirish Keskar", "Bryan McCann", "Caiming Xiong", "Richard Socher" ], "title": "Neural text summarization: A critical evaluation", "venue": "In Proceedings of EMNLP,", "year": 2019 }, { "authors": [ "Anton Leuski", "Chin-Yew Lin", "Eduard Hovy" ], "title": "ineats: interactive multi-document summarization", "venue": "In Proceedings of ACL,", "year": 2003 }, { "authors": [ "Mike Lewis", "Yinhan Liu", "Naman Goyal", "Marjan Ghazvininejad", "Abdelrahman Mohamed", "Omer Levy", "Ves Stoyanov", "Luke Zettlemoyer" ], "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "venue": null, "year": 1910 }, { "authors": [ "Chenliang Li", "Weiran Xu", "Si Li", "Sheng Gao" ], "title": "Guiding generation for abstractive text summarization based on key information guide network", "venue": "In NAACL (Short Papers),", "year": 2018 }, { "authors": [ "Chin-Yew Lin" ], "title": "Rouge: A package for automatic evaluation of summaries", "venue": "In Text summarization branches out,", "year": 2004 }, { "authors": [ "Yizhu Liu", "Zhiyi Luo", "Kenny Zhu" ], "title": "Controlling length in abstractive summarization using a convolutional neural network", "venue": "In 
Proceedings of EMNLP,", "year": 2018 }, { "authors": [ "Bryan McCann", "Nitish Shirish Keskar", "Caiming Xiong", "Richard Socher" ], "title": "The natural language decathlon: Multitask learning as question answering", "venue": "arXiv preprint arXiv:1806.08730,", "year": 2018 }, { "authors": [ "Rada Mihalcea", "Paul Tarau" ], "title": "TextRank: Bringing order into text", "venue": "In Proceedings of EMNLP,", "year": 2004 }, { "authors": [ "Lili Mou", "Yiping Song", "Rui Yan", "Ge Li", "Lu Zhang", "Zhi Jin" ], "title": "Sequence to backward and forward sequences: A content-introducing approach to generative short-text conversation", "venue": "In Proceedings of COLING,", "year": 2016 }, { "authors": [ "Ramesh Nallapati", "Feifei Zhai", "Bowen Zhou" ], "title": "SummaRuNNer: a recurrent neural network based sequence model for extractive summarization of documents", "venue": "In Proceedings of AAAI,", "year": 2017 }, { "authors": [ "Shashi Narayan", "Shay B Cohen", "Mirella Lapata" ], "title": "Ranking sentences for extractive summarization with reinforcement learning", "venue": "In Proceedings of NAACL,", "year": 2018 }, { "authors": [ "Myle Ott", "Sergey Edunov", "Alexei Baevski", "Angela Fan", "Sam Gross", "Nathan Ng", "David Grangier", "Michael Auli" ], "title": "fairseq: A fast, extensible toolkit for sequence modeling", "venue": "In Proceedings of NAACL (Demo Paper),", "year": 2019 }, { "authors": [ "Romain Paulus", "Caiming Xiong", "Richard Socher" ], "title": "A deep reinforced model for abstractive summarization", "venue": "In Proceedings of ICLR,", "year": 2018 }, { "authors": [ "Matt Post", "David Vilar" ], "title": "Fast lexically constrained decoding with dynamic beam allocation for neural machine translation", "venue": "In Proceedings of NAACL,", "year": 2018 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Pranav Rajpurkar", "Jian Zhang", "Konstantin Lopyrev", "Percy Liang" ], "title": "SQuAD: 100,000+ questions for machine comprehension of text", "venue": "In Proceedings of EMNLP,", "year": 2016 }, { "authors": [ "Ellen Riloff", "Wendy Lehnert" ], "title": "Information extraction as a basis for high-precision text classification", "venue": "ACM Transactions on Information Systems (TOIS),", "year": 1994 }, { "authors": [ "Alexander M Rush", "Sumit Chopra", "Jason Weston" ], "title": "A neural attention model for abstractive sentence summarization", "venue": "In Proceedings of EMNLP,", "year": 2015 }, { "authors": [ "Itsumi Saito", "Kyosuke Nishida", "Kosuke Nishida", "Atsushi Otsuka", "Hisako Asano", "Junji Tomita", "Hiroyuki Shindo", "Yuji Matsumoto" ], "title": "Length-controllable abstractive summarization by guiding with summary prototype", "venue": "arXiv preprint arXiv:2001.07331,", "year": 2020 }, { "authors": [ "Itsumi Saito", "Kyosuke Nishida", "Kosuke Nishida", "Junji Tomita" ], "title": "Abstractive summarization with combination of pre-trained sequence-to-sequence and saliency models", "venue": "arXiv preprint arXiv:2003.13028,", "year": 2020 }, { "authors": [ "Abigail See", "Peter J Liu", "Christopher D Manning" ], "title": "Get to the point: Summarization with pointer-generator networks", "venue": "In Proceedings of ACL,", "year": 2017 }, { "authors": [ "Eva Sharma", "Chen Li", "Lu Wang" ], "title": "BIGPATENT: A large-scale dataset for abstractive and coherent summarization", 
"venue": "In Proceedings of ACL,", "year": 2019 }, { "authors": [ "Jianheng Tang", "Tiancheng Zhao", "Chenyan Xiong", "Xiaodan Liang", "Eric Xing", "Zhiting Hu" ], "title": "Targetguided open-domain conversation", "venue": "In Proceedings of ACL,", "year": 2019 }, { "authors": [ "Adam Trischler", "Tong Wang", "Xingdi Yuan", "Justin Harris", "Alessandro Sordoni", "Philip Bachman", "Kaheer Suleman" ], "title": "Newsqa: A machine comprehension dataset", "venue": "In Proceedings of the 2nd Workshop on Representation Learning for NLP,", "year": 2017 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Proceedings of NeurIPS,", "year": 2017 }, { "authors": [ "Daisy Zhe Wang", "Wei He", "Hua Wu", "Haiyang Wu", "Wei Li", "Haifeng Wang", "Enhong Chen" ], "title": "Chinese poetry generation with planning based neural network", "venue": "In Proceedings of COLING,", "year": 2016 }, { "authors": [ "Shuohang Wang", "Jing Jiang" ], "title": "Machine comprehension using match-lstm and answer pointer", "venue": "In Proceedings of ICLR,", "year": 2017 }, { "authors": [ "Sam Wiseman", "Stuart M Shieber", "Alexander M Rush" ], "title": "Learning neural templates for text generation", "venue": "In Proceedings of EMNLP,", "year": 2018 }, { "authors": [ "Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "Rémi Louf", "Morgan Funtowicz", "Joe Davison", "Sam Shleifer", "Patrick von Platen", "Clara Ma", "Yacine Jernite", "Julien Plu", "Canwen Xu", "Teven Le Scao", "Sylvain Gugger", "Mariama Drame", "Quentin Lhoest", "Alexander M. Rush" ], "title": "Huggingface’s transformers: State-of-the-art natural language processing", "venue": "ArXiv, abs/1910.03771,", "year": 2019 }, { "authors": [ "Joern Wuebker", "Spence Green", "John DeNero", "Saša Hasan", "Minh-Thang Luong" ], "title": "Models and inference for prefix-constrained machine translation", "venue": "In Proceedings of ACL,", "year": 2016 }, { "authors": [ "Lili Yao", "Nanyun Peng", "Ralph Weischedel", "Kevin Knight", "Dongyan Zhao", "Rui Yan" ], "title": "Plan-andwrite: Towards better automatic storytelling", "venue": "In Proceedings of AAAI,", "year": 2019 }, { "authors": [ "Jingqing Zhang", "Yao Zhao", "Mohammad Saleh", "Peter J Liu" ], "title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization", "venue": null, "year": 1912 }, { "authors": [ "Tianyi Zhang", "Varsha Kishore", "Felix Wu", "Kilian Q. Weinberger", "Yoav Artzi" ], "title": "BERTScore: Evaluating text generation with bert", "venue": "In Proceedings of ICLR,", "year": 2020 }, { "authors": [], "title": "enlisted in the Army last year and was due to ship out to basic training April", "venue": "The FBI questioned him March", "year": 2014 }, { "authors": [], "title": "enlisted in the Army last year and was due to ship out to basic training April 7, 2014. He planned to detonate a car bomb at Fort Riley, a large Army base that’s home to the 1st Infantry Division", "venue": null, "year": 2014 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural summarization systems aim to compress a document into a short paragraph or sentence while preserving key information. There are largely two categories of summarization systems: extractive summarization that extracts important portions of a document (Cheng & Lapata, 2016; Nallapati et al., 2017; Narayan et al., 2018), and abstractive summarization that freely generates novel sentences (Rush et al., 2015; See et al., 2017; Paulus et al., 2018) which can produce coherent and fluent summaries more flexibly. In this paper we focus on abstractive summarization.\nTypically abstractive summarization methods take a document as input and yield a generic summary to cover certain information identified by the model. However, content of interest is user-dependent. Summaries should select information with respect to preferences of a user. For example, Figure 1 shows an NBA basketball news article, and the reference summary describes several match results. However, fans of certain basketball stars in these teams such as Lebron James or Stephen Curry might only be interested in the matches they played and would like to know the player’s scores as well.\nMotivated by this, we focus on controllable summarization which allows the users to manipulate the summaries from the model. We propose CTRLsum, a framework to control summaries through control tokens in the form of a set of keywords or descriptive prompts. At training time, the model learns to predict summaries conditioned on both the source document and keywords that serve as external guidance. During inference, keywords and optional prompts, which are the target prefix to constrain decoding, are combined as control tokens to convey user preferences as shown in Figure 1.\nKeywords and prompts are complementary. Prompts do not perform well in many cases such as entity or length controlled summarization as our preliminary experiments imply, but keywords can achieve those goals in a flexible way, for example, by using entity as keywords or varying the number of keywords to control entities and length respectively. However, keywords struggle in more open-ended scenarios like summarizing a list of contributions of scientific papers, while constraining the decoding with prompt “the main contributions of this paper are:(1)” is possibly sufficient to achieve the goal.\n1Code and model checkpoints will be public after the review period.\nCTRLsum is trained using only keywords as additional input which can be easily identified from training summaries. It requires neither extra human annotations nor pre-defining control aspects for training, yet is quite flexible to achieve a broad scope of text manipulation as we will show in this paper. In contrast, prior work primarily rely on pre-defined “control codes” (Fan et al., 2018; Liu et al., 2018; Keskar et al., 2019), thus need to collect annotations for training and cannot generalize to unseen control aspects easily at test time.\nWe use pretrained BART (Lewis et al., 2019) as the underlying architecture and perform experiments on three datasets in three distinct domains: CNN/Dailymail news articles (Hermann et al., 2015), arXiv scientific papers (Cohan et al., 2018), and BIGPATENT patent documents (Sharma et al., 2019). 
We quantitatively evaluate CTRLsum on five control aspects: (1) entity-centric (§4.2) and (2) length-controllable summarization (§4.3), (3) summarizing the contributions of scientific papers and (4) summarizing the purpose of an invention (§4.4), and (5) summarizing answers to given questions in a zero-shot reading comprehension setting (§4.5). Notably, our approach also achieves comparable or superior performance to the strong BART summarization model on all datasets in a standard, uncontrolled setting (§4.6), leading to state-of-the-art results on the CNN/DailyMail dataset." }, { "heading": "2 CTRLSUM", "text": "" }, { "heading": "2.1 OVERVIEW", "text": "Unconstrained neural summarization methods are trained to learn the conditional distribution p(y|x), where x and y represent the source document and summary respectively. The generated summaries depend solely on the document x without human involvement. To control the output summaries, we propose using additional control tokens z to represent user preferences and training a summarization model that predicts the conditional distribution p(y|x, z). The control tokens z include keywords as extra inputs during training and inference. They can also optionally include prompts at test time to further constrain the decoding process. As shown in Figure 1, control tokens – in the form of keywords, prompts, or a combination of both – act as an interface between users and an otherwise black-box neural model, providing a flexible way for users to explicitly control automatic summarization. Next we describe how to obtain automatic keywords for training as well as potential applications at test time." }, { "heading": "2.2 AUTOMATIC KEYWORD EXTRACTION", "text": "In addition to extracting keywords from training data to train the model, CTRLsum also features an automatic keyword extraction mechanism at test time, which can be used to suggest automatic keywords according to user preferences, or to perform uncontrolled summarization without user signals. Next we describe the keyword extraction methods at training and inference time respectively.
Training. For training, we use the ground-truth summary to identify keywords in the source document. Specifically, we first greedily select sentences from the document that maximize the ROUGE scores (Lin, 2004) with the reference summary. This step constrains keywords to those found in important sentences. Then, we identify all the longest sub-sequences in the extracted sentences that have matched sub-sequences in the ground-truth summary, similar to the copying word recognition method in (Gehrmann et al., 2018). Finally, we remove duplicate words and stop words and keep the remaining tokens as keywords. Compared to other keyword extraction methods (Riloff & Lehnert, 1994; Mihalcea & Tarau, 2004), which output only a few salient words, our extraction retains most content words found in the summary. This encourages dependence on the given keywords by building a reliable correlation between their presence in the input and the target. It in turn ensures that user-provided keywords are not ignored by the model at test time, which is catastrophic for a controllable summarization system.
Inference. We formulate the keyword extraction problem at test time as a sequence labeling task. Concretely, we train a BERT-based sequence tagger (Devlin et al., 2018) on the keywords and documents from the training dataset. This tagger then computes the selection probability q_j for each token in the test document.
Similar to training-time extraction, we first select n_s sentences with the highest average token selection probability. Within these sentences, words with q_j > ε are selected as keywords, up to a maximum number of m_max. The three hyperparameters n_s, ε, m_max are selected based on the uncontrolled summarization performance on validation datasets. The results are reasonably robust to different settings (see Appendix D for details)." }, { "heading": "2.3 SUMMARIZATION: TRAINING DETAILS", "text": "Format. At training time we prepend the keyword sequence to the source document, separated with a special token. The summarization model is then trained to maximize p(y|x, z) in an end-to-end fashion. The keyword sequence maintains the order of the keywords as they were in the source document, but we observe that the model often ignores this ordering as it frequently differs between source and target summary. We also separate keywords from different source sentences with the special token (“|”). In applications where the sentence boundary is unknown, as when users propose their own keywords, the “|” token can be ignored, as in some of our experiments.
Keyword Dropout. As mentioned in §2.2, our keyword extraction strategy retains most words from the summary found in the source document. Without regularization, the dependence on such keywords is strong enough that the model rarely generates novel words in the summary. To remedy this, we randomly drop keywords at training time so that the model learns to rely on keywords that are present in the input, while also learning to still carry over key information from the source document that is not present in the keywords. Note that keyword dropout is applied at training time only.
Next we introduce the five control aspects that we study in this paper as example use cases of CTRLsum. Qualitative examples of them are shown in Table 1." }, { "heading": "2.4 SUMMARIZATION: INFERENCE WITH KEYWORDS", "text": "The keywords provide a generic interface to control multiple aspects of summaries, which allows the user to optionally rely on automatically extracted keywords, user-provided keywords, or a combination of both. This method provides a clean separation between test-time user control and the training process, including pretraining. Consequently, CTRLsum can be adapted to new use cases without changing model parameters. For example, though nothing during training specifically focuses on controlling entities or length, the examples below demonstrate the general applicability of keyword control to entity and length manipulation.
Entity Control. The goal of entity control is to produce summaries that focus on entities of interest. Figure 1 exemplifies summarization with respect to different players when those player names are included as keywords directly influencing the summary.
Length Control. Users may have different preferences as to the length of summaries. We allow such manipulation of the summary length through a user-specified length parameter. Specifically, we first separate the training data into 5 buckets by summary length so that each bucket has the same number of examples. Then we compute the average number of keywords K_l for each bucket on the training data. At test time, a user can specify the length parameter l ∈ {0, 1, 2, 3, 4} to include the K_l keywords with the highest selection probability computed by the sequence tagger. This is similar to (Saito et al., 2020a), which uses the number of “guiding words” to control summary length.
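To make the keyword pipeline of §2.2–§2.4 concrete, below is a minimal Python sketch of oracle keyword extraction at training time, tagger-based selection at test time, and the input formatting of §2.3. The function names, the “=>” separator token, and the tagger interface are our own stand-ins rather than the released implementation, and unigram overlap stands in for ROUGE in the greedy sentence selection.

```python
def oracle_keywords(doc_sentences, summary_tokens, stop_words):
    # §2.2 (training): greedily select sentences maximizing overlap with the
    # reference summary, then keep summary tokens found in those sentences,
    # removing stop words and duplicates.
    summary = set(summary_tokens)
    chosen, covered = [], set()
    for _ in range(len(doc_sentences)):
        gains = [(len((set(s) & summary) - covered), i)
                 for i, s in enumerate(doc_sentences) if i not in chosen]
        if not gains:
            break
        gain, best = max(gains)
        if gain == 0:
            break
        chosen.append(best)
        covered |= set(doc_sentences[best]) & summary
    keywords, seen = [], set()
    for i in sorted(chosen):
        for tok in doc_sentences[i]:
            if tok in summary and tok not in stop_words and tok not in seen:
                seen.add(tok)
                keywords.append(tok)
    return keywords

def test_time_keywords(sentences, token_probs, ns, eps, m_max):
    # §2.2 (inference): take the n_s sentences with the highest average token
    # selection probability, then keep tokens with q_j > eps, up to m_max.
    # For length control (§2.4), one would instead keep the K_l
    # highest-probability tokens.
    avg = [sum(p) / len(p) for p in token_probs]
    top = sorted(range(len(sentences)), key=lambda i: avg[i], reverse=True)[:ns]
    groups, budget = [], m_max
    for i in sorted(top):  # preserve document order, as during training
        kept = [t for t, q in zip(sentences[i], token_probs[i]) if q > eps][:budget]
        budget -= len(kept)
        if kept:
            groups.append(kept)
        if budget <= 0:
            break
    return groups

def format_input(keyword_groups, document):
    # §2.3: keywords are prepended to the source document; groups from
    # different sentences are separated with "|". "=>" stands in for the
    # model's actual keyword/document separator token.
    return " | ".join(" ".join(g) for g in keyword_groups) + " => " + document
```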
}, { "heading": "2.5 SUMMARIZATION: INFERENCE WITH KEYWORDS AND PROMPTS", "text": "Prompts are pre-defined text sequences used as the target prefix to constrain decoding. They have been utilized to perform multi-purpose text generation with a single unified model (Radford et al., 2019; Brown et al., 2020). In the CTRLsum framework, prompts are a kind of control token sequence, and we always use such tokens as both the target prefix and keywords (ablation results on using prompts as keywords or prefix alone can be found in Appendix C). We find that using prompts as keywords besides prefix helps focus on prompt-related content and mitigate the over-generation issue of vanilla summarization models, as we will show in §4.4. To the best of our knowledge, we are the first to evaluate such a prompt-based control method for summarization systems.\nSummarizing Contributions. Existing datasets about scientific papers such as arXiv (Cohan et al., 2018) collect paper abstracts as the summaries, which often include extra background context and lack detailed contribution descriptions for the associated paper. In many cases, readers would benefit from an explicit list of contributions in order to understand the novelty and value of the paper. For these cases, we propose using control tokens – “the main contributions of this paper are:(1)”. This prompt then triggers generation of a summary focused on contributions.\nSummarizing Invention Purpose. Patent article summaries in existing datasets such as BIGPATENT (Sharma et al., 2019) can be over-complicated, often covering core method details. Yet for a non-technical reader it would be preferred to provide a one-sentence summary that states the purpose of the invention while ignoring technical details. To apply CTRLsum in this scenario, we use the\ncontrol tokens, “the purpose of the present invention is”. This triggers a concise summary focused on patent purpose.\nQuestion-guided summarization. Human summarization can be constrained by questions (Kryściński et al., 2019) that require answers to be found in the summary. This points to an important connection between summarization and reading comprehension that we further explore. We hypothesize that a summarization model can directly answer some questions about the article if guided properly. This suggests the possibility of subsuming reading comprehension as a form of summarization. To verify this hypothesis, we use the control tokens “Q: question text? A:” to trigger reading comprehension behaviour.\nWe note that prompts- and keywords-based control are complementary in practice – while prompts could theoretically achieve any type of control, empirically they often do not work well for many aspects and the model is very sensitive to the precise wording of the prompt. For example, we found that using prompts such as “a summary focused on [entity] is:” or “a short summary is:” does not work as well as explicitly using keywords for entity or length control (details can be found in Appendix C)." }, { "heading": "3 RELATED WORK", "text": "Previous work on controllable summarization often collects control codes such as entity or length as supervision to train the model conditioned on both the code and article together (Fan et al., 2018; Liu et al., 2018). These methods do not generalize for controlling aspects of the summarization that were not seen during training. Recently Saito et al. (2020a) use the number of word prototypes to control summary length in a similar way to how we use keywords. 
Interactive summarization provides a way for users to continuously control the information that is included in the summary (Bornstein et al., 1999; Leuski et al., 2003). More broadly, controllable text generation has been studied for styles (Hu et al., 2017; Fu et al., 2018; He et al., 2020b), topics (Tang et al., 2019; Huang et al., 2019), and templates (Guu et al., 2018; Wiseman et al., 2018; He et al., 2020a).
Keyword-guided text generation has been applied in other contexts with different motivations. Gehrmann et al. (2018) utilize copying words at test time to mask copying operations in a summarization task. Li et al. (2018) and Saito et al. (2020b) use keywords as extra input to improve the uncontrolled summarization performance. Wang et al. (2016), Mou et al. (2016), and Yao et al. (2019) use textual input to plan poetry, dialogue, and stories respectively. Lexically-constrained decoding specifies certain lexicons as hard constraints in the target text (Hokamp & Liu, 2017; Post & Vilar, 2018). Prefix-constrained decoding was used in machine translation (Knowles & Koehn, 2016; Wuebker et al., 2016) and also to demonstrate the multi-task ability present in large pretrained models (McCann et al., 2018; Radford et al., 2019; Keskar et al., 2019; Brown et al., 2020)." }, { "heading": "4 EXPERIMENTS", "text": "Our experiments below are designed to (1) test the control efficacy of CTRLsum on five different aspects, and (2) examine the performance of CTRLsum in a traditional summarization setting without external control signals. Also, extensive model output examples can be found in Appendix E." }, { "heading": "4.1 EXPERIMENTAL DETAILS", "text": "We perform experiments on three distinct-domain summarization datasets: CNN/DailyMail (CNNDM) news articles (Hermann et al., 2015), arXiv scientific papers (Cohan et al., 2018), and BIGPATENT patent articles (Sharma et al., 2019). For all datasets, the source documents are truncated to 1024 tokens and the target summaries are truncated to 256 tokens, following (Zhang et al., 2019). The conditional distribution p(y|x, z) in CTRLsum is our fine-tuned version of the pretrained BART-large model (Lewis et al., 2019), which achieves state-of-the-art performance on several summarization benchmarks. The automatic keyword tagger at test time is based on the pretrained BERT-large model (Devlin et al., 2018) fine-tuned as described in §2.2. Our summarization model implementation is based on the fairseq toolkit (Ott et al., 2019) and the automatic keyword extraction model is based on the HuggingFace Transformers library (Wolf et al., 2019). Complete setup and training details can be found in Appendix A.1.
For evaluation, we measure the commonly used ROUGE scores (Lin, 2004) and the recently proposed BERTScore (Zhang et al., 2020) when ground-truth is available. For control-related evaluation, where we often do not have reference summaries, we (1) collect ground-truth summaries when possible, (2) examine whether summaries respect the control signal, or (3) resort to human evaluation." }, { "heading": "4.2 ENTITY CONTROL", "text": "Setup. We first simulate user preference by providing the model with oracle entities extracted from the ground-truth target. Then we compare it to the model using automatic keywords in an uncontrolled setting to show the effect of oracle entities. To examine whether the decoded summaries respect entity change, we sample 100 documents and repeatedly acquire every entity in the document to generate summaries, following Fan et al. (2018).
Then we compute Success Rate, the fraction of requested entities actually occurring in the output summaries. The results are reported separately according to whether the entity is from the leading 3 sentences or from the full article. To test whether the summaries from different entity inputs are factually consistent with the document, we sample another 100 documents, and for each we randomly sample one “important” entity that appears in the reference, and one “unimportant” entity that occurs neither in the reference nor the leading three source sentences, to produce summaries. For each (article, summary) pair we ask 3 annotators from Amazon Mechanical Turk to make a binary decision as to whether the summary can be entailed from the article. We then take the majority vote as the result and report the fraction of factually correct summaries. We evaluate on CNNDM only, since many examples in arXiv and BIGPATENT do not have identifiable entities.
Results. In Table 2 we observe that the use of oracle entities helps boost the ROUGE-2 score by 3.6 points compared with using automatic keywords, which means CTRLsum is able to take advantage of the given entities. Table 3 shows the Success Rate and factual correctness evaluations. We include the numbers from Fan et al. (2018) (EntityCode) as a reference point. We note that their numbers come from a convolutional seq2seq architecture (see Appendix B for ablation analysis on this) and their method utilizes entity annotations during training time, and thus is not directly comparable to CTRLsum. Remarkably, our model achieves a high success rate for both lead-3 and full-article entities, reaching around 95%. Yet other systems struggle to include the given entities, especially the ones that do not occur in the beginning of the article. Factual correctness scores from human annotators suggest that CTRLsum is able to generate factually consistent summaries no matter whether the entity of interest is important or not, comparable to the unconstrained BART baseline." }, { "heading": "4.3 LENGTH CONTROL", "text": "Setup. Similar to entity control, we first examine the effect of an oracle length signal from the reference to simulate user preference. In addition to ROUGE and BERTScore, we measure the length distance between the decoded summary and the reference following (Liu et al., 2018). Specifically, we compute the mean of absolute deviation (MAD) of the actual length bucket code $l_{sys}$ of the decoded summary from the ground-truth control code $l_{ref}$, as $\frac{1}{N}\sum_{n=1}^{N} |l^{(n)}_{sys} - l^{(n)}_{ref}|$. To assess the summary variations as length signals change, we further sample 1000 documents and decode 5 different-length summaries for each document. Then we report the Pearson Correlation Coefficient (PCC) between the input bucket code and the actual bucket code. Experiments are conducted on CNNDM and arXiv.
Results. In Table 2, CTRLsum with oracle length signals only presents relatively small gains over the automatic CTRLsum baseline. This implies that oracle lengths only convey limited additional information to help generate the reference summary. We also run the LengthCode baseline (Fan et al., 2018) based on BART, where the ground-truth length bucket code is prepended to the article at both training and test time. However, LengthCode fails to consistently improve over BART with oracle length signals. Moreover, we find that the BART model fine-tuned with the LengthCode method almost ignores the length signal, with PCC close to 0, as shown in Table 4.
This is not very surprising, since the length code would be less useful as summarizers grow stronger, because they can already learn a good length predictor implicitly. In contrast, CTRLsum with length-guided keywords achieves a high positive PCC between the control signal and actual output length, and is able to reduce the length deviation MAD compared to automatic baselines." }, { "heading": "4.4 CONTRIBUTION AND PURPOSE SUMMARIZATION", "text": "Contribution Summarization Setup. There is no existing dataset to evaluate contribution summarization of scientific papers, bringing challenges to our evaluation. However, researchers often summarize the bullet contributions of their paper in the Introduction section, which inspires us to extract such contribution claims as the reference summary. Therefore, we resort to the entire arXiv database,2 and download all the papers whose first submission time is within the first six months of 2019,3 which gives us 67K papers. We extract the Introduction section and bullet contributions with regular expressions and filter out the ones that fail. The contributions are used as the reference, and the Introduction section after removing the contribution claims is used as the source article – we aim to predict contributions from the rest of the introduction section. This procedure leads to 1018 test examples. We test the model trained on arXiv.
Purpose Summarization Setup. To collect a test dataset that features one-sentence invention purpose summaries, we sample 1000 test examples from BIGPATENT and present their reference summaries to human annotators from Amazon Mechanical Turk. For each example we ask one annotator to select the sentence that conveys the purpose of the invention. We also provide annotators the option to indicate that the invention purpose cannot be identified. After filtering out the invalid examples, we collect 763 examples as our test data.
2We do not use the arXiv test set because we can only extract 20 valid test points from it. The entire arXiv database is at: https://www.kaggle.com/Cornell-University/arxiv
3The arXiv dataset used to train CTRLsum is collected before April 2018 according to their paper submission time, thus there should be no data overlap between the training data and our contribution test data.
Results. Table 6 shows results of contribution summarization on scientific papers and invention purpose summarization on patent filings. Through using the prompt text as both the decoder prefix and keywords, CTRLsum outperforms the BART baseline in most cases. We further report the precision (P) and recall (R) scores in BERTScore besides F1. We observe that the BART baseline tends to over-generate a full summary with low precision scores, while CTRLsum is able to focus on keywords-related content." }, { "heading": "4.5 QUESTION-GUIDED SUMMARIZATION", "text": "Setup. We directly test question-guided summarization on reading comprehension benchmarks in a zero-shot setting. Specifically, we evaluate the CNNDM summarization models on in-domain NewsQA (Trischler et al., 2017) and out-of-domain SQuAD 1.1 (Rajpurkar et al., 2016) respectively. We note that some NewsQA test articles are present in the CNNDM summarization training dataset, yet we think it is still a reasonable unsupervised setting since our model never sees questions or answers during training. In addition to comparing with the vanilla BART model, we also include the zero-shot performance from GPT2 language models (Radford et al., 2019) (without fine-tuning) as a reference point.
We omit the largest GPT2 model with 1.5B parameters since it cannot be evaluated on our single-GPU device due to memory limits. We report F1 scores on the two benchmarks.
Results. BART is pretrained with a denoising task to predict the denoised version of the source, and performs poorly on zero-shot reading comprehension out of the box, as shown in Table 5. Interestingly, however, BART fine-tuned on a summarization task – without seeing any question-answer pairs in the training data – is able to improve the F1 scores by 24.4 and 25.9 points on NewsQA and SQuAD respectively. Moreover, CTRLsum equipped with question keywords is able to further boost the performance by 15.6 and 17.9 points, approaching the supervised MatchLSTM (Wang & Jiang, 2017) score on NewsQA. Such results suggest that summarization might be a suitable transfer task for abstractive reading comprehension, which we leave for future work to explore." }, { "heading": "4.6 AUTOMATIC SUMMARIZATION", "text": "Table 7 shows the uncontrolled summarization performance without any user input, where our method uses the automatically extracted keywords as described in §2.2. On the CNNDM and arXiv datasets, CTRLsum outperforms the strong BART and PEGASUS baselines by a large margin, leading to new state-of-the-art performance on CNNDM. It also performs comparably to the BART baseline on BIGPATENT in terms of BERTScore, though with an inferior ROUGE-2 score. Yet there is a big performance gap between BART-based models and PEGASUS on BIGPATENT. The reasons might be different dataset processing,4 a sub-optimal learning schedule, or inherent differences between BART and PEGASUS." }, { "heading": "4.7 HUMAN EVALUATION", "text": "In this section we present human evaluation results for both controlled and uncontrolled summarization. Full experiment details can be found in Appendix A.2.
Controlled Summarization. We present further human evaluation results to evaluate “control” directly by informing annotators of the intended control signal. We conduct experiments on entity and purpose control. Specifically, we inform the annotators of our intent (to obtain summaries focused on a specific entity or on the purpose of a patent), then we ask them to provide scores on a scale of 1-5 over two dimensions: (1) Control Accuracy (CA): whether the summary contains accurate main information with respect to the intent, and (2) Control Relevance (CR): how relevant the summary is to the control intent overall – a summary that contains redundant contents that are unrelated to the intent will be penalized. Results including significance tests are shown in Table 8. The control accuracy for important entity control and purpose control is comparable between BART and CTRLsum, without significant difference (p-value > 0.05), while CTRLsum shows significantly better control relevance overall by focusing on the desired information. Also, the unconstrained BART is unable to generate unimportant-entity-related summaries and thus suffers from poor scores on both dimensions.
Uncontrolled Summarization.
We follow (Grusky et al., 2018; Fabbri et al., 2020) to ask human annotators from Amazon Mechanical Turk to score summaries (scale 1-5) over four dimensions: (1) Factual Consistency (FAC): the summary should only contain statements that can be entailed by the source document, (2) Relevance (REL): the summary should only contain important information of the source document, (3) Fluency (FLU): each sentence in the summary should be fluent, and (4) Coherence (COH): the summary should be well-structured and well-organized. Results including significance tests are presented in Table 9. The quality of summaries from all systems on all dimensions is generally good, with scores mostly higher than 4.0. However, most scores do not show a significant difference from CTRLsum (Automatic Keyword), with large p-values, despite their very different similarity to the reference summaries in terms of ROUGE/BERTScore (e.g. CTRLsum with oracle keywords). This implies that the summary quality from different systems powered by strong pretrained models like BART has become difficult for non-expert MTurkers to distinguish clearly. We also note that non-expert human judgement for summarization may be unreliable and exhibit poor correlation with expert judgement (Gillick & Liu, 2010; Fabbri et al., 2020)." }, { "heading": "5 CONCLUSION", "text": "In this paper we propose a generic framework to perform multi-aspect controllable summarization. The model is conditioned on keywords to predict summaries during training. At inference time the control tokens, in the form of keywords or prompts, enable users to interact with models in a very flexible way. Experiments on five different control aspects demonstrate the efficacy of our method.
4PEGASUS updated the BIGPATENT data to preserve casing and applied some format cleaning." }, { "heading": "A EXPERIMENTAL SETUP DETAILS", "text": "A.1 GENERAL SETUP
In this section we include additional experimental details left out in the main content due to space limitations. We fine-tune the pretrained BART-large model in all our experiments. Specifically, we use the bart.large checkpoint from fairseq (Ott et al., 2019). For all BART-based summarization models, we fine-tune with learning rate 3e-5 and a polynomial learning rate decay schedule; the optimizer is Adam (Kingma & Ba, 2015) and the batch size is 64. Our optimization scheme and hyperparameters follow the BART fine-tuning instructions in the fairseq examples. We train the summarization models for 20k steps on CNNDM, 50k steps on arXiv, and 300k steps on BIGPATENT. We train the BERT tagger with learning rate 5e-5, the Adam optimizer, and a batch size of 128 on all datasets. Similar to the summarization models, the tagger is trained for 20k, 50k, and 300k steps on CNNDM, arXiv, and BIGPATENT respectively. Also, we adopt a sliding window approach so that the BERT-based tagger is able to handle sequences that are longer than 512 tokens. For both ROUGE and BERTScore evaluation, we report the F1 measure. We report the rescaled BERTScore, and the hash code is roberta-large_L17_no-idf_version=0.3.6(hug_trans=3.0.2)-rescaled.
As mentioned in §2.2, we need three hyperparameters for automatic keyword extraction during inference – the number of pre-selected sentences n_s, the selection probability threshold ε, and the maximum number of keywords m_max. We select these hyperparameters for each dataset based on the uncontrolled summarization ROUGE-2 score on the validation dataset.
The summarization performance is robust to these hyperparameters in a reasonable range, as shown in Appendix D. Specifically, we use {n_s = 10, ε = 0.25, m_max = 30} for CNNDM, {n_s = 10, ε = 0.15, m_max = 40} for arXiv, and {n_s = 5, ε = 0.15, m_max = 30} for BIGPATENT.
Invention Purpose Summarization. In the experiment on summarizing invention purpose on patent articles (§4.4), we examined whether the model would simply copy source sentences by matching the prompts: we searched for strings of the form “the purpose of [some words or phrases] is” among the 763 test examples, and only 3 test articles were identified. This means the model is not generating by exactly matching prompts most of the time.
A.2 HUMAN EVALUATION SETUP
Here we include details about the human evaluation experiments in §4.7.
Controlled Summarization. For controlled summarization, we sample 100 examples for each task, and summaries of each example from all systems are presented together to the human annotator to be scored. For CNNDM we provide the article and summaries, while for BIGPATENT we provide the reference and summaries, using the reference summary as a surrogate for the source article. This is because the source patent documents are very long and hard for non-expert humans to read. We did not evaluate contribution summarization since it is unrealistic to ask humans to judge contributions of many scientific papers from various domains. We tried to hire workers from Amazon Mechanical Turk first, but we failed to obtain reliable results from them – they often ignored the given user intent and tended to score the text as uncontrolled summaries (reflected by very poor scores on unimportant-entity summaries, because these summaries do not contain the main information of the article), even though we instructed them that the control signal is critical. Therefore, we asked two independent human annotators, recruited through personal correspondence of the authors of this paper. One of the annotators is a PhD researcher in physics, and the other is a law graduate specializing in intellectual property in the United States. They are able to follow the given control intent and are considered more reliable than the MTurkers. We take the average of the two annotators' scores for each example, and average over all examples to obtain the final score.
Uncontrolled Summarization. For uncontrolled summarization, we sample 100 examples for each dataset, and hire 3 independent workers from Amazon Mechanical Turk to conduct evaluation. For CNNDM we provide the article and summaries, while for arXiv and BIGPATENT we provide the reference and summaries, using the reference summary as a surrogate for the source article. This is because the source patent documents or scientific papers are very long and hard for non-expert humans to read. Summaries of each example from all systems are presented together to the human annotator to be scored. The median score of the 3 workers is taken for each example, and the average over all examples is reported." }, { "heading": "B ABLATION ANALYSIS OF ENTITY CONTROL", "text": "In Table 3 we observe that CTRLsum achieves a very high success rate (∼ 95%) of entity control, compared to previous work (Fan et al., 2018), which can only succeed 61.2% and 33.8% of the time on lead-3 and full-article entities respectively. We perform ablation analysis to understand the important ingredients that contribute to the success of CTRLsum.
We train CTRLsum with two additional architectures besides BART: (1) convolutional seq2seq (Gehring et al., 2017) with the same hyperparameters as in (Fan et al., 2018), and (2) transformer seq2seq with the same hyperparameters as the base model in (Vaswani et al., 2017). Note that the transformer model is trained from scratch without pretraining. Results are shown in Table 10. CTRLsum parameterized with the weaker convolutional seq2seq architecture fails to depend on the keywords well, with an over 40-point drop in success rate, yet the success rate of transformer seq2seq without pretraining only drops around 5 points. This implies that the transformer seq2seq architecture is critical for CTRLsum to depend on the keywords well, while pretraining further improves it.5" }, { "heading": "C ABLATION ANALYSIS ON KEYWORDS AND PROMPTS", "text": "In the control aspects we studied, CTRLsum uses control tokens either as keywords alone (entity and length), or as keywords and prompts together (contribution, purpose, QA). Here we present further results when control tokens are used as prompts, keywords, or both for the entity control, contribution control, and NewsQA tasks. Specifically, for entity control we use the control tokens “a summary focused on [entity] is:” for the “prompt” and “prompt + keyword” variants.6 In this case the success rate is computed excluding the prompt text. The control tokens for the other settings are the same as in previous experiments. Results are shown in Table 11, where keywords and prompts are of different importance for different tasks and are complementary in general. For example, using prompts to control entities turns out to be difficult, with a very low success rate – we find that the system fails to understand the prompt and to produce appropriate summaries in most cases. However, prompts contribute the most to contribution summarization, achieving performance comparable to using prompts and keywords together, while removing prompts and using keywords alone suffers a drastic performance drop in triggering the contribution. For the NewsQA task, prompts and keywords demonstrate mixed effectiveness – using either of them alone loses over 20 F1 points compared to using them together.\n5For reference, the ROUGE-1/2/L scores (with automatic keywords) of CTRLsum (Conv Seq2Seq) are 41.19/18.71/38.05, while CTRLsum (Transformer Seq2Seq) obtained 43.69/20.78/40.55.\n6We tried several prompt variants, for example, QA-style ones such as “Q: What happened to [entity]? A:” or “Q: What do we know about [entity]? A:”. None of them led to meaningful entity control." }, { "heading": "D ROBUSTNESS ANALYSIS OF KEYWORDS EXTRACTION HYPERPARAMETERS", "text": "Table 12 shows the ROUGE-2 scores of uncontrolled summarization on the validation set with different keyword extraction hyperparameters. We use a more fine-grained stride size when sweeping the mmax hyperparameter for CNNDM, since its source articles are usually shorter than those in arXiv and BIGPATENT. As observed, the automatic summarization performance is relatively robust to these hyperparameters in a reasonable range."
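Mechanically, the keyword and prompt variants compared in Appendix C differ in where the control tokens enter the model: keywords are prepended to the encoder-side source sequence, while prompts force a prefix of the decoder-side output. The sketch below illustrates this split with a HuggingFace-style seq2seq API; the "=>" separator string and the omission of special-token handling (e.g. the decoder start token) are simplifying assumptions, not the released implementation.

from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

def controlled_generate(document, keywords=None, prompt=None):
    # Keywords condition the encoder: prepend them to the source text
    # (the "=>" separator is an illustrative assumption).
    source = document if not keywords else " ".join(keywords) + " => " + document
    input_ids = tokenizer(source, return_tensors="pt",
                          truncation=True).input_ids

    gen_kwargs = {"num_beams": 4, "max_length": 140}
    if prompt:
        # Prompts constrain the decoder: force the output to start with
        # the prompt tokens and let the model continue from there.
        # (A full implementation would also handle the decoder start
        # token; it is omitted here for brevity.)
        gen_kwargs["decoder_input_ids"] = tokenizer(
            prompt, return_tensors="pt", add_special_tokens=False
        ).input_ids

    output_ids = model.generate(input_ids, **gen_kwargs)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

For instance, a purpose-style prompt such as “the purpose of the present invention is” could be passed both as keywords and as the decoder prefix, mirroring the “keyword + prompt” setting in Table 11.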
}, { "heading": "E RANDOM OUTPUT EXAMPLES", "text": "In this section, we randomly sample test examples and show the source aticle, reference summary, and the model output from CTRLsum for each control aspect.\nE.1 ENTITY CONTROL\nFor entity control, we randomly sample 3 articles from CNNDM and for each article we randomly select 5 entites as keywords to show the model output.\nTable 13: Random Entity Control Examples\nArticle\nAmericans on the United States’ no-fly list will now be privy to information about why they have been banned from commercial flights and be given the opportunity to dispute their status, according to court documents filed by the Justice Department this week. The revised policy comes in response to a June ruling by a federal judge that said the old process was in violation of the Fifth Amendment’s guarantee of due process. The decision was part of an American Civil Liberties Union lawsuit brought on behalf of 13 Americans on the list. But the ACLU isn’t satisfied with the government’s new policy, outlined in documents filed Monday in federal courts in Oregon (PDF) and Virginia (PDF). \"After years of fighting in court for complete secrecy and losing, it’s good that the government is finally now going to tell people of their status on the No Fly List,\" said Hina Shamsi, director of the ACLU National Security Project and the lead attorney on the case, in a statement. \"Unfortunately, we’ve found that the government’s new redress process falls far short of constitutional requirements because it denies our clients meaningful notice, evidence, and a hearing. The government had an opportunity to come up with a fair process but failed, so we’re challenging it in court again.\" People on the no-fly list, managed by the FBI’s Terrorist Screening Center, are prohibited from boarding a commercial flight for travel into or out of the United States. The number of people on the list is classified. An official with knowledge of the government’s figures told CNN in 2012 that the list contained about 21,000 names, including about 500 Americans. Before the change, American citizens and permanent residents who inquired with the government about being denied aircraft boarding received a letter that neither confirmed nor denied their inclusion on the no-fly list. Now, they’ll be made aware of their status if they apply for redress, with an option to request further information. \"The U.S. government is making enhancements to the Department of Homeland Security Traveler Redress Inquiry Program (DHS TRIP) to provide additional transparency and process for U.S. citizens and lawful permanent residents who have been denied boarding on a commercial aircraft because they are on the No Fly List,\" the Department of Homeland Security said in a statement. In cases in which travelers included on the list request to receive or submit more information about their status, the government will provide a second, more detailed response, identifying \"specific criterion under which the individual has been placed on the No Fly List,\" according to the court documents. An unclassified summary of that information will be provided \"to the extent feasible, consistent with the national security and law enforcement interests at stake,\" court papers said. Those who appear on the no-fly list will then have further opportunity to dispute their status in writing, with supporting materials or exhibits, and will receive a final written decision from the Transportation Security Administration. 
The 2014 ruling that prompted the policy changes had called for passengers on the list to be given the opportunity to dispute their status before a judge.\nReference Summary Americans on the no-fly list will now get info about why they’ve been banned from flights. ACLU says the policy still denies “meaningful notice, evidence, and a hearing”\nCTRLsum\nthe Fifth Amendment’s: The new policy is in response to a June ruling that said the old process was in violation of the Fifth Amendment’s guarantee of due process.\nCivil Liberties Union: The American Civil Liberties Union says it’s not satisfied with the government’s new policy.\nthe Transportation Security Administration: The Transportation Security Administration will provide travelers with more information about their status. A June ruling had called for passengers on the list to be given the opportunity to dispute their status before a judge.\nVirginia: The ACLU is challenging the new policy in federal courts in Oregon and Virginia.\nFBI: People on the no-fly list, managed by the FBI’s Terrorist Screening Center, are prohibited from boarding commercial flights. The ACLU isn’t satisfied with the government’s new policy.\nTable 14: Random Entity Control Examples\nArticle\nPoint guard Stephen Curry nearly single-handedly outscored New Orleans with 11 first-quarter points as the Warriors built a 15-point lead and rolled to victory in Game One of their Western Conference first-round series. Game Two in the best-of-seven series is scheduled for Monday night in Oakland. Golden State, the top seed in the West, picked up right where it left off in the regular season, recording a 19th straight home win and 40th in 42 games this year. Stephen Curry scored a stunning 34 points for the Golden State Warriors in there play-off game. The Warriors did it by taking a 25-point lead into the final minute of the third quarter, then holding on. ’We missed a lot of free throws, which made it a lot closer than it needed to be,’ coach Steve Kerr said. ’But in the playoffs you’ve just got to get it done somehow. We’re up 1-0. That’s where we want to be.’ Curry led the Warriors with 34 points, hitting 13 of 25 shots and four three-pointers. All five Golden State starters scored in double figures. Guard Klay Thompson complemented Curry with 21 points, while power forward Draymond Green (15 points, 12 rebounds) and center Andrew Bogut (12 points, 14 rebounds) recorded double-doubles. The point guard has been in spectacular form as he looks to lead the Warriors to the NBA glory. Curry celebrates after scoring a three-pointer on his way to scoring 11 first quarter points. New Orleans power forward Anthony Davis scored a game-high 35 points, 20 in the fourth quarter. He also blocked four shots and grabbed seven rebounds. Small forward Quincy Pondexter, assigned to cover Curry most of the game, added 20 points for the Pelicans. . Visiting Washington outscored Toronto 11-4 in the overtime in winning the opener of their first-round playoff series. Washington traveled to Toronto and came away with the victory with a confident overtime performance. Forward Paul Pierce started the overtime with a three-pointer and scored 20 points to lead the Wizards, who let a 15-point lead slip away in the fourth quarter before winning. Guard Bradley Beal added 16 points for Washington and forward Nene contributed 12 points and 13 rebounds. Forward Amir Johnson came off the bench to lead the Raptors with 18 points and guard DeMar DeRozan added 15 points. . 
Paul Pierce (left) top-scored in the clash with 20 points as the Wizards secured an important play-off win.\nReference Summary Stephen Curry scored 34 points for Golden State against New Orleans. The Californian-based team defeated the Pelicans 106-99. Washington Wizards outscored the Toronto Raptors 11-4 in overtime. Paul Pierce led the scoring with 20 points for the Wizards .\nCTRLsum\nStephen Curry: Stephen Curry scored 34 points as Golden State Warriors beat New Orleans Pelicans. Washington Wizards beat Toronto Raptors in overtime in play-off game.\nOakland: Stephen Curry scored 34 points for the Golden State Warriors. Game Two in the best-ofseven series is scheduled for Monday night in Oakland.\nSteve Kerr: Stephen Curry scored 34 points as the Warriors beat the Pelicans. Coach Steve Kerr said: ‘We missed a lot of free throws, which made it a lot closer than it needed to be’\nKlay Thompson: Stephen Curry scored 34 points as the Warriors beat the Pelicans. Klay Thompson and Draymond Green also scored in double figures .\nAndrew Bogut: Stephen Curry scored 34 points as the Warriors beat the Pelicans. Draymond Green and Andrew Bogut both recorded double-doubles.\nTable 15: Random Entity Control Examples\nArticle\nIt’s the ultimate treat for Benedict Cumberbatch fans and stands an imposing 6ft tall - just like the man himself. But shoppers at London’s Westfield Stratford City shopping centre looked more than a little surprised to discover a chocolate sculpture of Benedict Cumberbatch in their midst. One lady was spotted cautiously approaching the edible artwork before quickly backing off, while another couldn’t quite hide their smile of surprise. Scroll down for video . Finishing touches: The sculpture is readied for its big unveiling at Westfield Stratford City shopping centre. Oh dear: Reaction to the sculpture was mixed, with some shoppers bursting into laughter. Even less impressed was the shopper who stood stony-faced in front of the creation for several moments, while another burst into laughter as soon as she spotted it. It did, however, prove an immediate hit with a pair of police sniffer dogs who wagged their tails as they gave it a thorough sniffing down. . The artwork, which has been given pride of place in the shopping mall’s atrium, was commissioned by UKTV to mark celebrate its screening of the third series of Sherlock. It took a crew of eight people to complete the sculpture, which took over 250 man hours to create and weighs 40kg . Does it look like me? Benedict Cumberbatch strikes a pose with James Corden during an Oscars party. Mixed reaction: A pair of police sniffer dogs loved the sculpture but shoppers looked baffled. Hilarious: A lady bursts into laughter after spotting the 6ft homage to Mr Cumberbatch. Not amused: A shopper looks thoroughly unimpressed as she contemplates the artwork. Luckily for Cumberbatch, who usually enjoys a considerably more complimentary response to projects he’s involved in, the piece will only be in residence temporarily. The 38-year-old actor, who is currently expecting his first child with wife Sophie Hunter, 37, isn’t the only famous face to have found himself the subject of an edible artwork. . In the run up to the release of 50 Shades of Grey, bakers created not one but two 6ft gateaux paying homage to Jamie Dornan. One depicted the actor in the grey suit beloved of his 50 Shades character Christian Grey, while the other showed him topless and came complete with an edible six-pack. 
Award-winning: Both Jennifer Lawrence and her cake alter-ego have won awards. Homage: The cake, which triumphed at a show last November, was inspired by the Hunger Games . Actress Jennifer Lawrence has also been immortalised in cake, with baker Lara Clarke creating a sweet treat designed to resemble the 24-year-old’s Hunger Games alter-ego, Katniss Everdeen. The confection, which was baked ahead of the release of Mockingjay Part One in November, met with the approval of Lawrence herself, who, when asked about it, said Ms Clarke was ’incredibly talented’. Other A-listers to get the culinary treatment include Kevin Bacon, whose likeness was made from bacon, and Taylor Swift who featured on a cake. The Duke and Duchess of Cambridge were turned into a tasty-looking pizza, while President Barack Obama had his likeness sculpted from butter. Immortalised: The Duke and Duchess of Cambridge have been turned into pizza in the past. Tribute: Kevin Bacon has a bacon sculpture to his name while Jamie Dornan has been made into a cake.\nReference Summary A 6ft chocolate sculpture of Benedict Cumberbatch has been unveiled. Toothsome statue has been placed inside a London shopping centre. But shoppers reactions to the creations were decidedly unenthusiastic. One woman glared at it while others just looked thoroughly baffled . It did manage to win the approval of pair of police sniffer dogs. It weighs 40kg and took eight people 250 man hours to create . Other celebrities to get culinary tributes include Jennifer Lawrence. Her 6ft cake won an award - and the 24-year-old’s approval. Actor Kevin Bacon has also been immortalised - in bacon .\nCTRLsum\nWestfield: The sculpture was unveiled at London’s Westfield Stratford City shopping centre.\nJames Corden: Benedict Cumberbatch is not the only famous face to be immortalised in chocolate. Jennifer Lawrence and James Corden have also been turned into cakes .\nSophie Hunter: Actor, 38, is expecting his first child with wife Sophie Hunter.\nJamie Dornan: Shoppers at London’s Westfield Stratford City shopping centre looked baffled by the sculpture. Other famous faces to be immortalised in edible art include Jamie Dornan and Jennifer Lawrence.\nHunger Games: The sculpture was commissioned by UKTV to mark the screening of Sherlock. It follows in the footsteps of other A-listers such as Jamie Dornan and Jennifer Lawrence, who have been immortalised in cakes. Lawrence’s Hunger Games cake won an award at a show last year .\nE.2 PAIRED ENTITY CONTROL\nThe entity control experiments in this paper only consider one entity as the control signal; here we show examples inputting paired entities. Specifically, we are interested in the output when one of the paired entities is important and the other is unimportant. Therefore, we sample 3 articles from CNNDM and for each article we randomly select one important entity and one unimportant entity as paired keywords to show the model output. We repeat this sampling five times for each article to obtain five different summaries.\nTable 16: Random Paired Entity Control Examples\nArticle\nA former U.S. Army enlistee who posted on Facebook about \"the adrenaline rush\" of dying in jihad was arrested Friday and charged with trying to detonate a car bomb at Fort Riley military base in Kansas, authorities said. A second man, who allegedly knew about the bomb plot but didn’t call authorities, was charged with failing to report a felony. John T. Booker Jr.
of Topeka, an American citizen also known as Mohammed Abdullah Hassan, was taken into custody near Manhattan, Kansas, in a van that contained what he thought was a bomb, the criminal complaint said. The \"bomb\" had actually been put together by two confidential informants with nonexplosive materials, the complaint said. Fort Riley’s security was never breached and no people were in danger, the U.S. Justice Department said in a press release. Booker enlisted in the Army last year and was due to ship out to basic training April 7, 2014, said Army spokesman Wayne Hall. The criminal complaint said the FBI questioned him March 24, 2014 about comments posted on Facebook, such as, \"Getting ready to be killed in jihad is a HUGE adrenaline rush. I am so nervous. NOT because I’m scare to die but I am eager to meet my lord.\" Booker waived his Miranda rights and told the agents he enlisted to commit an insider attack against American soldiers like Maj. Nidal Hassan had done at Fort Hood, Texas, the complaint said. Hassan opened fire in a building in November 2009, killing 13 people and wounding more than 30. His enlistment was terminated March 24, 2014, at the request of Army Criminal Investigation Command, Hall said. Booker began communicating with a confidential informant later in 2014, the complaint said, and often talked about his plans to engage in violent jihad in support of ISIS. He and the informant watched ISIS videos together, the complaint said, and Booker talked about how he wanted to go to Iraq and turn his weapon on American soldiers when ordered to shoot the enemy. On March 9, Booker said he believed ISIS wanted him to commit a truck bombing in the United States and thought a good target would be nearby Fort Riley, a large Army base that’s home to the 1st Infantry Division, known as \"The Big Red One.\" Booker said \"that detonating a suicide bomb is his No. 1 aspiration because he couldn’t be captured, all evidence would be destroyed and he would be guaranteed to hit his target,\" the criminal complaint said. He made a video with a Fort Riley airfield in the background and said ISIS was coming to kill American soldiers, both abroad and in the United States, the complaint said. Booker acquired components for a bomb and rented a storage locker to store the components, the complaint said. The plan was for confidential informants to build a bomb and for Booker to drive to Fort Riley and detonate it, the complaint said. But the bomb was built with \"inert\" parts and would never explode, the complaint said. On Friday, the informants and Booker drove to what Booker thought was a little-used utility gate near Fort Riley, the complaint said. While Booker was making final connections on the \"bomb,\" the FBI arrested him, the complaint said. He was charged with one count of attempting to use a weapon of mass destruction, one count of attempting to damage property by means of an explosive and one count of attempting to provide material support to the Islamic State of Iraq, a designated foreign terrorist organization. If convicted, he could face life in prison. Alexander E. Blair, 28, of Topeka was taken into custody Friday and charged with failing to report a felony. The FBI said agents interviewed Blair after Booker’s arrest. Blair said he shared some of Booker’s views, knew of his plans to detonate a vehicle bomb at Fort Riley and loaned him money to rent storage space, according to the FBI’s criminal complaint. 
He said he thought Booker would carry out his plan but did not contact authorities, the complaint said. If convicted, Blair faces a maximum of three years in prison.\nReference Summary Alexander Blair, 28, of Topeka accused of knowing about bomb plot but not contacting authorities. Fort Riley’s security was never breached and the device was \"inert\" and not a threat, authorities say. John T. Booker Jr., 20, of Topeka had acquired bomb parts and made a propaganda video, the Justice Department says.\nCTRLsum\nFort Riley ISIS: John T. Booker Jr. is charged with trying to detonate a car bomb at Fort Riley. A second man, who allegedly knew about the bomb plot but didn’t call authorities, is charged. Booker said he believed ISIS wanted him to commit a truck bombing in the United States.\nJohn T. Booker Jr. FBI: John T. Booker Jr. enlisted in the Army last year and was due to ship out to basic training April 7, 2014. The FBI questioned him March 24, 2014 about comments posted on Facebook.\nJohn T. Booker Jr. Infantry Division: John T. Booker Jr. enlisted in the Army last year and was due to ship out to basic training April 7, 2014. He planned to detonate a car bomb at Fort Riley, a large Army base that’s home to the 1st Infantry Division.\nJustice Department Islamic State of Iraq: The Justice Department says Fort Riley’s security was never breached and no people were in danger. John T. Booker Jr. is charged with attempting to use a weapon of mass destruction. He is also charged with trying to provide material support to the Islamic State of Iraq.\nAlexander Blair ISIS: John T. Booker Jr., 28, charged with trying to detonate a car bomb at Fort Riley. Alexander E. Blair, 28, is charged with failing to report a felony. Booker said he wanted to commit an insider attack against American soldiers in support of ISIS.\nTable 17: Random Paired Entity Control Examples\nArticle\nAnder Herrera has caught the eye in recent weeks after cementing a spot in Manchester United’s starting line-up but it appears he does not actually have a clear sight at goal. Herrera netted a brace against Aston Villa on Saturday afternoon without actually looking at Brad Guzan’s goal as his eyes appeared to be shut when making contact with the ball. In fact, six of Herrera’s seven goals have been scored without him even having to glimpse at either the ball or the opposition’s net. . Manchester United star Ander Herrera scores his side’s opening goal against Aston Villa with his eyes shut. The Spanish midfielder appears to have his eyes closed as he strikes at Brad Guzan’s goal . Aston Villa and England midfielder Fabian Delph attempts to block Herrera’s left-footed shot. His eyes were wide open when he struck an impressive first-time shot against Yeovil in the third round of the FA Cup back in January. . However his double against Aston Villa and his goals against Queens Park Rangers, Leicester, Preston and Swansea all came without Manchester United’s summer signing having to make eye contact with the ball. Herrera appears to have a history of shooting with his eyes closed as the image of his goal for Spain’s Under 20 side back in 2009 shows. Herrera hits the back of the net while representing Spain’s Under 20 side against Tahiti . The former Athletic Bilbao joined the Red Devils for £29million in the summer due to his vision and creativity in midfield. . 
Louis van Gaal heaped praise on the 25-year-old’s shooting ability after Saturday’s Premier League encounter at Old Trafford, by stating: ’He has a very good kicking technique and he should be more composed,’ Van Gaal said to MUTV. ’I said to him ’you have to control the ball before you shoot’. ’I said that to him again in yesterday’s training session, and then I kissed him at half-time because he had controlled the ball for the first time in his life and then he shot.’ . Herrera’s eyes were also shut for Manchester United’s third goal against Aston Villa on Saturday. His only Manchester United goal scored with his eyes wide open came against Yeovil Town back in January . Herrera scored in Manchester United’s 4-0 win over Queens Park Rangers without looking at the ball. The former Athletic Bilbao man flicked the ball past Preston’s players without having his eyes on the ball . Herrera scored in Manchester United’s 2-1 defeat by Swansea City at the Liberty Stadium in February. He also netted in Manchester United’s shock 5-3 defeat at the hands of Premier League strugglers Leicester.\nReference Summary Ander Herrera has scored seven goals for Man United since joining in June. Herrera’s eyes have been shut when striking the ball for six of his goals. His superb strike against Yeovil Town has been only goal with eyes open. Herrera netted a brace in Manchester United’s 3-1 win over Aston Villa .\nCTRLsum\nAnder Herrera Van Gaal: Ander Herrera has scored six of his seven goals without looking at the ball . Louis van Gaal has praised the Spanish midfielder’s shooting ability .\nYeovil Town Swansea City: Ander Herrera scored his side’s opening goal against Aston Villa with his eyes shut. The 25-year-old has also scored without looking at either the ball or the opposition’s net in games against QPR, Leicester City, Swansea and Yeovil Town .\nManchester United Queens Park Rangers: Ander Herrera appeared to have his eyes closed as he scored against Aston Villa on Saturday. The 25-year-old has scored six of his seven goals without looking at the ball. Herrera also netted with his eyes open in Manchester United’s 4-0 win over Queens Park Rangers .\nAston Villa MUTV: Ander Herrera scored a brace against Aston Villa on Saturday. But the Spanish midfielder appeared to have his eyes closed . Louis van Gaal told MUTV that Herrera should be ’more composed’ .\nHerrera Athletic Bilbao: Ander Herrera has scored six of his seven goals with his eyes closed. The former Athletic Bilbao man joined Manchester United for £29million in the summer .\nTable 18: Random Paired Entity Control Examples\nArticle\nA husband accused of trying to murder his cheating wife was ready to let her have sex with another man once a month as long as she stayed with him, his trial heard yesterday. A close friend told how Joseph O’Riordan, 74, had confided in him about the extraordinary plan for his 47-year-old wife Mandy. O’Riordan, a councillor and former nightclub owner, stabbed her eight times in a jealous rage after finding out she had been having an affair with a postman. . Extraordinary deal: Joseph O’Riordan stabbed his wife of ten years Amanda (left) with a seven inch kitchen knife eight times - yesterday Brighton Crown Court heard he was considering allowing her to have affairs. She suffered life-threatening injuries after being knifed in the torso, chest, arms and back. The jury was also shown dramatic footage of the moment police arrived at the couple’s home to be greeted by a ‘calm’ O’Riordan opening the door. 
The revelation of his proposal for keeping his wife of ten years came from Alfred Harris. He told how O’Riordan had confided five days before the attack that he believed she was having an affair. O’Riordan was ‘choked up and emotional’ when he said: ‘I think Amanda is playing away. She’s getting her nails and hair done more regularly, she’s been on a diet and doesn’t want sex.’. Asking for a suit: O’Riordan sent his wife this letter from his prison cell. The following day, added Mr Harris, the men met for a pub lunch in O’Riordan’s home village of Polegate, East Sussex. ‘I saw Joe and he told me that Amanda had been seeing someone else – a guy who drove a van. Joe said he loved Amanda to bits and if she wanted to have sex with someone else once a month that would be okay as long as she stayed with him.’. In a statement read to Brighton Crown Court, Mr Harris also described the couple as ‘loving and close’. . He was ‘so shocked’ to learn that O’Riordan had attacked his wife at their flat on a residential care home estate. The jury saw images of four police officers, one of whom was wearing a lapel camera, arriving shortly before 10pm last October 22 after racing to the scene. . PC Dave Catt said they drew their ‘incapacitating’ sprays fearing the knifeman would be still holding his weapon. They were greeted by O’Riordan wearing a blood-spattered light blue shirt and holding a cordless phone on which he had phoned for an ambulance. Mr Catt said O’Riordan admitted: ‘I found out that she was having an affair and I lost it.’. Mrs O’Riordan was moaning and lying on a bed, holding a towel to her stomach with a deep chest wound and serious wounds to her hand and back. Paramedics arrived moments later and took her to hospital. Jurors looked at two screens as images of her husband’s arrest and subsequent detention at a police station were shown. Growing suspicion: Giving evidence yesterday Alfred Harris – a friend of the couple for more than six years – told how O’Riordan had confided in him that he believed his wife was having an affair. Yesterday, jurors at Brighton Crown Court (above) were shown dramatic footage of the moment police arrived at the couple’s home to be greeted by a ‘calm’ Mr O’Riordan opening the door. PC Stuart Kenway told how, as O’Riordan had opened the door, he ‘appeared calm and composed and the situation was surreal’ as he then said: ‘She is in the bedroom – do you want the knife?’ Officers were directed to a 7in blade with a black handle which was in the kitchen. Dr Stephen Drage, an intensive care consultant with Brighton and Sussex University Hospitals, told the jury how seriously Mrs O’Riordan was hurt. ‘It is quite clear she was bleeding to death,’ he said. ‘She underwent life-saving surgery which took six hours.’. O’Riordan denies attempted murder. The trial continues. . . Sorry we are not currently accepting comments on this article.\nReference Summary Joseph O’Riordan, 73, stabbed wife eight times after discovering her affair. She was left with life-threatening injuries to her torso, chest, arms and back. Yesterday Brighton Crown Court heard about deal he was ready to offer her. He had told friend about the idea while in the pub just days before stabbing.\nCTRLsum\nJoseph O’Riordan Alfred Harris: Joseph O’Riordan, 74, is accused of stabbing wife Mandy, 47, eight times. Friend Alfred Harris told how he had told him about the extraordinary plan.\nBrighton Crown Court Stephen Drage: Joseph O’Riordan, 74, accused of stabbing wife Mandy, 47, eight times. 
Brighton Crown Court heard he was considering allowing her to have affairs. Dr Stephen Drage, an intensive care consultant, told jury how she was ’clearly hurt’\nJoseph O’Riordan Catt: Joseph O’Riordan, 74, is accused of stabbing wife Mandy, 47, eight times. PC Dave Catt said he ’lost it’ when he found out about the affair.\nStuart Kenway Joseph O’Riordan: Joseph O’Riordan, 74, is accused of stabbing wife Mandy, 47, eight times. PC Stuart Kenway told how he ’appeared calm and composed’ .\nBrighton Crown Court Amanda: Joseph O’Riordan, 74, is accused of stabbing wife Amanda, 47, eight times. He denies attempted murder at Brighton Crown Court.\nE.3 LENGTH CONTROL\nFor length control, we randomly sample 3 test articles from CNNDM and for each article we generate five different-length summaries.\nTable 19: Random Length Control Examples. Control tokens are bolded.\nArticle\nFilm director David Lynch has confirmed he will no longer direct the revival of \"Twin Peaks\" – a cult 1990s television show that was set to return in 2016. The offbeat TV series, created by Lynch and Mark Frost, featured a quirky FBI agent who went to the Pacific Northwest town of Twin Peaks to investigate the mysterious murder of a high school girl named Laura Palmer. The groundbreaking series is considered one of the most influential shows in television history. Lynch broke the news about his departure in a series of tweets, saying that the show’s third season will continue without him. He said he felt the network was not offering enough money to produce the show \"the way it needed to be done.\" Lynch also wrote that he had personally called the actors over the weekend to let them know he would no longer be directing. Showtime Network, which will air the nine-episode comeback, released a statement saying they were \"saddened\" by Lynch’s decision. \"We were saddened to read David Lynch’s statement today since we believed we were working towards solutions with David and his reps on the few remaining deal points,\" read the statement. \"Showtime also loves the world of Twin Peaks and we continue to hold out hope that we can bring it back in all its glory with both of its extraordinary creators, David Lynch and Mark Frost, at its helm.\" Showtime announced they would produce a third season in October last year. Actor Kyle MacLachlan, who played the coffee-obsessed FBI agent Dale Cooper in the original series, had confirmed he would reprise the lead role for the new season.\nReference Summary David Lynch says he won’t be directing new episodes of Twin Peaks. Showtime “saddened” over decision, which involved a dispute over money.\nCTRLsum\n(Length bucket 0) director David Lynch longer direct revival Twin Peaks return 2016 | season | Showtime: Film director David Lynch will no longer direct the revival of “Twin Peaks” Showtime announced they would produce a third season in October last year. The show was set to return in 2016.\n(Length bucket 1) director David Lynch longer direct revival Twin Peaks set return 2016 | season | Showtime Network saddened: Film director David Lynch will no longer direct the revival of “Twin Peaks” Showtime Network say they are “saddened” by the decision. The show was set to return for a third season in 2016.\n(Length bucket 2) director David Lynch confirmed longer direct revival Twin Peaks set return 2016 | series | season continue | Showtime Network saddened: Film director David Lynch has confirmed he will no longer direct the revival of “Twin Peaks” The series was set to return in 2016. 
Lynch broke the news about his departure in a series of tweets. He said the show’s third season will continue without him. Showtime Network said they were “saddened” by Lynch’s decision.\n(Length bucket 3) director David Lynch confirmed longer direct revival Twin Peaks 1990s set return 2016 | series | season continue | Showtime Network saddened decision: Film director David Lynch has confirmed he will no longer direct the revival of “Twin Peaks” The 1990s TV series was set to return in 2016. Lynch broke the news about his departure in a series of tweets. The show’s third season will continue without him. Showtime Network said they were “saddened” by Lynch’s decision.\n(Length bucket 4) director David Lynch confirmed longer direct revival Twin Peaks cult 1990s set return 2016 | series Mark Frost FBI agent murder | tweets season continue | Showtime Network statement saddened decision | Kyle: Film director David Lynch has confirmed he will no longer direct the revival of \"Twin Peaks\" The cult 1990s TV series, created by Lynch and Mark Frost, was set to return in 2016. Lynch broke the news about his departure in a series of tweets, saying that the show’s third season will continue without him. Showtime Network, which will air the nine-episode comeback, released a statement saying they were “saddened” by Lynch’s decision. Actor Kyle MacLachlan, who played the coffee-obsessed FBI agent Dale Cooper in the original series, had confirmed he would reprise the lead role for the new season.\nTable 20: Random Length Control Examples. Control tokens are bolded.\nArticle\nWashington (CNN)An Iranian military observation aircraft flew within 50 yards of an armed U.S. Navy helicopter over the Persian Gulf this month, sparking concern that top Iranian commanders might not be in full control of local forces, CNN has learned. The incident, which has not been publicly disclosed, troubled U.S. military officials because the unsafe maneuver could have triggered a serious incident. It also surprised U.S. commanders because in recent months Iranian forces have conducted exercises and operations in the region in a professional manner, one U.S. military official told CNN. \"We think this might have been locally ordered,\" the official said. The incident took place as the U.S. and other world powers meet with Iran in Switzerland to negotiate a deal limiting Tehran’s nuclear program. At the same time, Iran has been active in supporting proxies in several hotspots in the Persian Gulf and neighboring regions. The Navy MH-60R armed helicopter was flying from the deck of the USS Carl Vinson on a routine patrol in international airspace, the official said. An unarmed Iranian observation Y-12 aircraft approached. The Iranian aircraft made two passes at the helicopter, coming within 50 yards, before the helicopter moved off, according to the official. The official said the helicopter deliberately broke off and flew away in a ’predictable’ manner so the Iranians could not misinterpret any U.S. intentions. The Navy helicopter was in radio contact with the ship during the encounter, but there was no contact between the two aircraft and no shots were fired. The Navy crew took photos of the incident but the military is not releasing them. The U.S. administration is considering a potential demarche protest against Iran, the official said. CNN has reached out to Iranian officials but has not received a response. This type of Iranian observation aircraft generally operates over the Gulf several times a month.
But after the recent incident, U.S. naval intelligence did not see it again for two weeks, leading to the conclusion that the incident may have been ordered by a local commander who was then reprimanded by higher-ups. The Pentagon has noted for the last several years that most encounters with the Iranian military at sea or in air are conducted professionally, but that some missions run by Iranian Revolutionary Guard Corps forces have been too aggressive against U.S. forces in the area. The U.S. military’s concern has been that one of these incidents could escalate into a military encounter. This incident \"might have been buffoonery\" the official said, but there is always a risk from such actions. The incident comes as the Navy patrols the Gulf of Aden to watch for Iranian ships the U.S. believes are trying to bring weapons to resupply the Houthi rebels in Yemen. The Navy would share such intelligence with Saudi Arabia, a second U.S. official told CNN.\nReference Summary Iranian plane came within 50 yards of U.S. Navy Sea Hawk copter. Navy copter was on patrol in international airspace. U.S. official think Iranian plane may have been under orders of local commander.\nCTRLsum\n(Length bucket 0) Iranian military aircraft flew 50 yards armed U.S. Navy helicopter Gulf commanders: An Iranian military observation aircraft flew within 50 yards of an armed U.S. Navy helicopter over the Persian Gulf this month. The incident, which has not been publicly disclosed, has sparked concern that top Iranian commanders might not be in full control of local forces.\n(Length bucket 1) Iranian military observation aircraft flew 50 yards armed U.S. Navy helicopter Gulf Iranian commanders forces: An Iranian military observation aircraft flew within 50 yards of an armed U.S. Navy helicopter over the Persian Gulf. The incident sparked concern that top Iranian commanders might not be in full control of local forces.\n(Length bucket 2) Iranian military observation aircraft flew 50 yards armed U.S. Navy helicopter Persian Gulf Iranian commanders control forces | incident: An Iranian military observation aircraft flew within 50 yards of an armed U.S. Navy helicopter over the Persian Gulf. The incident sparked concern that top Iranian commanders might not be in full control of local forces.\n(Length bucket 3) Iranian military observation aircraft flew 50 yards armed U.S. Navy helicopter Persian Gulf Iranian commanders control forces | incident | Iran | protest: An Iranian military observation aircraft flew within 50 yards of an armed U.S. Navy helicopter over the Persian Gulf. The incident sparked concern that top Iranian commanders might not be in full control of local forces. Iran has been active in supporting proxies in several hotspots in the region. The U.s. administration is considering a potential demarche protest against Iran.\n(Length bucket 4) Iranian military observation aircraft flew 50 yards armed U.S. Navy helicopter Persian Gulf Iranian commanders control local forces | incident officials | official | ordered | Iran | USS | considering demarche protest | Navy Aden: An Iranian military observation aircraft flew within 50 yards of an armed U.S. Navy helicopter over the Persian Gulf. The Navy MH-60R armed helicopter was flying from the deck of the USS Carl Vinson on a routine patrol in international airspace. The incident sparked concern that top Iranian commanders might not be in full control of local forces. \"We think this might have been locally ordered,\" the official said. 
CNN has reached out to Iranian officials but has not received a response. Incident comes as the Navy patrols the Gulf of Aden to watch for Iranian ships. The administration is considering a potential demarche protest against Iran.\nTable 21: Random Length Control Examples. Control tokens are bolded.\nArticle\nIt’s a sight that draws giggles and curious stares from tourists and other first-timers – an unusual festival where revellers carry gigantic phalluses through the streets of a Japanese city. But for the residents of Kawasaki, who lug erotic shapes of all different sizes, this odd tradition is not a joke. Shinto Kanamara Matsuri started as a small tradition but has grown into a popular a tourist attraction, with participants praying to a god of fertility, child birth and protection from sexually transmitted infections. Participants carry a gigantic phallus through the streets of Kawasaki, Japan during the Shinto Kanamara Matsuri festival. The sight of three large phalluses being paraded through neighbourhoods in the city south of Tokyo draws giggles from tourists. Shinto Kanamara Matsuri, the Festival of the Steel Phallus, started as a small tradition but has grown into a popular a tourist attraction. Known as the Festival of the Steel Phallus, it is held every spring at the phallus-shaped Kanayama Shrine. Festivalgoers parade through the streets with three giant phalluses, while spectators lick lollies or snack on sausages or vegetables shaped as male and female genitalia. Rainy weather didn’t ruin the mood at this year’s festival, which had a massive collection of foreigners, according to Japanese website RocketNews24. They watched as groups of locals carried three heavy phalluses modelled after a mikoshi portable shrine, which is commonly used in Shinto festivals. Residents of Kawasaki carry phalluses of all different sizes while participating in a tradition that began nearly 40 years ago. Participants pray to a god who is said to help with fertility, child birth and protection from sexually transmitted infections. Rainy weather didn’t ruin the mood at this year’s festival, which attracted thousands of spectators from Japan and abroad. Spectators can buy lollies, sausages and vegetables shaped as male and female genitalia while watching the unusual parade. The three mikoshi carried through the streets for hours are the Kanamara Boat Mikoshi (tall and black), the Elizabeth Mikoshi (big and pink) and the Kanamara Big Mikoshi (squat and wooden), RocketNews24 reported. In addition to the parade there are performances and music, and the festival itself raises money and awareness for HIV research. The festival started nearly 40 years ago, and the Kanayama Shrine was once a popular place for prostitutes to pray for protection from sexually transmitted infections. Today, people visit the shrine to pray for everything from fertility and business prosperity to wedded bliss and good health. In addition to the parade there are performances and music, and the festival itself raises money and awareness for HIV research. The Kanayama Shrine was once a popular place for prostitutes to pray for protection from sexually transmitted infections. Today, people visit the shrine to pray for everything from fertility and business prosperity to wedded bliss and good health. 
Groups of locals carried three heavy phalluses modelled after a mikoshi portable shrine, which is commonly used in Shinto festivals.\nReference Summary Shinto Kanamara Matsuri started as a small tradition but has grown into a popular tourist attraction. Known as the Festival of the Steel Phallus, participants pray to a god of fertility and protection from infections. Rainy weather didn’t ruin the mood at this year’s festival, which attracted a large crowd of holidaymakers .\nCTRLsum\n(Length bucket 0) Shinto Kanamara Matsuri started tradition god fertility child birth protection infections | Steel: Shinto Kanamara Matsuri, the Festival of the Steel Phallus, started as a small tradition. Participants pray to a god of fertility, child birth and protection from sexually transmitted infections.\n(Length bucket 1) Shinto Kanamara Matsuri started tradition praying god fertility child birth protection sexually infections | Festival Steel: Shinto Kanamara Matsuri, the Festival of the Steel Phallus, started as a small tradition. Participants pray to a god of fertility, child birth and protection from sexually transmitted infections.\n(Length bucket 2) Shinto Kanamara Matsuri started tradition popular tourist praying god fertility child birth protection sexually infections | Festival Steel Phallus: Shinto Kanamara Matsuri, the Festival of the Steel Phallus, started as a small tradition but has grown into a popular a tourist attraction. Participants pray to a god of fertility, child birth and protection from sexually transmitted infections.\n(Length bucket 3) Shinto Kanamara Matsuri started tradition popular tourist attraction praying god fertility child birth protection sexually transmitted infections | Festival Steel Phallus: Shinto Kanamara Matsuri, the Festival of the Steel Phallus, started as a small tradition but has grown into a popular a tourist attraction. Participants pray to a god of fertility, child birth and protection from sexually transmitted infections.\n(Length bucket 4) phalluses | Shinto Kanamara Matsuri started small tradition grown popular tourist attraction praying god fertility child birth protection sexually transmitted infections | Kawasaki Japan Matsuri festival | Tokyo | Festival Steel Phallus | 40: Shinto Kanamara Matsuri started as a small tradition 40 years ago. It has grown into a popular tourist attraction in Kawasaki, Japan. The Festival of the Steel Phallus is held every spring at the Kanayama Shrine. Participants carry three phalluses through neighbourhoods south of Tokyo. They are praying to a god of fertility, child birth and protection from sexually transmitted infections. Thousands of tourists attended this year’s festival, which raised money for HIV research.\nE.4 CONTRIBUTION SUMMARIZATION ON SCIENTIFIC PAPERS\nHere we show three random examples from the arXiv test set. Note that this is the test set from (Cohan et al., 2018) instead of the contribution test data collected by us, because we want to show the difference between the reference summaries (i.e. the paper abstracts) in the existing standard paper summarization dataset and our output contribution summaries. We truncate the source articles since they are too long to display.\nTable 22: Random Contribution Summarization Examples. Control tokens are bolded.
“[]” denote that the tokens are used as both keywords and prompts.\nArticle\nsynchronization of neural activity appears in different parts of the mammalian cerebral cortex @xcite , and underlies different neural processes in both normal and anomalous brain functions @xcite . it has been suggested that synchronization plays a vital role in information processing in the brain , e.g. , processing information from different sensory systems to form a coherent and unified perception of the external world @xcite . on the other hand , synchronization has been detected in pathological conditions such as parkinson s disease @xcite . and epileptic seizures have long been considered resulting from excessive synchronized brain activity @xcite , though some recent studies suggest that this picture may be an over - simplification @xcite . therefore understanding the mechanisms of synchronization may be a critical step in elucidating how neural systems work @xcite . it has stimulated a great deal of theoretical and numerical works , such as the studies on the effects of the topological properties of underlying networks @xcite and the dynamical properties of synaptic coupling @xcite . it was recently shown that the response time of synaptic couplings influences the stability of synchronized oscillation in the nonlocally coupled hodgkin - huxley ( hh ) equations @xcite . if the response time of synaptic coupling is slower , synchronized activity of the systems is instable for excitatory coupling . however , the underlying dynamical mechanism of the influence is not clear . in experimental studies @xcite , it has been suggested that the generation of prolonged epileptiform neuronal synchronization is favored by lower efficacy of synaptic transmission . the numerical studies @xcite in a detailed computational model revealed that seizure - like activity occurs when the excitatory synapses are weakened , and the results were confirmed experimentally in mouse neocortical slices . according to the common accepted assumption that synchronization of neuronal activity underlies seizures , the dynamical mechanism of synchronization may be useful for understanding the way the biological neural system works . in this work , we numerically investigated the dynamical mechanism underlying the influence of synaptic efficacy on firing synchronization in hh neuron networks . to do this , we first studied the dynamics of the response of hh neuron to excitatory synaptic current . when the efficacy of synapse is low , namely , strength is weak and duration is short , the limit cycle is stable to the perturbation of the synaptic current . when synaptic efficacy is high , synaptic current can induce the transition of the neurons from limit cycle to fixed point or transient state . the transition is determined by dynamics of neuron s ionic channel . the decrease of synaptic current depresses the feedback of sodium ionic current which is responsible for the initiation of the spike . for simplicity we will refer to the transitions as spike death . in neuronal networks , synaptic input of a neuron is the accumulation of the currents received from all presynaptic neurons . when the coherence of firing time of neurons is enhanced by the excitatory interaction , the synaptic input of neurons transforms from the fluctuating waveform into the pulse shape like the signal produced by one synapse . if synaptic efficacy is high , the input signal can induce spike death of the neuron . 
then spike death disorders the adjustment of the rhythm of neurons and prevents neurons from firing spikes synchronously . in contrast , for synapses of lower efficacy , the duration of synaptic current is too short to induce spike death of neurons . additionally , the firing synchronization is different from synchronous activity of oscillators for the existence of the transitions of neuron s state . the paper is organized as follows . the hh neuron model and the synaptic coupling were introduced in sec . the response of a hh neuron to synaptic current was investigated in sec . the influence of the dynamics of neurons on firing synchrony was shown in sec .\nReference Summary we investigated the influence of efficacy of synaptic interaction on firing synchronization in excitatory neuronal networks . we found spike death phenomena , namely , the state of neurons transits from limit cycle to fixed point or transient state . the phenomena occur under the perturbation of excitatory synaptic interaction that has a high efficacy . we showed that the decrease of synaptic current results in spike death through depressing the feedback of sodium ionic current . in the networks with spike death property the degree of synchronization is lower and unsensitive to the heterogeneity of neurons . the mechanism of the influence is that the transition of neuron state disrupts the adjustment of the rhythm of neuron oscillation and prevents further increase of firing synchronization .\nCTRLsum [the main contributions of this paper are : ( 1 )]: we investigated the dynamical mechanism underlying the influence of synaptic efficacy on firing synchrony in hodgkin - huxley neuron networks ; ( 2 ) we found that the dynamics of synaptic current plays an important role in determining the stability of firing synchronization .\nTable 23: Random Contribution Summarization Examples. Control tokens are bolded. “[]” denote that the tokens are used as both keywords and prompts.\nArticle\nfor the understanding of surface reactions and the characterization of materials it is desirable to measure local forces close to a sample surface . the most common method to measure these surface forces is atomic force microscopy ( afm)@xcite . historically , the first force measurements were static measurements for which the force is presented as a scalar function of the static tip - sample separation , the so - called force curve@xcite . this representation is sufficient for conservative forces but the total tip - surface force may also contain contributions from dissipative forces . since dissipative forces depend on probe velocity and past trajectory , dynamic force spectroscopy methods are required for their measurement . moreover , the visualization of dissipative forces as a function of position is valid only for a specific probe trajectory and simple force curves can not capture the full character of the interaction . despite the development of several dynamic methods@xcite surface forces are still usually treated as functions of the probe position only and represented by simple force curves . here , we present a comprehensive framework for the representation and analysis of complex surface forces as they are measured by dynamic afm . we concentrate on the most common modes of dynamic afm : amplitude - modulated afm ( am - afm ) and frequency - modulated afm ( fm - afm ) , which can be considered as narrow frequency band methods@xcite . 
we explore the fundamental limit of force reconstruction with narrow band dynamic afm at fixed probe height and show how minimal assumptions allow for a quantitative reconstruction of the tip - surface interaction . at the heart of the afm apparatus is a micro - cantilever with a sharp tip . the cantilever is firmly clamped at one end and the tip is located at the other end which can move freely . it is assumed that surface forces only act on the tip whereas the rest of the cantilever does not experience significant surface forces . in dynamic afm an additional external drive force is applied to maintain an oscillatory motion . thus , the dynamics are governed by the force between tip and surface , the external drive force and the properties of the cantilever beam . since the cantilever is a three dimensional continuum object its motion is usually described by the amplitudes of different oscillation eigenmodes . in general , these modes can cause the cantilever to bend in all directions in space . however , the cantilever is positioned such that the softest flexural modes bend the beam in a plane orthogonal to the surface plane . we restrict ourselves to the case where only these flexural modes are excited by the drive force . due to this experimental configuration the cantilever is much more susceptible to the component of the tip - surface force which is orthogonal to the surface plane . this component of the force is typically the most dominant component and the influence of lateral force components is considered negligible . in this case the cantilever acts as a mechanical projector which reacts only to one component of a three dimensional force vector field . the deflection @xmath0 of a cantilever of length @xmath1 orthogonal to surface is described by a one dimensional euler - bernoulli equation@xcite @xmath2 where @xmath3 is the young s modulus , @xmath4 is the second moment of area , @xmath5 is the mass per unit length of the cantilever , @xmath6 is the position coordinate along the cantilever beam and @xmath7 is the time variable . the force term @xmath8 includes the surface forces acting as a point - like load at position @xmath9 , the external drive force and the hydrodynamic damping due to the surrounding medium@xcite .\nReference Summary in atomic force microscopy ( afm ) tip - surface interactions are usually considered as functions of the tip position only , so - called force curves . however , tip - surface interactions often depend on the tip velocity and the past tip trajectory . here , we introduce a compact and general description of these interactions appropriate to dynamic afm where the measurement of force is restricted to a narrow frequency band . we represent the tip - surface interaction in terms of a force disk in the phase space of position and velocity . determination of the amplitude dependence of tip - surface forces at a fixed static probe height allows for a comprehensive treatment of conservative and dissipative interactions . we illuminate the fundamental limitations of force reconstruction with narrow band dynamic afm and we show how the amplitude dependence of the fourier component of the force at the tip oscillation frequency , gives qualitative insight into the detailed nature of the tip - surface interaction . with minimal assumptions this amplitude dependence force spectroscopy allows for a quantitative reconstruction of the effective conservative tip - surface force as well as a position - dependent damping factor . 
we demonstrate this reconstruction on simulated intermodulation afm data . _ keywords _ : atomic force microscopy , measurement of force , mechanical resonators , mems / nems , dissipation , intermodulation\nCTRLsum [the main contributions of this paper are : ( 1 )]: a comprehensive framework for the representation and analysis of complex surface forces as they are measured by dynamic atomic force microscopy ( afm ) ; ( 2 ) a study of the fundamental limit of force reconstruction with narrow band dynamic afm at fixed probe height and show how minimal assumptions allow for a quantitative reconstruction of the tip - surface interaction .\nTable 24: Random Contribution Summarization Examples. Control tokens are bolded. “[]” denote that the tokens are used as both keywords and prompts.\nArticle\nin this paper we discuss the mathematical aspects of the problems originating in the solution of nonlinear systems of hyperbolic partial differential equations . these equations describe a large variety of physical phenomena , such as , gasdynamics , magnetohydrodynamics ( mhd ) , shallow water equations , elasticity equations , etc . being nonlinear , these systems usually require numerical methods for their solution . presence of discontinuous solutions motivates the necessity of the development of reliable numerical methods based on the fundamental mathematical properties of hyperbolic systems . although such methods are rather well developed for the euler gasdynamic equations in the conservation law form , their extension to more complicated hyperbolic systems is not straightforward . it requires a mathematical justification of the solution uniqueness , a formulation of the selection principles for relevant solutions , and , finally , an investigation of their physical validity . most of high - resolution methods for gasdynamic equations use the exact or some of the approximate self - similar riemann problem solutions to determine fluxes through the computational cell surfaces . similar methods are expected to be developed for various types of hyperbolic systems . in this case we must construct the elementary self - similar solution using only admissible discontinuities ( entropy consistent , evolutionary , etc . ) . basically the choice of the solution must be made on the basis of the structure of the solution of the extended problem @xcite . all mentioned above makes very important the study of discontinuous solutions behavior under vanishing viscosity and dispersion to create a proper background for the development of high - resolution numerical methods for hyperbolic systems more complicated than the euler equations of gasdynamics . we discuss several analytical and numerical solutions in the mentioned fields which illustrate the complexity of the selection problem and outline the methods of its solution . tvd upwind and symmetric differencing schemes have recently become very efficient tool for solving complex multi - shocked gasdynamic flows . this is due to their robustness for strong shock wave calculations . the extension of these schemes to the equations of the ideal magnetohydrodynamics is not simple . first , the exact solution @xcite of the mhd riemann problem is too multivariant to be used in regular calculations . second , several different approximate solvers @xcite , @xcite , @xcite , @xcite , @xcite , @xcite , and @xcite applied to mhd equations are now at the stage of investigation and comparison . 
this investigation requires i ) determination of a proper slope limiting method in the parameter interpolation procedure necessary to obtain nonoscillatory schemes of the order of accuracy higher than one ; ii ) development of an efficient entropy correction method necessary to exclude rarefaction shocks ; and , finally , iii ) solution of the problem of excluding the origin of nonevolutionary solutions in ideal mhd calculations . the system of governing equations for a mhd flow of an ideal , infinitely conducting , perfect plasma in the cartesian coordinate system @xmath0 , @xmath1 , @xmath2 with the use of the conventional notations reads ( one fluid approximation ) : @xmath3 where @xmath4 is the vector of conservative variables and @xmath5 , @xmath6 , and @xmath7 are the flux vectors . we introduced here the source term @xmath8 in the form @xmath9 this form of the system can be used to satisfy the divergence - free condition by convecting away the magnetic charge from the computational region @xcite . otherwise , any other well - known method can be used to eliminate the magnetic charge . to determine a numerical flux @xmath10 normal to the computational cell boundary ( @xmath11 is a unit outward vector normal to the cell surface ) one can use the formulas based on the solution of the linearized problem @xmath12. ] ] here @xmath13 and @xmath14 are the matrices formed by the right and by the left eigenvectors , respectively , of the frozen jacobian matrix @xmath15 the matrix @xmath16 is a diagonal matrix consisting of the frozen jacobian matrix eigenvalue moduli . the superscripts @xmath17 and @xmath18 denote the values at the right- and at the left - hand side of the cell boundary .\nReference Summary a number of physical phenomena are described by nonlinear hyperbolic equations . presence of discontinuous solutions motivates the necessity of development of reliable numerical methods based on the fundamental mathematical properties of hyperbolic systems . construction of such methods for systems more complicated than the euler gas dynamic equations requires the investigation of existence and uniqueness of the self - similar solutions to be used in the development of discontinuity - capturing high - resolution numerical methods . this frequently necessitates the study of the behavior of discontinuities under vanishing viscosity and dispersion . we discuss these problems in the application to the magnetohydrodynamic equations , nonlinear waves in elastic media , and electromagnetic wave propagation in magnetics .\nCTRLsum [the main contributions of this paper are : ( 1 )]: the mathematical aspects of the problems originating in the solution of nonlinear systems of hyperbolic partial differential equations ; ( 2 ) the study of discontinuous solutions behavior under vanishing viscosity and dispersion to create a proper background for the development of high - resolution numerical methods for hyperbola systems more complicated than the euler equations of gasdynamics ; and ( 3 ) solution of the problem of excluding the origin of nonevolutionary solutions in ideal magnetohydrodynamics calculations .\nE.5 INVENTION PURPOSE SUMMARIZATION ON PATENT FILINGS\nHere we show three random examples from the BIGPATENT test set. Note that this is the test set from the original BIGPATENT, because we want to show the difference between the reference summaries in the existing standard dataset and our output purpose summaries.
We truncate the source articles since they are too long to display.\nTable 25: Random Invention Purpose Summarization Examples. Control tokens are bolded. “[]” denote that the tokens are used as both keywords and prompts.\nArticle\nreferring to the drawings and , in particular to fig1 therein illustrated is a prior art surgical support mesh 10 . mesh 10 may be manufactured from monofilament or multifilament yarns . prior art mesh 10 , as shown , includes multifilament horizontally - extending yarns 12 and multifilament vertically - extending yarns 14 woven together to form a support trellis . the use of multifilament yarns , such as yarns 12 and 14 , provides a mesh having greater pliability and suppleness than the use of monofilament yarns . these characteristics result from both the smaller diameter of the individual filaments and the interstitial spaces or voids that are located between such filaments . in particular , the flexibility of a filament ( or fiber ) generally increases as its diameter decreases . because the solid cross - sectional area of the filaments of a multifilament yarn is less than the cross - sectional area of a monofilament yarn of equivalent diameter , the multifilament yarn will have a greater degree of flexibility and pliability than that of the monofilament yarn . as shown in fig1 a , each of multifilament yarns 12 and 14 is composed of a plurality of filaments 16 that are intermingled or bundled together to form the yarn . interstitial spaces 18 , which are pockets of air , are formed between adjacent filaments of the yarn . although these voids contribute to the softness and pliability of the formed mesh , they also provide a natural breeding ground for bacteria or other infectious material . surgical mesh is , of course , thoroughly sterilized prior to implantation . nevertheless , surgeons typically prefer the use of monofilament - designed mesh to minimize any risk of infection . as a result , the advantages associated with multifilament - designed mesh ( i . e ., softness and pliability which result in better assimilation of the mesh into the body ) are typically sacrificed . it has been discovered herein that a surgical support mesh having both the softness and pliability of a multifilament - designed mesh and the infection resistance of a monofilament - designed mesh may be produced . particularly , it has been discovered that a support trellis formed of multifilament yarn wherein the interstitial voids located between adjacent filaments are enclosed within an infection - impervious matrix exhibits the desired resistance to harboring of infectious matter without significant loss of flexibility . particularly , the matrix , which completely encloses the interstitial voids between the filaments of the yarn , provides an effective barrier to the passage of infectious matter between the interior and exterior of the yarn . accordingly , any voids remaining in the yarn after encapsulation of such yarn are enclosed ( and thereby sealed ) within the resultant matrix . a first embodiment of the present invention is shown in fig2 . particularly , this first embodiment includes a support trellis 20 formed of multifilament yarns 22 and 24 which overlap at cross - over junctions 25 . subsequent to forming of the trellis , such trellis is encapsulated within a matrix 26 , which is preferably a flexible material that continuously surrounds the exterior of the yarns thereby enclosing interstitial voids 27 located between filaments 28 ( see fig2 a ). 
in one embodiment , the matrix is formed from a polymeric resin . as shown in fig2 a , the resin can be applied to the yarn in such a manner as to not allow the resin to substantially penetrate into the yarn . particularly , the penetration of the resin can be controlled through the application procedure , e . g ., quantity of resin applied and / or encapsulating time . in such an embodiment , the interstitial spaces are enclosed ( rather than filled ) within the continuous matrix . however , it is contemplated that the resin can be allowed to penetrate into the yarn , thereby substantially filling the void space located therein . in another embodiment of the present invention , individual yarns 29 , as shown in fig3 are encapsulated within matrix 30 prior to forming of the support trellis . fig3 a shows a compressed yarn 29 which provides a trellis having a reduced thickness . as a result of the encapsulation , interstitial voids 32 remaining in the yarn are enclosed ( and thereby sealed ) within the matrix .\nReference Summary a soft and pliable surgical support mesh exhibiting increased resistance to inhabitation of infectious matter . the mesh includes a support trellis formed of multifilament yarns wherein the interstitial voids located between the filaments of said yarns are enclosed within an infection - impervious matrix . the meshes may be designed to be extremely thin yet retain the requisite strength for repairing soft tissue , which allows for a low profile when folded for delivery .\nCTRLsum [the purpose of the present invention is]: to provide a surgical mesh that is resistant to the growth of bacteria and other infectious matter . this is accomplished by encapsulating the interstitial spaces located between the filaments of the yarn within a matrix .\nTable 26: Random Invention Purpose Summarization Examples. Control tokens are bolded. “[]” denote that the tokens are used as both keywords and prompts.\nArticle\nfig1 shows a multicolor web fed rotary printing press 1 in accordance with the invention . the press 1 includes four tower arrangements 2 a , 2 b , 2 c and 2 d for printing a single color or a multicolor image on the webs 4 a , 4 b , 4 c and 4 d . the webs 4 a , 4 b , 4 c and 4 d travel in a substantially linear direction through each of the towers 2 a - 2 d . for example , the web can travel along a substantially vertical path , as shown in fig1 . alternatively , as those skilled in the art will appreciate , the web path can be in a substantially horizontal direction , or in a substantially linear path at any desired angle relative to the vertical direction shown . the towers 2 a - 2 d each include four printing units 6 c , 6 m , 6 y and 6 b for respectively printing an image in cyan , magenta , yellow and black on both sides of each web 4 a - 4 d . other colors besides cyan , magenta , yellow and black can be used . the webs 4 can be , for example , between 1200 and 1600 millimeters wide . each of the printing units 6 c , 6 m , 6 y and 6 b in a tower can be moved along a respective web 4 by a lifting and positioning system 8 shown in fig2 . the lifting and positioning system 8 includes a spindle drive 10 , which has a fixed spindle 12 spanning a range 14 over which the printing units 6 c , 6 m , 6 y and 6 b can be moved . each of the printing units 6 c , 6 m , 6 y and 6 b includes a ball screw 16 , which is rotatably supported in a housing 18 . the ball screw 16 can be rotated by a motor 20 as shown in fig2 . 
fig2 shows one set of a spindle drive 10 , fixed spindle 12 , ball screws 16 , and motors 20 , but preferably each tower 2 is provided with several sets , one set for each corner of the print unit housing 18 . the motors 20 are controlled by a motor control unit 22 , which receives commands from a remote control 24 . by pressing a button on the remote control 24 , an operator can control the rotation of the motors 20 and thereby the movement direction and position of each printing unit 6 b , 6 y , 6 m and 6 c in a tower 2 . rail systems ( not shown ) fixed to a side frame of each tower 2 can also be used to precisely guide movements of the printing units 6 in the tower . as shown in fig1 and 2 , the position of each of the printing units 6 along the webs 4 and fixed spindles 12 can be controlled by the operator to allow access to a desired part of a printing unit 6 . for example , in fig1 the operator has moved the print unit 6 b of tower 2 b into a position where a printing plate of the print unit 6 b can be most easily accessed . after the printing plate has been accessed , the operator can move the print unit 6 b into a different position so that inker units in an upper part of the printing unit 6 b can be easily accessed . two or more printing units 6 in tower 2 can also be moved as a group . for example , if the operator wants to access the plate cylinder of the printing unit 6 m of tower 2 b shown in fig1 he can simply move the two printing units 6 y , 6 m upwardly together until the top surface 28 of the printing unit 6 y contacts the bottom surface 30 of the printing unit 6 b . thereafter , the operator can move the group of printing units 6 b , 6 y and 6 m upwards into the position shown in tower 2 c of fig1 where the plate cylinder of the printing unit 6 m can be easily accessed . although fig1 and 2 show four printing units 6 for each tower 2 , different numbers of printing units can be used for each tower . fig1 shows that the lower three printing units 6 of the tower 2 b are beneath an operating floor 26 . preferably at least two of the printing units can be lowered beneath the operating floor 26 , and a printing press in accordance with the invention can be configured so that all of the printing units in a tower can be lowered beneath an operating floor . the operating floor can be a floor of a print shop , or can be an elevated platform .\nReference Summary the present invention is directed to a multicolor web fed rotary printing press having printing units that can be moved along a linear section of a web by a positioning mechanism to allow easy access to each of the printing units . the easy access to the printing units significantly reduces maintenance costs and downtime of the press . in addition , the invention provides a printing press that has a reduced overall size and allows the printing units to be arranged in a nested formation during printing operations . this nested formation reduces fanout and paper waste . the linear web section can be oriented vertically or horizontally .\nCTRLsum [the purpose of the present invention is]: to provide a web fed rotary printing press that allows an operator to access a desired part of a printing unit more easily than in the past .\nTable 27: Random Invention Purpose Summarization Examples. Control tokens are bolded. 
“[]” denote that the tokens are used as both keywords and prompts.\nArticle\nreferring now to the drawings and more particularly to fig1 - 3 , a fluid system 10 is illustrated and includes a variable displacement axial piston pump 12 that receives fluid from a tank 14 via a conduit 16 and delivers pressurized fluid via a supply conduit 18 to a fluid control valve 20 and selectively through work conduits 22 , 24 to a fluid actuator 26 . in the subject arrangement , the variable displacement axial piston pump 12 is a unidirectional pump that rotates in a counterclockwise direction as driven by a power input shaft 27 . the fluid system 10 also includes first and second pressure sensors 28 , 30 respectively connected to the tank conduit 16 and the supply conduit 18 . the pressure sensors 28 , 30 are operative to sense the pressure in the respective lines and deliver an electrical signal to a controller 32 through electrical lines 34 , 36 . a position sensor 40 is mounted on the variable displacement axial piston pump 12 and operative to sense the displacement of the pump and deliver a signal representative thereof to the controller 32 via an electrical line 42 . various other components could be used in the subject fluid system 10 without departing from the essence of the subject invention . for example , several control valves 20 and associated fluid actuators 26 could be used . likewise , other sensors of various types and styles could be used . the variable displacement axial piston pump 12 includes a housing 44 having a head portion 46 and a body portion 48 . the head portion 46 defines an inlet port passage 50 that is connected to the conduit 16 and an outlet port passage 52 that is connected to the supply conduit 18 . in the subject arrangement , a port plate 54 is disposed between the head portion 46 and the body portion 48 . the construction of the porting within the port plate 54 is more clearly illustrated in fig3 and will be discussed more fully below . it is recognized that the porting illustrated in fig3 could be made within the head portion 46 without departing from the essence of the subject invention . a rotating group 56 is disposed within the body portion 48 and includes a barrel 58 having a plurality of cylinder bores 59 defined therein spaced from one another around an axis of rotation 60 of the barrel 58 . each of the cylinder bores 59 is oriented within the barrel 58 parallel with the axis of rotation 60 . a plurality of piston assemblies 62 are operatively associated with the barrel 58 and each one of the plurality of piston assemblies 62 includes a piston 64 slideably disposed in the respective ones of the plurality of cylinder bores 59 . each one of the plurality of piston assemblies 62 also has a shoe 66 pivotably attached to one end of each piston 64 in a conventional manner . the barrel 58 has an end surface 68 that is in mating , sealing contact with the port plate 54 to provide communication between the cylinder bores 58 and the respective inlet and outlet port passages 50 , 52 of the head portion 46 . a closed chamber 70 is defined in each cylinder bore 59 of the barrel 58 between the end of the piston 64 and the end surface 68 thereof . referring to fig3 the porting between the barrel 58 and inlet and outlet port passages 50 , 52 of the head portion 46 is more clearly illustrated . 
for explanation purposes only , the “ 270 ” degree position illustrated in fig3 relates to a position on the right side of the drawing of fig1 and the “ 0 ” degree position illustrated in fig3 relates to a position on the right side of the drawing of fig2 . an arcuate slot 72 is defined in the port plate 54 and provides communication between the plurality of closed chambers 70 and the inlet port passage 50 . a plurality of slots 74 are defined in the port plate 54 circumferentially spaced from the arcuate slot 72 and provides communication between the plurality of closed chambers 70 and the outlet port passage 52 .\nReference Summary a variable displacement axial piston pump is typically used to receive fluid from a tank and supply pressurized fluid through a control valve to move an actuator . the present variable displacement axial piston pump has a swashplate arrangement that is capable of being angled in two different directions to control the pressure transitions between the low pressure inlet port passage and the higher pressure outlet port passage as cylinder bores in a barrel of a rotating group rotate through trapped volume regions situated between inlet and outlet port passages of the axial piston pump . movement of the swashplate arrangement in two different directions provides smooth pressure transitions and increases the operating efficiency of the variable displacement axial piston pump .\nCTRLsum [the purpose of the present invention is]: to provide a variable displacement axial piston pump that is capable of delivering a variable amount of pressurized fluid in response to a change in the displacement of the pump .\nE.6 QUESTION-GUIDED SUMMARIZATION\nWe randomly sample 3 articles from NewsQA and show five questions and answers from CTRLsum for each article. We also show the gold answers to these questions.\nTable 28: Random Examples on Question-guided summarization. Control tokens are bolded. “[]” denote that the tokens are used as both keywords and prompts.\nArticle\nTEHRAN, Iran (CNN) – Iran’s parliament speaker has criticized U.S. President-elect Barack Obama for saying that Iran’s development of a nuclear weapon is unacceptable. Iranian President Mahmoud Ahmadinejad has outlined where he thinks U.S. policy needs to change. Ali Larijani said Saturday that Obama should apply his campaign message of change to U.S. dealings with Iran. \"Obama must know that the change that he talks about is not simply a superficial changing of colors or tactics,\" Larijani said in comments carried by the semi-official Mehr News Agency. \"What is expected is a change in strategy, not the repetition of objections to Iran’s nuclear program, which will be taking a step in the wrong direction.\" In his first post-election news conference Friday afternoon, Obama reiterated that he believes a nuclear-armed Iran would be \"unacceptable.\" He also said he would help mount an international effort to prevent it from happening. Larijani said that U.S. behavior toward Iran \"will not change so simply\" but that Obama’s election showed internal conditions in the United States have shifted. He added that Iran does not mind if the United States provides other Persian Gulf countries with nuclear technology, but \"you should know that you cannot prevent the Islamic Republic [from reaching its goals in the nuclear field],\" according to the news agency. Obama cautioned Friday that it had only been a few days since the election and that he was not in office. 
\"Obviously, how we approach and deal with a country like Iran is not something that we should simply do in a knee-jerk fashion. I think we’ve got to think it through,\" Obama said. \"But I have to reiterate once again that we only have one president at a time. And I want to be very careful that we are sending the right signals to the world as a whole that I am not the president, and I won’t be until January 20th.\" Larijani was speaking two days after Iranian President Mahmoud Ahmadinejad congratulated Obama, the first time an Iranian leader has offered such wishes to a U.S. president-elect since the 1979 Islamic Revolution. One analyst said the welcome was a gesture from the hard-line president that he is open to a more conciliatory relationship with the United States. Ahmadinejad said Tehran \"welcomes basic and fair changes in U.S. policies and conducts,\" according to the state-run Islamic Republic News Agency on Thursday. Relations between the United States and Iran have historically been chilly and have been further strained in recent years over Iran’s nuclear program. Tehran insists that the program exists for peaceful purposes, but the United States and other Western nations are concerned by Iran’s refusal to halt uranium enrichment activities. CNN’s Shirzad Bozorgmehr contributed to this report.\nCTRLsum\n[Q: Iran criticizes who? A:]: U.S. President-elect Barack Obama. Gold Answer: U.S. President-elect Barack Obama\n[Q: What are US and Iran relations tensioned about? A:]: Iran’s nuclear program. Gold Answer: nuclear program.\n[Q: Who said Obama should apply campaign message? A:]: Ali Larijani, Iran’s parliament speaker. Gold Answer: Ali Larijani\n[Q: What should Obama apply according to speaker? A:]: Change in strategy, not the repetition of objections. Gold Answer: his campaign message of change\n[Q: What kind of weapons are being discussed? A:]: Nuclear weapons. Gold Answer: nuclear\nCTRLsum\n[Q: What does the 12 hr shift involved? A:]: Project work. Gold Answer: backbreaking labor, virtually zero outside recognition, and occasional accusations of being shills for the timber industry\n[Q: Which group followed firefighters? A:]: The wildland firefighters of Oregon. Gold Answer: VBS.TV\n[Q: What is the VBS following? A:]: Wildland firefighters who work around active forest fires. Gold Answer: a crew of Grayback forest-firefighters" } ]
2020
CTRLSUM: TOWARDS GENERIC CONTROLLABLE TEXT SUMMARIZATION
SP:77ff356f24bca397a8f89706e0f89ff14b6b81be
[ "The authors exploit the piecewise linear nature of ReLU neural networks to design a new regularizer that improves the robustness of the neural network. It can be viewed as a alternative to the regularizer proposed in Croce et al. (2018) -- the current regularizer uses the analytic center, whereas the previous work (named MMR) guarantees that are straightforward consequences of the geometric interpretation. The experimental setup is similar to the one in Croce et al. (2018), which compares the result of different adversarial defense methods on MNIST, F-MNIST, and CIFAR10 on small shallow networks. These computational results generally slightly outperform MMR on MNIST." ]
Recent work has demonstrated that neural networks are vulnerable to small, adversarial perturbations of their input. In this paper, we propose an efficient regularization scheme inspired by convex geometry and barrier methods to improve the robustness of feedforward ReLU networks. Since such networks are piecewise linear, they partition the input space into polyhedral regions (polytopes). Our regularizer is designed to minimize the distance between training samples and the analytical centers of their respective polytopes so as to push points away from the boundaries. Our regularizer provably optimizes a lower bound on the necessary adversarial perturbation required to switch an example’s label. The addition of a second regularizer that encourages linear decision boundaries improves robustness while avoiding over-regularization of the classifier. We demonstrate the robustness of our approach with respect to `∞ and `2 adversarial perturbations on multiple datasets. Our method is competitive with state-of-the-art algorithms for learning robust networks while involving fewer hyperparameters. Moreover, applying our algorithm in conjunction with adversarial training boosts the robustness of classifiers even further.
[ { "affiliations": [], "name": "RELU NETWORKS" } ]
[ { "authors": [ "Zeyuan Allen-Zhu", "Y. Li" ], "title": "Backward feature correction: How deep learning performs deep learning", "venue": "ArXiv, abs/2001.04413,", "year": 2020 }, { "authors": [ "Raman Arora", "Amitabh Basu", "Poorya Mianjy", "Anirbit Mukherjee" ], "title": "Understanding deep neural networks with rectified linear units", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. volume", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "Mikhail Belkin", "Partha Niyogi" ], "title": "Towards a theoretical foundation for laplacian-based manifold methods", "venue": "Journal of Computer and System Sciences,", "year": 2008 }, { "authors": [ "Mikhail Belkin", "Partha Niyogi", "Vikas Sindhwani" ], "title": "Manifold regularization: A geometric framework for learning from labeled and unlabeled examples", "venue": "J. Mach. Learn. Res.,", "year": 2006 }, { "authors": [ "Stephen Boyd", "Lieven Vandenberghe" ], "title": "Convex Optimization", "venue": null, "year": 2004 }, { "authors": [ "Jeremy Cohen", "Elan Rosenfeld", "Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Francesco Croce", "Matthias Hein" ], "title": "A randomized gradient-free attack on ReLU networks", "venue": "Pattern Recognition,", "year": 2019 }, { "authors": [ "Francesco Croce", "Maksym Andriushchenko", "Matthias Hein" ], "title": "Provable robustness of relu networks via maximization of linear regions", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Chris Finlay", "Adam M. Oberman" ], "title": "Scaleable input gradient regularization for adversarial robustness", "venue": "CoRR, abs/1905.11468,", "year": 2019 }, { "authors": [ "Ian Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Charles Jin", "Martin Rinard" ], "title": "Manifold regularization for locally stable deep neural networks", "venue": null, "year": 2020 }, { "authors": [ "Matt Jordan", "Justin Lewis", "Alexandros G Dimakis" ], "title": "Provable certificates for adversarial examples: Fitting a ball in the union of polytopes", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Alex Krizhevsky", "Vinod Nair", "Geoffrey Hinton" ], "title": "Cifar-10 (canadian institute for advanced research). URL http://www.cs.toronto.edu/ ̃kriz/cifar.html", "venue": "Yann LeCun and Corinna Cortes. MNIST handwritten digit database", "year": 2010 }, { "authors": [ "M. Lecuyer", "V. Atlidakis", "R. Geambasu", "D. Hsu", "S. 
Jana" ], "title": "Certified robustness to adversarial examples with differential privacy", "venue": "IEEE Symposium on Security and Privacy (SP),", "year": 2019 }, { "authors": [ "Bai Li", "Changyou Chen", "Wenlin Wang", "Lawrence Carin" ], "title": "Second-order adversarial attack and certifiable", "venue": "robustness. CoRR,", "year": 2018 }, { "authors": [ "Chen Liu", "Mathieu Salzmann", "Sabine Süsstrunk" ], "title": "Training provably robust models by polyhedral envelope regularization", "venue": "ArXiv, abs/1912.04792,", "year": 2020 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Guido Montúfar", "Razvan Pascanu", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "On the number of linear regions of deep neural networks", "venue": "In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2,", "year": 2014 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Jonathan Uesato", "P. Frossard" ], "title": "Robustness via curvature regularization, and vice versa", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Yurii Nesterov", "Arkadii Nemirovskii" ], "title": "Interior-Point Polynomial Algorithms in Convex Programming", "venue": "Society for Industrial and Applied Mathematics,", "year": 1994 }, { "authors": [ "Chongli Qin", "James Martens", "Sven Gowal", "Dilip Krishnan", "Krishnamurthy Dvijotham", "Alhussein Fawzi", "Soham De", "Robert Stanforth", "Pushmeet Kohli" ], "title": "Adversarial robustness through local linearization", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Jonas Rauber", "Wieland Brendel", "Matthias Bethge" ], "title": "Foolbox: A python toolbox to benchmark the robustness of machine learning models", "venue": "In Reliable Machine Learning in the Wild, 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Kevin Roth", "Yannic Kilcher", "Thomas Hofmann" ], "title": "The odds are odd: A statistical test for detecting adversarial examples", "venue": null, "year": 1902 }, { "authors": [ "Hadi Salman", "Greg Yang", "Jerry Li", "Pengchuan Zhang", "Huan Zhang", "Ilya P. Razenshteyn", "Sébastien Bubeck" ], "title": "Provably robust deep learning via adversarially trained smoothed classifiers", "venue": null, "year": 1906 }, { "authors": [ "Hossein Sartipizadeh", "Tyrone L. Vincent" ], "title": "Computing the approximate convex hull in high", "venue": "dimensions. CoRR,", "year": 2016 }, { "authors": [ "Thiago Serra", "Christian Tjandraatmadja", "Srikumar Ramalingam" ], "title": "Bounding and counting linear regions of deep neural networks", "venue": null, "year": 2017 }, { "authors": [ "Jiawei Su", "Danilo Vasconcellos Vargas", "Kouichi Sakurai" ], "title": "One pixel attack for fooling deep neural networks", "venue": null, "year": 2017 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "arXiv, abs/1312.6199,", "year": 2014 }, { "authors": [ "Tsui-Wei Weng", "Huan Zhang", "Hongge Chen", "Zhao Song", "Cho-Jui Hsieh", "Duane Boning", "Inderjit S. 
Dhillon", "Luca Daniel" ], "title": "Towards fast computation of certified robustness for ReLU networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Tsui-Wei Weng", "Huan Zhang", "Pin-Yu Chen", "Jinfeng Yi", "Dong Su", "Yupeng Gao", "Cho-Jui Hsieh", "Luca Daniel" ], "title": "Evaluating the robustness of neural networks: An extreme value theory approach", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Eric Wong", "Zico J. Kolter" ], "title": "Provable defenses against adversarial examples via the convex outer adversarial polytope", "venue": "International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning", "venue": "algorithms. CoRR,", "year": 2017 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric Xing", "Laurent El Ghaoui", "Michael Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy. volume", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "Huan Zhang", "Tsui-Wei Weng", "Pin-Yu Chen", "Cho-Jui Hsieh", "Luca Daniel" ], "title": "Efficient neural network robustness certification with general activation functions", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "Moosavi-Dezfooli" ], "title": "Let `(x) denote the loss of a neural network f evaluated at x, and let r∗ be be the minimal perturbation necessary to fool the classifier", "venue": "below. Theorem 2 (Moosavi-Dezfooli et al", "year": 2019 }, { "authors": [ "Wong", "Kolter", "Croce" ], "title": "consisting of two convolutional layers with 16 and 32 filters of size 4× 4 and stride 2, followed by a fully connected layer with 100 hidden units. For all experiments we use batch size 128 and we train the models for 100 epochs. Moreover, we use Adam optimizer (Kingma & Ba, 2015) with the default learning rate 0.001", "venue": "On MNIST and F-MNIST", "year": 2015 }, { "authors": [ "Rauber" ], "title": "We use the same `2-bound on the perturbation as the used in robust error. During training, we perform 40 iterations of the PGD attack for MNIST and F-MNIST, and 7 for CIFAR-10. During evaluation, we use 40 iterations for all datasets. Following Croce et al., the step size is selected as divided by the number of iterations and multiplied by 2", "venue": null, "year": 2017 }, { "authors": [ "of Finlay", "Oberman" ], "title": "2019) is formulated as a norm on the gradient of the loss function with respect", "venue": null, "year": 2019 }, { "authors": [ "∈ R", "Sartipizadeh", "Vincent" ], "title": "2016) propose to iteratively build an estimate of an -approximate convex hull (denoted by E) by selecting a point to add to E such that the maximum over all projections from S \\ E to E is minimized (i.e. is an extreme point of S), where the projection length from a point x to E is computed by solving the following quadratic program", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural networks have been very successful in tasks such as image classification and speech recognition. However, recent work (Szegedy et al., 2014; Goodfellow et al., 2015) has demonstrated that neural networks classifiers can be arbitrarily fooled by small, adversarially-chosen perturbations of their inputs. Notably, Su et al. (2017) demonstrated that neural network classifiers which can correctly classify “clean” images may be vulnerable to targeted attacks, e.g., misclassify those same images when only a single pixel is changed.\nPrevious work demonstrating this fragility of neural network classifiers to adversarial noise has motivated the development of many heuristic defenses including adversarial training (Madry et al., 2018) as well as certifiably robust classifiers such as randomized smoothing (Cohen et al., 2019; Salman et al., 2019) which characterize the robustness of a classifier according to its smoothness.\nThe intrinsic relationship between smoothness, or Lipschitz continuity—and their corresponding local variants—and robustness has motivated a variety of techniques to encourage uniform and local smoothness through the explicit regularization of approximations of the global and local Lipschitz constants (Zhang et al., 2018; Weng et al., 2018a;b). Recently, Lecuyer et al. (2019); Li et al. (2018); Cohen et al. (2019); Salman et al. (2019) proposed and extended a simple, scalable technique—randomized smoothing—to transform arbitrary functions (e.g. neural network classifiers) into certifiably and robust classifiers on `2 perturbations.\nAlternatively, previous work has also addressed adversarial robustness in the context of piecewiselinear classifiers (e.g., feedforward neural networks with ReLU activations). Wong & Kolter (2018); Jordan et al. (2019) propose to certify the robustness of a network f at an example x by considering a bound on the radius of the maximum `p-norm ball contained within a union of polytopes over which f predicts the same class. Related to our work, Croce et al.; Liu et al. (2020) propose maximum margin regularizers (MMR) which quantifies robustness of a network at a point according to the local region in which it lies and the distance to the classification boundary. Recent work also includes recovery and analysis of the piecewise linear function learned by an ReLU neural network during a training process (Arora et al., 2018; Montúfar et al., 2014; Croce & Hein, 2019). Typically, work in this area\ncenters around studying the complexity, interpretation, and improvement of stability and robustness of neural networks. For example, Montúfar et al. (2014); Serra et al. (2017) studied piecewise linear representations of neural networks and proposed the “activation profile” to characterize the linear regions.\nIn this work, we propose a novel regularizer for feedforward piecewise-linear neural networks, including convolutional neural networks, to increase their robustness to adversarial perturbations. Our Geometric Regularization (GR) method is based on the fact that ReLU networks define continuous piecewise affine functions and is inspired by classical techniques from convex geometry and linear programming. We provide a novel robustness certificate based on the local polytope geometry of a point and show that our regularizer provably maximizes this certificate. We evaluate the efficacy of our method on three datasets. 
Notably, our method works regardless of the perturbation model and relies on fewer hyperparameters compared with related approaches. We demonstrate that our regularization term leads to classifiers that are empirically robust and comparable to state-of-the-art algorithms with respect to clean and robust test accuracy under `1 and `∞-norm adversarial perturbations." }, { "heading": "2 PRELIMINARIES", "text": "In this section, we briefly present background terminology pertaining to polytopes and their characterizations, adversarially robust classification, and the polytope decomposition of the domain induced by a ReLU network and its linearization over a given polytope." }, { "heading": "2.1 PIECEWISE-LINEAR NETWORKS", "text": "A ReLU network is a neural network in which all nonlinear activations are ReLU functions, where we denote the ReLU activation by $\sigma : \mathbb{R} \to \mathbb{R}$, $\sigma(x) = \max\{0, x\}$. Informally, we extend $\sigma$ to $\mathbb{R}^d \to \mathbb{R}^d$ by $\sigma(x) = [\sigma(x_1), \ldots, \sigma(x_d)]$. Let $f : \mathbb{R}^d \to [0,1]^k$ be a feedforward ReLU network with L hidden layers; for example, f may map from a d-dimensional image to a k-dimensional vector corresponding to likelihoods for k classes. Let $n_l$ be the number of hidden units at layer l, let the input layer be of size $n_0 = d$, and let $W^{(l)} \in \mathbb{R}^{n_l \times n_{l-1}}$ and $b^{(l)} \in \mathbb{R}^{n_l}$ denote the weight matrix and bias vector at layer l, respectively. Since f may be represented as the composition of L+1 linear transformations and L continuous piecewise-affine functions, f must necessarily be continuous and piecewise-affine (for brevity, we will say piecewise-linear).

The half-space representation, or H-representation, of convex polytopes is defined as follows: Definition 2.1 (Convex polytope). A convex polytope K is the convex hull of finitely many points. Alternatively, a convex polytope may be expressed as an intersection of m half-spaces. The H-representation of a polytope is defined as the solution set to a system of linear inequalities $Ax \le b$:

$$K = \{x : \forall j \in [m],\; a_j \cdot x \le b_j\}$$

Following definition 3.1 from Croce et al., a function is piecewise-linear if there exists a finite set of convex polytopes $\{Q_r\}_{r=1}^{m}$ (referred to as linear regions of f) such that $\cup_{r=1}^{m} Q_r = \mathbb{R}^d$ and f is affine when restricted to each $Q_r$, i.e., f can be expressed on $Q_r$ as $f(x) = Vx + a$.

Given a feedforward ReLU network f and an input $x \in \mathbb{R}^d$, we intend to recover the polytope Q conditioned on x and the linear restriction of f on Q. Therefore, we need to find A and b such that $Ax \le b$, where A and b define an intersection of half-spaces, i.e. a polytope, and V and a corresponding to the linearization of f within this polytope, such that $f(x) = Vx + a$ when restricted to Q.

We follow the formulation and notation of Croce et al. If $x \in \mathbb{R}^d$ and $g^{(0)}(x) = x$, we recursively define the pre- and post-activation output of every layer:

$$f^{(l)}(x) = W^{(l)} g^{(l-1)}(x) + b^{(l)}, \qquad g^{(l)}(x) = \sigma(f^{(l)}(x)).$$

The resulting classifier is then $f^{(L+1)}(x) = W^{(L+1)} g^{(L)}(x) + b^{(L+1)}$.

By rewriting $\sigma$ as an affine function, we can write $f^{(l)}$ formally as a composition of affine functions:

$$f^{(l)}(x) = W^{(l)} \Sigma^{(l-1)}(x) \big( \cdots (W^{(1)} x + b^{(1)}) \cdots \big) + b^{(l)},$$

where we define $\Delta^{(l)}, \Sigma^{(l)} \in \mathbb{R}^{n_l \times n_l}$ conditioned on x, elementwise, as:

$$\Delta^{(l)}_{i,j} = \begin{cases} \mathrm{sign}(f^{(l)}_i(x)) & \text{if } i = j \\ 0 & \text{otherwise} \end{cases}, \qquad \Sigma^{(l)}_{i,j} = \begin{cases} 1 & \text{if } i = j \text{ and } f^{(l)}_i(x) > 0 \\ 0 & \text{otherwise.} \end{cases}$$
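For concreteness, the following is a minimal numpy sketch (our illustration, not the authors' released code): it unrolls the forward recursion above on a toy fully connected ReLU network, records the activation signs $\Delta^{(l)}$, applies the masks $\Sigma^{(l)}$ as affine maps, and assembles the half-space description of the region containing x, whose closed form is derived next. The function name and the toy shapes are hypothetical.

```python
import numpy as np

def relu_forward_with_polytope(Ws, bs, x):
    # Track the affine map f^(l)(z) = V z + a valid on the activation region
    # of x, and collect one half-space per hidden unit:
    # sign(f_i^(l)(x)) * (V^(l) z + a^(l))_i >= 0  <=>  (-s_i v_i) z <= s_i a_i.
    V, a = Ws[0], bs[0]
    rows_A, rows_b = [], []
    for l in range(len(Ws) - 1):            # hidden layers only
        pre = V @ x + a                     # pre-activation f^(l)(x)
        s = np.sign(pre)                    # diagonal of Delta^(l)(x)
        rows_A.append(-s[:, None] * V)
        rows_b.append(s * a)
        mask = (pre > 0).astype(V.dtype)    # diagonal of Sigma^(l)(x)
        V, a = mask[:, None] * V, mask * a  # apply the ReLU as an affine map
        V, a = Ws[l + 1] @ V, Ws[l + 1] @ a + bs[l + 1]
    return V, a, np.vstack(rows_A), np.concatenate(rows_b)

# hypothetical toy network: 2 inputs -> 4 hidden units -> 3 logits
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((4, 2)), rng.standard_normal((3, 4))]
bs = [rng.standard_normal(4), rng.standard_normal(3)]
x = rng.standard_normal(2)
V, a, A, b = relu_forward_with_polytope(Ws, bs, x)
assert np.all(A @ x <= b + 1e-9)            # x lies inside its own polytope
assert np.allclose(V @ x + a, Ws[1] @ np.maximum(Ws[0] @ x + bs[0], 0) + bs[1])
```

The final (V, a) is the linear restriction of f on the region of x, and (A, b) is the H-representation of that region, which the closed-form expressions below recover analytically.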
We now derive the polytope of f at x and the linear restriction of f on the polytope Q. By expanding the composition, we can concisely write $f^{(l)}(x) = V^{(l)} x + a^{(l)}$, where

$$V^{(l)} = W^{(l)} \Big( \prod_{h=1}^{l-1} \Sigma^{(l-h)}(x) W^{(l-h)} \Big), \qquad a^{(l)} = b^{(l)} + \sum_{h=1}^{l-1} \Big( \prod_{m=1}^{l-h} W^{(l+1-m)} \Sigma^{(l-m)}(x) \Big) b^{(h)}, \qquad (1)$$

and characterize the polytope Q in which x lies as the intersection of $N = \sum_{l=1}^{L} n_l$ half-spaces:

$$\Gamma_{l,i} = \{ z \in \mathbb{R}^d \mid \Delta^{(l)}_{i,i}(x) \, (V^{(l)} z + a^{(l)})_i \ge 0 \}.$$" }, { "heading": "2.2 POLYTOPE CENTERS", "text": "There are various definitions of the “center” of a polytope. Notably, the Chebyshev center of a polytope K is the center of the largest inscribed ball of K, or equivalently, the interior point of K that maximizes the minimum distance between itself and the boundary of K. More formally, Definition 2.2 (Chebyshev center). Let K be a convex polytope. The Chebyshev center of K with respect to an `p distance is the point $\hat{x} \in \mathbb{R}^d$ which satisfies the following min-max problem:

$$\arg\min_{\hat{x}} \max_{x \in K} \| x - \hat{x} \|_p^2$$

Previous work explored the Chebyshev center in the context of adversarial machine learning (Croce et al.; Jordan et al., 2019). For example, Croce & Hein (2019) propose to include the minimum distance to the boundary in a non-smooth regularization term in their Maximum Margin Regularizer (MMR) to encourage samples to lie close to the Chebyshev centers of their respective polytopes.

In contrast, we explore the application of an alternative polytope center: the analytic center. The analytic center of a convex polytope K expressed via the H-representation $Ax \le b$ is canonically defined as an element of K that maximizes the product of the distances to the hyperplanes characterized by the rows of A and the elements of b. Definition 2.3 (Analytic center). Let K be a convex polytope expressed via the H-representation $Ax \le b$. The analytic center of K is the point $x \in \mathbb{R}^d$ which satisfies the following equivalent objectives:

$$x_{ac} = \arg\max_{x \in K} \prod_{i=1}^{m} \Big( b_i - \sum_{j=1}^{d} a_{ij} x_j \Big) = \arg\min_{x \in K} \; - \sum_{i=1}^{m} \log \Big( b_i - \sum_{j=1}^{d} a_{ij} x_j \Big), \qquad (2)$$

where the second objective is canonically known as the logarithmic barrier potential (Nesterov & Nemirovskii, 1994).

It naturally follows that when the boundary planes of the polytope are symmetric about the analytic center (e.g. for polytopes that satisfy a central symmetry property), the analytic center exactly coincides with the Chebyshev center. A polytope $K \subset \mathbb{R}^d$ is centrally symmetric if $K = -K$; that is, $x \in K$ if and only if there is a unique point y such that the reflection of x about y is in K: $2y - x \in K$. The analytic center, namely a weighted analytic center, has been extensively used in interior point (IP) methods for linear programming (Boyd & Vandenberghe, 2004).
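As a numerical companion to Definition 2.3 (our sketch, not the authors' implementation), the analytic center of a small polytope can be recovered by plain gradient descent on the log-barrier of Eq. (2). The toy polytope, step size, and backtracking rule below are hypothetical choices.

```python
import numpy as np

def analytic_center(A, b, x0, steps=2000, lr=1e-2):
    # Minimize the log-barrier phi(x) = -sum_i log(b_i - a_i . x) from a
    # strictly feasible start x0 (A x0 < b); grad phi = A^T (1 / slack).
    x = x0.astype(float)
    for _ in range(steps):
        slack = b - A @ x
        step = lr * (A.T @ (1.0 / slack))
        while np.any(b - A @ (x - step) <= 0):  # backtrack to stay inside K
            step *= 0.5
        x = x - step
    return x

# unit square [0, 1]^2: by symmetry the analytic center is (0.5, 0.5)
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.array([1., 0., 1., 0.])
print(analytic_center(A, b, np.array([0.9, 0.1])))  # approx. [0.5, 0.5]
```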
In adversarial machine learning, we are not just concerned that the classification be correct, but we also want to be robust against adversarial examples, i.e. small perturbations to the input which may change the output to an incorrect class. Definition 2.4 ( -robust). f is called -robust with respect to norm p at x if the classification is consistent for a small ball of radius around x:\nc(x+ δ) = c(x),∀δ : ||δ||p ≤ (3)\nThis -robustness is intimately related to the uniform and local Lipschitz smoothness of f . Recall that a function f has finite, global Lipschitz constant k > 0 with respect to norm || · ||, if\n∃k ≥ 0 s.t. |f(x)− f(x′)| ≤ k · ||x− x′||,∀x, x′ ∈ X\nAn immediate consequence of Eq. 3 and Def. 2.3 is that if f is uniformly L-Lipschitz, then f is -robust at x with = 12L (Pa − Pb) where Pa is the likelihood of the most likely outcome, and Pb is the likelihood of the second most likely outcome (Salman et al., 2019). The piecewise linearity of ReLU networks facilitates the extension of this consequence to the locally Lipschitz regime such that for case 1 of Fig. 1, L corresponds to the norm of the affine map characterized by f conditioned on an input x.\nLocal smoothness is a property exhibited by provably and empirically robust networks. In Fig 2, the adversarially trained neural network is smoother compared to the vanilla network (larger linear regions with linear functions that have smaller slope and smoother transitions—“corner points”—between linear functions). In conjunction with smoothness, adversarial training also results in coarser partitions, i.e. larger polytopes, in contrast to vanilla networks. It is reasonable to assume that both of these properties are desirable in the context of robust classification.\nIn the case of piecewise linear networks, the lower-bound on the robustness of a network at a point given its associated polytope may be described with respect to two cases presented in Fig. 1(a)-(b). In case 1, the point may be closer to the boundary of its polytope compared to the decision boundary, and vica-versa for case 2. It follows that for case 1, the distance to the polytope boundary is a lower bound on the robustness of the network at that point. Croce et al. propose to optimize this certificate exactly, and we propose to optimize a relaxed version presented in the next section." }, { "heading": "3 GEOMETRIC REGULARIZATION", "text": "" }, { "heading": "3.1 ANALYTIC CENTER REGULARIZER (ACR)", "text": "We propose to adopt the logarithmic barrier potential as a regularizer to encourage points be near their analytical centers. We define the analytic center regularization (ACR) as\nJACR(x) = m∑ i=1 log(ai − n∑ j=1 vijxj) (4)\nwhere V and a are computed for network f given an input x via Eq. (1), and row-normalized. In related work, Croce et al.; Liu et al. (2020) both propose variants to the vanilla MMR formulation which involve regularizing a summation over the distances to the k nearest boundary hyperplanes as opposed to only the distance to the nearest plane. This extension aims to ensure that the input is far from the boundary of the polytope. However, in addition to tuning this number, both methods employ a warm-up scheme to gradually increase the number of hyperplanes considered. 
In contrast, by regularizing the distance to the analytic center, ACR natively inherits the desirable properties of MMR while taking into account all hyperplanes comprising the boundary and involving no additional hyperparameters.

We note that many assumptions made regarding the analytic center rely on the polytope P being minimally represented (e.g. the uniqueness of the analytic center). To elucidate this concept, we define redundant hyperplanes:

Definition 3.1 (Redundant hyperplanes). Let $P := \{x \mid Ax \le b\}$ be a polytope comprised of m constraints and let $i \in [m]$. $a_i x \le b_i$ is a redundant constraint, or a redundant hyperplane, if it can be removed from the system without changing P, and irredundant otherwise. If all constraints in $Ax \le b$ are irredundant, then the system is irredundant, and redundant otherwise.

In other words, the H-representation of a polytope is not unique. In fact, there are infinitely many H-representations of a polytope, but only one minimal representation. It is well known that the inclusion of redundant hyperplanes may significantly affect the location of the analytic center.

Lemma 3.1 (Boyd & Vandenberghe (2004)). Let $P := \{x \mid Ax \le b\}$ be an H-representation of a polytope. The inclusion of redundant hyperplanes can make any interior point the analytic center.

To address this, we propose a masked variant of the ACR term which ensures that only boundary hyperplanes of P are included in the objective. However, recovering an irredundant H-representation is nontrivial. We apply an approximate method which leverages the primal-dual equivalence between the half-space intersection problem and the convex hull problem. Details are provided in Appendix E.

In practice, we found that the application of ACR to all constraints (i.e. without removing redundant constraints) performs as well as or better than the masked variant, in addition to having lower computational overhead. We hypothesize that this is due to the composite nature of neural networks and the fact that the output of the network f is dependent on all hyperplanes (not just the irredundant hyperplanes), so that ignoring redundant hyperplanes during regularization may adversely affect learning. Additional justification for the inclusion of the redundant constraints when evaluating GR can be found in Fig. 11, where we see that the network trained with GR exhibits a more even and symmetric distribution of hyperplanes compared to other networks." }, { "heading": "3.2 LINEAR DECISION BOUNDARY REGULARIZER (LDR)", "text": "Recent work (Moosavi-Dezfooli et al., 2019; Qin et al., 2019) has demonstrated that regularizing the curvature of the loss corresponds to regularizing the curvature of the decision boundary, thus improving robustness. Specifically, Moosavi-Dezfooli et al. (2019) and Qin et al. (2019) respectively provide a tight certificate that is linear in the maximum eigenvalue of the Hessian of the loss with respect to the input, and an upper bound on the adversarial loss. However, the proposed regularization schemes suffer drawbacks. Namely, both methods require the careful setting of a parameter which characterizes the local approximation of the Hessian, and Qin et al. (2019) requires multiple iterations per training iteration. We exploit the intrinsic manifold structure of the data to encourage the loss to behave linearly around the data via the following regularization term:

$$\int_{\mathcal{M}} \| \nabla^2_{\mathcal{M}} f(x) \|^2 \, d\mu(x). \qquad (5)$$

In general, we cannot compute this integral because $\mathcal{M}$ is not known analytically. Belkin et al. (2006); Belkin & Niyogi (2008) propose the following discrete approximation, which converges to the integral in the sample limit:

$$J_{LDR}(x) = \frac{1}{N_b^2} \sum_{\substack{i,j \in [N_b] \\ i \ne j}} \frac{\| \nabla_x \ell(x_i) - \nabla_x \ell(x_j) \|_2^2}{\| x_i - x_j \|_2^2}, \qquad (6)$$

where $N_b$ is the minibatch size. Note that the contents of batches are randomized each epoch. The application of regularizers based on the graph Laplacian has also been explored in the past (Mishne et al., 2019), including in the context of adversarial robustness (Jin & Rinard, 2020). However, we propose to apply manifold regularization to the gradient of the loss instead of to the weights of the classifier. In the case of piecewise linear networks, this regularizer may be interpreted as reducing the angle between adjacent linear restrictions of f. Such sharp angles are known to facilitate brittleness in the presence of adversarial noise (Roth et al., 2019).
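The discrete penalty of Eq. (6) can be written in a few lines of PyTorch; the sketch below is our illustration (not the authors' code) and assumes a standard classification model with cross-entropy loss. Passing create_graph=True keeps the penalty differentiable so it can be minimized jointly with the classification loss.

```python
import torch
import torch.nn.functional as F

def ldr_penalty(model, x, y):
    # Pairwise differences of per-sample input gradients of the loss,
    # weighted by squared input distances (Eq. 6).
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y, reduction='sum')
    g = torch.autograd.grad(loss, x, create_graph=True)[0].flatten(1)
    xf = x.flatten(1)
    gd = (g[:, None, :] - g[None, :, :]).pow(2).sum(-1)    # ||grad_i - grad_j||^2
    xd = (xf[:, None, :] - xf[None, :, :]).pow(2).sum(-1)  # ||x_i - x_j||^2
    nb = x.shape[0]
    off = ~torch.eye(nb, dtype=torch.bool, device=x.device)
    return (gd[off] / xd[off].clamp_min(1e-12)).sum() / nb ** 2
```

In a training step, the returned scalar would simply be scaled and added to the classification loss, matching the combined objective of Eq. (7) below.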
(2006); Belkin & Niyogi (2008) propose the following discrete approximation that converges to the integral in the sample limit:\nJLDR(x) = 1\nN2b ∑ i,j∈[Nb] i 6=j ||∇x`(xi)−∇x`(xj)||22 ||xi − xj ||22 , (6)\nwhere Nb is the minibatch size. Note that the contents of batches are randomized each epoch. The application of regularizers based on the graph Laplacian have also been explored in the past (Mishne et al., 2019), including in the context of adversarial robustness (Jin & Rinard, 2020). However, we propose to apply manifold regularization to the gradient of the loss instead of the weights of classifier. In the case of piecewise linear networks, this regularizer may be interpreted as reducing the angle between adjacent linear restrictions of f . Such sharp angles are known to facilitate brittleness in the presence of adversarial noise (Roth et al., 2019)." }, { "heading": "3.3 GEOMETRIC REGULARIZATION OF RELU NETWORKS", "text": "Following the definitions above, we propose the following objective for adversarially robust training\nL(D) = ED[`(x) + λAJACR(x) + λLJLDR(x)] (7) where `(x) is an arbitrary classification loss and λA and λL are hyper-parameters to be optimized. We note that, given an input x, a forward pass through f is sufficient to compute both the linearization of f around x and the associated region (the polytope). Given these entries, optimizing the regularization term corresponds to solving a smooth, unconstrained convex problem - which can be done very efficiently using gradient descent. Analogous to adversarial training, LDR requires computation of ∇x`(x) – performed via backpropagation. However, the gradients are computed only once per batch as opposed to iteratively as in adversarial training-based methods." }, { "heading": "3.4 GEOMETRIC ROBUSTNESS GUARANTEES FOR RELU NETWORKS", "text": "We show that GR provably optimizes a lower bound on the radius of robustness. Given an input x and network f , let dB(x) be the distance from x to the boundary of its polytope and let dD(x) be the distance from x to the decision boundary defined by f . To recover robustness guarantees when dB(x) ≤ dD(x), and demonstrate that GR optimizes a lower bound on the robustness, we rely on the Dikin Ellipsoid (DE), defined by the Hessian of the logarithmic barrier function (Nesterov & Nemirovskii, 1994).\nDefinition 3.2 (Dikin ellipsoid). The Dikin ellipsoid of radius r of a polytopeK expressed byAx ≤ b centered at x̂ is defined as the set: Er(x̂) = {x|(x− x̂)TH(x̂)(x− x̂) ≤ r}, where H is the Hessian\nof the log-barrier function defined to be H = ATS−2A with sij = { aix− bi i = j 0 otherwise .\nNotably, the Hessian of the logarithmic barrier function and the Dikin ellipsoid describe well the local geometry of K at x in the following sense (we refer to Nesterov & Nemirovskii (1994) for details and proofs): (1) the Dikin ellipsoid is contained inside K: E1(x) ⊆ K for any x ∈ K. (2) The Hessian of the log-barrier changes slowly with respect to its local norm; while exploding to infinity at the boundary of K. In other words, the gradient of the logarithmic barrier function at points near the boundary of K is dominated by a component with direction perpendicular to the boundary and pointing towards the interior of K. (3) The volume of the Dikin ellipsoid is guaranteed to be at least 1 mV ol(K) where m is the number of hyperplanes comprising the boundary of K. 
An example of an analytic center and its Dikin ellipse are given in Figure 1c.\nThe properties of the Dikin ellipsoid imply that it may be used to construct a certificate at x by taking the length of the shortest semi-axis of the ellipsoid. In particular, the length of the minor axis of the Dikin ellipsoid at x may serve as a uniform lower-bound on the radius of robustness, and optimizing Eq. 4 provably improves this radius. More concretely, we get the following robustness guarantees: Theorem 1 (ACR Robustness guarantee for ReLU networks).\n1. If dB(x) ≤ dD(x), then f is locally robust at x with radius = 1/ √ λmax where λmax is\nthe maximum eigenvalue of the Hessian matrix (and 1/ √ λmax is the length of the minor axis of the Dikin ellipsoid) of the log-barrier potential function (maximized when JACR is minimized).\n2. If dB(x) ≥ dD(x), let g = ∇Lx(x). Then, for some constant c, the radius = c||g|| − 2νc2 ||g||3\nwhere ν is the maximum eigenvalue of the Hessian ofL(D)(maximized as JLDR is minimized).\nNote that i-th axis of the ellipse described by the quadratic form xTAx = r, corresponds to the i-th eigenvector of A with length 1√\nλi .\nAs previously stated, we propose to use the length of the shortest minor sub-axis as a certificate of robustness. We call this certificate the Dikin certificate. The proof that the Dikin certificate is a lower bound on the radius of robustness is given in the appendix, and relies on the following property and lemma which directly follow from the definitions of the Dikin ellipsoid and the analytic center. Property 3.1. Let K be a convex polytope. Let x be an interior point of K and let Ex be the Dikin ellipse defined at x. Then Ex ⊆ K. Lemma 3.2. Let K be a centrally symmetric convex polytope, and let H be the Hessian of the logarithmic barrier function defined on an irreducibleH-representation of K. Then, xac = xcheb and λmax(H(xac)) ≤ λmax(H(x)) ∀x ∈ K.\nIn other words, the point that satisfies the largest radius of robustness with respect to the Dikin certificate is exactly the analytic center." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 MAIN EXPERIMENTS", "text": "We provide a variety of experiments to demonstrate the performance of GR. We evaluate GR on three datasets: MNIST (LeCun & Cortes, 2010), Fashion MNIST (Xiao et al., 2017), and CIFAR-10 (Krizhevsky et al.). We consider robustness with respect to both `2 and `∞ distances. We use three criteria: upper and lower bounds on the robust test error for a given threshold and the mean lower bound on the pointwise radius of robustness. Lower bounds on the robust test error for `2 and `∞ are computed using the attack via Projected Gradient Descent (PGD) (Madry et al., 2018). Upper bounds and robust radii are computed using Wong & Kolter (2018).\nWe report our results in Table 1. We compare seven methods: plain training, adversarial training (AT) of (Madry et al., 2018), the robust loss of (Wong & Kolter, 2018) (KW), the MMR and MMR +\nwith adversarial training scheme (Croce et al.), and our approach (GR) with and without adversarial training. All schemes are evaluated on a one hidden layer fully connected network with 1024 hidden units (FC1) and a convolutional network (CNN) with 2 convolutional and 2 dense layers. Note that unlike other reported results which fine-tune their defense to each attack type, we demonstrate empirical robustness against multiple perturbation types by training one model per architecture and dataset and report its robustness against different attack types. 
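As a reference for the regularizer used in these experiments, a minimal PyTorch-style sketch of evaluating the discrete LDR term of Eq. (6) on a minibatch is given below. It assumes `loss_fn` returns per-example losses (reduction='none'); all names are illustrative, and this is a sketch, not the authors' code.

```python
import torch

def ldr_term(model, loss_fn, x, y):
    """Minibatch estimate of J_LDR (Eq. 6): pairwise squared differences of
    per-example loss gradients, normalized by squared input distances."""
    x = x.clone().requires_grad_(True)
    per_example_loss = loss_fn(model(x), y)          # shape (Nb,)
    grads = torch.autograd.grad(per_example_loss.sum(), x, create_graph=True)[0]
    g, xf = grads.flatten(1), x.flatten(1)
    num = torch.cdist(g, g).pow(2)                   # ||grad_i - grad_j||_2^2
    den = torch.cdist(xf, xf).pow(2) + 1e-12         # ||x_i - x_j||_2^2
    nb = x.shape[0]
    off_diag = ~torch.eye(nb, dtype=torch.bool, device=x.device)
    return (num / den)[off_diag].sum() / nb**2
```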
Additional details are in the Appendix.\nOn MNIST, we outperform or match MMR’s clean test error and lower-bound robust accuracy performance with respect to both `2 and `∞-norm perturbations. With respect to robust accuracy, we note that although the GR certificate is a lower bound on the MMR certificate, we closely match MMR’s KW-based average certified radius. On F-MNIST and CIFAR10, we again match MMR with respect to clean test error and empirical robust accuracy and certifiable radius. We emphasize that Croce et al. perform grid search on model parameters for each dataset and perturbation, while our performance is based on a single model trained per-dataset.\nWe compare our approach to state-of-the-art first and second-order regularizers in Table 2. Details of these algorithms are provided in the appendix. We note that TRADES and AT remain the state-of-theart heuristic methods, while GR performs competitively with other provable methods that encourage local linearity of the loss. We note that these methods (i.e. LLR, TULIP, and CURE) necessitate\niterating or sampling around each training example to compute their regularization terms and, and thus require additional overhead and setting hyperparameters governing their sampling procedures.\nAs mentioned before, the GR certificate is a lower bound on the certificate proposed by Croce et al. which is a lower-bound on the radius of robustness. We present the empirical differences in point-wise certified radii in Fig. 3. For networks that are adversarially trained or regularized with GR, the distribution of point-wise radii of robustness is close. In contrast, the distributions are significantly different for vanilla networks and networks trained with MMR. The implication is that for certain networks and inputs, the Dikin ellipsoid may not optimally characterize the polytopes, in particular, if theH-representation of a polytope exhibits redundancy concentrated on one side, or if the polytope is poorly conditioned in the sense that H is poorly conditioned (K is geometrically “elongated”)." }, { "heading": "4.2 ROBUSTNESS TO GRADIENT OBFUSCATION", "text": "Athalye et al. (2018); Qin et al. (2019) demonstrated that empirical robustness of neural networks may often be due to the effect of gradient obfuscation—i.e. the network learns to fool first-order attacks by contorting the loss surface. Although this non-linearity of the loss surface may cause networks to appear to exhibit robustness to adversarial attacks, Athalye et al. (2018) further demonstrated that merely running PGD for more iterations reliably produces adversarial perturbations. In contrast, networks that exhibit robustness while being close to locally linear exhibit “true” robustness to adversarial attacks. In Fig 4, we demonstrate that our regularizer produces models that exhibit such robustness to gradient obfuscation. We plot local regions of the loss surface near the decision boundary for various networks and show that training with GR leads to a smooth loss in the region of samples near the decision boundary, replicating the effect of adversarial training, while the local loss surface exhibited by networks trained with MMR leads to highly non-linear loss surfaces." }, { "heading": "5 CONCLUSION", "text": "We have introduced a method based on leveraging the geometry of piecewise-linear neural networks for training classifiers that are provably robust to norm-bounded adversarial attacks. 
Our approach is based on the notion that piecewise-linear classifiers with large linear regions and smoother decision boundaries are robust. We demonstrate that our method learns robust networks that exhibit properties typical to robust networks and perform competitively with current state-of-the art methods, including related, geometrically-motivated techniques. By exploiting the geometry of the network and data, our method relies on fewer hyperparameters compared to related approaches. Furthermore, our method is scalable and leads to models which are simultaneously robust to multiple perturbation models." }, { "heading": "A PRELIMINARIES", "text": "Let Ax ≤ b be an irreducible, affine matrix inequality characterizing a polytope in Rd where A ∈ Rm×n, x ∈ Rn and b ∈ Rn. We denote the feasible set characterizing the polytope by K:\nK = {x ∈ Rm|Ax− b ≤ 0} Without loss of generality, let the rows of A be normalized under any `p norm. Then, the `p distance between any interior point x of K to the i-th plane on the boundary of K is nothing but di(x) = a T i x− bi.\nFor completeness, we restate the curvature-based certificate of Moosavi-Dezfooli et al. (2019) below. Theorem 2 (Moosavi-Dezfooli et al. (2019)). Let `(x) denote the loss of a neural network f evaluated at x, and let r∗ be be the minimal perturbation necessary to fool the classifier. Let H be the Hessian matrix of `(x), H = ∂\n2` ∂xi∂xj ∈ Rd×d. Then, ||r∗|| is the radius of robustness of f at x with respect to the second order Taylor expansion of ` at x:\nr∗ = arg min r ||r|| s.t. `(x) +∇`(x)T r + 1 2 rTHr ≥ t\nfor a threshold t. Let x be such that c := t− `(x) ≥ 0, and let g = ∇`(x). Assume that ν := λmax and let u be the eigenvector corresponding to ν. Then, we have\n||g|| ν\n(√ 1 + 2νc\n||g||2 − 1\n) ≤ ||r∗||)\n≤ |g Tu| ν\n(√ 1 + 2νc\n(gTu)2 − 1 ) The above bounds can be further simplified to:\nc ||g|| − 2ν c\n2\n||g||3 ≤ ||r∗|| ≤ c gTu" }, { "heading": "B PROOFS", "text": "B.1 THE DIKIN ELLIPSOID OF RADIUS 1 IS CONTAINED WITHIN K. (BOYD & VANDENBERGHE, 2004)\nLet Ex denote the Dikin ellipse of radius 1 at x ∈ K. Note that since x ∈ K, di(x) ≥ 0∀i. Let y ∈ Ex. Additionally, let H be the Hessian of the logarithmic barrier function at a feasible point x:\nH(x) = ATS−2A with sij = { di(x) i = j\n0 otherwise*\nThe statement can be proven by showing that y ∈ K (i.e. y is feasible). Equivalently, we will show that di(y) ≥ 0 ∀i.\n(y − x)TH(x)(y − x) ≤ 1 by definition\n=⇒ m∑ i=1 〈ai, y − x〉2 d2i ≤ 1\n=⇒ 〈ai, y − x〉 2\ndi(x)2 ≤ 1 ∀i since all entries in the summation are nonnegative\n=⇒ ( di(x)− di(y)\ndi(x)\n)2 ≤ 1 ∀i\n=⇒ ∣∣∣∣1− di(y)di(x) ∣∣∣∣ ≤ 1 ∀i Hence, for all i, 0 ≤ di(y)di(x) , so di(y) ≥ 0 ∀i, and y ∈ K.\nB.2 LEMMA 3.2\nRecall the problem of recovering the Analytic center defined in Eq 4:\nxac = arg max x∈K m∏ i=1 bi − d∑ j=1 aijxj = arg max x∈K m∏ i=1 di(x) (8)\nFor brevity, let F = ∏m i=1 di(x). Then the problem may be concisely written as\narg max x∈K F (x) (9)\nWe will show that for any convex polytope K satisfying a central symmetry property (for example, zonotopes), that the maximizer of this problem is precisely a unique origin of the polytope. By definition, for centrally symmetric polytopes, for any constraint aTi x ≤ b there is a corresponding constraint aTi x ≤ −b, and there exists a unique interior point xc such that if x ∈ K, its inversion with respect to xc is also in K: x̄ ∈ K where x̄ = 2xc− x. We call this point xc, the origin of K. 
A simple consequence of this definition is that xc lies at the midpoint of the line xx̄. It follows that if xi is the minimum-distance projection from x onto boundary plane i, and if x̄i lies on plane j, then di = dj .\nThe lemma is a simple consequence of this definition. For any perturbation applied to xc, let i > 0 be the variation in the `p distance associated with the perturbation to boundary plane i. By the linearity of the boundary planes, the variation in the `p distance suffered by the inverse boundary plane of boundary plane i is − i.\nF (x) = m/2∏ i (di(xc)− i)(di(xc) + i)\n≤ m/2∏ i di(xc) 2 = F (xc)\nso arg maxx∈K F (x) = xc, and xac = xc. Accordingly, xc also corresponds to the Chebyshev center. Note that we can restate the definition of the Chebyshev center using the above notation:\nxcheb = arg max x min i di(x) (10)\nAgain, for any perturbation to xc, we have that\nmin i {min{di(xc)− i, di(xc) + i}} = min i {di(xc)− i}\n≤ min i di(xc)\nso the solution to Eq. 10 is the origin of K, and xac = xcheb. Finally, let H be the Hessian of the logarithmic barrier function at a feasible point x:\nH(x) = ATS−2A with sij = { di(x) i = j\n0 otherwise*\nThen, for any interior point x, it follows that\nλmax(H(xac)) = max i 1√ di(xac)\n≤ max i 1√ di(x) = λmax(H(x))\nB.3 THEOREM 1\nPROOF OF (1)\nFirst, note that if dB(x) ≤ dD(x), then the predicted class of x does not change for ball of radius dB(x) around x—Bp(x, dB(x)). Let Ex be the Dikin ellipsoid at x characterized by H(x), the\nHessian of the logarithmic barrier potential evaluated at x. Let ri be the length of i-th shortest sub-axis of Ex. By definition,\nri = 1√ λi\nwhere λi is the i-th largest eigenvalue of H(x).\nSince Ex ⊆ Kx, the polytope in which x lies, by Property 3.1, it follows that\nmin i ri = 1√ λmax ≤ dB(x)\nwhere λmax is the largest eigenvalue of H(x) and 1√λmax is a lower bound on the minimal `p-norm perturbation necessary to change the class.\nPROOF OF (2)\nNote that if dD(x) ≤ dB(x), then the Dikin certificate is not a sufficient lower-bound on the radius of robustness r∗, however, the bound of Theorem 2 holds globally:\nc ||g|| − 2ν c\n2\n||g||3 ≤ ||r∗||\nwhere c is a loss-dependant constant, g = ∇`(x), and ν = λmax." }, { "heading": "C MAIN EXPERIMENTS", "text": "the final set of hyperparameters will be made available on acceptance. In order to make a comparison to the robust training of Croce et al.; Wong & Kolter (2018) we take their publicly available models.\nWe perform adversarial training using the PGD attack of Madry et al. (2018) with a random selection of 50% clean and 50% adversarial examples in every batch. For the `2-norm, we used the implementation from Rauber et al. (2017) to perform PGD-based `2 attacks. We use the same `2-bound on the perturbation as the used in robust error. During training, we perform 40 iterations of the PGD attack for MNIST and F-MNIST, and 7 for CIFAR-10. During evaluation, we use 40 iterations for all datasets. Following Croce et al., the step size is selected as divided by the number of iterations and multiplied by 2.\nC.1 DETAILS OF STATE-OF-THE-ART METHODS\nWe use publicly available implementation for all methods. We denote L(D) = ED[`(x) +R(x)] to be the objective (e.g. as in Eq. 7.) We briefly describe how each technique we compare to in Table 2 formulates theR(x) term below: Gradient Regularization (TULIP) (Finlay & Oberman, 2019). 
The Gradient Regularization term of Finlay & Oberman (2019) is formulated as a norm on the gradient of the loss function with respect to the input:\nR(x) = β||∇x`(x)||\nLocally-Linear Regularization (LLR) (Qin et al., 2019). Qin et al. (2019) propose to regularize the local linearity of the classifier:\nR(x) = λγ( , x) + µ||δTLLR∇x`(x)|| where γ( , x) = maxδ∈B(x, ) g(f, δ, x) and g(f, δ, x) = |`(x+ δ)− `(x)− δ∇x`(x)| Locally-Lipschitz Regularization (TRADES) (Zhang et al., 2019). Trades is the current empirical state of the art and is motivated by the relationship between robustness and local Lipschitzness.\nR(x) = β max x′∈B(x, )\n`(x, x′)\nCurvature Regularization (CURE) (Moosavi-Dezfooli et al., 2019) Propose to regularize the curvature of the loss with respect to the input:\nR(x) = λ||H(x)||2F Note that H is estimated through iterative approximation by computing finite differences." }, { "heading": "D ADDITIONAL EXPERIMENTS", "text": "D.1 VISUALIZING THE EFFECTS OF REGULARIZATION\nWe explore the geometry of polytopes. In Fig. 5, we overlay histograms corresponding to the radii of Dikin ellipsoids of samples pre- and post-robust training. It is clear that robust training increases the size of polytopes, agreeing with the intuition of the GR certificate.\nIn Fig. 6 we visualize the decision boundary for different values of λL via Eq. 11. As expected, we see that as this weight is increased, the network imposes higher cost on the local loss curvature resulting in a linearization of the decision boundary and lower overall confidence in predictions over the domain. However, something unusual happens when λL exceeds a threshold: the network exhibits uncertainty uniformly over the domain, and both clean and adversarial test accuracy drop significantly.\nIn Table 4, we highlight the importance of both parts of the regularization, i) penalization of the distance to boundary of the polytope ii) penalizing the curvature of decision boundaries across polytopes via ablative experiments. We train FC1 on MNIST using the each component of the regularization scheme. We clearly see that lower bounds on robustness computed using PGD is always significantly better when both terms are used. In order to achieve the best empirical performance, it is necessary to both increase the distance of the points to the boundaries of the polytope and regularize the curvature of the decision boundary between polytopes.\nD.2 ANALYSIS OF ELLIPSOIDS AND ANALYTIC CENTERS\nThe geometry of the partitioning and the associated polytopes induced by an ReLU network is not obvious. Intuitively, we would like the partitions to be semantically reasonable, i.e. cleanly partition the domain into class-regions. However, this is not the case as implied by the existence of adversarial examples. An immediate consequence of the GR certificate is that we expect “realistic” examples to lie far from the boundary of their respective polytopes. We encourage this via the JACR regularizer.\nAs a consequence, after training a robust network, we expect the true analytic centers of a given polytope to be semantically interpretable and close to “real” samples. We demonstrate that this is the case in Figure 8. First, note that the ACs of polytopes produced by a vanilla network are entirely noisy. We hypothesize (Fig. 5) that partitions converge early in training, and that the linear functions on each partition are the focus of the majority of training. As a consequence, the content of polytopes for non-robust networks are near-random. 
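The analytic centers visualized in these experiments can be recovered numerically by maximizing the log-barrier potential of Eq. (8). Below is a minimal sketch via gradient ascent with backtracking, assuming a strictly feasible starting point `x0` (e.g. a Chebyshev-center estimate); names and step sizes are illustrative.

```python
import numpy as np

def analytic_center(A, b, x0, steps=500, lr=1e-2):
    """Maximize sum_i log(b_i - a_i^T x) by gradient ascent with backtracking.

    x0 must be strictly feasible for {x | Ax <= b}.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        s = b - A @ x                    # slacks; positive while feasible
        step = lr * (-A.T @ (1.0 / s))   # gradient of the log-barrier potential
        while np.any(b - A @ (x + step) <= 0):
            step *= 0.5                  # halve until strictly feasible
        x = x + step
    return x
```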
It is clear that the analytic centers of robust network polytopes are more interpretable (Fig. 5).\nD.3 VISUALIZING THE STRUCTURE OF ROBUST MODELS\nWe demonstrate several properties of networks trained with robust regularization terms. We train a single hidden layer feed forward ReLU network on the Two Moons dataset. The decision boundaries are presented in Fig. 10. We estimate decision boundaries by plotting\nr(x) = max i∈k L(y)−max j∈k j 6=i L(y) (11)\nwhere L(x) = e x∑\nk∈[K] e x and y are the logits of f evaluated at x. In other words, r(x) directly\ncorresponds to the second term of the Lipschitz robustness bound of f at x. We first note in Fig. 10 that MMR and GR preserve the decision boundary learned by the vanilla network for the most\npart, however, the network trained via adversarial training has over-regularized, resulting in a suboptimal classifier. Interestingly, the regularizer we propose does exhibit a significant gap between the most likely and second most likely class likelihoods this is a desirable property with respect to the GR robustness guarantee. We also notice that the network trained with GR exhibits less uniform smoothness compared to the other regularizers as seen in Fig. 9. MMR and adversarial training appear to over-regularize the network by imposing smoothness uniformly over the domain.\nIn Figures 11 and 12 we demonstrate that the network trained with GR exhibits spatial smoothness in contrast to the MMR term, i.e. neighboring points exhibit similar regularization loss. We hypothesize that this may partially provide evidence for (1) the empirical reduction in the number of linear partitions and increases their size (Fig. 5) such that more points are co-located within the interior of the same polytope and (2) smoother optimization.\nIn Fig. 9 we visualize the local smoothness of robust and non-robust networks. We uniformly sample points from the 2-d cube and approximate the stochastic local Lipschitz constant of f by computing the following term:\nL̂(x) = 1\nN N∑ i=1 f(x)− f(x− δi) δ2i\n(12)\nwhere δi ∼ U(0, ]. Note that for large values of L around a point x, f exhibits local instability, and potentially brittleness with respect to adversarial perturbations of x. In conjunction with Fig 10, we see that GR along with AT and MMR exhibits behavioral properties that are typical of robust networks (e.g. local smoothness and confidence).\nWe also plot visualizations of layer-weights for a fully connected network It is well-known that structured neuron weights lead to more robust networks (Allen-Zhu & Li, 2020). However it has only been recently deeply explored in the context of adversarial machine learning (Allen-Zhu & Li, 2020). We demonstrate that networks trained with GR exhibit this property of encouraging neurons to learn structured features." }, { "heading": "E EFFICIENT PRUNING OF REDUNDANT POLYTOPE REPRESENTATIONS", "text": "To remedy the issue of redundant representations of polytopes, we propose the following maskedversion of ACR:\nACRmasked(x) = ∑ i∈Ω log(ai − d∑ j=1 vijxj), (13)\nwhere Ω is the index-set of irredundant hyperplanes. To compute Ω, we leverage the primal-dual equivalence between half-plane intersections and convex hulls and adopt the approximate convex-hull algorithm of Sartipizadeh & Vincent (2016) to prune redundant hyperplanes.\nGiven a finite set of points S = {x1, . . . 
, xN} where each xi ∈ Rd, Sartipizadeh & Vincent (2016) propose to iteratively build an estimate of an -approximate convex hull (denoted by E) by selecting a point to add to E such that the maximum over all projections from S \\ E to E is minimized (i.e. is an extreme point of S), where the projection length from a point x to E is computed by solving the following quadratic program:\nd(v, E) = min αi ||v − |E|∑ i=1 αixi||2 s.t. α ≥ 0, |E|∑ i=1 αi = 1\ninstead of exactly solving this problem, Sartipizadeh & Vincent (2016) propose to select points that are -close to the convex hull - i.e. by requiring that maxv∈S d(v, E) ≤ . In summary, at iteration t the algorithm selects xt to add to E according to the following decision rule:\nxt = arg minx∈S\\E max v∈S\\E d(v, E ∪ x)\nNote that we are not guaranteed uniqueness, in that there may be multiple E with the same number of elements, satisfying the distance requirements. However, as increases, the size of a minimal representation, E , may decrease.\nNotably, the above algorithm runs in time independent-of the dimension: O(K3/2N2 log K ) for N points in dimension d, and where K is the number of iterations. Additionally, it is provably correct, i.e. after the algorithm terminates the set E is guaranteed to be an -approximate convex hull of S where an -approximate convex hull is defined to be: Definition E.1 ( -approximate convex hull). An -approximate convex hull of S is the convex hull of a minimal subset E ⊆ S such that ∀z ∈ S, d(z,S) ≤ ." } ]
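As an illustration, the projection distance d(v, E) above can be evaluated as a small quadratic program. A minimal sketch using SciPy's SLSQP solver follows (a generic solver choice, not necessarily the one used by Sartipizadeh & Vincent (2016)); all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def dist_to_hull(v, E):
    """d(v, E): Euclidean distance from v to conv(E), via the QP
    min_alpha ||v - sum_i alpha_i x_i||^2  s.t.  alpha >= 0, sum_i alpha_i = 1."""
    E = np.asarray(E, dtype=float)                     # shape (|E|, d)
    k = E.shape[0]
    res = minimize(
        lambda a: np.sum((v - a @ E) ** 2),
        x0=np.full(k, 1.0 / k),
        bounds=[(0.0, 1.0)] * k,
        constraints=({'type': 'eq', 'fun': lambda a: a.sum() - 1.0},),
        method='SLSQP',
    )
    return float(np.sqrt(res.fun))
```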
2,020
null
SP:a26ff5fe208e5a7a24775d3823d886fc68b89997
[ "This paper studies the problem of attacking graph neural networks for spatio-temporal prediction problems (e.g., traffic speed prediction). The input of the problem is a spatio-temporal sequence represented as graphs at time t-N+1 to t, where a graph neural network is trained to predict the graph sequence for time t+1 to t+M. The aim is to add perturbations to the input graph sequence, such that the predicted graph sequence varies from the ground truth as much as possible. " ]
Spatiotemporal forecasting plays an essential role in intelligent transportation systems (ITS) and numerous applications, such as route planning, navigation, and automatic driving. Deep spatiotemporal graph neural networks, which capture both spatial and temporal patterns, have achieved great success in traffic forecasting applications. Though deep neural networks (DNNs) have been proven to be vulnerable to carefully designed perturbations in multiple domains like object classification and graph classification, these adversarial attacks cannot be directly applied to spatiotemporal GNNs because of their causality and spatiotemporal mechanism. There is still a lack of studies on the vulnerability and robustness of spatiotemporal GNNs. Particularly, if spatiotemporal GNNs are vulnerable in real-world traffic applications, a hacker can easily cause serious traffic congestion and even a city-scale breakdown. To fill this gap, we design One Vertex Attack to break deep spatiotemporal GNNs by attacking a single vertex. To achieve this, we apply a genetic algorithm, with a universal attack method as the evaluation function, to locate the weakest vertex; perturbations are then generated by solving an optimization problem with inverse estimation. Empirical studies prove that perturbations at one vertex can diffuse into most of the graph when spatiotemporal GNNs are under One Vertex Attack.
[]
[ { "authors": [ "Naveed Akhtar", "Ajmal Mian" ], "title": "Threat of adversarial attacks on deep learning in computer vision: A survey", "venue": "arXiv preprint arXiv:1801.00553,", "year": 2018 }, { "authors": [ "Scott Alfeld", "Xiaojin Zhu", "Paul Barford" ], "title": "Data poisoning attacks against autoregressive models", "venue": "In AAAI,", "year": 2016 }, { "authors": [ "Shaojie Bai", "Zico Kolter", "Vladlen Koltun" ], "title": "An empirical evaluation of generic convolutional and recurrent networks for sequence modeling", "venue": "arXiv preprint arXiv:1803.01271,", "year": 2018 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral networks and deep locally connected networks on graphs", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Heng Chang", "Yu Rong", "Tingyang Xu", "Wenbing Huang", "Honglei Zhang", "Peng Cui", "Wenwu Zhu", "Junzhou Huang" ], "title": "A restricted black-box adversarial framework towards attacking graph embedding models", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Yize Chen", "Yushi Tan", "Baosen Zhang" ], "title": "Exploiting vulnerabilities of load forecasting through adversarial attacks", "venue": "In ACM ICFES,", "year": 2019 }, { "authors": [ "Hanjun Dai", "Hui Li", "Tian Tian" ], "title": "Adversarial attacks on graph structured data", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Ian J. Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Shengnan Guo", "Youfang Lin", "Ning Feng", "Chao Song", "Huaiyu Wan" ], "title": "Attention based spatialtemporal graph convolutional networks for traffic flow forecasting", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Weiwei Hu", "Ying Tan" ], "title": "Black-box attacks against RNN based malware detection algorithms", "venue": "arXiv preprint arXiv:1705.08131,", "year": 2017 }, { "authors": [ "Fazle Karim", "Somshubra Majumdar", "Houshang Darabi" ], "title": "Adversarial attacks on time series", "venue": "arXiv preprint arXiv:1902.10755,", "year": 2019 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "arXiv preprint arXiv:1611.01236,", "year": 2016 }, { "authors": [ "Alexey Kurakin", "Ian J. Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Yaguang Li", "Rose Yu", "Cyrus Shahabi", "Yan Liu" ], "title": "Diffusion convolutional recurrent neural network: Data-driven traffic forecasting", "venue": "arXiv preprint arXiv:1707.01926,", "year": 2017 }, { "authors": [ "Nicolas Papernot", "Patrick D. McDaniel", "Ian J. Goodfellow", "Somesh Jha", "Z. Berkay Celik", "Ananthram Swami" ], "title": "Practical black-box attacks against deep learning systems using adversarial examples", "venue": "arXiv preprint arXiv:1602.02697,", "year": 2016 }, { "authors": [ "Nicolas Papernot", "Patrick D. McDaniel", "Ananthram Swami", "Richard E. 
Harang" ], "title": "Crafting adversarial input sequences for recurrent neural networks", "venue": "arXiv preprint arXiv:1604.08275,", "year": 2016 }, { "authors": [ "John Roddick", "Myra Spiliopoulou" ], "title": "A bibliography of temporal, spatial and spatio-temporal data mining research", "venue": "In ACM SIGKDD Explorations Newsletter,", "year": 1999 }, { "authors": [ "Ishai Rosenberg", "Asaf Shabtai", "Yuval Elovici", "Lior Rokach" ], "title": "Defense methods against adversarial examples for recurrent neural networks", "venue": null, "year": 1901 }, { "authors": [ "David I. Shuman", "Sunil K. Narang", "Pascal Frossard", "Antonio Ortega", "Pierre Vandergheynst" ], "title": "The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains", "venue": "IEEE Signal Processing Magazine,", "year": 2013 }, { "authors": [ "Jiawei Su", "Danilo Vasconcellos Vargas", "Sakurai Kouichi" ], "title": "One pixel attack for fooling deep neural networks", "venue": null, "year": 2019 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Xianfeng Tang", "Yandong Li", "Yiwei Sun", "Huaxiu Yao", "Prasenjit Mitra", "Suhang Wang" ], "title": "Transferring robustness for graph neural network against poisoning attacks", "venue": "ICWSDM,", "year": 2020 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses", "venue": "arXiv preprint arXiv:1705.07204,", "year": 2017 }, { "authors": [ "Zonghan Wu", "Shirui Pan", "Guodong Long", "Jing Jiang", "Chengqi Zhang" ], "title": "Graph wavenet for deep spatial-temporal graph modeling", "venue": "In IJCAI,", "year": 2019 }, { "authors": [ "Bing Yu", "Haoteng Yin", "Zhanxing Zhu" ], "title": "Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting", "venue": "In IJCAI,", "year": 2018 }, { "authors": [ "Xingyu Zhou", "Yi Li", "Carlos A. Barreto", "Peter Volgyesi", "Xenofon Koutsoukos" ], "title": "Load forecasting with adversarial attacks in power systems using deepforge", "venue": "ASHTSS,", "year": 2019 }, { "authors": [ "Daniel Zugner", "Stephan Gunnemann" ], "title": "Adversarial attacks on graph neural network via meta learning", "venue": "In ICLR,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Spatiotemporal traffic forecasting has been a long-standing research topic and a fundamental application in intelligent transportation systems (ITS). For instance, with better prediction of future traffic states, navigation apps can help drivers avoid traffic congestion, and traffic signals can manage traffic flows to increase network capacity. Essentially, traffic forecasting can be modeled as a multivariate time series prediction problem for a network of connected sensors based on the topology of road networks. Given the complex spatial and temporal patterns governed by traffic dynamics and road network structure (Roddick & Spiliopoulou, 1999), recent studies have developed various Graph Neural Networks-based traffic forecasting models (Yu et al., 2018; Wu et al., 2019; Li et al., 2017; Guo et al., 2019).\nThese deep learning models have achieved superior performance compared with traditional multivariate time series forecasting models such as vector autoregression (VAR). However, recent research has shown that deep learning frameworks are very vulnerable to carefully designed attacks (Kurakin et al., 2016b; Goodfellow et al., 2014; Papernot et al., 2016a; Tramèr et al., 2017; Kurakin et al., 2016a). This raises a critical concern about the application of spatiotemporal GNNbased models for real-world traffic forecasting, in which robustness and reliability are of ultimate importance.\nFor example, with a vulnerable forecasting model, a hacker can manipulate the predicted traffic states. Feeding these manipulated values into the downstream application can cause severe problems such as traffic congestion and even city-scale breakdown. However, it remains unclear how vulnerable these GNN-based spatiotemporal forecasting models are. Particularly, previous adversarial works cannot be directly applied to fool GNN-based spatiotemporal forecasting models because of their causality and spatiotemporal mechanism, which is detailed in Section 2.\nThe goal of this paper is to understand and examine the vulnerability and robustness of GNN-based spatiotemporal forecasting models. In doing so, we design a One Vertex Attack (OVA) framework\nto break these forecasting models by manipulating only one vertex in the graph. We first propose a universal attack method against spatiotemporal GNNs by applying the inverse estimation to avoid using future ground truth. Then, we utilize the genetic algorithm, whose evaluation function is composed of the proposed universal attack method, to locate the “weakest” vertex. Here the weakest vertex refers to the vertex where attacking it will cause maximum damage to the forecasting models. Finally, we generate perturbations by solving an optimization problem.\nIt should be noted that poisoning all vertices even multiple vertices in real-world applications is impossible, because the large scale of graph. For instance, the graph of traffic forecasting applications generally covers 1000 square kilometers, and it is unrealistic to organize Harker vehicles to poison all vertices in such a large scale road network. Hence, the proposed one-vertex attack is a realistic solution to evaluate the robustness and vulnerability of spatiotemporal forecasting models deployed in real-world applications.\nTo prove the effectiveness of the proposed OVA method, we test it in two spatiotemporal traffic datasets with three different Spatiotemporal GNNs. 
The proposed method can cause at least a 15% accuracy drop, and about 10% of vertices are severely impacted when the boundary of speed variation is limited to 15 km/h.\nThe contributions of this paper can be summarized as follows.\n• First, to the best of our knowledge, this is the first study on attacking spatiotemporal GNNs by poisoning only one vertex.\n• Second, we propose a novel OVA method that is able to find the weakest vertex and generate optimal adversarial perturbations.\n• Third, we empirically study the effectiveness of the proposed method with multiple experiments on real-world datasets." }, { "heading": "2 RELATED WORK", "text": "Adversarial Attacks against Time Series Analysis. Some previous works (Chen et al., 2019; Zhou et al., 2019; Alfeld et al., 2016; Karim et al., 2019) proposed adversarial attack methods against autoregressive models or time series classification models. The above works only consider univariate time series. Different from these works, we focus on complex spatiotemporal domains: the input of spatiotemporal GNNs is a temporally dynamic graph rather than regular matrices or sequences, and we take the spatial correlation into consideration while the above works did not.\nAdversarial Attacks against Graph Neural Networks. Many studies (Dai et al., 2018; Zugner & Gunnemann, 2019; Chang et al., 2020; Tang et al., 2020) utilized reinforcement learning (RL), meta learning, or genetic algorithms to fool GNNs in node, edge, and graph classification domains by tuning the graph topology. All these studies involve no temporal variation in their graphs, and they mainly focus on spatial patterns. They cannot be applied to fool spatiotemporal forecasting models because of the lack of temporal correlation. In particular, attacking spatiotemporal forecasting models deployed in real-world applications with graph topology-based attack methods (Zugner & Gunnemann, 2019; Chang et al., 2020) is unrealistic, because tuning the graph topology amounts to tuning the sensor network that collects spatiotemporal data continuously, and any modification of sensors can be easily sensed by the sensor network manager.\nAdversarial Attacks against Recurrent Neural Networks. Recent studies (Rosenberg et al., 2019; Papernot et al., 2016b; Hu & Tan, 2017) demonstrated that RNN classifiers are vulnerable to adversarial sequences. These adversarial works require the ground truth to compute adversarial sequences. Because of the causality of forecasting applications, the future ground truth is unavailable. Besides, these works focus on regular vectors or matrices rather than irregular graphs. Hence, these adversarial sequence generation models cannot be directly applied to attack spatiotemporal GNN-based forecasting models.\nOne Pixel Attack for Fooling Deep Neural Networks. Su et al. (2019) utilized differential evolution (DE) to generate a perturbation that poisons one pixel in an image and thereby fools CNNs. Similar to the one pixel attack, we only poison one vertex in a graph. However, images are regular-structured, and Su et al. (2019) consider no temporal variation. In addition, the one pixel attack requires the ground truth to compute perturbations; in forecasting applications, the ground truth is the future traffic state, and it is inaccessible. These features prevent the one pixel attack from poisoning spatiotemporal forecasting models.
}, { "heading": "3 METHODOLOGY", "text": "" }, { "heading": "3.1 SPATIOTEMPORAL SEQUENCE FORECASTING AND SPATIOTEMPORAL GNNS", "text": "Because of the impossibility of deploying sensors as a regular grid in real-world applications, the form of spatiotemporal data is generally irregular. Consequently, to better mine the spatial information, the spatiotemporal sequence is represented as a temporally varied graph rather than a regular grid. The spatiotemporal sequence can be represented as Gt = {Vt, E ,W}, where E is the set of edges in the graph, W is the weighted adjacency matrix whose every element describe the spatial relationship between different variates, Vt = {v1,t, . . . , vn,t} is the set of condition values (e.g. traffic speed and traffic volume) collected from sensors on timestamp t, and n is the number of sensors (Shuman et al., 2013).\nMultistep spatiotemporal sequence forecasting can be formulated as Equation 1. Previous conditions from timestamp t−N + 1 to t are fed into a forecasting model F that outputs predictions of future conditions from t + 1 to t + M . In general, M ≤ N . The above process is customarily called sequence-to-sequence (seq2seq) forecasting.\n{G∗t+M , ...,G∗t+1} = F ({Gt, ...,Gt−N+1}) (1) where G∗i denotes the prediction of the condition on timestamp i. Most state-of-art spatiotemporal sequence forecasting models output a single future condition, which will be in turn fed as input into the model to forecast the next condition. This process is named as the recursive multistep forecasting, which can be represented as Equation 2.\n G∗t+1 = F ({Gt,Gt−1, ...,Gt−N+1}) G∗t+2 = F ({G∗t+1,Gt, ...,Gt−N+2})\n... G∗t+M = F ({G∗t+M−1,G∗t+M−2, ...,Gt−N+M})\n(2)\nMost state-of-art forecasting models, F , are constructed based on spatiotemporal GNNs (Li et al., 2017; Wu et al., 2019; Yu et al., 2018; Guo et al., 2019). Spatiotemporal GNNs are composed of both spatial layers and temporal layers. In general, gated linear unit (GLU) or Gated-CNN (Bai et al., 2018) works as the temporal layer to capture the temporal patterns embedded in the spatiotemporal sequence, and the Graph-CNN (Shuman et al., 2013; Bruna et al., 2014) works as spatial layers to capture the spatial patterns.\nIn this paper, we focus on adversarial studies towards recursive multistep spatiotemporal sequence forecasting. Our studies can be easily extended to seq2seq multistep forecasting." }, { "heading": "3.2 UNIVERSAL ADVERSARIAL ATTACK AGAINST SPATIOTEMPORAL GNNS", "text": "In this section, we point out the form of adversarial attack against the spatiotemporal forecasting, and outline the gap between attacking spatiotemporal GNNs and attacking CNNs or GNNs. Then we propose the inverse estimation to fill the gap. Finally, we design the universal adversarial attack against spatiotemporal GNNs." }, { "heading": "3.2.1 ADVERSARIAL ATTACKS AGAINST SPATIOTEMPORAL FORECASTING", "text": "Adversarial attacking against recursive multistep forecasting can be formed as Equation 3. The goal is to mislead spatiotemporal GNNs to generate false forecasting by adding perturbations.\nF ({Gt, ...,Gt−N+1}+ {ρt, ..., ρt−N+1}) 6= Gt+1 s.t. ‖ρi‖p ≤ ξ ∀ i ∈ {t, ...t−N + 1}\n(3)\nwhere ρi denotes perturbations on timestamp i, ‖·‖p denotes `p-norm, and ξ denotes the pre-defined constant to constrain the perturbation scale. 
In real-world traffic applications, $\xi$ controls the hacker's driving behavior to balance attack performance and detection avoidance.\nBecause of the spatiotemporal sequence's causality, we cannot access the future condition that works as the ground truth of forecasting models. In other words, $\mathcal{G}_{t+1}$ in Equation 3 is not available at timestamp t. Previous adversarial studies against CNNs, RNNs, and GNNs (Dai et al., 2018; Papernot et al., 2016b; Kurakin et al., 2016b; Alfeld et al., 2016) almost all involve the ground truth in the perturbation computation. In fooling spatiotemporal GNNs as in Equation 3, the ground truth $\mathcal{G}_{t+1}$ is still unavoidable. As mentioned above, the future condition is unavailable, and thus we cannot generate adversarial perturbations directly as in Equation 3." }, { "heading": "3.2.2 INVERSE ESTIMATION", "text": "We propose Inverse Estimation to avoid using the future ground truth in fooling spatiotemporal GNNs. First, Equation 3 is transformed into Equation 4, which states that our goal is to fool spatiotemporal GNNs into generating the opposite forecast.\n$\arg\min_{\{\rho_t, \ldots, \rho_{t-N+1}\}} \|F(\{\mathcal{G}_t, \ldots, \mathcal{G}_{t-N+1}\} + \{\rho_t, \ldots, \rho_{t-N+1}\}) - \tilde{\mathcal{G}}_{t+1}\|_2 + \alpha \cdot \sum_{i=t-N+1}^{t} \max(0, \rho_i^2 - \xi)$ (4)\nwhere $\tilde{\mathcal{G}}_{t+1}$ denotes the opposite condition of $\mathcal{G}_{t+1}$ and $\alpha$ denotes the penalty factor. The constraint in Equation 3 is replaced with a regularization term in Equation 4 to constrain the perturbation scale. The penalty factor $\alpha$ is set to 100 to make sure the scale penalty term is much larger than the first term in Equation 4, so that the scale of the computed perturbation is strictly enforced. The above idea is similar to targeted attacks (Akhtar & Mian, 2018). However, classical targeted attacks still utilize the ground truth in perturbation computations.\nTo use no future information, the opposite of the future condition, $\tilde{\mathcal{G}}_{t+1}$, is estimated by computing the opposite of the most recent condition, which is represented as Equation 5.\n$\tilde{\mathcal{G}}_{t+1} \leftarrow \tilde{\mathcal{G}}_t = \{\tilde{\mathcal{V}}_t, \mathcal{E}, W\}$ (5)\nwhere $\tilde{\mathcal{V}}_t = \{\tilde{v}_{1,t}, \ldots, \tilde{v}_{n,t}\}$ denotes a collection of condition values opposite to those collected from sensors. Taking traffic conditions as an example, when the condition is "congested/low speed", its opposite is "free/high speed", and vice versa. $\tilde{v}_{i,t}$, the opposite of $v_{i,t}$, is computed as Equation 6.\n$\tilde{v}_{i,t} = \begin{cases} \max(\mathcal{V}), & v_{i,t} < mid \\ \min(\mathcal{V}), & v_{i,t} \ge mid \end{cases}$ (6)\nwhere $mid$, $\max(\mathcal{V})$, and $\min(\mathcal{V})$ represent the mean, maximum, and minimum value of the spatiotemporal dataset, respectively.\nInverse Estimation outperforms directly estimating the future ground truth: the error of estimating the opposite of the ground truth is smaller than the error of any direct estimation. To validate this assumption, we carry out a test on the PeMS dataset. We compare the proposed inverse estimation with three ground truth estimation methods, namely estimating by the most recent traffic condition (MR), a spatiotemporal graph convolutional neural network (STGCN), and AutoRegressive Integrated Moving Average (ARIMA). The experimental results are shown in Table 1. The proposed Inverse Estimation's performance, including mean absolute error (MAE), mean absolute percentage error (MAPE), root mean square error (RMSE), and perfect estimation ratio (PER), is better than the others'. It should be noted that 99.56% of IE's estimations are exactly equal to the opposite of the ground truth." }, { "heading": "3.2.3 UNIVERSAL ATTACKS AGAINST SPATIOTEMPORAL SEQUENCE FORECASTING", "text": "Adversarial perturbations generated as in Section 3.2.2 vary with the input graph.
Such a perturbation is effective only if it keeps being updated. A universal attack means that the perturbation is consistent and independent of the input, which can be represented as Equation 7.\n$\arg\min_{\rho_u} \|F(\{\mathcal{G}_t, \ldots, \mathcal{G}_{t-N+1}\} + \{\rho_u\}) - \tilde{\mathcal{G}}_{t+1}\|_2 + \alpha \cdot \max(0, \rho_u^2 - \xi)$ (7)\nwhere $\rho_u$ denotes the universal perturbation.\nThe universal perturbation can be generated by solving Equation 7. After the universal perturbation is generated, there is no need to update it when new data arrive. The proposed universal attack is utilized to locate which vertex to attack for the one vertex attack." }, { "heading": "3.3 LOCATING WEAKEST VERTEX", "text": "In this subsection, we first mathematically define the "weakness" of a vertex in a graph. The "weakness" of the jth vertex denotes the number of influenced vertices when the jth vertex is attacked by the proposed universal perturbation, which is shown as Equation 8.\n$weak_j = \|K_\theta\{F(\{\mathcal{G}_t, \ldots, \mathcal{G}_{t-N+1}\} + M_j \cdot \rho_u) - \mathcal{G}_{t+1}\}\|_0$ (8)\nwhere $M_j \cdot \rho_u$ denotes that all elements except the one corresponding to the jth vertex are set to 0, and $K_\theta\{\cdot\}$ denotes an element-wise filter that sets elements whose absolute value is smaller than $\theta$ to 0. A greater "weakness" value means more vertices will be influenced if the jth vertex is attacked. We attack the vertex with the largest "weakness" value. In traffic forecasting applications, $\theta$ is set to 5 empirically.\nA possible method to locate the weakest vertex is the complete traversal algorithm. However, this method is time consuming. To reduce the time cost, we utilize a genetic algorithm to locate the weakest vertex, as follows.\n• First, the initial candidate set is composed of the s vertices with the most edges.\n• Second, the updated candidates are computed as Equation 9:\n$v_i(g+1) = v_{r_1}(g) + p \cdot (v_{r_2}(g) - v_{r_3}(g))$ (9)\nwhere $v_i$ denotes the position of the ith vertex, g denotes the gth iteration, $r_1$, $r_2$, and $r_3$ are random numbers with different values, and p is a parameter set to 0.5 empirically.\n• Third, compare the updated candidates' weakness with the previous candidates', then keep the s candidates with the largest weakness values.\n• Fourth, repeat the second and third steps until the candidate set is consistent or $g > 10$. Select the weakest vertex to attack. It should be noted that the bound on g controls the tradeoff between the proposed solution's effectiveness and efficiency: a larger bound brings the proposed solution closer to the complete traversal algorithm." }, { "heading": "3.4 ONE VERTEX ATTACKING AGAINST SPATIOTEMPORAL GNNS", "text": "After the weakest vertex is located, we generate the one vertex perturbation based on Equation 10. Poisoning the weakest vertex in a graph with the carefully designed perturbation can fool the spatiotemporal GNN-based traffic forecasting system.\n$\arg\min_{M_J \cdot \{\rho_t, \ldots, \rho_{t-N+1}\}} \|F(\{\mathcal{G}_t, \ldots, \mathcal{G}_{t-N+1}\} + M_J \cdot \{\rho_t, \ldots, \rho_{t-N+1}\}) - \tilde{\mathcal{G}}_t\|_2 + \alpha \cdot R_{onevertex}$, with $R_{onevertex} = \sum_{i=t-N+1}^{t} \max(0, (M_J \cdot \rho_i)^2 - \xi)$ (10)\nwhere J denotes the index of the weakest vertex and $M_J \cdot \rho_i$ is the generated perturbation. It should be noted that $\|M_J \cdot \rho_i\|_0 \le 1$. Different from the adversarial attacks of Equation 4 and Equation 7, the one vertex attack keeps poisoning a single vertex in the graph, while the others poison the entire graph (a gradient-based sketch of solving Equation 10 follows below).
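Below is a gradient-descent sketch of solving Eq. (10), combining the vertex mask $M_J$ with the inverse-estimation target of Eq. (6). It assumes a differentiable one-step forecaster F operating on an (N, n) window of speeds; all names and hyperparameters are illustrative assumptions, not the paper's released code.

```python
import torch

def one_vertex_perturbation(F, window, J, v_min, v_max, mid,
                            xi, alpha=100.0, steps=200, lr=0.5):
    """Gradient-descent sketch of Eq. (10) for a (N, n) speed window.

    Only vertex J is perturbed (||M_J . rho_i||_0 <= 1); the target is the
    inverse-estimation graph of Eq. (6).
    """
    N, n = window.shape
    # Eq. (6): flip the most recent condition of every vertex to its opposite
    target = torch.where(window[-1] < mid,
                         torch.full((n,), float(v_max)),
                         torch.full((n,), float(v_min)))
    rho = torch.zeros(N, n, requires_grad=True)
    mask = torch.zeros(n)
    mask[J] = 1.0                                  # M_J: zero out all but vertex J
    opt = torch.optim.SGD([rho], lr=lr)
    for _ in range(steps):
        pred = F(window + rho * mask)              # one-step forecast, shape (n,)
        scale_penalty = torch.clamp(rho[:, J] ** 2 - xi, min=0).sum()
        loss = torch.norm(pred - target) + alpha * scale_penalty
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (rho * mask).detach()
```

Setting `alpha` to 100 mirrors the penalty-factor choice stated after Eq. (4), which keeps the computed perturbation within the $\xi$ bound.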
As for real-world traffic forecasting applications, poisoning the entire sensor network deployed in road networks is unrealistic, while the one vertex attack is much more harmful.\nIn reality, a vertex denotes a sensor such as a loop detector, and the one vertex perturbation denotes a vehicle's speed shift. If a hacking vehicle's speed varies following the perturbation computed by Equation 10, it can fool the entire traffic forecasting system, not only at the vertex where the hacking vehicle is, but also at other vertices, even vertices far away from the attacked vertex." }, { "heading": "4 EVALUATION AND RESULTS", "text": "The evaluation of the proposed method is based on two traffic datasets, namely PeMS and METR-LA(S). PeMS records 44 days of traffic speed data collected from 200 stations of the Caltrans Performance Measurement System (PeMS). METR-LA(S) records four months of traffic speed at 100 stations on the highways of Los Angeles County. Our experiments are conducted on an NVIDIA DGX station with 4 NVIDIA Tesla V100 GPU cards.\nWe test three spatiotemporal GNNs, including STGCN (Yu et al., 2018), DCRNN (Li et al., 2017), and Graph WaveNet (Wu et al., 2019). Each dataset is split into 3 subsets: 70% for training, 10% for validation, and 20% for testing. All parameter settings are the same as in the related papers (Yu et al., 2018; Li et al., 2017; Wu et al., 2019), except that the number of input and output channels is set to the number of stations in the corresponding dataset. In addition, we use the validation set to locate the attack position, and generate OVA perturbations in real time for the test set. As for measurement, we introduce three metrics to measure the proposed method's effectiveness.\n• MAPE Increase (MAPEI) - Mean Absolute Percentage Error (MAPE) is a measure of prediction accuracy, and a smaller MAPE represents better predictions. An increase in MAPE thus translates into a decrease in prediction accuracy.\n• Normalized MAPE Increase (NMAPEI) - The ratio between MAPEI and the MAPE before attacking.\n• k%-Impacted Vertices (k%-IV) - The number of vertices with NMAPEI greater than k%." }, { "heading": "4.1 TRADEOFF BETWEEN ATTACK PERFORMANCE AND DETECTION AVOIDANCE", "text": "In real-world traffic applications, the generated perturbations represent the hacking vehicle's speed shifts. The parameter $\xi$ in Equation 10, which is used to limit the driving behavior, balances attack performance and detection avoidance.\nWe first propose an experiment to test how the parameter $\xi$ influences the effectiveness of the proposed one vertex attack method. In this subsection, 15-minute traffic speed forecasting is undertaken by STGCN, DCRNN, and Graph WaveNet, which work as the targeted models, and the experiment is conducted on METR-LA(S). These models are attacked by the proposed OVA with different $\xi$.\nTable 2 shows the number of impacted vertices with different $\xi$. When $\sqrt{\xi}$ is equal to 20, over 10% of vertices, whose NMAPEI are greater than 40%, are severely impacted, even with only one vertex attacked. With a small $\sqrt{\xi}$, about 50% of vertices are influenced. Based on the results shown in Table 2, we can conclude that perturbations diffuse from one vertex to most of the graph when spatiotemporal GNN-based forecasting models are applied. The greater the perturbation is, the more vertices in the graph are influenced.\nSetting $\xi$ to an appropriate range is important.
An extremely large $\xi$, which represents abnormal driving behavior in traffic domains, will be detected easily. By analyzing PeMS and METR-LA(S), we find that speed variations within 15 km/h occur frequently; consequently, we regard 15 km/h as the accessible boundary of speed variation, namely $\sqrt{\xi} = 15$." }, { "heading": "4.2 EFFECTIVENESS AND EFFICIENCY OF LOCATING WEAKEST VERTEX", "text": "In this subsection, experiments on PeMS are carried out to prove the effectiveness and efficiency of the proposed weakest vertex locating strategy. STGCN works as the model to attack. Three locating strategies, namely locating the vertex with the most edges (MOS), locating the vertex with the highest centrality (CEN), and locating the weakest vertex by the complete traversal algorithm (CT), work as baselines. After locating the weakest vertex by the different strategies, perturbations are computed as in Equation 10 and then fed into STGCN. NMAPEI and 30%-IV are recorded in Table 3.\nThe proposed strategy's effectiveness is close to that of the complete traversal algorithm. In this experiment, it locates the same weakest vertex as the complete traversal algorithm does. Poisoning the vertex with the most edges or the highest centrality cannot fool the forecasting model effectively. A possible reason is that these vertices' robustness is improved by their neighbors because of STGCN's spatiotemporal mechanism.\nIn addition, the proposed strategy spends 1104 seconds locating the vertex to attack, while the complete traversal algorithm spends 1795 seconds. The proposed strategy thus reduces the time cost of locating the weakest vertex." }, { "heading": "4.3 EFFECTIVENESS OF ONE VERTEX ATTACK", "text": "In this subsection, experiments on PeMS are designed to prove the effectiveness of the proposed method. 15-minute traffic speed forecasting is undertaken by three spatiotemporal GNN methods that work as the models to attack. We compare the proposed OVA with four baselines that are detailed as follows.\nTable 4: Attack results (NMAPEI and 30%-IV per target model).\nMethod | STGCN NMAPEI | STGCN 30%-IV | DCRNN NMAPEI | DCRNN 30%-IV | Graph WaveNet NMAPEI | Graph WaveNet 30%-IV\nProposed | 15.2% | 17 | 16.7% | 22 | 15.5% | 21\nRAN | 2.1% | 1 | 2.7% | 0 | 2.3% | 0\nRAN2 | 1.7% | 0 | 2.3% | 0 | 2.1% | 0\nMOS | 4.5% | 3 | 4.7% | 3 | 5.7% | 2\nMFGSM-3 | 27.3% | - | 24.4% | - | 25.8% | -\nMFGSM-2 | 15.4% | - | 15.6% | - | 16.2% | -\nIn these experiments, $\sqrt{\xi}$ is set to 15 for methods that attack only one vertex. "MFGSM-$\epsilon$" indicates the perturbation constraint $\epsilon$ of MFGSM; because it attacks all vertices rather than one vertex, we set $\epsilon$ to 3 and 2, respectively.\nTable 4 shows the experimental results. Our proposed one vertex attack method outperforms attacking one vertex randomly (RAN) and attacking the vertex with the most edges (MOS), which indicates that the proposed method for locating the weakest vertex works. Our method also outperforms attacking the weakest vertex with GWN (RAN2), which indicates that the proposed method can generate the optimal perturbations for the one vertex attack.\nFig 1a shows an attack result on STGCN. Vertex A is attacked with the poisoned input shown in Fig 1b, and the predicted sequences at vertices B and C are shown in Fig 1c and Fig 1d, respectively. B and C are far away from A, and there is no attack on B and C.
Fig 1c shows an example in which the spatiotemporal GNN-based forecasting model mispredicts "congested" as "uncongested" at about the 60th step.\nFor traffic forecasting on PeMS, the proposed method can cause an accuracy drop of over 15% for all three spatiotemporal GNN models, and about 10% of vertices are seriously impacted (these vertices' NMAPEI are greater than 30%) with $\sqrt{\xi}$ equal to 15.\nAttacking all vertices is always better than attacking only one vertex, but the proposed method's effectiveness is similar to attacking all vertices with MFGSM when $\epsilon$ is approximately equal to $10\% \cdot \sqrt{\xi}$, which can be concluded by comparing the proposed method with MFGSM-3 and MFGSM-2 in Table 4. It should be noted that MFGSM attacks 200 vertices, while OVA only attacks one." }, { "heading": "5 CONCLUSION", "text": "This paper proposed One Vertex Attack, which can break spatiotemporal GNN-based forecasting models by poisoning only one vertex. The generated perturbation diffuses to numerous vertices in the graph when spatiotemporal GNNs are under attack.\nFuture work for the proposed study can be summarized as follows. First, we utilized a universal adversarial attack method to measure the "weakness" of vertices, without including temporal patterns in the measurement; involving temporal patterns in the evaluation is a possible modification. Second, we designed a genetic algorithm-based method to find the "weakest" vertex in a graph to attack, which might not be the optimal solution. Third, studies on the scalability of the one vertex attack are valuable.\nBesides, as spatiotemporal applications require reliable algorithms, how to defend against these adversarial attacks and how to build more robust spatiotemporal GNN-based models remain valuable open problems." } ]
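For completeness, a minimal sketch of the genetic weakest-vertex localization of Section 3.3 (Equations 8 and 9) follows. The weakness score of Eq. (8) is supplied as a callable; the index mutation, clipping, and all names are illustrative simplifications (e.g. the three random indices are not forced to be distinct here), not the paper's implementation.

```python
import numpy as np

def locate_weakest_vertex(weakness, degrees, s=10, p=0.5, max_gen=10):
    """Genetic search of Section 3.3: seed with the s highest-degree vertices,
    mutate candidate indices via Eq. (9), keep the s largest weakness scores."""
    n = len(degrees)
    cand = np.argsort(degrees)[-s:]                  # initial candidate set
    scores = np.array([weakness(v) for v in cand])
    for _ in range(max_gen):
        r = np.random.choice(s, size=(s, 3))         # r1, r2, r3 per candidate
        mutated = cand[r[:, 0]] + p * (cand[r[:, 1]] - cand[r[:, 2]])
        mutated = np.clip(mutated, 0, n - 1).astype(int)
        pool = np.concatenate([cand, mutated])
        pool_scores = np.concatenate([scores, [weakness(v) for v in mutated]])
        keep = np.argsort(pool_scores)[-s:]          # survivors with largest weakness
        new_cand = pool[keep]
        converged = set(new_cand.tolist()) == set(cand.tolist())
        cand, scores = new_cand, pool_scores[keep]
        if converged:
            break
    return int(cand[np.argmax(scores)])
```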
2,020
ONE VERTEX ATTACK ON GRAPH NEURAL NETWORKS-BASED SPATIOTEMPORAL FORECASTING
SP:6ae82744b305ffa175e482c92cc79137456bc2ee
[ "The authors provide a new method for detecting when deep networks are likely to fail and demonstrate through extensive experimentation its accuracy against generalization errors, out of distribution samples and adversarial attacks. The method builds on prior Mahalanobis metric of (Kimin Lee, et al., A unified framework for detecting out-of-distribution and adversarial samples. 2018) in two respects. First the authors use a single GMM fit to the model parameters that is class agnostic rather than a set of GMMs for each class, thus making it suitable for application in semi-supervised datasets. Second, the authors show ability to detect instances from a test set that are likely to cause a misclassification due to a failure to generalize. Surprisingly, the proposed approach performs better in most cases than the prior Mahalanobis approach even though it requires less information (no labels). " ]
Although deep neural networks are effective on supervised learning tasks, they have been shown to be brittle. They are prone to overfitting on their training distribution and are easily fooled by small adversarial perturbations. In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize. We propose a generative model of the features extracted by a classifier, and show using rigorous hypothesis testing that errors tend to occur when features are assigned low-probability by our model. From this observation, we develop a detection criteria for samples on which a classifier is likely to fail at test time. In particular, we test against three different sources of classification failures: mistakes made on the test set due to poor model generalization, adversarial samples and out-of-distribution samples. Our approach is agnostic to class labels from the training set which makes it applicable to models trained in a semisupervised way.
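As a concrete illustration of the kind of detector this abstract describes, the sketch below fits a class-agnostic Gaussian mixture to features extracted by a classifier and flags test inputs whose feature log-likelihood falls below a validation-set threshold. The scikit-learn GMM, the percentile threshold, and all names are assumptions for illustration; the paper's actual generative model and hypothesis test are specified in its body.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_failure_detector(train_feats, val_feats, n_components=10, pct=5.0):
    """Fit a class-agnostic GMM to classifier features; the threshold is the
    `pct` percentile of validation log-likelihoods."""
    gmm = GaussianMixture(n_components=n_components, covariance_type='full')
    gmm.fit(train_feats)                       # (num_train, feat_dim)
    tau = np.percentile(gmm.score_samples(val_feats), pct)
    return gmm, tau

def is_likely_failure(gmm, tau, feats):
    """True where the feature log-likelihood falls below the threshold."""
    return gmm.score_samples(feats) < tau
```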
[]
[ { "authors": [ "Terrance DeVries", "Graham W. Taylor" ], "title": "Learning confidence for out-of-distribution detection in neural networks, 2018", "venue": null, "year": 2018 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In international conference on machine learning,", "year": 2016 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Alex Graves", "Abdel-rahman Mohamed", "Geoffrey Hinton" ], "title": "Speech recognition with deep recurrent neural networks", "venue": "IEEE international conference on acoustics, speech and signal processing,", "year": 2013 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q. Weinberger" ], "title": "On calibration of modern neural networks", "venue": "CoRR, abs/1706.04599,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Dan Hendrycks", "Kevin Gimpel" ], "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "venue": "CoRR, abs/1610.02136,", "year": 2016 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Kilian Q. Weinberger" ], "title": "Densely connected convolutional networks", "venue": "CoRR, abs/1608.06993,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Volodymyr Kuleshov", "Stefano Ermon" ], "title": "Estimating uncertainty online against an adversary", "venue": "In AAAI,", "year": 2017 }, { "authors": [ "Alexey Kurakin", "Ian J. Goodfellow", "Samy Bengio" ], "title": "Adversarial machine learning at scale", "venue": "CoRR, abs/1611.01236,", "year": 2016 }, { "authors": [ "Samuli Laine", "Timo Aila" ], "title": "Temporal ensembling for semi-supervised learning", "venue": "arXiv preprint arXiv:1610.02242,", "year": 2016 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Kimin Lee", "Honglak Lee", "Kibok Lee", "Jinwoo Shin" ], "title": "Training confidence-calibrated classifiers for detecting out-of-distribution samples", "venue": "arXiv preprint arXiv:1711.09325,", "year": 2017 }, { "authors": [ "Kimin Lee", "Kibok Lee", "Honglak Lee", "Jinwoo Shin" ], "title": "A unified framework for detecting out-ofdistribution and adversarial samples", "venue": null, "year": 2018 }, { "authors": [ "Jesse Levinson", "Jake Askeland", "Jan Becker", "Jennifer Dolson", "David Held", "Soeren Kammel", "J Zico Kolter", "Dirk Langer", "Oliver Pink", "Vaughan Pratt" ], "title": "Towards fully autonomous driving: Systems and algorithms", "venue": "IEEE Intelligent Vehicles Symposium (IV),", "year": 2011 }, { "authors": [ "Shiyu Liang", "Yixuan Li", "R. 
Srikant" ], "title": "Principled detection of out-of-distribution examples in neural networks", "venue": "CoRR, abs/1706.02690,", "year": 2017 }, { "authors": [ "David JC MacKay" ], "title": "A practical bayesian framework for backpropagation networks", "venue": "Neural computation,", "year": 1992 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "arXiv preprint arXiv:1706.06083,", "year": 2017 }, { "authors": [ "Andrey Malinin", "Mark Gales" ], "title": "Predictive uncertainty estimation via prior networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Eric Nalisnick", "Akihiro Matsukawa", "Yee Whye Teh", "Dilan Gorur", "Balaji Lakshminarayanan" ], "title": "Do deep generative models know what they don’t", "venue": null, "year": 2018 }, { "authors": [ "Radford M Neal" ], "title": "Bayesian learning for neural networks, volume 118", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": null, "year": 2011 }, { "authors": [ "Aaron van den Oord", "Nal Kalchbrenner", "Koray Kavukcuoglu" ], "title": "Pixel recurrent neural networks", "venue": "arXiv preprint arXiv:1601.06759,", "year": 2016 }, { "authors": [ "Aaron van den Oord", "Nal Kalchbrenner", "Oriol Vinyals", "Lasse Espeholt", "Alex Graves", "Koray Kavukcuoglu" ], "title": "Conditional image generation with pixelcnn decoders", "venue": "arXiv preprint arXiv:1606.05328,", "year": 2016 }, { "authors": [ "George Papamakarios", "Theo Pavlakou", "Iain Murray" ], "title": "Masked autoregressive flow for density estimation", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Nicolas Papernot", "Patrick D. McDaniel", "Xi Wu", "Somesh Jha", "Ananthram Swami" ], "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "venue": "CoRR, abs/1511.04508,", "year": 2015 }, { "authors": [ "John C. Platt" ], "title": "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods", "venue": "In ADVANCES IN LARGE MARGIN CLASSIFIERS,", "year": 1999 }, { "authors": [ "Tim Salimans", "Andrej Karpathy", "Xi Chen", "Diederik P Kingma" ], "title": "Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications", "venue": "arXiv preprint arXiv:1701.05517,", "year": 2017 }, { "authors": [ "Yang Song", "Taesup Kim", "Sebastian Nowozin", "Stefano Ermon", "Nate Kushman" ], "title": "Pixeldefend: Leveraging generative models to understand and defend against adversarial examples", "venue": "CoRR, abs/1710.10766,", "year": 2017 }, { "authors": [ "Yi Sun", "Ding Liang", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deepid3: Face recognition with very deep neural networks", "venue": "CoRR, abs/1502.00873,", "year": 2015 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian J. 
Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "CoRR, abs/1312.6199,", "year": 2013 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble adversarial training: Attacks and defenses, 2017", "venue": null, "year": 2017 }, { "authors": [ "Jiajun Wu", "Ilker Yildirim", "Joseph J Lim", "Bill Freeman", "Josh Tenenbaum" ], "title": "Galileo: Perceiving physical object properties by integrating a physics engine with deep learning", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Zhihao Zheng", "Pengyu Hong" ], "title": "Robust detection of adversarial attacks by modeling the intrinsic properties of deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Machine learning algorithms have shown remarkable success in challenging supervised learning tasks such as object classification (He et al., 2016) and speech recognition (Graves et al., 2013). Deep neural networks in particular, have gained traction because of their ability to learn a hierarchical feature representation of their inputs. Neural networks, however, are also known to be brittle. As they require a large number of parameters compared to available data, deep neural networks have a tendency to latch onto spurious statistical dependencies to make their predictions. As a result, they are prone to overfitting and can be fooled by imperceptible adversarial perturbations of their inputs (Szegedy et al., 2013; Kurakin et al., 2016; Madry et al., 2017). Additionally, modern neural networks are poorly calibrated and do not capture model uncertainty well (Gal & Ghahramani, 2016; Kuleshov & Ermon, 2017; Guo et al., 2017). They produce confidence scores that do not represent true probabilities and consequently, often output predictions that are over-confident even when fed with out-of-distribution inputs (Liang et al., 2017). These limitations of neural networks are problematic as they become ubiquitous in applications where safety and reliability is a priority (Levinson et al., 2011; Sun et al., 2015).\nFully probabilistic, generative models could mitigate these issues by improving uncertainty quantification and incorporating prior knowledge (e.g, physical properties (Wu et al., 2015)) into the classification process. While great progress has been made towards designing generative models that can capture high-dimensional objects such as images (Oord et al., 2016a; Salimans et al., 2017), accurate probabilistic modeling of complex, high-dimensional data remains challenging.\nOur work aims at providing an understanding of these failure modes under the lens of probabilistic modelling. Instead of directly modeling the inputs, we rely on the ability of neural networks to extract features from high-dimensional data and build a generative model of these low-dimensional features. Because deep neural networks are trained to extract features from which they output classification predictions, we make the assumption that it is possible to detect failure cases from the learned representations.\nGiven a neural network trained for image classification, we capture the distribution of the learned feature space with a Gaussian Mixture Model (GMM) and use the predicted likelihoods to detect inputs on which the model cannot produce reliable classification results. We show that we are able to not only detect adversarial and out-of-distribution samples, but surprisingly also identify inputs from\nthe test set on which a model is likely to make a mistake. We experiment on state-of-the-art neural networks trained on CIFAR-10 and CIFAR-100 (Krizhevsky, 2009) and show, through statistical hypothesis testing, that samples leading to classification failures tend to correspond to features that lie in a low probability region of the feature space.\nContributions Our contributions are as follows:\n• We provide a probabilistic explanation to the brittleness of deep neural networks and show that classifiers tend to make mistakes on inputs with low-probability features.\n• We demonstrate that a simple modeling by a GMM of the feature space learned by a deep neural network is enough to model the probability space. 
Other state-of-the-art methods for probabilistic modelling, such as VAEs (Kingma & Welling, 2013) and auto-regressive flow models (Papamakarios et al., 2017), fail in that regard.\n• We show that generative models trained on the feature space can be used as a single tool to reliably detect different sources of classification failures: test set errors due to poor generalization, adversarial samples and out-of-distribution samples." }, { "heading": "2 RELATED WORK", "text": "An extensive body of work has focused on understanding the behaviours of deep neural networks when they are faced with inputs on which they fail. We provide a brief overview below:\nUncertainty quantification Uncertainty quantification for neural networks is crucial in order to detect when a model’s prediction cannot be trusted. Bayesian approaches (MacKay, 1992; Neal, 2012; Blundell et al., 2015), for example, seek to capture the uncertainty of a network by considering a prior distribution over the model’s weights. Training these networks is challenging because the exact posterior is intractable and usually approximated using a variety of methods for posterior inference. Closely related, Deep Ensembles (Lakshminarayanan et al., 2017) and Monte-Carlo Dropout (Gal & Ghahramani, 2016) consider the outputs of multiple models as an alternative way to approximate the distribution. Model calibration (Platt, 1999; Guo et al., 2017) aims at producing confidence scores that are representative of the likelihood of correctness. Uncertainty quantification may also be obtained by training the network to provide uncertainty measures: Prior Networks (Malinin & Gales, 2018) model the implicit posterior distribution in the Bayesian approach, while DeVries & Taylor (2018) and Lee et al. (2017) have the network produce an additional confidence output. These methods require a proxy dataset representing the out-of-distribution samples to train their confidence scores.\nOur method differs from the above as it seeks to give an uncertainty estimation based on a model trained with the usual cross-entropy loss. It does not require additional modelling assumptions, nor modifications to the model’s architecture or training procedure. As such, it relates closely to threshold-based methods. For example, Hendrycks & Gimpel (2016) use the logit outputs as a measure of the network’s confidence, which can be improved using Temperature Scaling (Guo et al., 2017; Liang et al., 2017), a post-processing method that calibrates the model. Our work derives a confidence score by learning the probability distribution of the feature space and generalizes to adversarial samples (Szegedy et al., 2013), another source of neural networks’ brittleness.\nAdversarial samples Methods to defend against adversarial examples include explicitly training networks to be more robust to adversarial attacks (Tramèr et al., 2017; Madry et al., 2017; Papernot et al., 2015). Another line of defense comes from the ability to detect adversarial samples at test time. Song et al. (2017), for example, use a generative model trained on the input images to detect and purify adversarial examples at test time, based on the observation that adversarial samples have lower predicted likelihood under the trained model. Closer to our work, Zheng & Hong (2018) and Lee et al. (2018) train a conditional generative model on the feature space learned by the classifier and derive a confidence score based on the Mahalanobis distance between a test sample and its predicted class representation. 
Our method makes the GMM class-agnostic, making it applicable to settings where labels are not available at inference time. We further show that the unsupervised GMM improves on the Mahalanobis score on the OOD detection task." }, { "heading": "3 DETECTING MISTAKES", "text": "Detecting samples on which a trained classifier is likely to make a mistake is crucial when considering the range of applications in which these models are deployed. However, predicting in advance whether a model will fail on a sample seems challenging, especially when the sample is drawn from the same distribution as the train set. To illustrate this, we show in Fig. 1 samples from the CIFAR-100 training dataset and compare them to test samples and adversarial examples that our DenseNet model fails to classify properly. In both cases, it is not obvious to the human eye what fundamentally differs between correct and incorrect samples. Our main intuition is that a generative model trained on the feature space could capture these subtle differences." }, { "heading": "3.1 BACKGROUND", "text": "We consider the problem of classification where we have access to a (possibly partially) labeled dataset $\mathcal{D} = \{(\mathbf{X}_i, y_i)\}_{i=1}^{N}$ where $(\mathbf{X}_i, y_i) \in \mathcal{X} \times \mathcal{Y}$. Samples are assumed to be independently sampled from a distribution $p_{data}(\mathbf{X}, y)$ and we denote the marginal over $\mathbf{X}$ as $p_{data}(\mathbf{X})$. We denote by $f_\theta : \mathcal{X} \rightarrow \mathcal{F} = \mathbb{R}^D$ the feature extractor part of our neural network, where $\theta$ represents the parameters of the network and $\mathcal{F}$ is the feature space of dimension D. Given an input $\mathbf{X}$, the prediction probabilities on the label space $\mathcal{Y}$ are then typically obtained using multivariate logistic regression on the extracted features:\n$p(y|\mathbf{X}, \theta, \mathbf{W}, \mathbf{b}) = \mathrm{softmax}(\mathbf{W} f_\theta(\mathbf{X}) + \mathbf{b})$ (1)\nwhere $(\mathbf{W}, \mathbf{b})$ represent the weights and bias of the last fully-connected layer of the neural network. The model prediction is the class with the highest predicted probability: $\hat{y}(\mathbf{X}) = \arg\max_{y \in \mathcal{Y}} p(y|\mathbf{X}, \theta, \mathbf{W}, \mathbf{b})$. The parameters $(\theta, \mathbf{W}, \mathbf{b})$ are trained to minimize a cross-entropy loss on the training set and performance is evaluated on the test set.\nLearning the data structure with Generative Models Understanding the data structure can greatly improve the ability of neural models to generalize. Recently, great progress has been made in designing powerful generative models that can capture high-dimensional, complex data such as images. PixelCNN (Salimans et al., 2017; Oord et al., 2016b;a), in particular, is a state-of-the-art deep generative model with tractable likelihood that represents the probability density of an image as a fully factorized product of conditionals over its individual pixels:\n$p_{CNN}(\mathbf{X}) = \prod_{i=1}^{n} p_\phi(X_i | X_{1:i-1})$ (2)\nFlow models such as the Masked Autoregressive Flow (MAF) (Papamakarios et al., 2017) provide similar tractability by parameterizing distributions with reversible functions, which makes the likelihood tractable through the change-of-variables formula. Another widely used class of generative models assumes the existence of unobserved latent variables. Gaussian Mixture Models, for example, assume discrete latents (corresponding to the mixture component). Variational autoencoders (Kingma & Welling, 2013) use continuous latent variables and parameterize the (conditional) distributions using neural networks." }, { "heading": "3.2 MODELING THE FEATURE SPACE", "text": "We identify two main reasons why characterizing examples over which a classifier is likely to make a mistake is difficult. First, modeling the input data distribution $p_{data}(\mathbf{X})$, as done in Song et al. 
(2017) to detect adversarial examples, is challenging because of the high-dimensional, complex nature of the image space $\mathcal{X}$. This approach also fails at detecting out-of-distribution samples, with state-of-the-art models assigning higher likelihoods to samples that completely differ from their train set (Nalisnick et al., 2018). Second, a model of $p_{data}(\mathbf{X})$ doesn’t capture any information about the classifier itself.\nTo overcome these difficulties, we propose to model the underlying distribution of the learned features $\mathbf{F} = f_\theta(\mathbf{X})$, where $\mathbf{X} \sim p_{data}(\mathbf{X})$. Extracted features have lower dimension, which makes them easier to model, and they give access to information about the classifier. Specifically, we are interested in comparing features $\mathbf{F}_c$ of samples that are correctly classified with features $\mathbf{F}_w$ of samples that are incorrectly classified by a trained neural network. $\mathbf{F}_c$ and $\mathbf{F}_w$ can be described as elements of the following sets:\n$\mathbf{F}_c \in \mathcal{C} = \{f_\theta(\mathbf{X}) \mid \hat{y}(\mathbf{X}) = y, (\mathbf{X}, y) \in \mathcal{X} \times \mathcal{Y}\}$ (3)\n$\mathbf{F}_w \in \mathcal{W} = \{f_\theta(\mathbf{X}) \mid \hat{y}(\mathbf{X}) \neq y, (\mathbf{X}, y) \in \mathcal{X} \times \mathcal{Y}\}$ (4)\nThe distribution of the extracted features is modeled by:\n$p(\mathbf{F}) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(\mathbf{F}; \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)$ (5)\nwhere K is the number of Gaussians in the mixture and $\pi_k, \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k$ are the model parameters. We choose $\boldsymbol{\Sigma}_k$ to be diagonal in all our experiments. After training a neural network to convergence, we learn the parameters of the GMM using the EM algorithm. Our training set is built from the features extracted from the training image set by the trained classifier." }, { "heading": "3.3 DETECTING CLASSIFICATION MISTAKES", "text": "We posit that classification mistakes are linked to extracted features that are unusual under the training distribution. By modeling the feature space learned by the classifier, our generative model will be able to detect an input that will lead to a potential classification mistake. We found that a simple generative model is surprisingly good at capturing the distribution of the feature space and can detect when an input will lead to a classification mistake based on its predicted feature log-likelihood.\nStatistical Hypothesis Testing We consider $p_\mathcal{C}(\mathbf{F}_c)$ the distribution of features $\mathbf{F}_c = f_\theta(\mathbf{X})$ where $(\mathbf{X}, y) \sim p_{data}(\mathbf{X}, y)$ and $\hat{y}(\mathbf{X}) = y$, and $p_\mathcal{W}(\mathbf{F}_w)$ the distribution of features $\mathbf{F}_w = f_\theta(\mathbf{X})$ where $(\mathbf{X}, y) \sim p_{data}(\mathbf{X}, y)$ and $\hat{y}(\mathbf{X}) \neq y$. These correspond to features extracted on correctly classified vs. incorrectly classified examples. Note that these distributions not only depend on the underlying data distribution but also on the classifier’s parameters $(\theta, \mathbf{W}, \mathbf{b})$.\nAssuming we have access to samples $\mathbf{F}_{c,1}, \ldots, \mathbf{F}_{c,n} \sim p_\mathcal{C}$ and $\mathbf{F}_{w,1}, \ldots, \mathbf{F}_{w,m} \sim p_\mathcal{W}$, our null hypothesis $H_0$ and alternative hypothesis $H_1$ are:\n$H_0 : p_\mathcal{C} = p_\mathcal{W} \quad H_1 : p_\mathcal{C} \neq p_\mathcal{W}$ (6)\nWe use the Mann-Whitney U-test, which assumes that samples can be ranked. The test statistic is defined by ranking all samples of the two groups together and using the sum of their ranks:\n$U_\mathcal{C} = R_\mathcal{C} - \frac{n(n+1)}{2} \quad U_\mathcal{W} = R_\mathcal{W} - \frac{m(m+1)}{2}$ (7)\nwhere $R_\mathcal{C}$ and $R_\mathcal{W}$ are the sums of ranks of samples $\mathbf{F}_c$ and $\mathbf{F}_w$ respectively. The statistic for the statistical test is $U = \min(U_\mathcal{C}, U_\mathcal{W})$, which has a distribution that can be approximated by a normal distribution under the null hypothesis. In our approach, samples are ranked based on their predicted probability.\nSince our test statistic directly uses the predicted likelihood of a feature, we deduce from it a simple per-sample test to determine if an input is likely to be misclassified. Given a threshold T, a test sample $\mathbf{X}$ is rejected as being misclassified if $p(f_\theta(\mathbf{X})) < T$. 
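To make this detection pipeline concrete, the following is a minimal sketch (not the authors’ released code), assuming feature matrices have already been extracted by the trained classifier; it uses scikit-learn’s GaussianMixture with diagonal covariances (fit via EM) and SciPy’s Mann-Whitney U-test, and array names such as feats_train are illustrative placeholders.

```python
from scipy.stats import mannwhitneyu
from sklearn.mixture import GaussianMixture

# Fit a K-component GMM with diagonal covariances (EM) to the features
# extracted from the training images by the trained classifier.
def fit_feature_gmm(feats_train, n_components=100, seed=0):
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag", random_state=seed)
    return gmm.fit(feats_train)

# Mann-Whitney U-test comparing correctly vs. incorrectly classified
# features, ranking samples by their predicted log-likelihood log p(F).
def mistake_hypothesis_test(gmm, feats_correct, feats_wrong):
    ll_correct = gmm.score_samples(feats_correct)  # per-sample log p(F)
    ll_wrong = gmm.score_samples(feats_wrong)
    return mannwhitneyu(ll_correct, ll_wrong, alternative="two-sided")

# Per-sample test: flag a test input as a likely mistake when its
# feature log-likelihood falls below a threshold T chosen on a
# validation set to trade off precision against recall.
def flag_likely_mistakes(gmm, feats_test, threshold):
    return gmm.score_samples(feats_test) < threshold
```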
The value of the threshold is chosen by cross-validation on the validation set to obtain a good trade-off between precision and recall." }, { "heading": "4 EXPERIMENTS", "text": "We run experiments on the CIFAR-100 dataset, containing 32 × 32 color images used for image classification with 100 classes. All reported results give the mean and standard deviation over 5 independent runs. Additional experiments on a model trained on the smaller CIFAR-10 dataset are also available in the appendix. We examine two state-of-the-art deep neural networks, DenseNet-100 (Huang et al., 2016) and Wide ResNet-28 (Zagoruyko & Komodakis, 2016), trained with the usual cross-entropy loss. In the setting where only a small number of labels is available, we train a WRN-28 model with 100 labeled samples per class using Temporal Ensembling (Laine & Aila, 2016). This self-ensembling training method takes advantage of the stochasticity provided by the use of dropout and random augmentation techniques (e.g., random flipping and cropping).\nMistake Detection Using statistical testing, we verify that the trained model learns a distribution that differentiates correct and incorrect samples. We sum up the performance of our method by reporting the AUC-ROC and AUC-PR obtained on the test set.\nTo motivate the use of high-level features, we adapt the detection method used by Song et al. (2017) to the mistake detection problem and compare its performance with our proposed method. We train a PixelCNN on the image dataset and use the predicted likelihood values to detect classification mistakes. We evaluate mistake detection on the test set and first compare the distribution predicted by PixelCNN on the images with the distribution predicted by a GMM-100 model on extracted features (Figure 2). Using the Mann-Whitney U-test, we verify that the distribution learned by GMM-100 differentiates correct and incorrect samples (p = 1.9e−13). On the other hand, because PixelCNN is trained without knowledge of the classifier’s internal representations, the distributions of correct and incorrect samples predicted under PixelCNN are almost indistinguishable (p = 8.58e−5).\nAdditionally, we experimented with more flexible likelihood models of the feature space, such as the Variational Autoencoder (Kingma & Welling, 2013) and Masked Autoregressive Flow (Papamakarios et al., 2017). Surprisingly, we found that a simple Gaussian Mixture Model is better at detecting classification mistakes than these more flexible models. Finally, we also compare with other threshold-based methods: using the predicted logits, and calibrated scores obtained after Temperature Scaling. Detection performance is summed up in Figure 3 for DenseNet and Wide ResNet models trained on CIFAR-100. GMM models trained on the features outperform all other generative models trained either on images or on the feature space. We find that using a GMM has similar performance to calibrated scores on the Wide ResNet but not on the DenseNet. This is explained by the fact that our DenseNet model has much lower accuracy than Wide ResNet (72.76% v. 80.22%) and therefore does not produce overly confident predictions. Additional results are available in the appendix.\nIn the next experiments, we show that although using predicted logits provides reliable detection of test set mistakes, this metric doesn’t generalize to adversarial or out-of-distribution samples. 
On the other hand, our approach of training a generative model on the feature space can be applied to these other sources of classification errors.\nAdversarial samples We craft adversarial samples from test samples using the Fast-Gradient Sign Method (FGSM) proposed by Goodfellow et al. (2014) and the Basic Iteration Method (BIM) (Kurakin et al., 2016). Both methods move the input image in the direction of the gradient of the loss while restraining the adversarial sample to be in an $\ell_1$ ball of radius $\epsilon_{attack}$ around the original input. This ensures that the generated adversarial sample is visually indistinguishable from the original.\nFigure 4 shows that the GMM is sensitive to features extracted from adversarial samples, as they are assigned higher bits-per-dimension (BPD) values than clean samples. We also plot the ROC curves and corresponding AUC metrics that are obtained by using the predicted BPD to detect adversarial samples.\nWe compare our approach with other possible detection metrics. In particular, the method proposed by Zheng & Hong (2018) and the Mahalanobis score from Lee et al. (2018) also leverage the feature space to detect adversarial inputs. These approaches use a different model per class and therefore require labels to train, while we only train one GMM in an unsupervised manner. ROC curves are shown in Figure 5 and a full comparison table with higher attack values for both attacks is shown in the appendix. Our method, using a GMM-1000, provides better detection performance on adversarial samples than calibrated and non-calibrated logit scores. Most notably, in a semi-supervised setting (Figure 5c), our method surpasses all others on attacks with low attack values.\nOut-of-Distribution Detection We also test the use of feature log-likelihood values on the task of detecting out-of-distribution samples. As out-of-distribution samples we use random Gaussian noise, SVHN (Netzer et al., 2011), Tiny ImageNet (Russakovsky et al., 2015), and Fashion MNIST (Xiao et al., 2017). OOD detection results are reported in Table 2 for each model we trained. Our experiments show that it is not possible to rely on calibrated probability scores for OOD detection, and that our method yields better detection results than using the Mahalanobis score in some cases. We also highlight that a PixelCNN trained on CIFAR has very poor detection results on image datasets that visually look very different from its original training set (Fashion MNIST and SVHN). This is a result of the generative model assigning higher likelihood to these OOD samples. Table 3 in the appendix also shows that only calibrated scores fail to detect random Gaussian noise as an OOD sample." }, { "heading": "5 CONCLUSION", "text": "Using statistical hypothesis testing, we provided a general characterization of inputs that lead to classification mistakes by deep neural networks. With a simple Gaussian Mixture Model, we modeled the distribution of the feature space learned by a classifier and verified that features extracted from failure-inducing inputs consistently lie outside of the training distribution and can be detected by their low predicted log-probability. Compared to other score-based methods, our characterization holds for a variety of classification failure modes in deep neural networks: adversarial sample detection, out-of-distribution detection and test-time classification mistakes." 
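As a concrete companion to the attack setup in Section 4, here is a minimal, hedged sketch of the single-step FGSM crafting in PyTorch; `model`, `loss_fn`, and the assumed [0, 1] pixel range are illustrative placeholders rather than the exact experimental configuration (BIM iterates this step with projection back into the allowed ball around the original input).

```python
import torch

def fgsm_attack(model, loss_fn, images, labels, eps):
    """Single-step FGSM: perturb the input in the direction of the sign
    of the loss gradient with step size eps, then clamp to the assumed
    valid pixel range [0, 1]."""
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model(images), labels)
    loss.backward()
    adversarial = images + eps * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```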
}, { "heading": "A ADDITIONAL DETECTION RESULTS", "text": "" }, { "heading": "B PURIFICATION", "text": "B.1 METHOD\nThe purification process aims at moving the feature F extracted by the classifier to a low BPD region. This can be formulated as a joint optimization problem where we want to find features F with minimal BPD, while being close to the initial extracted features Fref .\nF? = argmin F BPD(F) + ν‖F− Fref‖22 (8)\nν is a hyperparameter that defines how close the new feature should be to the initial one. As the objective is not convex and there is no close form solution for stationary points, we use gradient descent with regards to F to minimize the objective function.\nF := F− (∇FBPD(F) + 2ν(F− Fref )) (9)\nB.2 PURIFICATION RESULTS\nPurification of features is performed with 100 iterations of gradient descent steps to optimize the objective function. We test the performance of purification for both classification and semi-supervised classification tasks on CIFAR-100.\nWe report the accuracy on validation and test set obtained after purification with different GMMs and for different values of learning rates and regularization strength ν in Table 4. For classification, our networks are DenseNet (DN-100) and Wide ResNet (WRN-28). For semi-supervised classification, we apply temporal ensembling to wide ResNet (TE-WRN-28). Our results show that this purification procedure is able to correct classification mistakes on previously unseen samples and results in an accuracy gain for the model without the need to retrain. However the purification method also leads to new classification mistakes, which means that the net improvement on the accuracy reaches 0.6% on the DenseNet model at most." }, { "heading": "C EXPERIMENTAL SETUP", "text": "Dataset and preprocessing We trained on CIFAR-10 and CIFAR-100 Krizhevsky (2009) with 5,000 images held-out validation images. Inputs were preprocessed with per-channel standardization before training.\nDenseNet We use bottleneck layers and compression rate θ = 0.5, growth rate k = 12 and depth L = 100. The model is trained with batch size 64 for 300 epochs with a learning rate 0.1, dropout rate 0.2 and L2 regularization weight 1e−4. We use ReLU non-linearities except for the last layer where we use a tanh non-linearity to ensure the extracted features are bounded. For optimization, we use Stochastic Gradient Descent with a Nestrov momentum of 0.9. The learning rate is divided by 10 at epoch 150 and 175.\nWide ResNet Wide ResNet Zagoruyko & Komodakis (2016) is trained with growth rate k = 10 and depthL = 28 and batch size 100 for 200 epochs, with a learning rate 0.1, dropout rate 0.3 andL2 regularization weight 5e−4. Data augmentation is applied during training with random translation by up to 2 pixels and random horizontal flips.\nTemporal Ensembling For the semi-supervised setting, we only keep 100 samples per label in the train set. We train a Wide ResNet using Temporal Ensembling with a maximum weight decay of 100.\nPixelCNN The PixelCNN model is trained with the PixelCNN++ ameliorations from Salimans et al. (2017) for our experiments. The model is trained for 5000 epochs with dropout rate 0.5 and learning rate 1e−4.\nVAE The VAE is trainer for 1000 epochs with a learning rate of 0.001 and decay rate of 0.9995. The encoder and decoder architecture are fully connected layers with ReLU non-linearities, one hidden layer of size 512 and latent dimension of 128. 
The model was trained with Adam.\nMAF The Masked Autoregressive Flow model is trained for 1000 epochs with a learning rate of 0.01 and batch size 32, using the Adam optimizer. We used a 5-layer MADE model with a hidden layer size of 128.\nTemperature Scaling The temperature for Temperature Scaling is optimized using the L-BFGS-B optimization algorithm with a maximum of 100 iterations. We use the Expected Calibration Error (ECE) with B = 10 bins to evaluate the success of the calibration." } ]
2,020
null
SP:85919ada5493c7f63cbd171e7f9738fee02d8dfb
[ "The proposed contribution of this work is to build of the existing literature which uses variational autoencoders for causal inference by (1) allowing an explicit mechanism for modeling irrelevant covariates and (2) incorporating targeted regularization into the latent variable nnet framework. Optimization of the model is done by minimizing the ELBO subject to a penalty term which the authors refer to as “targeted regularization”. This term is essentially an application of the TMLE model. Both of these proposals appear to improve the performance on what are now pretty standard benchmark datasets (jobs and ihdp). " ]
Undertaking causal inference with observational data is extremely useful across a wide range of domains including the development of medical treatments, advertisements and marketing, and policy making. There are two main challenges associated with undertaking causal inference using observational data: treatment assignment heterogeneity (i.e., differences between the treated and untreated groups), and an absence of counterfactual data (i.e., not knowing what would have happened if an individual who received treatment had instead not been treated). We address these two challenges by combining structured inference and targeted learning. To our knowledge, the Targeted Variational AutoEncoder (TVAE) is the first method to incorporate targeted learning into deep latent variable models. Results demonstrate competitive and state-of-the-art performance.
[]
[ { "authors": [ "G.W.S. Athey" ], "title": "Imbens. Recursive partitioning for heterogeneous causal effects", "venue": null, "year": 2017 }, { "authors": [ "Eli Bingham", "Jonathan P. Chen", "Martin Jankowiak", "Fritz Obermeyer", "Neeraj Pradhan", "Theofanis Karaletsos", "Rohit Singh", "Paul A. Szerlip", "Paul Horsfall", "Noah D. Goodman" ], "title": "Pyro: Deep universal probabilistic programming", "venue": "J. Mach. Learn. Res.,", "year": 2019 }, { "authors": [ "D.M. Blei", "A. Kucukelbir", "J.D. McAuliffe" ], "title": "Variational inference: a review for statisticians", "venue": null, "year": 2018 }, { "authors": [ "L. Bottou", "J. Peters", "Quinonero-Candela J", "D.X. Charles", "D.M. Chickering", "E. Portugaly", "D. Ray", "P. Simard", "E. Snelson" ], "title": "Counterfactual reasoning and learning systems: the example of computational advertising", "venue": "Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "C.P. Burgess", "I. Higgins", "A. Pal", "L. Matthey", "N. Watters", "G. Desjardins", "A. Lerchner" ], "title": "Understanding disentangling in Beta-VAE", "venue": null, "year": 2018 }, { "authors": [ "V. Chernozhukov", "D. Chetverikov", "M. Demirer", "E. Duflo", "C. Hansen", "W. Newey" ], "title": "Double/debiased/Neyman machine learning of treatment effects", "venue": "American Economic Review,", "year": 2017 }, { "authors": [ "B. Dai", "Z. Wang", "D. Wipf" ], "title": "The usual suspects? reassessing blame for VAE posterior collapse", "venue": null, "year": 1912 }, { "authors": [ "A. D‘Amour" ], "title": "On multi-cause causal inference with unobserved confounding: Counterexamples, impossibility and alternatives", "venue": "Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "A. Deaton", "N. Cartwright" ], "title": "Understanding and misundertstanding randomized controlled trials", "venue": "Social Science and Medicine,", "year": 2018 }, { "authors": [ "R.H. Dehejia", "S. Wahba" ], "title": "Propensity score-matching methods for nonexperimental causal studies", "venue": "Review of Economics and Statistics,", "year": 2002 }, { "authors": [ "V. Dorie" ], "title": "Non-parametrics for causal inference", "venue": "https://github.com/vdorie/npci,", "year": 2016 }, { "authors": [ "A. Gabbay", "Y. Hosen" ], "title": "Demystifying inter-class disentanglement", "venue": null, "year": 1906 }, { "authors": [ "R.T. Gross" ], "title": "Infant health and development program (IHDP): enhancing the outcomes of low birth weight, premature infants in the United States. MI: Inter-university Confortium for Political and Social Research", "venue": "Ann Arbor,", "year": 1993 }, { "authors": [ "M.P. Grosz", "J.M. Rohrer", "F. Thoemmes" ], "title": "The taboo against explicit causal inference in nonexperimental psychology", "venue": "Perspectives on Psychological Science,", "year": 2020 }, { "authors": [ "R. Guo", "J. Li", "H. Liu" ], "title": "Learning individual causal effects from networked observational data", "venue": "Association for Computing Machinery,", "year": 2020 }, { "authors": [ "J. Haggstrom" ], "title": "Data-driven confounder selection via Markov and", "venue": "Bayesian networks. Biometriks,", "year": 2017 }, { "authors": [ "F.R. Hampel" ], "title": "The influence curve and its role in robust estimation", "venue": "Journal of the American Statistical Association,", "year": 1974 }, { "authors": [ "N. Hassanpour", "R. 
Greiner" ], "title": "Counterfactual regression with importance sampling weights", "venue": "Proceedings of the 28th International Joint Conference on Artificial Intellgence", "year": 2019 }, { "authors": [ "N. Hassanpour", "R. Greiner" ], "title": "Learning disentangled representations for counterfactual regression", "venue": "ICLR,", "year": 2020 }, { "authors": [ "M. Hernan" ], "title": "The c-word: scientific euphemisms do not improve causal inference from observational data", "venue": "American Journal of Public Health,", "year": 2018 }, { "authors": [ "I. Higgins", "L. Matthey", "A. Pal", "C. Burgess", "X. Glorot", "M. Botvinick", "S. Mohamed", "A. Lerchner" ], "title": "Beta-VAE: Learning basic visual concepts with a constrained variational framework", "venue": null, "year": 2017 }, { "authors": [ "J.L. Hill" ], "title": "Bayesian nonparametric modeling for causal inference", "venue": "Journal of Computational and Graphical Statistics,", "year": 2011 }, { "authors": [ "G.W. Imbens", "D.B. Rubin" ], "title": "Causal inference for statistics, social, and biomedical sciences. An Introduction", "venue": null, "year": 2015 }, { "authors": [ "A. Jesson", "S. Mindermann", "U. Shalit", "Y. Gal" ], "title": "Identifying causal effect inference failure with uncertainty-aware models", "venue": null, "year": 2007 }, { "authors": [ "E.H. Kennedy" ], "title": "Statistical causal inferences and their applications in public health research, chapter Semiparametric theory and empirical processes in causal inference", "venue": null, "year": 2016 }, { "authors": [ "D.P. Kingma", "J.L. Ba" ], "title": "Adam: a method for stochastic optimization", "venue": null, "year": 2017 }, { "authors": [ "D.P. Kingma", "M. Welling" ], "title": "Auto-encoding variational Bayes", "venue": null, "year": 2014 }, { "authors": [ "N. Kreif", "K. DiazOrdaz" ], "title": "Machine learning in policy evaluation: new tools for causal inference", "venue": null, "year": 1903 }, { "authors": [ "K. Kuang", "P. Cui", "B. Li", "M. Jiang", "S. Yang", "F. Wang" ], "title": "Treatment effect estimation with datadriven variable decomposition", "venue": null, "year": 2017 }, { "authors": [ "R.J. LaLonde" ], "title": "Evaluating the econometric evaluations of training programs with experimental data", "venue": "The American Economic Review, pp", "year": 1986 }, { "authors": [ "J. Lezama" ], "title": "Overcoming the disentanglement vs reconstruction trade-off via Jacobian supervision", "venue": null, "year": 2019 }, { "authors": [ "F. Locatello", "S. Bauer", "M. Lucic", "G. Ratsch", "S. Gelly", "B. Scholkopf", "Bachem O" ], "title": "Challenging common assumptions in the unsupervised learning of disentangled representations", "venue": null, "year": 2019 }, { "authors": [ "C. Louizos", "U. Shalit", "J. Mooij", "D. Sontag", "R. Zemel", "M. Welling" ], "title": "Causal effect inference with deep latent-variable models", "venue": "31st Conference on Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "J. Lucas", "G. Tucker", "R. Grosse", "M Norouzi" ], "title": "Understanding posterior collapse in generative latent variable models. ICLR, 2019a", "venue": null, "year": 2019 }, { "authors": [ "J. Lucas", "G. Tucker", "R. Grosse", "M. Norouzi" ], "title": "Don’t blame the ELBO! a linear VAE perspective on posterior collapse. arXiv:1911.02469v1, 2019b", "venue": null, "year": 2019 }, { "authors": [ "M. Maier", "K. Marazopoulou", "D. Arbour", "D. 
David" ], "title": "A sound and complete algorithm for learning causal models from relational data", "venue": "Proceedings of the 29th Conf. on Uncertainty in Artificial Intelligence,", "year": 2013 }, { "authors": [ "I. Mayer", "J. Josse", "F.J.P. Raimundo", "Vert" ], "title": "MissDeepCausal: causal inference from incomplete data using deep latent variable models", "venue": null, "year": 2002 }, { "authors": [ "M.R. Montgomery", "M. Gragnolati", "K.A. Burke", "E. Paredes" ], "title": "Measuring living standards with proxy variables. Demography", "venue": null, "year": 2000 }, { "authors": [ "J.M. Mooij", "O. Stegle", "D. Janzing", "K. Zhang", "B. Scholkopf" ], "title": "Probabilistic latent variable models for distinguishing between cause and effect", "venue": null, "year": 2010 }, { "authors": [ "D. Moyer", "S. Gao", "R. Brekelmans", "G.V. Steeg", "A. Galstyan" ], "title": "Invariant representations without adversarial training", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "H. Oktay" ], "title": "Using latent variable models to improve causal estimation", "venue": "PhD thesis, Doctoral Dissertation", "year": 2018 }, { "authors": [ "M. Petersen", "L. Balzer", "D. Kwarsiima", "N. Sang", "G. Chamie", "J. Ayieko", "J. Kabami", "A. Owaraganise", "T. Liegler", "F. Mwangwa", "K. Kadede" ], "title": "Association of implementation of a universal testing and treatment intervention with HIV diagnosis, receipt of antiretroviral therapy, and viral suppression in East Africa", "venue": "Journal of American Medical Association,", "year": 2017 }, { "authors": [ "M. Rolinek", "D. Zietlow", "G. Martius" ], "title": "Variational autoencoders pursue PCA directions (by accident)", "venue": null, "year": 2019 }, { "authors": [ "P.R. Rosenbaum", "D.B. Rubin" ], "title": "The central role of the propensity score in observational studies for causal effects", "venue": null, "year": 1983 }, { "authors": [ "D.B. Rubin" ], "title": "Causal inference using potential outcomes: Design, modeling, decisions", "venue": "Journal of the American Statistical Association,", "year": 2005 }, { "authors": [ "M.S. Schuler", "S. Rose" ], "title": "Targeted maximum likelihood estimation for causal inference in observational studies", "venue": "American Journal of Epidemiology,", "year": 2016 }, { "authors": [ "P. Schwab", "L. Linhardt", "W. Karlen" ], "title": "Perfect match: a simple method for learning representations for counterfactual inference with neural networks", "venue": null, "year": 2019 }, { "authors": [ "U. Shalit", "F.D. Johansson", "D. Sontag" ], "title": "Estimating individual treatment effect: generalization bounds and algorithms", "venue": null, "year": 2017 }, { "authors": [ "A. Sharma", "G. Gupta", "R.A. Prasad", "Chatterjee", "L. Vig", "G. Shroff" ], "title": "MultiMBNN: matched and balanced causal inference with neural networks", "venue": null, "year": 2004 }, { "authors": [ "C. Shi", "D.M. Blei", "V. Veitch" ], "title": "Adapting neural networks for the estimation of treatment effects", "venue": "33rd Conference on Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "B. Siegerink", "W. den Hollander", "M. Zeegers", "R. Middelburg" ], "title": "Causal inference in law: an epidemiological perspective", "venue": "European Journal of Risk Regulation,", "year": 2016 }, { "authors": [ "R. Silva", "R. Scheine", "G. Clark", "P. 
Spirtes" ], "title": "Learning the structure of linear latent variable models", "venue": "Journal of Machine Learning Research,", "year": 2006 }, { "authors": [ "J.A. Smith", "P.E. Todd" ], "title": "Does matching overcome LaLonde’s critique of nonexperimental estimators", "venue": "Journal of Econometrics,", "year": 2005 }, { "authors": [ "N. Tishby", "N. Zaslavsky" ], "title": "Deep learning and the information bottleneck principle", "venue": null, "year": 2015 }, { "authors": [ "M.J. van der Laan", "S. Rose" ], "title": "Targeted Learning - Causal Inference for Observational and Experimental Data", "venue": null, "year": 2011 }, { "authors": [ "M.J. van der Laan", "S. Rose" ], "title": "Targeted Learning in Data Science", "venue": null, "year": 2018 }, { "authors": [ "M.J. van der Laan", "R.J.C.M. Starmans" ], "title": "Entering the era of data science: targeted learning and the integration of statistics and computational data analysis", "venue": "Advances in Statistics,", "year": 2014 }, { "authors": [ "M.J. Vowels" ], "title": "Limited functional form, misspecification, and unreliable interpretations in psychology and social science", "venue": null, "year": 2009 }, { "authors": [ "L. Yao", "S. Li", "Y. Li", "M. Huai", "J. Gao", "A. Zhang" ], "title": "Representation learning for treatment effect estimation from observational data", "venue": "32nd Conference on Neural Information Processing Systems (NeurIPS),", "year": 2018 }, { "authors": [ "J. Yoon", "J. Jordan", "M. van der Schaar" ], "title": "GANITE: Estimation of individualized treatment effects using generative adversarial nets", "venue": null, "year": 2018 }, { "authors": [ "Z. Zhang", "Q. Lan", "Y. Wang", "N. Hassanpour", "R. Greiner" ], "title": "Reducing selectin bias in counterfactual", "venue": null, "year": 2020 }, { "authors": [ "Silva" ], "title": "The use of a latent variable model allows us to infer unobserved/hidden confounders (Louizos et al., 2017), although the use of VAEs means that there is some uncertainty concerning the guarantees of the learned model (Rolinek et al., 2019; Dai et al., 2019; Lucas et al., 2019a;b). This uncertainty notwithstanding, previous implementations for latent variable models with causal inference show promising results in comparison with other methods", "venue": null, "year": 2006 }, { "authors": [ "Louizos et al", "Mayer et al", "Hassanpour", "Greiner" ], "title": "2020). We wish to encode the m-dimensional covariates x on a manifold via a stochastic mapping p(z|x), where latent variables z provide a compact representation. The distribution", "venue": null, "year": 2020 }, { "authors": [ "Shi" ], "title": "2018) and comprises 608 untreated and 139 treated samples (747 in total). There are 25 covariates, 19 of which are discrete/binary, and the rest are continuous. The outcome for the IHDP data is continuous and unbounded", "venue": "NPCI data generating package (Dorie,", "year": 2016 } ]
[ { "heading": null, "text": "Undertaking causal inference with observational data is extremely useful across a wide range of domains including the development of medical treatments, advertisements and marketing, and policy making. There are two main challenges associated with undertaking causal inference using observational data: treatment assignment heterogeneity (i.e., differences between the treated and untreated groups), and an absence of counterfactual data (i.e. not knowing what would have happened if an individual who did get treatment, were instead to have not been treated). We address these two challenges by combining structured inference and targeted learning. To our knowledge, Targeted Variational AutoEncoder (TVAE) is the first method to incorporate targeted learning into deep latent variable models. Results demonstrate competitive and state of the art performance." }, { "heading": "1 INTRODUCTION", "text": "The estimation of the causal effects of interventions or treatments on outcomes is of the upmost importance across a range of decision making processes and scientific endeavours, such as policy making (Kreif & DiazOrdaz, 2019), advertisement (Bottou et al., 2013), the development of medical treatments (Petersen et al., 2017), the evaluation of evidence within legal frameworks (Pearl, 2009; Siegerink et al., 2016) and social science (Vowels, 2020; Hernan, 2018; Grosz et al., 2020). Despite the common preference for Randomized Controlled Trial (RCT) data over observational data, this preference is not always justified. Besides the lower cost and fewer ethical concerns, observational data may provide a number of statistical advantages including greater statistical power and increased generalizability (Deaton & Cartwright, 2018). However, there are two main challenges when dealing with observational data. Firstly, the group that receives treatment is usually not equivalent to the group that does not (treatment assignment heterogeneity), resulting in selection bias and confounding due to associated covariates. For example, young people may prefer surgery, older people may prefer medication. Secondly, we are unable to directly estimate the causal effect of treatment, because only the factual outcome for a given treatment assignment is available. In other words, we do not have the counterfactual associated with the outcome for a different treatment assignment to that which was given. Treatment effect inference with observational data is concerned with finding ways to estimate the causal effect by considering the expected differences between factual and counterfactual outcomes.\nWe seek to address the two challenges by proposing a method that incorporates targeted learning techniques into a disentangled variational latent model, trained according to the approximate maximum likelihood paradigm. Doing so enables us to estimate the expected treatment effects, as well as individual-level treatment effects. Estimating the latter is especially important for treatments that interact with patient attributes, whilst also being crucial for enabling individualized treatment assignment. Thus, we propose the Targeted Variational AutoEncoder (TVAE), undertake an ablation study, and compare our method’s performance against current alternatives on two benchmark datasets." }, { "heading": "2 BACKGROUND", "text": "Problem Formulation: A characterization of the problem of causal inference with no unobserved confounders is depicted in the Directed Acyclic Graphs (DAGs) shown in Figs. 1(a) and 1(b). 
Fig. 1(a) is characteristic of observational data, where the assignment of treatment is related to the covariates. Fig. 1(b) is characteristic of the ideal RCT, where the treatment is unrelated to the covariates. Here, $\mathbf{x}_i \sim p(\mathbf{x}) \in \mathbb{R}^m$ represents the m-dimensional, pre-treatment covariates for individual i assigned factual treatment $t_i \sim p(t|\mathbf{x})$ resulting in factual outcome $y_{t_i} \sim p(y|\mathbf{x}, t)$. Together, these constitute the dataset $\mathcal{D} = \{[y_i, t_i, \mathbf{x}_i]\}_{i=1}^{N}$, where N is the sample size. The conditional average treatment effect for an individual with covariates $\mathbf{x}_i$ may be estimated as $\hat{\tau}_i(\mathbf{x}_i) = \mathbb{E}[y_i|\mathbf{x}_i, do(t = 1)] - \mathbb{E}[y_i|\mathbf{x}_i, do(t = 0)]$, where the expectation accounts for the nondeterminism of the outcome (Jesson et al., 2020). Alternatively, by comparing the post-intervention distributions when we intervene on treatment t, the Average Treatment Effect (ATE) is $\hat{\tau}(\mathbf{x}) = \mathbb{E}_{\mathbf{x}}\left[\mathbb{E}[y|\mathbf{x}, do(t = 1)] - \mathbb{E}[y|\mathbf{x}, do(t = 0)]\right]$. Here, $do(t)$ indicates the intervention on t, setting all instances to a static value, dynamic value, or distribution and therefore removing any dependencies it originally had (Pearl, 2009; van der Laan & Rose, 2018; 2011). This scenario corresponds with the DAG in Fig. 1(b), where treatment t is no longer a function of the covariates x.\nUsing an estimator for the conditional mean $Q(t, \mathbf{x}) = \mathbb{E}(y|t, \mathbf{x})$, we can calculate the Average Treatment Effect (ATE) and the empirical error for estimation of the ATE (eATE).1 In order to estimate eATE we assume access to the ground truth treatment effect $\tau$, which is only possible with synthetic or semi-synthetic datasets. The Conditional Average Treatment Effect (CATE) may also be calculated, and the Precision in Estimating Heterogeneous Effect (PEHE) is one way to evaluate a model’s efficacy in estimating this quantity. See the appendix for the complete definitions of these terms.\n1 For a binary outcome variable $y \in \{0, 1\}$, $\mathbb{E}(y|t, \mathbf{x})$ is the same as the conditional probability distribution $p(y|t, \mathbf{x})$.\nThe Naive Approach: The DAG in Fig. 1(a) highlights the problem with taking a naive approach to modeling the joint distribution $p(y, t, \mathbf{x})$. The structural relationship $t \leftarrow \mathbf{x} \rightarrow y$ indicates both that the assignment of treatment t is dependent on the covariates x, and that a backdoor path exists through x to y. In addition to our previous assumptions, if we also assume linearity, adjusting for this backdoor path is a simple matter of adjusting for x by including it in a logistic regression. The naive method is an example of the uppermost methods depicted in Fig. 2, and leads to the largest bias. The problem with this approach is (a) that the graph is likely misspecified, such that the true relationships between covariates as well as the relationships between covariates and the outcome may be more complex. There is also problem (b), that linearity is not sufficient to ‘let the data speak’ (van der Laan & Rose, 2011) or to avoid biased parameter estimates. Using powerful nonparametric models (e.g., neural networks) may solve the limitations associated with linearity and interactions to yield a consistent estimator for $p(y|\mathbf{x})$, and such a model is an example of the middlemost methods depicted in Fig. 2. However, this estimator is not targeted to the estimation of the causal effect parameter $\tau$, only predicting the outcome, and we require a means to reduce residual bias.\nTargeted Learning: Targeted Maximum Likelihood Estimation (TMLE) (Schuler & Rose, 2016; van der Laan & Rose, 2011; 2018; van der Laan & Starmans, 2014) falls under the lowermost methods depicted in Fig.
2 and follows an approach involving three main steps: (1) estimation of the conditional mean $\mathbb{E}(y|t, \mathbf{x})$ with an estimator $Q^0(t, \mathbf{x})$, (2) estimation of the propensity scores with an estimator $g(t|\mathbf{x})$, and (3) updating the conditional mean estimator $Q^0$ to get $Q^*$ using the propensity scores to attain an estimate for the causal parameter $\tau$.\nThe propensity score for individual i is defined as the conditional probability of being assigned treatment, $g(t_i, \mathbf{x}_i) = p(t = t_i|\mathbf{x} = \mathbf{x}_i) \in [0, 1]$ (Rosenbaum & Rubin, 1983). The scores can be used to compensate for the relationship between the covariates and the treatment assignment using Inverse Probability of Treatment Weights (IPTWs), reweighting each sample according to its propensity score. Step (3) is undertaken using ‘clever covariates’, which are similar to the IPTWs. They form an additional covariate variable $H(1, \mathbf{x}_i) = g(1|\mathbf{x}_i)^{-1}$ for individual i assigned treatment, and $H(0, \mathbf{x}_i) = -g(0|\mathbf{x}_i)^{-1}$ for individual i not assigned treatment. Note that when we condition on a single numeric value we imply an intervention (e.g., $g(1|\mathbf{x}_i) \equiv g(do(t = 1)|\mathbf{x}_i)$). A logistic regression is then undertaken as $y = \sigma^{-1}[Q^0(t, \mathbf{x})] + \epsilon H(t, \mathbf{x})$, where $\sigma^{-1}$ is the logit/inverse sigmoid function, $Q^0(t, \mathbf{x})$ is set to be a constant, suppressed offset, and $\epsilon$ represents a fluctuation parameter which is to be estimated from the regression. Once $\epsilon$ has been estimated, we acquire an updated estimator:\n$Q^1(do(t = t), \mathbf{x}) = \sigma\left[\sigma^{-1}[Q^0(t, \mathbf{x})] + \epsilon H(t, \mathbf{x})\right]$ (1)\nThis equation tells us that our new estimator $Q^1$ is equal to the old estimator balanced by the corrective $\epsilon H(t, \mathbf{x})$ term. This term adjusts for the bias associated with the propensity scores. When the parameter $\epsilon$ is zero, it means that there is no longer any influence from the ‘clever covariates’ $H(\cdot)$. The updated estimator $Q^1$ can then be plugged into the estimator for $\hat{\tau}(Q^1; \mathbf{x})$. When the optimal solution is reached (i.e., when $\epsilon = 0$), the estimator also satisfies what is known as the efficient Influence Curve (IC), or canonical gradient equation (Hampel, 1974; van der Laan & Rose, 2011; Kennedy, 2016):\n$\sum_{i=1}^{N} IC^*(y_i, t_i, \mathbf{x}_i) = 0 = \sum_{i=1}^{N} \left[H(t_i, \mathbf{x}_i)(y_i - Q(t_i, \mathbf{x}_i)) + Q(1, \mathbf{x}_i) - Q(0, \mathbf{x}_i) - \hat{\tau}(Q; \mathbf{x})\right]$ (2)\nwhere $IC(y_i, t_i, \mathbf{x}_i)$ represents the IC, and $IC^*(y_i, t_i, \mathbf{x}_i)$ represents the efficient IC for consistent Q and g. It can be seen from the right-hand side of Eq. 2 that at convergence, the estimator and its estimand are equal: $y_i = Q(t_i, \mathbf{x}_i)$ and $Q(1, \mathbf{x}_i) - Q(0, \mathbf{x}_i) = \hat{\tau}(Q; \mathbf{x})$. Over the whole dataset, all terms in Eq. 2 ‘cancel’, resulting in the mean $\overline{IC} = 0$. As such, the logistic regression in Eq. 1 represents a solution to the IC via a parametric submodel.\nThe TMLE method provides a doubly robust, asymptotically efficient estimate of the causal or ‘target’ parameter, and these theoretical guarantees make it attractive for adaptation into neural networks for causal effect estimation." }, { "heading": "3 METHODOLOGY", "text": "In this section we present the Targeted Variational AutoEncoder (TVAE), a deep generative latent variable model that enables estimation of the average and conditional average treatment effects (ATE and CATE, resp.) via the use of amortized variational inference techniques and Targeted Maximum Likelihood Estimation (TMLE). For a review of the relevant VAE theory, see the appendix. A top-
" }, { "heading": "3 METHODOLOGY", "text": "In this section we present the Targeted Variational AutoEncoder (TVAE), a deep generative latent variable model that enables estimation of the average and conditional average treatment effects (ATE and CATE resp.) via the use of amortized variational inference techniques and Targeted Maximum Likelihood Estimation (TMLE). For a review of the relevant VAE theory, see the appendix. A top-level diagram for TVAE is shown in Fig. 3 and follows the structure implied by the DAG in Fig. 1(c). A more detailed architectural block-diagram is also presented in the appendix.2\n2Source code will also be made available in supplementary material upon acceptance.\nAssumptions: As is common (Yao et al., 2020; Guo et al., 2020; Rubin, 2005; Imbens & Rubin, 2015) when undertaking causal inference with observational data, we make three assumptions: (1) Stable Unit Treatment Value Assumption (SUTVA): the potential outcomes for each individual or data unit are independent of the treatments assigned to all other individuals, such that there are no interactions between individuals. (2) Positivity: the treatment assignment probabilities are all non-zero and non-deterministic, p(t = ti|x = xi) > 0, ∀ t and x. (3) Ignorability: all confounders are observed, such that the likelihood of treatment for two individuals with the same covariates is equal, and the potential outcomes for two individuals with the same covariates are also equal, s.t. (yt=1,yt=0) ⊥⊥ t|x.3\n3Taken together, assumptions (2) and (3) constitute strong ignorability.\nTVAE: If one had knowledge of the true causal DAG underlying a set of data, one could undertake causal inference without being concerned for issues relating to structural misspecification. Unfortunately, and this is particularly the case with observational data, we rarely have access to this knowledge. Quite often an observed set of covariates x is modelled as a group of confounding variables (as per the DAG in Fig. 1(a)). Furthermore, and as noted by Zhang et al. (2020), researchers may be encouraged to incorporate as many covariates into their model as possible, in an attempt to reduce the severity of the ignorability assumption. However, including more covariates than is necessary leads to other problems relating to the curse of dimensionality and (in)efficiency of estimation.\nA large set of covariates may be separable into subsets of factors such as instrumental, risk, and confounding factors. Doing so helps us to match our model more closely to the true data generating process, as well as to improve estimation efficiency by 'distilling' our covariate adjustment set. Prior work has explored the potential to discover the relevant confounding covariates via Bayesian networks (Haggstrom, 2017), regularized regression (Kuang et al., 2017), and deep latent variable models based on Variational Autoencoders (VAEs) (Zhang et al., 2020; Louizos et al., 2017). The first two methods identify variables (and are variable selection algorithms), whereas VAEs infer them, and learn compact, disentangled representations of the observations. The benefit of the latter approach is that it (a) infers latent variables on a datapoint-by-datapoint basis (rather than deriving subsets from population aggregates), (b) under additional assumptions, has been shown to infer hidden confounders in the presence of noisy proxy variables, thereby potentially reducing the reliance on ignorability (Louizos et al., 2017), and (c) makes no assumptions about the functional form used to map between covariate and latent space.\nWe seek to infer and disentangle the latent distribution into subsets of latent factors using VAEs. These latent subsets are {zt, zy, zc, zo}, which represent the instrumental factors on t, the risk factors on y, the confounders on both t and y, and factors solely related to x, respectively. Without inductive bias, consistently disentangling the latent variables into these factors would be impossible (Locatello et al., 2019).
In TVAE this inductive bias is incorporated in a number of ways: firstly, by incorporating supervision and constraining zt and zy to be predictive of t and y, respectively; secondly, by constraining zc to be predictive of both t and y; and finally, by employing diagonal-covariance priors (isotropic Gaussians) to encourage disentanglement and independence between latent variables. The structural inductive bias on the model is such that zy, zt, and zc learn factors relevant to outcome and treatment, for which we provide explicit supervision, thereby leaving zo for all remaining factors.\nIn general, it is impossible to isolate the effect of t → y due to unobserved confounding (D'Amour, 2019), and this is why we make the assumption of ignorability. However, it is worth noting that, under additional assumptions, deep latent variable techniques have been shown to be able to infer hidden confounders from what are known as noisy proxy variables present in the dataset (see e.g., Montgomery et al. 2000; Louizos et al. 2017). The assumption of ignorability then shifts from 'all confounders are observed' to 'all unobserved confounders have been inferred from proxies'. Whilst the capability to infer confounders from proxies represents an additional motivation for the use of VAEs, the focus of this work is not to explore whether and by how much we are able to do so, and we therefore maintain the assumption of ignorability.\nInference:\nq(zt|x) = Dzt∏ d=1 N (µd = f1d(x), σ2d = f2d(x)); q(zy|x) = Dzy∏ d=1 N (µd = f3d(x), σ2d = f4d(x))\nq(zc|x) = Dzc∏ d=1 N (µd = f5d(x), σ2d = f6d(x)); q(zo|x) = Dzo∏ d=1 N (µd = f7d(x), σ2d = f8d(x))\np(t̂|zt, zc) = Bern(f9(zt, zc)) = Bern(gq(zt, zc))\np(ŷ|zy, zc, t) = Bern(t · f10(zy, zc) + (1− t) · f11(zy, zc)) = Bern(Qq(.))\nGeneration:\np(z{o,t,c,y}) = D{zo,t,c,y}∏ d N (z{o,t,c,y}d|0, 1); p(t̂|zt, zc) = Bern(h1(zt, zc)) = Bern(gp(.))\np(ŷ|zy, zc, t̂) = Bern(t̂ · h2(zy, zc) + (1− t̂) · h3(zy, zc)) = Bern(Qp(.))\np(x̂bin|zc, zo, zt, zy) = Bern(h6(zc, zo, zt, zy))\np(x̂cont|zc, zo, zt, zy) = Dxcont∏ d=1 N (xcont,d|µd = h4(zc, zo, zt, zy), σ2d = h5(zc, zo, zt, zy))\n(3)\nThe proof for identifiability under the assumption of ignorability (or, alternatively, under the assumption that all unobserved confounders have been inferred from proxy variables) has been derived previously by Louizos et al. (2017) and Zhang et al. (2020). The factor zo is d-separated from t and y given x, and does not affect the identification of the causal effect, i.e., p(y|do(t),x) = p(y|do(t), z{t,o,y,c}) = p(y|t, zy, zc) (see Zhang et al. 2020 and Louizos et al. 2017). We impose the priors and parameterizations denoted in Eq. 3, where D(.) is the number of dimensions in the respective variable (latent or otherwise), and f1−11 and h1−6 represent fully connected neural network functions. The parameters for these neural networks are learnt via variational Bayesian approximate inference (Kingma & Welling, 2014) according to the following objective:\nLELBO = N∑ i=1 Eqcqtqyqo [ log p (x̂i|zt, zc, zy, zo) + log p ( t̂i|zt, zc ) + log p (ŷi|ti, zy, zc) ] − [DKL (q (zt|xi) ‖p (zt)) +DKL (q (zc|xi) ‖p (zc)) +DKL (q (zy|xi) ‖p (zy)) +DKL (q (zo|xi) ‖p (zo))] (4)\nNote that all Gaussian variance parameterizations are diagonal.
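As a concrete illustration of the inference parameterization in Eq. 3, here is a minimal PyTorch sketch of the four diagonal-Gaussian posterior heads. The layer sizes follow the IHDP settings listed in Appendix A.5, but the shared trunk, module names, and overall structure are our own illustrative assumptions; the actual TVAE implementation is written in Pyro and may differ:

```python
import torch
import torch.nn as nn

class TVAEEncoderSketch(nn.Module):
    """Four diagonal-Gaussian posteriors q(z_t|x), q(z_y|x), q(z_c|x), q(z_o|x)."""
    def __init__(self, x_dim, d_zt=10, d_zy=10, d_zc=15, d_zo=5, hidden=300):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(x_dim, hidden), nn.ELU(),
                                   nn.Linear(hidden, hidden), nn.ELU())
        # one (mu, log-variance) head per latent subset
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden, 2 * d)
            for name, d in [("zt", d_zt), ("zy", d_zy), ("zc", d_zc), ("zo", d_zo)]
        })

    def forward(self, x):
        h = self.trunk(x)
        out = {}
        for name, head in self.heads.items():
            mu, logvar = head(h).chunk(2, dim=-1)
            std = torch.exp(0.5 * logvar)
            z = mu + std * torch.randn_like(std)  # reparameterized sample
            # KL( N(mu, diag(std^2)) || N(0, I) ), per datapoint
            kl = 0.5 * (mu ** 2 + std ** 2 - 2.0 * torch.log(std) - 1.0).sum(-1)
            out[name] = (z, kl)
        return out
```

The four KL terms correspond to the four DKL terms of Eq. 4, and each sampled z feeds the appropriate downstream treatment, outcome, or reconstruction head.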
In cases where prior knowledge dictates a continuous rather than discrete outcome, equivalent parameterizations to those in Eq. 3 may be employed. For example, in the IHDP dataset, the outcome data are standardized to have a variance of 1, and the outcome generation model becomes a Gaussian with variance also equal to 1.\nNote that separate treatment and outcome classifiers are used both during inference and generation (Qq, gq and Qp, gp resp.). The classifiers for inference have separate parameters to those used during generation. Predictors or classifiers of outcome incorporate the two-headed approach of Shalit et al. (2017); the ground-truth t is used for Qq, whereas sampled treatments t̂ are used for Qp. For unseen test cases, either the ground-truth t or a sampled treatment t̂ from treatment classifier gp may be used to simulate an outcome. During training, t̂ is used.\nWe now introduce the targeted regularization, the purpose of which is to remove residual bias in the outcome estimates that is associated with the treatment assignment mechanism. Following Eq. 1, we define the fluctuation sub-model and corresponding logistic loss for finding ε as:\nQ̂(g, ti, zyi, zci, ε) = σ [ σ−1[Q(ti, zyi, zci)] + ε ( I(ti = 1) / g(ti = 1; zti, zci) − I(ti = 0) / g(ti = 0; zti, zci) ) ] (5)\nξi(Q̂; ε) = −yi log(Q̂(g, ti, zyi, zci, ε))− (1− yi) log(1− Q̂(g, ti, zyi, zci, ε)) (6)\nIn Eq. 5, I is the indicator function. For an unbounded regression loss, a mean squared error loss may be used (see appendix). Note that the logistic loss is suitable for continuous outcomes bounded between 0 and 1 (see van der Laan & Rose 2011, pp. 121-132 for proof). Putting it all together, we then optimize to find generative parameters for functions h1−6, inference parameters for functions f1−11, and fluctuation parameter ε as follows:\nL = min [ N∑ i ( LELBOi + λTL ξi(Q, g, ε) ) ] ; ∂L∗/∂ε ∣ ε=0 = ĪC∗ = 0 (7)\nwhere λTL represents a hyperparameter loss weight for the targeted regularization. At convergence, ε = 0 when Q and g become consistent estimators, satisfying the conditions for the EIC (see Eq. 2 and van der Laan & Rose 2011, pp. 125-128).\nA further element that differentiates our work from one other recent contribution (Shi et al., 2019) that uses targeted regularization is that the gradients resulting from ξ are not taken with respect to gp or gq (which are the propensity score arms, which we assume to be consistent and unbiased). Targeted learning is concerned with de-biasing the outcome classifier Q using propensity scores from g. In other words, assuming the propensity scores are well estimated, the targeted learning regularizer is intended to affect the outcome classifier only, and not the propensity score estimator. It is therefore more theoretically aligned with the targeted learning literature to apply regularization to the outcome estimator Q, and not to g. As per Eq. 1, in TMLE, g is assumed to be a consistent estimator, forming part of the de-biasing update process for Q, but it is not subject to update itself. In order to prevent the regularization from affecting the propensity arms, the gradients from the regularizer are only taken with respect to the parameters that influence the outcome classifier (which include upstream parameters for Qq as well as the more direct parameters of Qp). We use PyTorch's 'detach' method on the propensity scores when calculating the targeted regularization. This method decouples the propensity score arm from backpropagation relating to the computation of the regularization value.
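The following PyTorch sketch shows one way to implement Eqs. 5 and 6 with the propensity scores detached, as just described; the function name and clamping constants are our own illustrative assumptions:

```python
import torch

def targeted_regularizer(y, t, q_logit, g1, eps):
    """Fluctuation sub-model (Eq. 5) and logistic loss (Eq. 6).
    y, t: (N,) outcome in [0, 1] and binary treatment; q_logit: logit of
    Q(t_i, z_y, z_c); g1: propensity p(t=1 | z_t, z_c); eps: scalar nn.Parameter."""
    g1 = g1.detach().clamp(1e-3, 1 - 1e-3)  # decouple the propensity arm
    H = torch.where(t == 1, 1.0 / g1, -1.0 / (1.0 - g1))
    q_tilde = torch.sigmoid(q_logit + eps * H).clamp(1e-6, 1 - 1e-6)
    return -(y * torch.log(q_tilde) + (1 - y) * torch.log(1 - q_tilde)).mean()

# Used as in Eq. 7: total_loss = elbo_loss + lambda_TL * targeted_regularizer(...)
```

Because g1 is detached, gradients of the regularizer flow only into eps and the parameters upstream of q_logit, i.e., the outcome arm.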
In contrast, Dragonnet (Shi et al., 2019) applies regularization to the entire network.\nIn summary, the notable aspects of our model are as follows: the introduction of a new latent variable zo for factors unrelated to outcome and/or treatment, to aid the recovery of the true underlying structure; the ability to estimate both individual level and average treatment effects; and, as far as we are aware, the first incorporation of targeted learning in a deep latent variable approach, and one which backpropagates the regularization gradients to specific outcome-related parameters in the network." }, { "heading": "4 RELATED WORK", "text": "There are a number of ways to mitigate the problems associated with the confounding between the covariates and the treatment. For a review of such methods, readers are pointed to the recent survey by Yao et al. (2020). Here we consider methods that utilize neural networks as part of their models, but note that many non-neural-network methods exist (Chernozhukov et al., 2017; van der Laan & Rose, 2011; van der Laan & Starmans, 2014; Rubin, 2005; Hill, 2011; Athey & Imbens, 2016).\nPerhaps the most similar works to ours are those of Dragonnet (Shi et al., 2019) and TEDVAE (Zhang et al., 2020). We discuss the differences between these and TVAE in turn. Dragonnet incorporates the same targeted learning regularization process, which allows for the simultaneous optimization of Q and ε. However, the method sacrifices the ability to estimate individual treatment effects in favor of achieving good estimation of the average treatment effect across the sample. Indeed, they do not report PEHE. Finally, Dragonnet applies regularization to the entire network, whereas we specifically 'target' the regularization to the outcome prediction arm by restricting the backpropagation of gradients.\nTEDVAE, on the other hand, builds on CEVAE (Louizos et al., 2017) and seeks inference and disentanglement of the latent instrumental, risk, and confounding factors from proxy variables with a variational approach. However, it has no means to allocate latent variables that are unrelated to treatment and/or outcome (i.e., TVAE's zo). The advantage of including factors zo with a variational penalty is that the model has the option to use them, or not to use them, depending on whether they are necessary (i.e., the KL is pushed to zero). It is important not to force factors unrelated to treatment and outcome into z{c,y,t}, because doing so restricts the overlap between the class of models that can be represented using TEDVAE and the class of models describing the true data generating process.\nOther methods include GANITE (Yoon et al., 2018), which requires adversarial training and may therefore be more difficult to optimise. PM (Schwab et al., 2019), SITE (Yao et al., 2018), and MultiMBNN (Sharma et al., 2020) incorporate propensity score matching. TARNET (Shalit et al., 2017) inspired the two-headed outcome arm in our TVAE, as well as the three-headed architecture in (Shi et al., 2019). RSB (Zhang et al., 2019) incorporates a regularization penalty, based on the Pearson Correlation Coefficient, intended to reduce the association between latent variables predictive of treatment assignment and those predictive of outcome." }, { "heading": "5 EXPERIMENTS", "text": "We perform an ablation study, beginning with (a) TVAE (base), which is equivalent to TEDVAE; (b) TVAE with zo; and (c) TVAE with both zo and targeted regularization ξ during training.
In order to fairly evaluate the benefits of introducing zo, we ensure that the total number of latent dimensions remains constant. We undertake the ablation study on a synthetic dataset which we call TVAESynth, before comparing against other methods on both the IHDP dataset (Hill, 2011; Gross, 1993) and the Jobs dataset (LaLonde, 1986; Smith & Todd, 2005; Dehejia & Wahba, 2002). In particular, the synthetic dataset was intentionally created such that not all covariates are exogenous and so that there exist some latent factors not related to outcome or treatment. Thus, we should expect a significant improvement in performance to occur with the introduction of zo, demonstrating the importance of incorporating inductive bias that closely matches the true structure of the data. Note that while these datasets vary in whether the outcome variable is continuous (IHDP, TVAESynth) or binary (Jobs), the treatment variable is always binary. Whilst it is possible to undertake Targeted Learning on continuous treatment effects, we leave this to future work.\nFor the IHDP dataset, we evaluate our network on the Average Treatment Effect estimation error (eATE) and the Precision in Estimation of Heterogeneous Effect (PEHE). As per (Louizos et al., 2017; Shalit et al., 2017; Yao et al., 2018), for the Jobs dataset (for which we have only partial effect supervision) we evaluate our network on the Average Treatment effect on the Treated error (eATT) to approximate the eATE, and the policy risk Rpol to approximate the error on the CATE. See the appendix for definitions of these metrics. When estimating treatment effects, 100 samples are drawn for each set of input covariates xi. We compare against Dragonnet (Shi et al., 2019), TEDVAE (Zhang et al., 2020), CEVAE (Louizos et al., 2017), GANITE (Yoon et al., 2018), Targeted Maximum Likelihood Estimation (TMLE) (van der Laan & Rose, 2018), and TARNET + variants (Shalit et al., 2017). We provide results for both within-sample and out-of-sample performance. It is worth noting that within-sample and out-of-sample results are equally valid for treatment effect estimation, because the network is never supervised on treatment effect (Shi et al., 2019).\nFor model selection we follow Louizos et al. (2017) and Zhang et al. (2020) and use the minimum validation loss on the total objective function. Whilst some model selection heuristics exist that serve as surrogates for the eATE itself (e.g., see Hassanpour & Greiner 2019 or Athey & Imbens 2016), we take the same view as Zhang et al. (2020), namely that the development of our model 'should be self-sufficient and not rely on others'. Furthermore, we undertake minimal hyperparameter tuning, for the simple reason that, in real-world applications, the supervision required for effective tuning would not be available. For all experiments, we undertake 100 replications and provide mean and standard error. See the appendix for details on these datasets and the architecture, as well as training and testing details.\nAblation Study - TVAESynth: The results for the ablation study on the synthetic dataset are shown in Table 1. The results demonstrate that both eATE and PEHE are significantly improved by the incorporation of zo or targeted regularization, with a combination of the two yielding the best results for both within-sample and out-of-sample testing.
The fact that TVAE +zo outperforms TVAE +z∗o, despite the latter having a larger latent capacity, suggests that reducing the capacity of the latent space has a beneficial, regularizing effect. Based on the results of this ablation, the benefits of this regularizing effect appear to be distinct from the benefits that derive from the addition of miscellaneous factors. Finally, the results indicate negligible empirical benefits to restricting the backpropagation of the regularizer to non-propensity-related parameters. However, the restriction of the backpropagation (according to our implementation) more closely aligns with the original TMLE and efficient influence curve theory, and we therefore retain this feature for the remaining experiments.\nIHDP Results: Results on IHDP are shown in Table 2 and indicate state-of-the-art performance for both within-sample and out-of-sample eATE and PEHE. The results corroborate the ablation results in Table 1, in that the incorporation of zo and targeted regularization results in monotonic improvement over TEDVAE. TVAE is outperformed only by Dragonnet on within-sample eATE performance. However, that method does not provide estimates of individual CATE, and is limited to the estimation of average treatment effects.\nJobs Results: The results for the Jobs data are shown in Table 3. GANITE was found to perform the best across most metrics, although this method has been argued to be more reliant on larger sample sizes than others, on the basis that it performs relatively poorly on the smaller IHDP dataset (Yoon et al., 2018). Furthermore, GANITE relies on potentially unstable/unreliable adversarial training (Moyer et al., 2018; Lezama, 2019; Gabbay & Hosen, 2019). Finally, TVAE outperforms GANITE on eATT, is consistent (beyond 2 decimal places) across out-of-sample and within-sample evaluations with a lower standard error, and is competitive across all metrics. On this dataset, the concomitant improvements associated with the additional latent factors and targeted learning were negligible." }, { "heading": "6 CONCLUSION", "text": "In this work we aimed to improve existing latent variable models for causal parameter estimation in two ways: firstly, by introducing a latent variable to model factors unrelated to treatment and outcome, thereby enabling the model to more closely reflect the data structure; and secondly, by incorporating a targeted learning regularizer with selected backpropagation to further de-bias outcome predictions. Our experiments demonstrated concomitant improvements in performance, and our comparison against other methods demonstrated TVAE's ability to compete with and/or exceed the state of the art for both individual as well as average treatment effect estimation. For future work, we plan to explore the application of TVAE to longitudinal data with continuous or categorical treatment. There is also opportunity to explore the use of TVAE for inferring hidden confounders from proxies in the dataset, as well as interpreting the model to explore and validate what information is inferred in the model's latent factors. Additionally, it was noted from the ablation study results that the restriction of regularization gradients did not yield a significant change in performance when compared with applying the regularization to the entire network. As well as undertaking further experiments to understand this behavior, we propose to explore alternative ways to integrate the targeted learning update procedure into the learning procedure.
Finally, the ablation results indicated that part of the improvement associated with the introduction of zo is associated with a regularizing effect relating to the reduction in the dimensionality of zc. This aspect of the model's behavior also lends itself to future exploration." }, { "heading": "A APPENDIX", "text": "This appendix includes details on the metrics used for evaluating treatment effect estimation; background on Variational AutoEncoders; a formulation of the targeted learning regularizer for continuous, unbounded outcomes (as used with the IHDP dataset evaluations); details about the datasets; and training, testing, hyperparameter, architecture, and hardware details. Source code and data (including IHDP (Hill, 2011; Gross, 1993), Jobs (LaLonde, 1986; Smith & Todd, 2005), and our own TVAESynth) will also be attached as supplementary material for reproducibility purposes upon publication." }, { "heading": "A.1 METRICS", "text": "This section presents the metrics used for the experiments (Section 5 in the main paper).\nThe Average Treatment Effect (ATE) and the error on the estimation of the ATE (eATE) are given in Eq. 8:\nτ̂ (Q;x) = 1 N N∑ i=1 (Q(1,xi)−Q(0,xi)), eATE = | 1 N N∑ i=1 (τ̂ (Q;xi)− τ(xi))| (8)\nTo estimate the error on the model's capacity to model the Conditional Average Treatment Effect (CATE), the Precision in Estimation of Heterogeneous Effects (PEHE) is given in Eq. 9:\nPEHE = √ 1 N N∑ i=1 (τ̂ (Q;xi)− τ(xi))2 (9)\nIt can be seen from Eq. 8 that the ATE is essentially the expectation of the conditional treatment effect (conditioned on the covariates for each individual) over the data set (Jesson et al., 2020). For scenarios when a proportion of the dataset is from a Randomized Controlled Trial (RCT), as is the case for the Jobs dataset, we may use the error on the estimation of the Average Treatment effect on the Treated (eATT), which is given in Eq. 10 (Shalit et al., 2017; Louizos et al., 2017):\neATT = | 1 |T1| ∑ i∈T1 yi − 1 |T0| ∑ j∈T0 yj − 1 |T1| ∑ i∈T1 (Q(1,xi)−Q(0,xi))| (10)\nwhere T = T1 ∪ T0 constitutes all individuals in the RCT, and the subscripts denote whether or not those individuals were in the treatment (subscript 1) or control (subscript 0) groups. The first two terms in Eq. 10 comprise the true ATT, and the third term the estimated ATT. In datasets where the ground truth for the CATE is not available (as is the case with the Jobs dataset) we may use the policy risk as a proxy for PEHE:\nRpol = 1− ( E[yt=1|π(x) = 1]p(π(x) = 1) + E[yt=0|π(x) = 0]p(π(x) = 0) ) (11)\nwhere π(xi) = 1 is the policy to treat when ŷt=1i − ŷt=0i > ε, and π(xi) = 0 is the policy not to treat otherwise (Yao et al., 2018; Shalit et al., 2017). Here, ε is a treatment threshold. This threshold can be varied to understand how treatment inclusion rates affect the policy risk. For our experiments we set ε to zero, as per (Shalit et al., 2017; Louizos et al., 2017).
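A minimal numpy rendering of Eqs. 8 and 9 (function and variable names are our own illustrative choices):

```python
import numpy as np

def eate_and_pehe(q1, q0, tau_true):
    """Eqs. 8-9: q1, q0 are model estimates Q(1, x_i), Q(0, x_i) over the sample;
    tau_true holds the (semi-)synthetic ground-truth individual effects."""
    tau_hat = q1 - q0                                    # per-individual estimate
    eate = abs(np.mean(tau_hat) - np.mean(tau_true))     # error on the ATE
    pehe = np.sqrt(np.mean((tau_hat - tau_true) ** 2))   # precision for the CATE
    return eate, pehe
```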
" }, { "heading": "A.2 VARIATIONAL AUTOENCODERS", "text": "This part of the appendix is referenced in Section 3 (Methodology) of the main paper.\nWe now consider the theory behind a popular and powerful latent variable representation learning and density estimation method known as Variational AutoEncoders (VAEs) (Kingma & Welling, 2014). Although adversarial methods (Goodfellow et al., 2014) have been shown to be effective for density estimation, they are also troublesome to train. Hence we choose VAEs, which have more stable training dynamics (Moyer et al., 2018). A further motivation for the use of VAEs relates to their ability to infer latent variables. The ignorability assumption holds that all confounders are observed, and much causal inference is undertaken with this assumption (Mooij et al., 2010; Maier et al., 2013; Oktay, 2018; Silva et al., 2006). The use of a latent variable model allows us to infer unobserved/hidden confounders (Louizos et al., 2017), although the use of VAEs means that there is some uncertainty concerning the guarantees of the learned model (Rolinek et al., 2019; Dai et al., 2019; Lucas et al., 2019a;b). This uncertainty notwithstanding, previous implementations of latent variable models for causal inference show promising results in comparison with other methods (Louizos et al., 2017; Mayer et al., 2020; Hassanpour & Greiner, 2020).\nWe wish to encode the m-dimensional covariates x on a manifold via a stochastic mapping p(z|x), where latent variables z provide a compact representation. The distribution pθ(z|x) also serves as the posterior to the generative model pθ(x) = ∫ pθ(x|z)p(z)dz having parameters θ. Marginalizing out z is usually intractable, and the true posterior p(z|x) is unknown, so a simpler approximating posterior qφ(z|x) having parameters φ is introduced (Blei et al., 2018) such that:\nlog pθ(x) = Eqφ(z|x) [ log pθ(x|z)p(z) / qφ(z|x) + log qφ(z|x) / pθ(z|x) ] (12)\nTaken together with the expectation, the second term on the right-hand side of Eq. 12 represents the Kullback-Leibler Divergence (KLD) between the approximating posterior and the true posterior. The KLD is always positive, and by ignoring this term, we are left with a lower bound on the log-likelihood known as the variational lower bound, or Evidence Lower BOund (ELBO). The VAE provides the means to scale this amortized variational inference to intractable, high-dimensional problems, and minimizes the negative log-likelihood over a dataset of N samples by adjusting the parameters of neural networks {θ, φ} according to the ELBO:\n1 N N∑ i=1 − log pθ(xi) ≤ 1 N N∑ i=1 ( −Eqφ(z|xi) [log pθ (xi|z)] + βDKL [qφ(z|xi)‖p(z)] ) , (13)\nwhere β = 1 is used for the standard variational approximation procedure, but may be set empirically (Higgins et al., 2017), annealed (Burgess et al., 2018), or optimized according to the Information Bottleneck principle (Alemi et al., 2017; Tishby & Zaslavsky, 2015). The first term in Eq. 13 is the negative log-likelihood and is calculated in the form of a reconstruction error. The second term is the KLD between the approximating posterior and the prior, and therefore acts as a prior regularizer. Typically, the family of isotropic Gaussian distributions is chosen for the posterior qφ(.), and an isotropic Gaussian with unit variance for the prior p(z), which helps to encourage disentanglement (Higgins et al., 2017).
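For concreteness, a minimal PyTorch sketch of the (beta-weighted) objective in Eq. 13 for binary inputs; this is a generic VAE loss under our own naming, not the TVAE code itself:

```python
import torch
import torch.nn.functional as F

def elbo_loss(x, x_recon_logits, mu, logvar, beta=1.0):
    """Eq. 13 for binary x: reconstruction NLL plus a beta-weighted KL between
    q(z|x) = N(mu, diag(exp(logvar))) and the unit-variance prior N(0, I)."""
    nll = F.binary_cross_entropy_with_logits(x_recon_logits, x, reduction="sum")
    kl = 0.5 * torch.sum(mu.pow(2) + logvar.exp() - logvar - 1.0)
    return (nll + beta * kl) / x.shape[0]  # average over the batch
```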
" }, { "heading": "A.3 TARGETED REGULARIZATION FOR BOUNDED AND UNBOUNDED OUTCOMES", "text": "This part of the appendix is referenced in Section 3 (Methodology) of the main paper. In contrast with Eq. 6 in the main paper, this formulation of targeted regularization is for an unbounded, continuous outcome (as is the case in the IHDP experiments). A mean-squared error version of ξ, similar to the one found in (Shi et al., 2019), is given as follows:\nQ̂(ti, zyi, zci, ε) = Q(ti, zyi, zci) + ε ( I(ti = 1) / g(ti = 1; zti, zci) − I(ti = 0) / g(ti = 0; zti, zci) ) (14)\nξi(Q̂, g;φc,t,y, ε) = (yi − Q̂(ti, zyi, zci, ε))2 (15)\nHowever, we stress that this formulation is only appropriate for unbounded outcomes, and not continuous outcomes in general (van der Laan & Rose, 2011). For continuous, bounded outcomes, as well as binary outcomes, the NLL formulation (as in the main paper) has been shown to be more appropriate, and theoretically sound." }, { "heading": "A.4 DATASETS", "text": "This part of the appendix is referenced in Section 5 (Experiments) of the main paper.\nWe utilize 100 replications of the semi-synthetic Infant Health and Development Program (IHDP) dataset (Hill, 2011; Gross, 1993).4 The linked version (see footnote) corresponds with setting A of the NPCI data generating package (Dorie, 2016), which is the version that is used most frequently in other comparisons (e.g., see Shi et al. 2019; Shalit et al. 2017; Yao et al. 2018) and comprises 608 untreated and 139 treated samples (747 in total). There are 25 covariates, 19 of which are discrete/binary, and the rest are continuous. The outcome for the IHDP data is continuous and unbounded. Similarly to (Louizos et al., 2017; Shalit et al., 2017) and others, we utilize a 60/30/10 train/validation/test split.\n4Available from https://www.fredjo.com/, https://github.com/WeijiaZhang24/TEDVAE/ and elsewhere, and will be included in the supplementary folder with source code upon acceptance.\nWe also utilize the job outcomes dataset, which we refer to as Jobs (LaLonde, 1986; Smith & Todd, 2005).5 Unlike the IHDP dataset, Jobs is real-world data with a binary outcome. We follow a similar procedure to (Shalit et al., 2017), who indicate that they used the Dehejia and Wahba (Dehejia & Wahba, 2002) and PSID comparison samples. The Dehejia-Wahba sample comprises 260 treated samples and 185 control samples, along with the PSID comparison group comprising 2490 samples. The dataset contains a mixture of observational and Randomized Controlled Trial data. Similarly to (Louizos et al., 2017; Shalit et al., 2017) and others, we utilize a 56/24/20 train/validation/test split, and undertake 100 runs with varying random split allocations in order to acquire an estimate of average performance and standard error. Note that, between models, the same random seed is used both for initialization as well as dataset splitting, and therefore the variance due to these factors is equivalent across experiments.\n5Available from https://users.nber.org/˜rdehejia/data/.nswdata2.html.\nFinally, we introduce a new synthetic dataset named TVAESynth, which follows the structure shown in Figure 4. While the weightings are chosen relatively arbitrarily, the structure is intentionally designed such that there is a mixture of exogenous and endogenous covariates. This enables us to compare the performance of TVAE with and without zo (while keeping the total number of latent dimensions constant). These data comprise 1000 samples, a continuous outcome and binary treatment, 8 covariates, and generative/structural equations as follows:\nUzo,zc,zt,zy,y ∼ N (0,1) Ux1,x4,t ∼ Bernoulli(0.5) Ux2:3,x5:8 ∼ N (0,1) (16)\nzo = Uzo zy = Uzy zt = Uzt zc = Uzc (17)\nx1 ∼ Bernoulli(σ(zt + 0.1(Ux1 − 0.5))) x2 ∼ N (0.4zo + 0.3zc + 0.5zy + 0.1Ux2 , 0.2) (18)\nx3 ∼ N (0.2zo + 0.2zc + 1.2zt + 0.1Ux3 , 0.2) x4 ∼ Bernoulli(σ(0.6zo + 0.1(Ux4 − 0.5))) (19)\nx5 ∼ N (0.6zt + 0.1Ux5 , 0.1) x6 ∼ N (0.9zy + 0.1Ux6 , 0.1) (20)\nx7 ∼ N (0.5zo + 0.1Ux7 , 0.1) x8 ∼ N (0.5zo + 0.1Ux8 , 0.1) (21)\ntp = σ(0.2zc + 0.8zt + 0.1Ut) t ∼ Bernoulli(tp) (22)\ny := 0.2zc + 0.5zyt + 0.2t + 0.1Uy (23)\nInterventional distributions were generated by setting t equal to 1 and 0, thereby yielding the ground-truth ATE (≈ 0.8) and individual effects for evaluation purposes. The number of treated individuals is ≈ 70%. Note that for every replication, the 1000-sample evaluation set is regenerated, and so may have empirical statistics that vary. For experiments, we use an 80/20 train/test split.
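The generator in Eqs. 16-23 is straightforward to reproduce. Below is an illustrative numpy sketch under our own function names; the do_t argument implements the intervention used to obtain ground-truth effects (the empirical ATE from this sketch can be compared with the value reported above):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def tvae_synth(n=1000, seed=0, do_t=None):
    """Sketch of the TVAESynth generator (Eqs. 16-23). Passing do_t in {0, 1}
    intervenes on treatment, removing its dependence on z_t and z_c."""
    rng = np.random.default_rng(seed)
    zo, zy, zt, zc = (rng.normal(size=n) for _ in range(4))
    u = {k: rng.normal(size=n) for k in ["x2", "x3", "x5", "x6", "x7", "x8", "y"]}
    x1 = rng.binomial(1, sigmoid(zt + 0.1 * (rng.binomial(1, 0.5, n) - 0.5)))
    x2 = rng.normal(0.4 * zo + 0.3 * zc + 0.5 * zy + 0.1 * u["x2"], 0.2)
    x3 = rng.normal(0.2 * zo + 0.2 * zc + 1.2 * zt + 0.1 * u["x3"], 0.2)
    x4 = rng.binomial(1, sigmoid(0.6 * zo + 0.1 * (rng.binomial(1, 0.5, n) - 0.5)))
    x5 = rng.normal(0.6 * zt + 0.1 * u["x5"], 0.1)
    x6 = rng.normal(0.9 * zy + 0.1 * u["x6"], 0.1)
    x7 = rng.normal(0.5 * zo + 0.1 * u["x7"], 0.1)
    x8 = rng.normal(0.5 * zo + 0.1 * u["x8"], 0.1)
    ut = rng.binomial(1, 0.5, n)                            # U_t ~ Bernoulli(0.5)
    t = do_t * np.ones(n) if do_t is not None else \
        rng.binomial(1, sigmoid(0.2 * zc + 0.8 * zt + 0.1 * ut))
    y = 0.2 * zc + 0.5 * zy * t + 0.2 * t + 0.1 * u["y"]    # Eq. 23
    x = np.column_stack([x1, x2, x3, x4, x5, x6, x7, x8])
    return x, t, y

# Ground truth by intervention:
# ate = tvae_synth(do_t=1)[2].mean() - tvae_synth(do_t=0)[2].mean()
```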
" }, { "heading": "A.5 MODEL AND HYPERPARAMETERS", "text": "This part of the appendix is referenced in Section 3 (Methodology), as well as in Section 5 (Experiments) of the main paper. A block diagram of the TVAE architecture is shown in Fig. 5. For continuous outcomes (as in the IHDP dataset) we standardize the values and model them as a Gaussian with a fixed variance of 1, and a mean determined by the outcome arm. All binary outcomes in the model (e.g. treatment or the relevant covariates) are modelled as Bernoulli distributed, with a probability determined by the associated neural network function.\nWe now list the hyperparameters that were explored as part of model training. There may be room to improve on our figures with further hyperparameter tuning. However, given that the tuning of hyperparameters in a causal inference paradigm is problematic in general, we intentionally limited the space of hyperparameters (similarly to Zhang et al. 2020). Bold font indicates the settings that were used in the presented results.\nHyperparameter settings for the IHDP dataset experiments were: hidden layers: 3; the weight on targeted regularization λTL = {0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0}; an Adam (Kingma & Ba, 2017) optimizer with learning rate LR = {1e-3, 1e-4, 5e-5}; number of hidden neurons was 300; number of layers = 4; dimensionality of the latent factors was Dzt = Dzy = 10, Dzc = 15, Dzo = 5; batch size of 200; number of epochs 200; weight regularization 1e-4; and learning rate decay 5e-4.\nHyperparameter settings for the Jobs dataset experiments were: hidden layers: 3; the weight on targeted regularization λTL = {0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0}; an Adam (Kingma & Ba, 2017) optimizer with learning rate LR = {5e-5, 1e-5}; number of hidden neurons was 200; number of layers = 2; dimensionality of the latent factors was Dzt = Dzy = 6, Dzc = 8, Dzo = 4; batch size of 200; number of epochs 200; weight regularization 1e-4; and learning rate decay 5e-4.\nHyperparameter settings for the TVAESynth dataset experiments were: hidden layers: 2; the weight on targeted regularization λTL = {0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0}; an Adam (Kingma & Ba, 2017) optimizer with learning rate LR = 5e-5; number of hidden neurons was 20; number of layers = 2; dimensionality of the latent factors was Dzt = Dzy = Dzc = 2, Dzo = 1; batch size of 200; number of epochs 40; weight regularization 1e-4; and learning rate decay 5e-3.\nAs described in the main paper, wherever a range of hyperparameters was explored, the validation loss on the total objective function was used as the model selection criterion (i.e., not the causal effect estimation performance, which is not available in real-world scenarios)." }, { "heading": "A.6 SOFTWARE AND HARDWARE", "text": "The network is coded using Pyro (Bingham et al., 2019) and builds on base code by (Zhang et al., 2020). We train on a GPU (e.g. NVIDIA 2080Ti) driven by a 3.6GHz Intel I9-9900K CPU running Ubuntu 18.04. Training 200 epochs of the IHDP dataset (training split of 450 samples) takes approx. 35 seconds (0.175s per epoch)." } ]
2020
null
SP:00a28b287979b3cf803c21118f4e403a92e4f479
[ "This paper presents an explanation of why convolutional neural networks learn oriented bandpass filters - as has been commonly shown for early layers in various ConvNet architectures. The main argument is that oriented bandpass filters are the eigen-functions of localized convolution operators and in order to span the input signal space (regardless of its a structure) the network will always learn these functions as filters. Additionally, because this result is independent of input signal it should happen at all layers. These are demonstrated by examining filter of several trained neural network architectures and fitting them to Gabors - showing the fit is good with low residual error." ]
It has been repeatedly observed that convolutional architectures when applied to image understanding tasks learn oriented bandpass filters. A standard explanation of this result is that these filters reflect the structure of the images that they have been exposed to during training: Natural images typically are locally composed of oriented contours at various scales and oriented bandpass filters are matched to such structure. We offer an alternative explanation based not on the structure of images, but rather on the structure of convolutional architectures. In particular, complex exponentials are the eigenfunctions of convolution. These eigenfunctions are defined globally; however, convolutional architectures operate locally. To enforce locality, one can apply a windowing function to the eigenfunctions, which leads to oriented bandpass filters as the natural operators to be learned with convolutional architectures. From a representational point of view, these filters allow for a local systematic way to characterize and operate on an image or other signal. We offer empirical support for the hypothesis that convolutional networks learn such filters at all of their convolutional layers. While previous research has shown evidence of filters having oriented bandpass characteristics at early layers, ours appears to be the first study to document the predominance of such filter characteristics at all layers. Previous studies have missed this observation because they have concentrated on the cumulative compositional effects of filtering across layers, while we examine the filter characteristics that are present at each layer.
[]
[ { "authors": [ "R. Bracewell" ], "title": "The Fourier Transform and Its Applications", "venue": null, "year": 1986 }, { "authors": [ "J. Bruna", "S. Mallat" ], "title": "Invariant scattering convolution networks", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2013 }, { "authors": [ "J. Bruna", "S. Chintala", "Y. LeCun", "S. Piantino", "A. Szlam", "M. Tygert" ], "title": "A mathematical motivation for complex-valued convolutional networks", "venue": "Neural Computation,", "year": 2016 }, { "authors": [ "R. DeValois", "K. DeValois" ], "title": "Spatial Vision", "venue": null, "year": 1988 }, { "authors": [ "D. Eigen", "R. Fergus" ], "title": "Predicting depth, surface normals and sematic labels with a common muliscale convolutional architecture", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "C. Feichtenhofer", "A. Pinz", "R. Wildes", "A. Zisserman" ], "title": "What have we learned from deep representations for action recognition", "venue": null, "year": 2018 }, { "authors": [ "D. Field" ], "title": "Relations between the statistics of natural images and the response properties of cortical cells", "venue": "Journal of the Optical Society of America A,", "year": 1987 }, { "authors": [ "D. Gabor" ], "title": "Theory of communication", "venue": "Journal of the Institute of Electrical Engineers,", "year": 1946 }, { "authors": [ "W. Ge", "X. Lin", "Y. Yu" ], "title": "Weakly supervised complementary parts models for fine-grained image classification from the bottom up", "venue": null, "year": 2019 }, { "authors": [ "I. Hadji", "R. Wildes" ], "title": "A spatiotemporal oriented energy network for dynamic texture recognition", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "K. He", "X. Zhang", "S. Ren", "J. Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "B. Horn" ], "title": "Robot Vision", "venue": null, "year": 1986 }, { "authors": [ "D.H. Hubel", "T.N. Wiesel" ], "title": "Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex", "venue": "The Journal of Physiology,", "year": 1962 }, { "authors": [ "J H. Jacobsen", "J V. Gemert", "Z. Lou", "A W.M. Smeulders" ], "title": "Structured receptive fields in cnns", "venue": null, "year": 2016 }, { "authors": [ "B. Jahne", "H. Hausbecker" ], "title": "Computer Vision and Applications", "venue": null, "year": 2000 }, { "authors": [ "G. Kaiser" ], "title": "A Friendly Guide to Wavelets", "venue": "Modern Birkhauser Classics,", "year": 2011 }, { "authors": [ "Y. Karklin", "M. Lewicki" ], "title": "Emergence of complex cell properties by learning to generalize in natural scenes", "venue": "Nature, 457:83–86,", "year": 2009 }, { "authors": [ "A. Krizhevsky", "I. Sutskever", "G.E. Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In NIPS,", "year": 2012 }, { "authors": [ "J. Lim" ], "title": "Two-Dimensional Signal and Image Processing", "venue": null, "year": 1990 }, { "authors": [ "R. Linsker" ], "title": "From basic network principles to neural architecture", "venue": "Proc. National Academy of Sciences USA,", "year": 1986 }, { "authors": [ "D. MacKay" ], "title": "Information Theory, Inference, and Learning Algorithms", "venue": null, "year": 2003 }, { "authors": [ "A. Mahendran", "A. Vedaldi" ], "title": "Understanding deep image representations by inverting them", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "B. Olshausen", "D. 
Field" ], "title": "Emergence of simple-cell field properties by learning a sparse code for natural images", "venue": "Nature, 381:607–609,", "year": 1996 }, { "authors": [ "A. Oppenheim", "A. Willsky", "I. Young" ], "title": "Signals and Systems", "venue": null, "year": 1983 }, { "authors": [ "D. Rumelhart", "G. Hinton", "R. Williams" ], "title": "Learning representations by back-propagating", "venue": "errors. Nature,", "year": 1986 }, { "authors": [ "O. Russakovsky", "J. Deng", "H. Su", "J. Krause", "S. Satheesh", "S. Ma", "Z. Huang", "A. Karpathy", "A. Khosla", "M. Bernstein", "A. Berg", "L. Fei-Fei" ], "title": "Imagenet large scale visual recognition challenge", "venue": null, "year": 2015 }, { "authors": [ "W. Shang", "K. Sohn", "H. Lee D.A. Enlitic" ], "title": "Understanding and improving convolutional neural networks via concatenated rectified linear units", "venue": null, "year": 2016 }, { "authors": [ "E. Simoncelli", "B. Olshausen" ], "title": "Natural image statistics and neural representation", "venue": "Annual Review of Neuroscience,", "year": 2001 }, { "authors": [ "K. Simonyan", "A. Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "J. Springenberg", "A. Dosovitskiy", "T. Brox", "M. Riedmiller" ], "title": "Striving for simplicity: The all convolutional net", "venue": "In ICLR Workshop,", "year": 2015 }, { "authors": [ "C. Szegedy", "A. Toshev", "D. Erhan" ], "title": "Deep neural networks for object detection", "venue": "In NIPS,", "year": 2013 }, { "authors": [ "D. Tran", "L. Bourdev", "R. Fergus", "L. Torresani", "M. Paluri" ], "title": "Learning spatiotemporal features with 3d convolutional networks", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "D. Ulyanov", "A. Vedaldi", "V. Lempitsky" ], "title": "Deep image prior", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "J. Yosinski", "J. Clune", "A. Nguyen", "T. Fuchs", "H. Lipson" ], "title": "Understanding neural networks through deep visualization", "venue": "In ICML workshops,", "year": 2015 }, { "authors": [ "M. Zeiler", "R. Fergus" ], "title": "Visualizing and understanding convolutional networks", "venue": "In ECCV,", "year": 2014 }, { "authors": [ "T. Zhou", "M. Brown", "N. Snavely", "D. Lowe" ], "title": "Unsupervised learning of depth and ego-motion from video", "venue": null, "year": 2017 } ]
[ { "heading": null, "text": "It has been repeatedly observed that convolutional architectures when applied to image understanding tasks learn oriented bandpass filters. A standard explanation of this result is that these filters reflect the structure of the images that they have been exposed to during training: Natural images typically are locally composed of oriented contours at various scales and oriented bandpass filters are matched to such structure. We offer an alternative explanation based not on the structure of images, but rather on the structure of convolutional architectures. In particular, complex exponentials are the eigenfunctions of convolution. These eigenfunctions are defined globally; however, convolutional architectures operate locally. To enforce locality, one can apply a windowing function to the eigenfunctions, which leads to oriented bandpass filters as the natural operators to be learned with convolutional architectures. From a representational point of view, these filters allow for a local systematic way to characterize and operate on an image or other signal. We offer empirical support for the hypothesis that convolutional networks learn such filters at all of their convolutional layers. While previous research has shown evidence of filters having oriented bandpass characteristics at early layers, ours appears to be the first study to document the predominance of such filter characteristics at all layers. Previous studies have missed this observation because they have concentrated on the cumulative compositional effects of filtering across layers, while we examine the filter characteristics that are present at each layer." }, { "heading": "1 INTRODUCTION", "text": "" }, { "heading": "1.1 MOTIVATION", "text": "Convolutional networks (ConvNets) in conjunction with deep learning have shown state-of-the-art performance in application to computer vision, ranging across both classification (e.g., Krizhevsky et al. (2012); Tran et al. (2015); Ge et al. (2019)) and regression (e.g., Szegedy et al. (2013); Eigen & Fergus (2015); Zhou et al. (2017)) tasks. However, understanding of how these systems achieve their remarkable results lags behind their performance. This state of affairs is unsatisfying not only from a scientific point of view, but also from an applications point of view. As these systems move beyond the lab into real-world applications better theoretical understanding can help establish performance bounds and increase confidence in deployment.\nVisualization studies of filters that have been learned during training have been one of the key tools marshalled to lend insight into the internal representations maintained by ConvNets in application to computer vision, e.g., Zeiler & Fergus (2014); Yosinski et al. (2015); Mahendran & Vedaldi (2015); Shang et al. (2016); Feichtenhofer et al. (2018). Here, an interesting repeated observation is that early layers in the studied networks tend to learn oriented bandpass filters, both in two image spatial dimenstions, (x, y)>, in application to single image analysis as well as in three spatiotemporal dimensions, (x, y, t)>, in application to video. An example is shown in Fig. 1. Emergence of such filters seems reasonable, because local orientation captures the first-order correlation structure of the data, which provides a reasonable building block for inferring more complex structure (e.g., local measurements of oriented structure can be assembled into intersections to capture corner structure, etc.). 
Notably, however, more rigorous analyses of exactly why oriented bandpass filters might be\nlearned has been limited. This state of affairs motivates the current paper in its argument that the analytic structure of ConvNets constrains them to learn oriented bandpass filters." }, { "heading": "1.2 RELATED RESEARCH", "text": "Visualization of receptive field profiles (i.e., pointspread functions Lim (1990)) of the convolutional filters learned by contemporary ConvNets is a popular tool for providing insight into the image properties that are being represented by a network. A notable trend across these studies is that early layers appear to learn oriented bandpass filters in both two spatial dimensions, e.g., Zeiler & Fergus (2014); Springenberg et al. (2015); Yosinski et al. (2015); Shang et al. (2016), as well as three spatiotemporal dimensions, e.g., Feichtenhofer et al. (2018). Indeed, earlier studies with architectures that also constrained their filters to be convolutional in nature, albeit using a Hebbian learning strategy MacKay (2003) rather than the currently dominant back-propagation approach Rumelhart et al. (1986), also yielded filters that visualized as having oriented bandpass filter characteristics Linsker (1986). Interestingly, biological vision systems also are known to show the presence of oriented bandpass filters at their earlier layers of processing in visual cortex; see Hubel & Wiesel (1962) for pioneering work along these lines and for more general review DeValois & DeValois (1988).\nThe presence of oriented bandpass filters in biological systems often has been attributed to their being well matched to the statistics of natural images Field (1987); Olshausen & Field (1996); Karklin & Lewicki (2009); Simoncelli & Olshausen (2001), e.g., the dominance of oriented contours at multiple scales. Similar arguments have been made regarding why such filters are learned by ConvNets. Significantly, however, studies have shown that even when trained with images comprised of random noise patterns, convolutional architectures still learn oriented bandpass filters Linsker (1986). These latter results suggest that the emergence of such filter tunings cannot be solely attributed to systems being driven to learn filters that were matched to their training data. Similarly, recent work showed that randomly initialized networks serve well in image restoration problems Ulyanov et al. (2018).\nSome recent multilayer convolutional architectures have specified their earliest layers to have oriented bandpass characteristics, e.g., Bruna & Mallat (2013); Jacobsen et al. (2016); Hadji & Wildes (2017); indeed, some have specified such filters across all layers Bruna & Mallat (2013); Hadji & Wildes (2017). These design decisions have been variously motivated in terms of being well matched to primitive image structure Hadji & Wildes (2017) or providing useful building blocks for learning higher-order structures Jacobsen et al. (2016) and capturing invariances Bruna & Mallat (2013). Other work has noted that purely mathematical considerations show that ConvNets are well suited to realizing filter designs for capturing multiscale, windowed spectra Bruna et al. (2016); however, it did not explicitly established the relationship to eigenfunctions of convolution nor offer an explanation for why deep-learning yields oriented bandpass filters when applied to ConvNets. It also did not provide empirical investigation of exactly what filter characteristics are learned at each convolutional layer of ConvNets." 
}, { "heading": "1.3 CONTRIBUTIONS", "text": "In the light of previous research, the present work appears to be the first to offer an explanation of why ConvNets learn oriented bandpass filters, independently of the input, by appeal to the inherent properties of their architectures in terms of their eigenfunctions. By definition, the convolutional layers of a ConvNet are governed by the properties of convolution. For present purposes, a key property is that the eigenfunctions of convolution are complex exponentials. Imposing locality on the eigenfunctions leads to oriented bandpass filters, which therefore are the appropriate filters to be learned by a ConvNet. Indeed, these theoretical considerations suggest that oriented bandpass filters should be learned at all layers of a ConvNet, not just at early layers. We provide empirical support for this observation by examining filters across all convolutional layers of three standard ConvNets (AlexNet Krizhevsky et al. (2012), ResNet He et al. (2016) and VGG16 Simonyan & Zisserman (2014)) and show that both numerically and visually they are well characterized as having learned oriented bandpass filters at all their convolutional layers. Our empirical study is distinct from earlier visualization efforts, which concentrate on the cumulative compositional results of filtering across layers that typically show emergence of complicated structures in the layerwise feature maps, while we focus on the complementary question of what primitive filter characteristics have been learned at each individual layer and offer both numerical as well as visualization analyses." }, { "heading": "2 THEORY", "text": "This section details a novel explanation for why ConvNets learn oriented bandpass filters. The first two subsections largely review standard material regarding linear systems theory Oppenheim et al. (1983) and related topics Kaiser (2011); Kusse & Westwig (2006), but are necessary to motivate properly our explanation. The final subsection places the material in the context of ConvNets." }, { "heading": "2.1 EIGENFUNCTIONS OF CONVOLUTION", "text": "Let L be a linear operator on a function space. The set of eigenfunctions φn associated with this operator satisfy the condition Kusse & Westwig (2006)\nLφn = λnφn. (1) That is, the operator acts on the eigenfunctions simply via multiplication with a constant, λn, referred to as the eigenvalue. It sometimes also is useful to introduce a (positive definite) weighting function, w, which leads to the corresponding constraint\nLφn = λnwφn. (2) For cases where any function in the space can be expanded as a linear sum of the eigenfunctions, it is said that the collection of eigenfunctions form a complete set. Such a set provides a convenient and canonical spanning representation.\nLet x = (x1, x2, . . . , xn)>, a = (a1, a2, . . . , an)> and u = (u1, u2, . . . , un)>. For the space of convolutions, with the convolution of two functions, f(x) and h(x) defined as\nf(x) ∗ h(x) = ∫ ∞ −∞ f(x− a)h(a) da (3)\nit is well known that functions of the form f(x) = eiu >x are eigenfunctions of convolution Oppenheim et al. (1983), i.e.,∫ ∞ −∞ eiu >(x−a)h(a) da = eiu >x ∫ ∞ −∞ e−iu >ah(a) da (4)\nwith the equality achieved via appealing to eiu >(x−a) = eiu >xe−iu >a and subsequently factoring eiu >x outside the integral as it is independent of a. The integral on the right hand side of (4),∫ ∞\n−∞ e−iu >ah(a) da, (5)\nis the eigenvalue, referred to as the modulation transfer function (MTF) in signal processing Oppenheim et al. 
(1983). Noting that eiu>x = cos(u>x) + i sin(u>x) leads to the standard interpretation of u in terms of the frequency of the function (e.g., input signal).\nGiven that the eigenfunctions of convolution are parameterized in terms of their frequencies, it is useful to appeal to the Fourier transform of a function f(x), where we use the form Horn (1986)\nF(u) = ∫ ∞ −∞ f(x)e−iu>x dx, (6)\nbecause any convolution can then be represented in terms of how it operates via simple multiplication of the eigenvalues, (5), with the eigenfunctions, eiu>x, with u given by (6). Thus, this decomposition provides a canonical way to decompose f(x) and explicate how a convolution operates on it." }, { "heading": "2.2 IMPOSING LOCALITY", "text": "Understanding convolution purely in terms of its eigenfunctions and eigenvalues provides only a global representation of operations, as notions of signal locality, x, are lost in the global transformation to the frequency domain, u. This state of affairs often is unsatisfactory from a representational point of view, because one wants to understand the structure of the signal (e.g., an image) on a more local basis (e.g., one wants to detect objects as well as their image coordinates). This limitation can be ameliorated by defining a windowed Fourier transform Kaiser (2011), as follows Jahne & Hausbecker (2000).\nLet w(x) be a windowing function that is positive valued, symmetric and monotonically decreasing from its center, so as to provide greatest emphasis at its center. A Windowed Fourier Transform (WFT) of f(x) can then be defined as\nF(uc,x;w) = ∫ ∞ −∞ f(a)w(a− x)e−iu>c a da. (7)\nMaking use of the symmetry constraint that we have enforced on the windowing function allows for w(x) = w(−x), so that the WFT, (7), can be rewritten as\nF(uc,x;w) = ∫ ∞ −∞ f(a)w(x− a)eiu>c (x−a)e−iu>c x da, (8)\nwhich has the form of a convolution\nf(x) ∗ ( w(x)eiu>c x ) (9)\nwith the inclusion of an additional phase component, e−iu>c x.\nTo provide additional insight into the impact the WFT convolution, (9), has on the function, f(x), it is useful to examine the pointspread function, w(x)eiu>c x, in the frequency domain by taking its Fourier transform (6), i.e., to calculate its MTF. We have\n∫ ∞ −∞ w(x)eiu>c xe−iu>x dx, (10)\nwhich via grouping by coefficients of x becomes\n∫ ∞ −∞ w(x)e−i(u−uc)>x dx. (11)\nExamination of (11) reveals that it is exactly the Fourier transform of the window function, cf. (6), shifted to the center frequencies, uc. Thus, operation of the WFT convolution, (9), on a function, f(x), passes the central frequency, uc, relatively unattenuated, while it suppresses frequencies that are further away from the central frequency according to the shape of the window function, w(x), i.e., it operates as a bandpass filter. Thus, convolution with a bank of such filters with varying central frequencies, uc, has exactly the desired result of providing localized measures of the frequency content of the function f(x).\nReturning to the pointspread function itself, w(x)eiu>c x, and recalling that eiu>c x = cos(u>c x) + i sin(u>c x), it is seen that in the signal domain, x, the filter will oscillate along the direction of uc while remaining relatively constant in the orthogonal direction, even as there is an overall amplitude fall-off with distance from the center according to the shape of w(x), i.e., we have an oriented bandpass filter.\nAs a specific example Jahne & Hausbecker (2000), taking w(x) to be an n-dimensional Gaussian-like function, g(x;σ) = κ e−‖x‖2/σ2, with σ the standard deviation and κ a scaling factor, yields an n-dimensional Gabor-like filter,\ng(x;σ)eiu>c x = g(x;σ) ( cos(u>c x) + i sin(u>c x) ), (12)\nwhich provides good joint localization of signal content in the signal and frequency domains Gabor (1946). Indeed, visualization of these filters in two spatial dimensions (Figure 2) provides a strikingly similar appearance to those presented in Figure 1, if in an idealized form. In particular, their pointspread functions oscillate according to a frequency ‖uc‖ along the direction uc/‖uc‖, while remaining relatively constant in the orthogonal direction, even as there is an overall amplitude fall-off with distance from the center. In the frequency domain, they have peak power at uc with a fall-off following a Gaussian-like shape with standard deviation, 1/σ, that is the inverse of that used in specifying the window, w(x). These observations hold because we already have seen, (11), that the frequency domain representation of such a function is the Fourier transform of the window function, w(x), shifted to the center frequencies, uc; furthermore, the Fourier transform of a function of the form g(x;σ) has a similar form, albeit with an inverse standard deviation Bracewell (1986)." }, { "heading": "2.3 IMPLICATIONS FOR CONVNETS", "text": "Convolutions in ConvNets serve to filter the input signal to highlight its features according to the learned pointspread functions (convolutional kernels). Thus, convolution with the oriented filters shown in Figure 1 will serve to highlight aspects of an image that are correspondingly oriented and at corresponding scales. The question at hand is, "Why did the ConvNet learn such filters?" The previous parts of this section have reviewed the fact that complex exponentials of the form eiu>x = cos(u>x) + i sin(u>x) are the eigenfunctions of convolution. Thus, such frequency dependent functions similarly serve as the eigenfunctions of the convolutional operations in ConvNets. In particular, this result is a basic property of the convolutional nature of the architecture, independent of the input to the system. Thus, for any convolution in a ConvNet the frequency dependent eigenfunctions, eiu>x, provide a systematic way to represent its input.\nAs with the general discussion of locality presented in Subsection 2.2, for the specifics of ConvNets it also is of interest to be able to characterize and operate locally on a signal. At the level of convolution, such processing is realized via pointspread functions that operate as bandpass filters, (9).
Like any practical system, ConvNets will not capture a continuous range of bandpass characteristics, as given by $u_c$; the sampling will be limited by the number of filters the designer allows at each layer, i.e., as a metaparameter of the system. Nevertheless, making use of these filters provides a systematic approach to representing the input signal.

Overall, the very convolutional nature of ConvNets inherently constrains and even defines the filters that they learn, independent of their input or training. In particular, learning bandpass filters provides a canonical way to represent and operate on their input, as these serve as the localized eigenfunctions of convolution. As a ConvNet is exposed to more and more training data, its representation is optimized by spanning as much of the data as it can. Within the realm of convolution, in which ConvNet conv layers are defined, oriented bandpass filters provide the solution. They arise as the locality constrained eigenfunctions of convolution and thereby have the potential to provide a span of any input signal in a localized manner. Thus, ConvNets are optimized by learning exactly such filters. Notably, since this explanation for why ConvNets learn oriented bandpass filters is independent of training data, it can explain why such filters emerge even when the training data lacks such pattern structure, including training on random input signals, e.g., Linsker (1986). Moreover, the explanation is independent of the learning algorithm, as any algorithm driving its learned representation to span the space of input signals achieves its goal in the eigenfunctions of the convolutional architecture, i.e., oriented bandpass filters. Sec. 1.2 reviewed work showing that both back propagation and Hebbian learning yield oriented bandpass filters as their learned convolutional representations.

Our analysis of oriented bandpass filters as the localized eigenfunctions of convolution is not specific to early ConvNet layers, but rather applies to any convolutional layer. The result thereby makes a theory-based prediction that oriented bandpass filters should be learned at all conv layers in a ConvNet. As reviewed in Sec. 1.2, previous studies have demonstrated that filters learned at early ConvNet layers visualize as oriented bandpass filters; however, previous studies have paid little attention to examining the pointspread functions of learned convolutional filters deeper in a network. Instead, studies of learned filters deep in a ConvNet have focused on the cumulative effect of filtering across all layers up to and including a particular layer under consideration. In the following, we present a complementary study that empirically examines filters that have been learned at a given layer without reference to previous layers to see whether they appear as oriented bandpass filters." }, { "heading": "3 EMPIRICAL SUPPORT", "text": "In this section, we present empirical support for the theory-based prediction that oriented bandpass filters should be learned at all layers in a ConvNet. We examine three standard ConvNets (AlexNet Krizhevsky et al. (2012), ResNet50 He et al. (2016) and VGG16 Simonyan & Zisserman (2014)) from both numerical and visualization perspectives. In all cases, we make use of publicly available implementations of the architectures AlexNet; ResNet; VGG16."
}, { "heading": "3.1 NUMERICAL STUDIES", "text": "To study numerically whether ConvNets learn oriented bandpass filters, we perform a least-squares fit of the derived oriented bandpass filter, (12), to all learned convolutional filters at all layers of each model. While other oriented bandpass filter models can be used here, the model considered, (12), is a natural choice in the present context, as it results from our theoretical analysis in Sec. 2. In particular, we fit the free parameters of a 2D instantiation of the model, (i.e., the center frequencies, uc, and the standard deviation, σ), to the learned pointspread values of every convolutional filter at every layer of AlexNet Krizhevsky et al. (2012), ResNet50 He et al. (2016) and VGG16 Simonyan & Zisserman (2014). Finally, we take the root-mean-square (RMS) error residual between each individual fit of the model and the corresponding learned pointspread function as indicative of how well the learned filter is captured by an oriented bandpass filter, with smaller error indicative of a better fit. Results are plotted in Fig. 3.\nFor all three architectures the histograms of residuals collapsed across layers shows that in all cases the fitting errors mostly lie below 0.1, with the bulk of errors residing below 0.01 and lower. These small residuals indicate generally good fits of the learned filters to the oriented bandpass model.\nA more detailed look can be had by considering box plots of errors by layer. Here, the results for AlexNet, for example, show that the median fitting error is under 0.04 at layer one and subsequently decreased to under 0.02 for all subsequent layers as well as for the aggregated fit across all layers. Moreover, 95% of the data lies under 0.15 at layer 1 and under 0.04 at all other layers as well as the aggregate across layers. To place these numbers in perspective, we perturbed the otherwise analytically defined pointspread function, (12), with various amounts of noise and then compared to the same function without corruption to see how much corruption would yield various RMS errors. Results are summarized in Fig. 4. For example, we find that random noise within only ≈6% of the distribution of the uncorrupted filter values yields 0.04 residual, which demonstrates that the discrepancy from the purely analytic form is very small, indicating that the observed fits are quite good. Still, as indicated by the raw histograms, Fig. 3, some outliers are observed, where the errors approach 0.1 and beyond. Visual inspection of these cases indicates that they arise when the learning process apparently has failed, as the learned filter has an essentially constant valued pointspread function (i.e., is flat) or else has no discernible structure.\nResults for ResNet50 and VGG16 show similar patterns to those of AlexNet, with the main difference being that the distributions are shifted to larger values at their first layers; however, they subsequently conform to values similar to those of AlexNet thereafter. Also, it is seen that the layer\none distribution for VGG16 is shifted upward relative to ResNet50. Perhaps this pattern arises because the layer one filter sizes are largest for AlexNet and smallest for VGG16, with ResNet50 lying in between, while at layer two and beyond VGG16 and ResNet50 have the same size filters and all three architectures have the same size filters at layer four and beyond. 
Interestingly, for all architectures it is seen that there is a marked decrease in the residual values between layers one and two. This result parallels observations made in earlier visualization studies that learned filters typically begin to show more strongly oriented structure after the first layer, e.g., Linsker (1986); Springenberg et al. (2015). Overall, these numerical experiments indicate that all three of the considered architectures learn oriented bandpass filters at all layers." }, { "heading": "3.2 VISUALIZATION STUDIES", "text": "For visualization studies, we plot the pointspread functions for a selection of the learned filters at each layer and display them as images. In the interest of space, here we focus on AlexNet, although the visualization results for the other architectures similarly are supportive of oriented bandpass filters, as would be expected from the numerical results of Sec. 3.1. In particular, Fig. 5 shows plots of learned pointspread functions from all layers of AlexNet. The shown pointspread functions are representative of the median residual value for each layer, and they are paired with a visualization of the corresponding fit to the oriented bandpass model, (12). Inspection of these plots shows that in all cases oriented structure is visible in the learned pointspread functions: in particular, proceeding from layers 1 to 8, orientations are manifest approximately along slight diagonal upper right to lower left, vertical, diagonal upper right to lower left, vertical, vertical, diagonal upper left to lower right, horizontal and vertical, respectively. The learned and fit pointspread functions are qualitatively very similar. Moreover, the visualizations suggest improved fits between the learned and fit models as the layer index increases, similar to what is seen in the numerical results of Sec. 3.1. Overall, these visualization results corroborate the numerical results indicating that oriented bandpass filters are indeed learned at all layers of the considered ConvNets." }, { "heading": "4 SUMMARY", "text": "Previous studies have demonstrated that learned filters at the early layers of convolutional networks visualize as oriented bandpass filters. This phenomenon typically is explained via appeal to natural image statistics, i.e., natural images are dominated by oriented contours manifest across a variety of scales, and oriented bandpass filters are well matched to such structure. We have offered an alternative explanation in terms of the structure of convolutional networks themselves. Given that their convolutional layers necessarily operate within the space of convolutions, learning oriented bandpass filters provides the system with the potential to span possible input, even while preserving a notion of locality in the signal domain. Notably, our work is applicable not just to early ConvNet layers, but to all conv layers in such networks. We have provided empirical support for this claim, showing that oriented bandpass filters are indeed learned at all layers of three standard ConvNets. These results not only provide new insights into the operations and representations learned by ConvNets, but also suggest interesting future research. In particular, our work motivates investigation of novel architectures that explicitly constrain their convolutional filters to be oriented bandpass in a learning-based framework.
In such a framework, the training process would not need to learn the numerical values of each and every individual filter value (i.e., each filter tap), but would merely need to learn a much smaller number of parameters, e.g., the values of the center frequency, $u_c$, and the standard deviation, $\sigma$, associated with the Gabor-like filter derived above, (12), or some other suitably parameterized filter. Such a constrained learning approach would require a much less intensive training procedure (e.g., involving far less data) compared to learning values for all individual filter taps, owing to the drastically reduced number of parameters that need to be estimated, even while being able to tune to the specifics of the task that is being optimized." } ]
2020
null
SP:41065df46326876b201c82ec287033ff43e9bcc8
[ "Given a rewardless environment MDP, the authors want to find a set of policies for the worst case reward function. Their process involves two steps: first to select the right set of policies and second to combine them to generate a new policy. The policy selection is made with the only goal to maximize the expected return of highest achieving policy of the set in the worst-case reward function (Equation (7))." ]
We study the problem of how to construct a set of policies that can be composed together to solve a collection of reinforcement learning tasks. Each task is a different reward function defined as a linear combination of known features. We consider a specific class of policy compositions which we call set improving policies (SIPs): given a set of policies and a set of tasks, a SIP is any composition of the former whose performance is at least as good as that of its constituents across all the tasks. We focus on the most conservative instantiation of SIPs, set-max policies (SMPs), so our analysis extends to any SIP. This includes known policy-composition operators like generalized policy improvement. Our main contribution is a policy iteration algorithm that builds a set of policies in order to maximize the worst-case performance of the resulting SMP on the set of tasks. The algorithm works by successively adding new policies to the set. We show that the worst-case performance of the resulting SMP strictly improves at each iteration, and the algorithm only stops when there does not exist a policy that leads to improved performance. We empirically evaluate our algorithm on a grid world and also on a set of domains from the DeepMind control suite. We confirm our theoretical results regarding the monotonically improving performance of our algorithm. Interestingly, we also show empirically that the sets of policies computed by the algorithm are diverse, leading to different trajectories in the grid world and very distinct locomotion skills in the control suite.
[ { "affiliations": [], "name": "Tom Zahavy" }, { "affiliations": [], "name": "Andre Barreto" }, { "affiliations": [], "name": "Daniel J Mankowitz" }, { "affiliations": [], "name": "Shaobo Hou" }, { "affiliations": [], "name": "Brendan O’Donoghue" }, { "affiliations": [], "name": "Iurii Kemaev" } ]
[ { "authors": [ "Pieter Abbeel", "Andrew Y Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In Proceedings of the twenty-first international conference on Machine learning,", "year": 2004 }, { "authors": [ "André Barreto", "Will Dabney", "Rémi Munos", "Jonathan J Hunt", "Tom Schaul", "Hado P van Hasselt", "David Silver" ], "title": "Successor features for transfer in reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Andre Barreto", "Diana Borsa", "John Quan", "Tom Schaul", "David Silver", "Matteo Hessel", "Daniel Mankowitz", "Augustin Zidek", "Remi Munos" ], "title": "Transfer in deep reinforcement learning using successor features and generalised policy improvement", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "André Barreto", "Shaobo Hou", "Diana Borsa", "David Silver", "Doina Precup" ], "title": "Fast reinforcement learning with generalized policy updates", "venue": "Proceedings of the National Academy of Sciences,", "year": 2020 }, { "authors": [ "Steven Diamond", "Stephen Boyd" ], "title": "CVXPY: A Python-embedded modeling language for convex optimization", "venue": "Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "T.G. Dietterich" ], "title": "Hierarchical reinforcement learning with the MAXQ value function decomposition", "venue": "Journal of Artificial Intelligence Research,", "year": 2000 }, { "authors": [ "Laurent El Ghaoui", "Hervé Lebret" ], "title": "Robust solutions to least-squares problems with uncertain data", "venue": "SIAM Journal on matrix analysis and applications,", "year": 1997 }, { "authors": [ "Benjamin Eysenbach", "Abhishek Gupta", "Julian Ibarz", "Sergey Levine" ], "title": "Diversity is all you need: Learning skills without a reward function", "venue": "arXiv preprint arXiv:1802.06070,", "year": 2018 }, { "authors": [ "Marguerite Frank", "Philip Wolfe" ], "title": "An algorithm for quadratic programming", "venue": "Naval research logistics quarterly,", "year": 1956 }, { "authors": [ "Dan Garber", "Elad Hazan" ], "title": "A linearly convergent conditional gradient algorithm with applications to online and stochastic optimization", "venue": "arXiv preprint arXiv:1301.4666,", "year": 2013 }, { "authors": [ "Karol Gregor", "Danilo Jimenez Rezende", "Daan Wierstra" ], "title": "Variational intrinsic control", "venue": "arXiv preprint arXiv:1611.07507,", "year": 2016 }, { "authors": [ "Christopher Grimm", "Irina Higgins", "Andre Barreto", "Denis Teplyashin", "Markus Wulfmeier", "Tim Hertweck", "Raia Hadsell", "Satinder Singh" ], "title": "Disentangled cumulants help successor representations transfer to new tasks", "venue": null, "year": 1911 }, { "authors": [ "Jacques Guélat", "Patrice Marcotte" ], "title": "Some comments on wolfe’s ‘away step", "venue": "Mathematical Programming,", "year": 1986 }, { "authors": [ "Steven Hansen", "Will Dabney", "Andre Barreto", "Tom Van de Wiele", "David Warde-Farley", "Volodymyr Mnih" ], "title": "Fast task inference with variational intrinsic successor features", "venue": null, "year": 1906 }, { "authors": [ "Martin Jaggi" ], "title": "Revisiting frank-wolfe: Projection-free sparse convex optimization", "venue": null, "year": 2013 }, { "authors": [ "Arnab Nilim", "Laurent El Ghaoui" ], "title": "Robust control of markov decision processes with uncertain transition matrices", "venue": "Operations Research,", "year": 2005 }, { "authors": [ "Martin L 
Puterman" ], "title": "Markov decision processes: discrete stochastic dynamic programming", "venue": null, "year": 1984 }, { "authors": [ "Stuart J Russell", "Andrew Zimdars" ], "title": "Q-decomposition for reinforcement learning agents", "venue": "In Proceedings of the 20th International Conference on Machine Learning", "year": 2003 }, { "authors": [ "Satinder P Singh", "Tommi Jaakkola", "Michael I Jordan" ], "title": "Reinforcement learning with soft state aggregation", "venue": "In Advances in neural information processing systems,", "year": 1995 }, { "authors": [ "Nathan Sprague", "Dana Ballard" ], "title": "Multiple-goal reinforcement learning with modular sarsa", "venue": null, "year": 2003 }, { "authors": [ "Richard S Sutton", "Doina Precup", "Satinder Singh" ], "title": "Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning", "venue": "Artificial intelligence,", "year": 1999 }, { "authors": [ "Richard S. Sutton", "Doina Precup", "Satinder Singh" ], "title": "Between MDPs and semi-MDPs: a framework for temporal abstraction in reinforcement learning", "venue": "Artificial Intelligence,", "year": 1999 }, { "authors": [ "Philip Wolfe" ], "title": "Convergence theory in nonlinear programming", "venue": "Integer and nonlinear programming,", "year": 1970 }, { "authors": [ "Huan Xu", "Shie Mannor" ], "title": "Robustness and generalization", "venue": "Machine learning,", "year": 2012 }, { "authors": [ "Huan Xu", "Constantine Caramanis", "Shie Mannor" ], "title": "Robust regression and lasso", "venue": "In Advances in neural information processing systems,", "year": 2009 }, { "authors": [ "Tom Zahavy", "Alon Cohen", "Haim Kaplan", "Yishay Mansour" ], "title": "Apprenticeship learning via frankwolfe", "venue": "AAAI, 2020,", "year": 2020 }, { "authors": [ "Tom Zahavy", "Alon Cohen", "Haim Kaplan", "Yishay Mansour" ], "title": "Average reward reinforcement learning with unknown mixing times", "venue": "The Conference on Uncertainty in Artificial Intelligence (UAI),", "year": 2020 }, { "authors": [ "Tom Zahavy", "Avinatan Hasidim", "Haim Kaplan", "Yishay Mansour" ], "title": "Planning in hierarchical reinforcement learning: Guarantees for using local policies", "venue": "In Algorithmic Learning Theory, pp", "year": 2020 }, { "authors": [ "Tom Zahavy", "Zhongwen Xu", "Vivek Veeriah", "Matteo Hessel", "Junhyuk Oh", "Hado van Hasselt", "David Silver", "Satinder Singh" ], "title": "A self-tuning actor-critic algorithm. Advances in neural information processing systems, 2020d", "venue": null, "year": 2020 }, { "authors": [], "title": "Πn,w. Proof. We know from previous results in the literature", "venue": "vGPI", "year": 2017 }, { "authors": [ "W C REGULARIZING" ], "title": "In this section we experimented with constraining the set of rewards to include only vectors w whose mean is zero. Since we are using CVXPY (Diamond & Boyd, 2016) to optimize for w (Eq", "venue": null, "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) is concerned with building agents that can learn to act so as to maximize reward through trial-and-error interaction with the environment. There are several reasons why it can be useful for an agent to learn about multiple ways of behaving, i.e., learn about multiple policies. The agent may want to achieve multiple tasks (or subgoals) in a lifelong learning setting and may learn a separate policy for each task, reusing them as needed when tasks reoccur. The agent may have a hierarchical architecture in which many policies are learned at a lower level while an upper level policy learns to combine them in useful ways, such as to accelerate learning on a single task or to transfer efficiently to a new task. Learning about multiple policies in the form of options (Sutton et al., 1999a) can be a good way to achieve temporal abstraction; again this can be used to quickly plan good policies for new tasks. In this paper we abstract away from these specific scenarios and ask the following question: what set of policies should the agent pre-learn in order to guarantee good performance under the worst-case reward? A satisfactory answer to this question could be useful in all the scenarios discussed above and potentially many others.\nThere are two components to the question above: (i) what policies should be in the set, and (ii) how to compose a policy to be used on a new task from the policies in the set. To answer (ii), we propose the concept of a set improving policy (SIP). Given any set of n policies, a SIP is any composition of these policies whose performance is at least as good as, and generally better than, that of all of the constituent policies in the set. We present two policy composition (or improvement) operators that lead to a SIP. The first is called set-max policy (SMP). Given a distribution over states, a SMP chooses from n policies the one that leads to the highest expected value. The second SIP operator is generalized policy improvement (Barreto et al., 2017, GPI). Given a set of n policies and their associated action-value functions, GPI is a natural extension of regular policy improvement in which the agent acts greedily in each state with respect to the maximum over the set of action-values\n∗tomzahavy@google.com\nfunctions. Although SMP provides weaker guarantees than GPI (we will show this below), it is more amenable to analysis and thus we will use it exclusively for our theoretical results. However, since SMP’s performance serve as a lower bound to GPI’s, the results we derive for the former also apply to the latter. In our illustrative experiments we will show this result empirically.\nNow that we have fixed the answer to (ii), i.e., how to compose pre-learned policies for a new reward function, we can leverage it to address (i): what criterion to use to pre-learn the policies. Here, one can appeal to heuristics such as the ones advocating that the set of pre-learned policies should be as diverse as possible (Eysenbach et al., 2018; Gregor et al., 2016; Grimm et al., 2019; Hansen et al., 2019). In this paper we will use the formal criterion of robustness, i.e., we will seek a set of policies that do as well as possible in the worst-case scenario. Thus, the problem of interest to this paper is as follows: how to define and discover a set of n policies that maximize the worst possible performance of the resulting SMP across all possible tasks? 
Interestingly, as we will discuss, the solution to this robustness problem naturally leads to a diverse set of policies.

To solve the problem posed above we make two assumptions: (A1) that tasks differ only in their reward functions, and (A2) that reward functions are linear combinations of known features. These two assumptions allow us to leverage the concept of successor features (SFs) and work in apprenticeship learning. As our main contribution in this paper, we present an algorithm that iteratively builds a set of policies such that SMP's performance with respect to the worst-case reward provably improves in each iteration, stopping when no such greedy improvement is possible. We also provide a closed-form expression to compute the worst-case performance of our algorithm at each iteration. This means that, given tasks satisfying Assumptions A1 and A2, we are able to provably construct a SIP that can quickly adapt to any task with guaranteed worst-case performance.

Related Work. The proposed approach has interesting connections with hierarchical RL (HRL) (Sutton et al., 1999b; Dietterich, 2000). We can think of SMP (and GPI) as a higher-level policy-selection mechanism that is fixed a priori. Under this interpretation, the problem we are solving can be seen as the definition and discovery of lower-level policies that will lead to a robust hierarchical agent.

There are interesting parallels between robustness and diversity. For example, diverse stock portfolios have less risk. In robust least squares (El Ghaoui & Lebret, 1997; Xu et al., 2009), the goal is to find a solution that will perform well with respect to (w.r.t) data perturbations. This leads to a min-max formulation, and there are known equivalences between solving a robust (min-max) problem and the diversity of the solution (via regularization) (Xu & Mannor, 2012). Our work is also related to robust Markov decision processes (MDPs) (Nilim & El Ghaoui, 2005), but our focus is on a different aspect of the problem. While in robust MDPs the uncertainty is w.r.t the dynamics of the environment, here we focus on uncertainty w.r.t the reward and assume that the dynamics are fixed. More importantly, we are interested in the hierarchical aspect of the problem – how to discover and compose a set of policies. In contrast, solutions to robust MDPs are typically composed of a single policy.

In Apprenticeship Learning (AL; Abbeel & Ng, 2004) the goal is also to solve a min-max problem, in which the agent is expected to perform as well as an expert w.r.t any reward. If we ignore the expert, AL algorithms can be used to find a single policy that performs well w.r.t any reward. The solution to this problem (when there is no expert) is the policy whose SFs have the smallest possible norm. When the SFs are in the simplex (as in tabular MDPs), the vector with the smallest $\ell_2$ norm puts equal probabilities on its coordinates and is therefore \"diverse\" (establishing an equivalence between the robust min-max formulation and the diversity perspective). In that sense, our problem can be seen as a modified AL setup where: (a) no expert demonstrations are available, (b) the agent is allowed to observe the reward at test time, and (c) the goal is to learn a set of constituent policies." }, { "heading": "2 PRELIMINARIES", "text": "We will model our problem of interest using a family of Markov Decision Processes (MDPs).
An MDP is a tuple $M \triangleq (S, A, P, r, \gamma, D)$, where $S$ is the set of states, $A$ is the set of actions, $P = \{P^a \mid a \in A\}$ is the set of transition kernels, $\gamma \in [0, 1]$ is the discount factor and $D$ is the initial state distribution. The function $r : S \times A \times S \mapsto \mathbb{R}$ defines the rewards, and thus the agent's objective; here we are interested in multiple reward functions, as we explain next.

Let $\phi(s, a, s') \in [0, 1]^d$ be an observable vector of features (our analysis only requires the features to be bounded; we use $[0, 1]$ for ease of exposition). We are interested in the set of tasks induced by all possible linear combinations of the features $\phi$. Specifically, for any $w \in \mathbb{R}^d$, we can define a reward function $r_w(s, a, s') = w \cdot \phi(s, a, s')$. Given $w$, the reward $r_w$ is well defined and we will use the terms $w$ and $r_w$ interchangeably to refer to the RL task induced by it. Formally, we are interested in the following set of MDPs:
$$\mathcal{M}_\phi \triangleq \{(S, A, P, r_w, \gamma, D) \mid w \in \mathcal{W}\}. \qquad (1)$$
In general, $\mathcal{W}$ is any convex set, but we will focus on the $\ell_2$ $d$-dimensional ball, denoted by $\mathcal{W} = B_2$. This choice is not restricting, since the optimal policy in an MDP is invariant with respect to the scale of the rewards and the $\ell_2$ ball contains all the directions.

A policy in an MDP $M \in \mathcal{M}_\phi$, denoted by $\pi \in \Pi$, is a mapping $\pi : S \to \mathcal{P}(A)$, where $\mathcal{P}(A)$ is the space of probability distributions over $A$. For a policy $\pi$ we define the successor features (SFs) as
$$\psi^\pi(s, a) \triangleq (1 - \gamma) \cdot \mathbb{E}\left[\textstyle\sum_{t=0}^{\infty} \gamma^t \phi(s_t, a_t, s_{t+1}) \,\middle|\, P, \pi, s_t = s, a_t = a\right]. \qquad (2)$$
The multiplication by $1 - \gamma$, together with the fact that the features $\phi$ are in $[0, 1]$, assures that $\psi^\pi(s, a) \in [0, 1]^d$ for all $(s, a) \in S \times A$.¹ We also define SFs that are conditioned on the initial state distribution $D$ and the policy $\pi$ as $\psi^\pi \triangleq \mathbb{E}[\psi^\pi(s, a) \mid D, \pi] = \mathbb{E}_{s \sim D, a \sim \pi(s)}\, \psi^\pi(s, a)$. It should be clear that the SFs are conditioned on $D$ and $\pi$ whenever they are not written as a function of states and actions as in Eq. (2). Note that, given a policy $\pi$, $\psi^\pi$ is simply a vector in $[0, 1]^d$. Since we will be dealing with multiple policies, we will use superscripts to refer to them—that is, we use $\pi^i$ to refer to the $i$-th policy. To keep the notation simple, we will refer to the SFs of policy $\pi^i$ as $\psi^i$. We define the action-value function (or Q-function) of policy $\pi$ under reward $r_w$ as
$$Q^\pi_w(s, a) \triangleq (1 - \gamma)\, \mathbb{E}\left[\textstyle\sum_{t=0}^{\infty} \gamma^t \phi(s_t, a_t, s_{t+1}) \cdot w \,\middle|\, P, \pi, s_t = s, a_t = a\right] = \psi^\pi(s, a) \cdot w.$$
We define the value of a policy $\pi$ as $v^\pi_w \triangleq (1 - \gamma)\, \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t w \cdot \phi(s_t) \mid \pi, P, D\right] = \psi^\pi \cdot w$. Note that $v^\pi_w$ is a scalar, corresponding to the expected value of policy $\pi$ under the initial state distribution $D$, given by
$$v^\pi_w = \mathbb{E}[Q^\pi_w(s, a) \mid D, \pi] = \mathbb{E}_{s \sim D, a \sim \pi(s)}\, Q^\pi_w(s, a). \qquad (3)$$" }, { "heading": "3 COMPOSING POLICIES TO SOLVE A SET OF MDPS", "text": "As described, we are interested in solving all the tasks $w \in \mathcal{W}$ in the set of MDPs $\mathcal{M}_\phi$ defined in (1). We will approach this problem by learning policies associated with specific rewards $w$ and then composing them to build a higher-level policy that performs well across all the tasks. We call this higher-level policy a generalized policy, defined as (Barreto et al., 2020):

Definition 1 (Generalized policy). Given a set of MDPs $\mathcal{M}_\phi$, a generalized policy is a function $\pi : S \times \mathcal{W} \mapsto \mathcal{P}(A)$ that maps a state $s$ and a task $w$ onto a distribution over actions.

We can think of a generalized policy as a regular policy parameterized by a task, since for a fixed $w$ we have $\pi(\cdot\,; w) : S \mapsto \mathcal{P}(A)$.
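Before turning to policy composition, it may help to see how the SFs in (2) can be estimated in practice. The sketch below is our own illustration, not the authors' code; it assumes a hypothetical environment interface in which env.reset() samples from $D$ and env.step(a) returns the next state and a termination flag, and it truncates the infinite sum at a fixed horizon.

```python
import numpy as np

def estimate_sf(env, policy, phi, d, gamma=0.99, n_rollouts=100, horizon=200):
    """Monte Carlo estimate of the SFs psi^pi in (2), conditioned on the
    initial state distribution D (sampled via env.reset)."""
    psi = np.zeros(d)
    for _ in range(n_rollouts):
        s = env.reset()                     # s ~ D
        discount = 1.0
        for _ in range(horizon):            # truncate the infinite sum
            a = policy(s)
            s_next, done = env.step(a)      # assumed interface
            psi += discount * phi(s, a, s_next) / n_rollouts
            discount *= gamma
            if done:
                break
            s = s_next
    return (1.0 - gamma) * psi              # normalization as in (2)
```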
We now focus our attention on a specific class of generalized policies that are composed of other policies:

Definition 2 (SIP). Given a set of MDPs $\mathcal{M}_\phi$ and a set of $n$ policies $\Pi^n = \{\pi^i\}_{i=1}^n$, a set improving policy (SIP) $\pi^{SIP}$ is any generalized policy such that
$$v^{SIP}_{\Pi^n, w} \geq v^i_w \quad \text{for all } \pi^i \in \Pi^n \text{ and all } w \in \mathcal{W}, \qquad (4)$$
where $v^{SIP}_{\Pi^n, w}$ and $v^i_w$ are the value functions of $\pi^{SIP}_{\Pi^n}(\cdot\,; w)$ and the policies $\pi^i \in \Pi^n$ under reward $r_w$.

We have been deliberately vague about the specific way the policies $\pi^i \in \Pi^n$ are combined to form a SIP so as to have as inclusive a concept as possible. We now describe two concrete ways to construct a SIP.

Definition 3 (SMP). Let $\Pi^n = \{\pi^i\}_{i=1}^n$ be a set of $n$ policies and let $v^i$ be the corresponding value functions defined analogously to (3) for an arbitrary reward. A set-max policy (SMP) is defined as
$$\pi^{SMP}_{\Pi^n}(s; w) = \pi^k(s), \quad \text{with } k = \arg\max_{i \in [1, \dots, n]} v^i_w.$$
¹ While we focus on the most common, discounted RL criterion, all of our results hold in the finite-horizon and average-reward criteria as well (see, for example, Puterman (1984)). Concretely, in these scenarios there exist normalizations for the SFs whose effect is equivalent to that of the multiplication by $1 - \gamma$. In the finite-horizon case we can simply multiply the SFs by $1/H$. In the average-reward case, there is no multiplication (Zahavy et al., 2020b) and the value function is measured under the stationary distribution (instead of $D$).

Combining the concepts of SMP and SFs we can build a SIP for $\mathcal{M}_\phi$. Given the SFs of the policies $\pi^i \in \Pi^n$, $\{\psi^i\}_{i=1}^n$, we can quickly compute a generalized SMP as
$$\pi^{SMP}_{\Pi^n}(s; w) = \pi^k(s), \quad \text{with } k = \arg\max_{i \in [1, \dots, n]} \{w \cdot \psi^i\}. \qquad (5)$$
Since the value of a SMP under reward $w$ is given by $v^{SMP}_{\Pi^n, w} = \max_{i \in [1, \dots, n]} v^i_w$, it trivially qualifies as a SIP as per Definition 2. In fact, the generalized policy $\pi^{SMP}_{\Pi^n}$ defined in (5) is in some sense the most conservative SIP possible, as it will always satisfy (4) with equality. This means that any other SIP will perform at least as well as the SIP induced by SMP. We formalize this notion below:

Lemma 1. Let $\pi^{SMP}_{\Pi^n}$ be a SMP defined as in (5) and let $\pi : S \times \mathcal{W} \mapsto \mathcal{P}(A)$ be any generalized policy. Then, given a set of $n$ policies $\Pi^n$, $\pi$ is a SIP if and only if $v^\pi_{\Pi^n, w} \geq v^{SMP}_{\Pi^n, w}$ for all $w \in \mathcal{W}$.

Due to space constraints, all the proofs can be found in the supplementary material. Lemma 1 allows us to use SMP to derive results that apply to all SIPs. For example, a lower bound for $v^{SMP}_{\Pi^n, w}$ automatically applies to all possible $v^{SIP}_{\Pi^n, w}$. Lemma 1 also allows us to treat SMP as a criterion to determine whether a given generalized policy qualifies as a SIP.
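Given the SFs of the policies stacked as an $n \times d$ matrix, the SMP rule (5) — and, for comparison, the GPI rule introduced next — reduces to a few lines of NumPy. A minimal sketch, assuming the SFs have already been computed:

```python
import numpy as np

def smp_choice(Psi, w):
    """Eq. (5): pick the policy whose SFs give the highest value w . psi^i.
    Psi has shape (n, d); returns the index of the policy to execute."""
    return int(np.argmax(Psi @ w))

def gpi_action(Psi_sa, w):
    """GPI at a single state: Psi_sa has shape (n, |A|, d), holding
    psi^i(s, a) for each policy i and action a; returns argmax_a max_i Q^i."""
    Q = Psi_sa @ w            # shape (n, |A|): Q^i_w(s, a) = psi^i(s, a) . w
    return int(np.argmax(Q.max(axis=0)))
```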
We illustrate this by introducing a second candidate to construct a SIP, called generalized policy improvement (Barreto et al., 2017; 2018; 2020, GPI):

Definition 4 (GPI policy). Given a set of $n$ policies $\Pi^n = \{\pi^i\}_{i=1}^n$ and corresponding Q-functions $Q^i_w$ computed under an arbitrary reward $w$, the GPI policy is defined as
$$\pi^{GPI}_{\Pi^n}(s; w) = \arg\max_a \max_i Q^i_w(s, a).$$
Again, we can combine GPI and SFs to build a generalized policy. Given the SFs of the policies $\pi^i \in \Pi^n$, $\{\psi^i\}_{i=1}^n$, we can quickly compute the generalized GPI policy as $\pi^{GPI}_{\Pi^n}(s; w) = \arg\max_a \max_i \psi^i(s, a) \cdot w$. Note that the maximization in GPI is performed in each state and uses the Q-functions of the constituent policies. In contrast, SMP maximizes over value functions (not Q-functions), with an expectation over states taken with respect to the initial state distribution $D$. For this reason, GPI is a stronger composition than SMP. We now formalize this intuition:

Lemma 2. For any reward $w \in \mathcal{W}$ and any set of policies $\Pi^n$, we have that $v^{GPI}_{\Pi^n, w} \geq v^{SMP}_{\Pi^n, w}$.

Lemma 2 implies that for any set of policies it is always better to use a GPI policy rather than an SMP (as we will confirm in the experiments). As a consequence, it also certifies that the generalized GPI policy $\pi^{GPI}_{\Pi^n}(s; w)$ qualifies as a SIP (Lemma 1).

We have described two ways of constructing a SIP by combining SMP and GPI with SFs. Other similar strategies might be possible, for example using local SARSA (Russell & Zimdars, 2003; Sprague & Ballard, 2003) as the basic mechanism to compose a set of value functions. We also note that in some cases it is possible to define a generalized policy (Definition 1) that is not necessarily a SIP (Eq. (5)) but is guaranteed to perform better than any SIP in expectation. For example, a combination of maximization, randomization and local search has been shown to be optimal in expectation among generalized policies in tabular MDPs with collectible rewards (Zahavy et al., 2020c). That said, we note that some compositions of policies that may at first seem like a SIP do not qualify as such. For example, a mixed policy is a linear (convex) combination of policies that assigns probabilities to the policies in the set and samples from them. Mixing the best policy in the set with a less performant one results in a policy that is not as good as the best single policy in the set (Zahavy et al., 2020c).

Problem formulation. We are now ready to formalize the problem we are interested in. Given a set of MDPs $\mathcal{M}_\phi$, as defined in (1), we want to construct a set of $n$ policies $\Pi^n = \{\pi^i\}_{i=1}^n$ such that the SMP defined on that set, $\pi^{SMP}_{\Pi^n}$, will have the optimal worst-case performance over all rewards $w \in \mathcal{W}$. That is, we want to solve the following problem:
$$\arg\max_{\Pi^n \subseteq \Pi} \min_{w} v^{SMP}_{\Pi^n, w}. \qquad (6)$$
Note that, since $v^{SMP}_{\Pi^n, w} \leq v^{SIP}_{\Pi^n, w}$ for any SIP, $\Pi^n$ and $w$, as shown in Lemma 1, by finding a good set for (6) we are also improving the performance of all SIPs (including GPI)." }, { "heading": "4 AN ITERATIVE METHOD TO CONSTRUCT A SET-MAX POLICY", "text": "We now present and analyze an iterative algorithm to solve problem (6). We begin by defining the worst case or adversarial reward associated with the generalized SMP policy:

Definition 5 (Adversarial reward for an SMP). Given a set of policies $\Pi^n$, we denote by $\bar{w}^{SMP}_{\Pi^n} = \arg\min_{w \in B_2} v^{SMP}_{\Pi^n, w}$ the worst-case reward w.r.t the SMP $\pi^{SMP}_{\Pi^n}$ defined in (5). In addition, the value of the SMP w.r.t $\bar{w}^{SMP}_{\Pi^n}$ is defined by $\bar{v}^{SMP}_{\Pi^n} = \min_{w \in B_2} v^{SMP}_{\Pi^n, w}$.

We are interested in finding a set of policies $\Pi^n$ such that the performance of the resulting SMP will be optimal w.r.t its adversarial reward $\bar{w}^{SMP}_{\Pi^n}$. This leads to a reformulation of (6) as a max-min-max optimization for discovering robust policies:
$$\arg\max_{\Pi^n \subseteq \Pi} \bar{v}^{SMP}_{\Pi^n} = \arg\max_{\Pi^n \subseteq \Pi} \min_{w \in B_2} v^{SMP}_{\Pi^n, w} = \arg\max_{\Pi^n \subseteq \Pi} \min_{w \in B_2} \max_{i \in [1, \dots, n]} \psi^i \cdot w. \qquad (7)$$

Algorithm 1 SMP worst-case policy iteration
Initialize: Sample $w \sim N(\bar{0}, \bar{1})$; $\Pi^0 \leftarrow \{\}$; $\pi^1 \leftarrow \arg\max_{\pi \in \Pi} w \cdot \psi^\pi$; $t \leftarrow 1$; $\bar{v}^{SMP}_{\Pi^1} \leftarrow -\|\psi^1\|$
repeat
  $\Pi^t \leftarrow \Pi^{t-1} + \{\pi^t\}$
  $\bar{w}^{SMP}_{\Pi^t} \leftarrow$ solution to (8)
  $\pi^{t+1} \leftarrow$ solution of the RL task $\bar{w}^{SMP}_{\Pi^t}$
  $t \leftarrow t + 1$
until $v^{t}_{\bar{w}^{SMP}_{\Pi^{t-1}}} \leq \bar{v}^{SMP}_{\Pi^{t-1}}$
return $\Pi^{t-1}$

The order in which the maximizations and the minimization are performed in (7) is important. (i) The inner maximization over policies (or SFs), by the SMP, is performed last. This means that, for a fixed set of policies $\Pi^n$ and a fixed reward $w$, SMP selects the best policy in the set.
(ii) The minimization over rewards $w$ happens second; that is, for a fixed set of policies $\Pi^n$, we compute the value of the generalized SMP $\pi^{SMP}_{\Pi^n}(\cdot\,; w)$ for any reward $w$, and then minimize the maximum of these values. (iii) Finally, for any set of policies there is an associated worst-case reward for the SMP, and we are looking for policies that maximize this value.

The inner maximization (i) is simple: it comes down to computing $n$ dot products $\psi^i \cdot w$, $i = 1, 2, \dots, n$, and comparing the resulting values. The minimization problem (ii) is slightly more complicated, but fortunately easy to solve. To see this, note that this problem can be rewritten as
$$\bar{w}^{SMP}_{\Pi^n} = \arg\min_{w} \max_{i \in [1, \dots, n]} \{w \cdot \psi^1, \dots, w \cdot \psi^n\} \quad \text{s.t. } \|w\|_2 - 1 \leq 0. \qquad (8)$$
Eq. (8) is a convex optimization problem that can be easily solved using standard techniques, like gradient descent, and off-the-shelf solvers (Diamond & Boyd, 2016; Boyd et al., 2004). We note that the minimizer of Eq. (8) is a function of the policy set. As a result, the set forces the worst-case reward to make a trade-off – it has to “choose” the coordinates it “wants” to be more adversarial for. This trade-off is what encourages the worst-case reward to be diverse across iterations (w.r.t different sets). We note that this property holds because we are optimizing over $B_2$, but it will not necessarily be the case for other convex sets. For example, in the case of $B_\infty$ the internal minimization problem above has a single solution: a vector with $-1$ in all of its coordinates.
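Since the paper mentions CVXPY (Diamond & Boyd, 2016) as a solver for this problem, here is a minimal sketch of how (8) might be set up with it (our own reconstruction, not the authors' implementation); it also reads off the active policies of Definition 6 below from the solution:

```python
import cvxpy as cp
import numpy as np

def worst_case_reward(Psi):
    """Solve (8): min_{||w||_2 <= 1} max_i w . psi^i for SFs Psi of shape (n, d)."""
    n, d = Psi.shape
    w = cp.Variable(d)
    objective = cp.max(Psi @ w)             # max over the n policies
    prob = cp.Problem(cp.Minimize(objective), [cp.norm(w, 2) <= 1])
    prob.solve()
    w_bar, v_bar = w.value, prob.value
    # Active policies (Definition 6): those attaining the SMP value at w_bar.
    active = np.where(Psi @ w_bar >= v_bar - 1e-6)[0]
    return w_bar, v_bar, active
```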
The outer maximization problem (iii) can be difficult to solve if we are searching over all possible sets of policies $\Pi^n \subseteq \Pi$. Instead, we propose an incremental approach in which policies $\pi^i$ are successively added to an initially empty set $\Pi^0$. This is possible because the solution $\bar{w}^{SMP}_{\Pi^n}$ of (8) gives rise to a well-defined RL problem in which the rewards are given by $r_w(s, a, s') = \bar{w}^{SMP}_{\Pi^n} \cdot \phi(s, a, s')$. This problem can be solved using any standard RL algorithm. So, once we have a solution $\bar{w}^{SMP}_{\Pi^n}$ for (8), we solve the induced RL problem using any algorithm and add the resulting policy $\pi^{n+1}$ to $\Pi^n$ (or, rather, the associated SFs $\psi^{n+1}$).

Algorithm 1 gives a step-by-step description of the proposed method. The algorithm is initialized by adding a policy $\pi^1$ that maximizes a random reward vector $w$ to the set $\Pi^0$, such that $\Pi^1 = \{\pi^1\}$. At each subsequent iteration $t$, the algorithm computes the worst-case reward $\bar{w}^{SMP}_{\Pi^t}$ w.r.t the current set $\Pi^t$ by solving (8). The algorithm then finds a policy $\pi^{t+1}$ that solves the task induced by $\bar{w}^{SMP}_{\Pi^t}$. If the value of $\pi^{t+1}$ w.r.t $\bar{w}^{SMP}_{\Pi^t}$ is strictly larger than $\bar{v}^{SMP}_{\Pi^t}$, the algorithm continues for another iteration, with $\pi^{t+1}$ added to the set. Otherwise, the algorithm stops. As mentioned before, the set of policies $\Pi^t$ computed by Algorithm 1 can also be used with GPI. The resulting GPI policy will do at least as well as the SMP counterpart on any task $w$ (Lemma 2); in particular, GPI's worst-case performance will be lower bounded by $\bar{v}^{SMP}_{\Pi^n}$." }, { "heading": "4.1 THEORETICAL ANALYSIS", "text": "Algorithm 1 produces a sequence of policy sets $\Pi^1, \Pi^2, \dots$ The definition of SMP guarantees that enlarging a set of policies always leads to a soft improvement in performance, so $\bar{v}^{SMP}_{\Pi^{t+1}} \geq \bar{v}^{SMP}_{\Pi^t} \geq \dots \geq \bar{v}^{SMP}_{\{\pi^1\}}$. We now show that the improvement in each iteration of our algorithm is in fact strict.

Theorem 1 (Strict improvement). Let $\Pi^1, \dots, \Pi^t$ be the sets of policies constructed by Algorithm 1. We have that the worst-case performance of the SMP induced by these sets is strictly improving at each iteration, that is, $\bar{v}^{SMP}_{\Pi^{t+1}} > \bar{v}^{SMP}_{\Pi^t}$. Furthermore, when the algorithm stops, there does not exist a single policy $\pi^{t+1}$ such that adding it to $\Pi^t$ will result in improvement: $\nexists\, \pi \in \Pi$ s.t. $\bar{v}^{SMP}_{\Pi^t + \{\pi\}} > \bar{v}^{SMP}_{\Pi^t}$.

In general we cannot say anything about the value of the SMP returned by Algorithm 1. However, in some special cases we can upper bound it. One such case is when the SFs lie in the simplex.

Lemma 3 (Impossibility result). For the special case where the SFs associated with any policy are in the simplex, the value of the SMP w.r.t the worst-case reward for any set of policies is less than or equal to $-1/\sqrt{d}$. In addition, there exists an MDP where this upper bound is attainable.

One example where the SFs are in the simplex is when the features $\phi$ are “one-hot vectors”, that is, they have only one nonzero element. This happens for example in a tabular representation, in which case the SFs correspond to stationary state distributions. Another example is given by the features induced by state aggregation, since these are simple indicator functions associating states to clusters (Singh et al., 1995). We will show in our experiments that when state aggregation is used our algorithm achieves the upper bound of Lemma 3 in practice.

Finally, we observe that not all the policies in the set $\Pi^t$ are needed at each point in time, and we can guarantee strict improvement even if we remove the \"inactive\" policies from $\Pi^t$, as we show below.

Definition 6 (Active policies). Given a set of $n$ policies $\Pi^n$ and an associated worst-case reward $\bar{w}^{SMP}_{\Pi^n}$, the subset of active policies $\Pi_a(\Pi^n)$ is the set of policies in $\Pi^n$ that achieve $\bar{v}^{SMP}_{\Pi^n}$ w.r.t $\bar{w}^{SMP}_{\Pi^n}$: $\Pi_a(\Pi^n) = \{\pi \in \Pi^n : \psi^\pi \cdot \bar{w}^{SMP}_{\Pi^n} = \bar{v}^{SMP}_{\Pi^n}\}$.

Theorem 2 (Sufficiency of active policies). For any set of policies $\Pi^n$, $\pi^{SMP}_{\Pi_a(\Pi^n)}$ achieves the same value w.r.t the worst-case reward as $\pi^{SMP}_{\Pi^n}$, that is, $\bar{v}^{SMP}_{\Pi^n} = \bar{v}^{SMP}_{\Pi_a(\Pi^n)}$.

Theorem 2 implies that once we have found $\bar{w}^{SMP}_{\Pi^n}$ we can remove the inactive policies from the set and still guarantee the same worst-case performance. Furthermore, we can continue with Algorithm 1 to find the next policy by maximizing $\bar{w}^{SMP}_{\Pi^n}$ and guarantee strict improvement via Theorem 1. This is important in applications that have memory constraints, since it allows us to store fewer policies." }, { "heading": "5 EXPERIMENTS", "text": "We begin with a $10 \times 10$ grid-world environment (Fig. 1(d)), where the agent starts in a random place in the grid (marked in black) and gains/loses reward from collecting items (marked in white). Each item belongs to one of $d - 1$ classes (here with $d = 5$) and is associated with a marker: 8, O, X, Y. In addition, there is one \"no item\" feature (marked in gray). The features are one-hot vectors, i.e., for $i \in [1, d - 1]$, $\phi_i(s)$ equals one when item $i$ is in state $s$ and zero otherwise (similarly, $\phi_d(s)$ equals one when there is no item in state $s$). The objective of the agent is to pick up the “good” objects and avoid “bad” objects, depending on the weights of the vector $w$.

In Fig. 1(a) we report the performance of the SMP $\pi^{SMP}_{\Pi^t}$ w.r.t $\bar{w}^{SMP}_{\Pi^t}$ for $d = 5$. At each iteration (x-axis) of Algorithm 1 we train a policy for $5 \cdot 10^5$ steps to maximize $\bar{w}^{SMP}_{\Pi^t}$. We then compute the SFs of that policy using an additional $5 \cdot 10^5$ steps and evaluate it w.r.t $\bar{w}^{SMP}_{\Pi^t}$. As we can see, the performance of SMP strictly improves as we add more policies to the set (as stated in Theorem 1).
In addition, we compare the performance of SMP with that of GPI, defined on the same sets of policies ($\Pi^t$) that were discovered by Algorithm 1. Since we do not know how to compute $\bar{w}^{GPI}_{\Pi^t}$ (the worst-case reward for GPI), we evaluate GPI w.r.t $\bar{w}^{SMP}_{\Pi^t}$ (the blue line in Fig. 1(a)).

Inspecting Fig. 1(a), we can see that the GPI policy indeed performs better than the SMP, as Lemma 2 indicates. We note that the blue line (in Fig. 1(a)) does not correspond to the worst-case performance of the GPI policy. Instead, we can get a good approximation for it because we have that $\bar{w}^{SMP}_{\Pi^n} \cdot \psi(\pi^{SMP}_{\Pi^n}) \leq \bar{w}^{GPI}_{\Pi^n} \cdot \psi(\pi^{GPI}_{\Pi^n}) \leq \bar{w}^{SMP}_{\Pi^n} \cdot \psi(\pi^{GPI}_{\Pi^n})$; i.e., the worst-case performance of GPI (in the middle) is guaranteed to be between the green and blue lines in Fig. 1(a). This also implies that the upper bound in Lemma 3 does not apply to the blue line.

We also compare our algorithm to two baselines in Fig. 1(b) (for $d = 10$): (i) Orthogonal: at iteration $t$ we train policy $\pi^t$ to maximize the reward $w = e_t$ (a vector of zeroes with a one in the $t$-th coordinate), such that a matrix with the vectors $w$ in its columns forms the identity matrix; (ii) Random: at iteration $t$ we train policy $\pi^t$ to maximize a reward $w \sim \tilde{N}(\bar{0}, \bar{1})$, i.e., we sample a vector of dimension $d$ from a standard Gaussian distribution and normalize it to have a norm of 1. While all the methods improve as we add policies to the set, Algorithm 1 clearly outperforms the baselines.

In Fig. 1(c) and Fig. 1(d) we visualize the policies that were discovered by Algorithm 1. Fig. 1(c) presents the SFs of the discovered policies, where each row (color) corresponds to a different policy and the columns correspond to the different features. We do not enumerate the features from 1 to $d$; instead, we label them with markers that correspond to specific items (the x-axis labels). In Fig. 1(d) we present a trajectory from each policy. We note that both the colors and the markers match between the two figures: the red color corresponds to the same policy in both figures, and the item markers in Fig. 1(d) correspond to the coordinates on the x-axis of Fig. 1(c).

Inspecting the figures, we can see that the discovered policies are qualitatively diverse: in Fig. 1(c) we can see that the SFs of the different policies have different weights for different items, and in Fig. 1(d) we can see that the policies visit different states. For example, the teal policy has a larger weight for the no-item feature (Fig. 1(c)) and visits only no-item states (Fig. 1(d)), and the green policy has higher weights for the ’Y’ and ’X’ items (Fig. 1(c)) and indeed visits them (Fig. 1(d)).

Finally, in Fig. 2, we compare the performance of our algorithm with that of the baseline methods over a test set of rewards. The only difference is in how we evaluate the algorithms. Specifically, we sampled 500 reward signals from the uniform distribution over the unit ball. Recall that at iteration $t$ each algorithm has a set of policies $\Pi^t$, so we evaluate the SMP defined on this set, $\pi^{SMP}_{\Pi^t}$, w.r.t each one of the test rewards. Then, for each method, we report the mean value obtained over the test rewards and repeat this procedure for 10 different seeds. Finally, we report the mean and the confidence interval over the seeds. Note that the performance in this experiment will necessarily be better than in Fig. 1(a), because here we evaluate average performance rather than worst-case performance.
Also note that our algorithm was not designed to optimize the performance on this \"test set\", but to optimize the performance w.r.t the worst case. Therefore it is not necessarily expected to outperform the baselines when measured on this metric.

Inspecting Fig. 2(a), we can see that our algorithm (denoted by SMP) performs better than the two baselines. This is a bit surprising for the reasons mentioned above, and suggests that optimizing for the worst case also improves the performance w.r.t the entire distribution (a transfer learning result). At first glance, the relative gain in performance might seem small; the baselines might therefore seem preferable to some users due to their simplicity. However, recall that the computational cost of computing the worst-case reward is small compared to that of finding the policy that maximizes it, and therefore the relative cost of the added complexity is low.

The last observation suggests that we should care about how many policies are needed by each method to achieve the same value. We present these results in Fig. 2(b). Note that we use exactly the same data as in Fig. 2(a) but present it in a different manner. Inspecting the figure, we can see that the baselines require more policies to achieve the same value. For example, to achieve a value of 0.07 the SMP required 2 policies while the baselines needed 4, and for a value of 0.1 the SMP required 4 policies while the baselines needed 7 and 9, respectively.

DeepMind Control Suite. Next, we conducted a set of experiments in the DM Control Suite (Tassa et al., 2018). We focused on the setup where the agent learns from feature observations corresponding to the positions and velocities of the “body” in the task (pixels were only used for visualization). We considered the following six domains: ’Acrobot’, ’Cheetah’, ’Fish’, ’Hopper’, ’Pendulum’, and ’Walker’. In each of these tasks we do not use the extrinsic reward defined by the task, but instead consider rewards that are linear in the observations (of dimensions 6, 17, 21, 15, 3, and 24, respectively). At each iteration of Algorithm 1 we train a policy for $2 \cdot 10^6$ steps using an actor-critic (specifically, STACX (Zahavy et al., 2020d)) to maximize $\bar{w}^{SMP}_{\Pi^t}$, add it to the set, and compute a new $\bar{w}^{SMP}_{\Pi^{t+1}}$.

Fig. 3(a) presents the performance of SMP at each iteration w.r.t $\bar{w}^{SMP}_{\Pi^t}$. As we can see, our algorithm is indeed improving at each iteration. In addition, we present the average number of active policies (Definition 6) at each iteration with bars. All the results are averaged over 10 seeds and presented with 95% Gaussian confidence intervals. Fig. 3(b) presents the SFs of the active policies at the end of training (the seed with the maximum number of active policies was selected). We perform PCA dimensionality reduction such that each point in the scatter plot corresponds to the SFs of one of the active policies. We also report the variance explained by PCA: values close to 1 indicate that the dimensionality reduction has preserved the original variance. Examining the figures, we can see that our algorithm is strictly improving (as Theorem 1 predicts) and that the active policies in the set are indeed diverse; we can also see that adding more policies is correlated with improving performance.

Finally, in Fig. 4(a), Fig. 4(b) and Fig. 4(c) we visualize the trajectories of the discovered policies in the Cheetah, Hopper and Walker environments.
Although the algorithm was oblivious to the extrinsic reward of the tasks, it was still able to discover different locomotion skills, postures, and even some \"yoga poses\" (as noted by the label we gave each policy on the left). The other domains (Acrobot, Pendulum and Fish) have simpler bodies and exhibited simpler movement in various directions and at various velocities; e.g., the Pendulum learned to balance itself up and down. The supplementary material contains videos from all the bodies." }, { "heading": "6 CONCLUSION", "text": "We have presented an algorithm that incrementally builds a set of policies to solve a collection of tasks defined as linear combinations of known features. The policies returned by our algorithm can be composed in multiple ways. We have shown that when the composition is a SMP, its worst-case performance on the set of tasks will strictly improve at each iteration of our algorithm. More generally, the performance guarantees we have derived also serve as a lower bound for any composition of policies that qualifies as a SIP. The composition of policies has many applications in RL, for example to build hierarchical agents or to tackle a sequence of tasks in a continual learning scenario. Our algorithm provides a simple and principled way to build a diverse set of policies that can be used in these and potentially many other scenarios." }, { "heading": "7 ACKNOWLEDGEMENTS", "text": "We would like to thank Remi Munos and Will Dabney for their comments and feedback on this paper." }, { "heading": "A PROOFS", "text": "Lemma 1. Let $\pi^{SMP}_{\Pi^n}$ be a SMP defined as in (5) and let $\pi : S \times \mathcal{W} \mapsto \mathcal{P}(A)$ be any generalized policy. Then, given a set of $n$ policies $\Pi^n$, $\pi$ is a SIP if and only if $v^\pi_{\Pi^n, w} \geq v^{SMP}_{\Pi^n, w}$ for all $w \in \mathcal{W}$.

Proof. We first show that the fact that $\pi$ is a SIP implies that $v^\pi_{\Pi^n, w} \geq v^{SMP}_{\Pi^n, w}$ for all $w$. For any $w \in \mathcal{W}$, since $v^\pi_{\Pi^n, w} \geq v^i_w$ for all $\pi^i \in \Pi^n$ (SIP as in Definition 2), it follows that
$$v^\pi_{\Pi^n, w} \geq \max_{i \in [1, \dots, n]} v^i_w = v^{SMP}_{\Pi^n, w}.$$
We now show the converse:
$$v^\pi_{\Pi^n, w} \geq v^{SMP}_{\Pi^n, w} = \max_{i \in [1, \dots, n]} v^i_w \quad \text{(SMP as in Definition 3)} \quad \geq v^i_w \text{ for all } \pi^i \in \Pi^n.$$

Lemma 2. For any reward $w \in \mathcal{W}$ and any set of policies $\Pi^n$, we have that $v^{GPI}_{\Pi^n, w} \geq v^{SMP}_{\Pi^n, w}$.

Proof. We know from previous results in the literature Barreto et al. (2017) that $Q^{GPI(\Pi^n)}(s, a) \geq Q^\pi(s, a)$ for all $(s, a) \in S \times A$ and any $\pi \in \Pi^n$. Thus, we have that for all $s \in S$:
$$v^{GPI}_{\Pi^n, w}(s) = Q^{GPI}_{\Pi^n, w}(s, \pi^{GPI}(s)) \geq \max_{\pi \in \Pi^n, a \in A} Q^\pi_w(s, a) \geq \max_{\pi \in \Pi^n} \mathbb{E}_{a \sim \pi}[Q^\pi_w(s, a)] = \max_{\pi \in \Pi^n} v^\pi_w(s) = v^{SMP}_{\Pi^n, w}(s),$$
where the second inequality is due to Jensen's inequality. Therefore
$$v^{GPI}_{\Pi^n, w}(s) \geq v^{SMP}_{\Pi^n, w}(s) \;\Rightarrow\; \mathbb{E}_D[v^{GPI}_{\Pi^n, w}(s)] \geq \mathbb{E}_D[v^{SMP}_{\Pi^n, w}(s)] \;\Rightarrow\; v^{GPI}_{\Pi^n, w} \geq v^{SMP}_{\Pi^n, w}.$$

Lemma 3 (Impossibility result). For the special case where the SFs associated with any policy are in the simplex, the value of the SMP w.r.t the worst-case reward for any set of policies is less than or equal to $-1/\sqrt{d}$. In addition, there exists an MDP where this upper bound is attainable.

Proof. For the impossibility result, we have that
$$\max_{\Pi^n \subseteq \Pi} \min_{w \in B_2} v^{SMP}_{\Pi^n, w} = \min_{w \in B_2} \max_{\pi \in \Pi} v^\pi_w \qquad (9)$$
$$= \min_{w \in B_2} \max_{\pi \in \Pi} \psi(\pi) \cdot w \leq \min_{w \in B_2} \max_{\psi \in \Delta_{d-1}} \psi \cdot w \qquad (10)$$
$$= \min_{w \in B_2} \max_i w_i \qquad (11)$$
$$= -\frac{1}{\sqrt{d}}. \qquad (12)$$
The equality in Eq. (9) follows from the fact that $\Pi$ is the set of all possible policies and therefore the largest possible subset (the maximizer of the first maximization). In that case the second maximization (by the SMP) is equivalent to selecting the optimal policy in the MDP.
Notice that the order of maximization-minimization here is reversed compared to AL, i.e., for each reward the SMP chooses the best policy in the MDP, while in AL the reward is chosen to be the worst possible w.r.t any policy. The inequality in Eq. (10) follows from the fact that we increase the size of the optimization set in the inner loop, and the equality in Eq. (12) follows from the fact that a maximizer in the inner loop puts the maximal distribution on the largest component of $w$.

Feasibility. To show the feasibility of the upper bound in the previous impossibility result, we give an example of an MDP in which a set of $d$ policies achieves the upper bound. The $d$ policies are chosen such that their stationary distributions form an orthogonal basis:
$$\min_{w \in B_2} v^{SMP}_{\Pi^n, w} = \min_{w \in B_2} \max_{\psi \in \{\psi^1, \dots, \psi^d\}} w \cdot \psi = \min_{w \in B_2} \max_{\psi \in \Delta_{d-1}} w \cdot \psi = -\frac{1}{\sqrt{d}}, \qquad (13)$$
which follows from the fact that the maximization over the simplex is equivalent to a maximization over pure strategies.

Lemma 4 (Reformulation of the worst-case reward for an SMP). Let $\{\psi^i\}_{i=1}^n$ be $n$ successor feature vectors. Let $w^*$ be the adversarial reward w.r.t the SMP defined given these successor features. That is, $w^*$ is the solution of
$$\arg\min_{w} \max_{i \in [1, \dots, n]} \{w \cdot \psi^1, \dots, w \cdot \psi^n\} \quad \text{s.t. } \|w\|_2 - 1 \leq 0. \qquad (14)$$
Let $w^*_i$ be the solution to the following problem for $i \in [1, \dots, n]$:
$$\arg\min_{w}\; w \cdot \psi^i \quad \text{s.t. } \|w\|_2 - 1 \leq 0, \quad w \cdot (\psi^j - \psi^i) \leq 0 \;\; \forall j. \qquad (15)$$
Then, $w^* = \arg\min_i w^*_i$.

Proof. For any solution $w^*$ to Eq. (8) there is some policy $i$ in the set that is one of its maximizers. Since it is the maximizer w.r.t $w^*$, its value w.r.t $w^*$ is greater than or equal to that of any other policy in the set. Since we check the solution among all $i \in [1, \dots, n]$, one of them must be the solution.

Theorem 2 (Sufficiency of active policies). For any set of policies $\Pi^n$, $\pi^{SMP}_{\Pi_a(\Pi^n)}$ achieves the same value w.r.t the worst-case reward as $\pi^{SMP}_{\Pi^n}$, that is, $\bar{v}^{SMP}_{\Pi^n} = \bar{v}^{SMP}_{\Pi_a(\Pi^n)}$.

Proof. Let $\Pi^n = \{\pi^i\}_{i=1}^n$. Denote by $J$ the subset of the indices $[1, \dots, n]$ that corresponds to the indices of the active policies, such that $\Pi_a(\Pi^n) = \{\pi^j\}_{j \in J}$. We can rewrite problem Eq. (14) as follows:
$$\text{minimize } \gamma \quad \text{s.t. } \gamma \geq w \cdot \psi^i,\; i = 1, \dots, n; \quad \|w\|_2 \leq 1. \qquad (16)$$
Let $(\gamma^\star, w^\star)$ be any optimal points. The inactive policies $i \notin J$ satisfy $\gamma^\star > w^\star \cdot \psi^i$. Since these constraints are not binding, we can drop them from the formulation and maintain the same optimal objective value, i.e.,
$$\text{minimize } \gamma \quad \text{s.t. } \gamma \geq w \cdot \psi^j,\; j \in J; \quad \|w\|_2 \leq 1, \qquad (17)$$
has the same optimal objective value, $\bar{v}^{SMP}_{\Pi^n}$, as the full problem. This in turn can be rewritten as
$$\text{minimize } \max_{j \in J} w \cdot \psi^j \quad \text{s.t. } \|w\|_2 \leq 1, \qquad (18)$$
with optimal value $\bar{v}^{SMP}_{\Pi_a(\Pi^n)}$, which is therefore equal to $\bar{v}^{SMP}_{\Pi^n}$.

Lemma 5 ($\kappa$ is binding). At any solution of Eq. (16), the norm constraint is binding, i.e., $\|w\|_2 = 1$ and the associated dual variable satisfies $\kappa > 0$.

Proof. Denote by $\dot{w}$ a possible solution where the constraint $\|\dot{w}\|_2 \leq 1$ is not binding, i.e., $\|\dot{w}\|_2 < 1$ and $\dot{\kappa} = 0$. In addition, denote the primal objective for $\dot{w}$ by $\dot{v} = \max_{i \in [1, \dots, n]} \{\dot{w} \cdot \psi^i\}$. To prove the lemma, we inspect two cases: (i) $\dot{v} \geq 0$ and (ii) $\dot{v} < 0$. For each of these two cases we will show that there exists another feasible solution $\tilde{w}$ that achieves a lower value $\tilde{v}$ for the primal objective ($\tilde{v} < \dot{v}$), and therefore $\dot{w}$ is not the minimizer.

For the first case, $\dot{v} \geq 0$, consider the vector $\tilde{w} = (-1, -1, \dots, -1)/\sqrt{d}$. $\tilde{w}$ is a feasible solution to the problem, since $\|\tilde{w}\|_2 = 1$. Since all the SFs have positive coordinates, we have that if they are not all exactly 0, then the primal objective evaluated at $\tilde{w}$ is strictly negative: $\max_{i \in [1, \dots, n]} \{\tilde{w} \cdot \psi^1, \dots, \tilde{w} \cdot \psi^n\} < 0$.
, w̃ · ψn} < 0.\nWe now consider the second case of v̇ < 0. Notice that multiplying ẇ by a positive constant c would not change the maximizer, i.e., arg maxi∈[1,...,n]{cẇ ·ψi} = arg maxi∈[1,...,n]{ẇ ·ψi}. Since v̇ < 0, it means that ẇ/‖ẇ‖ (c = 1/‖ẇ‖) is a feasible solution and a better minimizer than ẇ. Therefore ẇ is not the minimizer.\nWe conclude that the constraint κ is always binding, i.e. ‖w‖2 = 1, κ > 0.\nTheorem 1 (Strict improvement). Let Π1, . . . ,Πt be the sets of policies constructed by Algorithm 1. We have that the worst-case performance of the SMP induced by these set is strictly improving in each iteration, that is: v̄SMPΠt+1 > v̄ SMP Πt . Furthermore, when the algorithm stops, there does not exist a single policy πt+1 such that adding it to Πt will result in improvement: @ πt+1 ∈ Π s.t. v̄SMPΠt+{π} > v̄ SMP Πt .\nProof. We have that\nvSMPΠt = min w∈B2 max ψ∈Ψt ψ ·w ≤ max ψ∈Ψt ψ · w̄SMPΠt+1 ≤ max ψ∈Ψt+1 ψ · w̄SMPΠt+1 = v SMP Πt+1 . (19)\nThe first inequality is true because we replace the minimization over w with w̄SMPΠt+1 , and the second inequality is true because we add a new policy to the set. Thus, we will focus on showing that the first inequality is strict. We do it in two steps. In the first step, we will show that the problem minw∈B2 maxψ∈Ψt ψ · w has a unique solution w?t . Thus, for the first inequality to hold with equality it must be that w̄SMPΠt+1 = w̄ SMP Πt . However, we know that, since the algorithm did not stop, ψt+1 · w̄SMPΠt > vSMPΠt , hence a contradiction. We will now show that minw∈B2 maxψ∈Ψt ψ ·w has a unique solution. Before we begin, we refer the reader to Lemma 4 and Theorem 2 where we reformulate the problem to a form that is simpler to analyze. We begin by looking at the partial Lagrangian of Eq. (17):\nL(w, γ, κ, λ) = γ + ∑ j∈J λj(ψ j · w − γ) + κ(‖w‖2 − 1).\nThe variable κ ≥ 0 is associated with the constraint ‖w‖2 ≤ 1. Denote by (λ?, κ?) any optimal dual variables and note that by complementary slackness we know that either κ? > 0 and ‖w‖2 = 1 or κ? = 0 and ‖w‖2 < 1. Lemma 5 above, guarantees that the constraint is in fact binding – only solutions with κ? > 0 and ‖w‖2 = 1 are possible solutions. Notice that this is correct due to the fact that the SFs have positive coordinates and not all of them are 0 (as in our problem formulation).\nConsequently we focus on the case where κ? > 0 under which the Lagrangian is strongly convex in w, and therefore the problem\nmin w,γ\nL(w, γ, λ?, κ?)\nhas a unique solution. Every optimizer of the original problem must also minimize the Lagrangian evaluated at an optimal dual value, and since this minimizer is unique, it implies that the minimizer of the original problem is unique (Boyd et al., 2004, Sect. 5.5.5).\nFor the second part of the proof, notice that if the new policy πt+1 does not achieve better reward w.r.t vSMPΠt than the policies in Π t then we have that:\nvSMPΠt+1 = min w∈B2 max π∈Πt+1 ψ(π) · w ≤ max π∈Πt+1 ψ(π) · w̄SMPΠt = max π∈Πt ψ(π) · w̄SMPΠt = vSMPΠt ;\nthus, it is necessary that the policy πt+1 will achieve better reward w.r.t vSMPΠt to guarantee strict improvement.\nB AL\nIn AL there is no reward signal, and the goal is to observe and mimic an expert. The literature on AL is quite vast and dates back to the work of (Abbeel & Ng, 2004), who proposed a novel framework for AL. In this setting, an expert demonstrates a set of trajectories that are used to estimate the SFs of its policy πE , denoted by ψE . 
The goal is to find a policy $\pi$ whose SFs are close to this estimate, and which will hence achieve a similar return with respect to any weight vector $w$:
$$\arg\max_{\pi} \min_{w \in B_2} w \cdot (\psi^{\pi} - \psi_E) = \arg\max_{\pi} -\|\psi^{\pi} - \psi_E\| = \arg\min_{\pi} \|\psi^{\pi} - \psi_E\|. \quad (20)$$
The projection algorithm (Abbeel & Ng, 2004) solves this problem in the following manner. The algorithm starts with an arbitrary policy $\pi^0$ and computes its feature expectations $\psi^0$. At step $t$, the reward function is defined using the weight vector $w_t = \psi_E - \bar{\psi}^{t-1}$ and the algorithm finds a policy $\pi^t$ that maximizes it, where $\bar{\psi}^t$ is a convex combination of the SFs of the previous (deterministic) policies, $\bar{\psi}^t = \sum_{j=1}^{t} \alpha_j \psi^j$. In order to obtain $\|\bar{\psi}^T - \psi_E\| \le \epsilon$, the authors show that it suffices to run the algorithm for $T = O\big(\frac{k}{(1-\gamma)^2 \epsilon^2} \log \frac{k}{(1-\gamma)\epsilon}\big)$ iterations.

Recently, it was shown that this algorithm can be viewed as a Frank-Wolfe method, also known as the Conditional Gradient (CG) algorithm (Zahavy et al., 2020a). The idea is that solving Eq. (20) can be seen as a constrained convex optimization problem, where the optimization variable is the SFs, the objective is convex, and the SFs are constrained to lie in the SFs polytope $K$, given as the following convex set:

Definition 7 (The SFs polytope). $K = \big\{x : x = \sum_{i=1}^{k+1} a_i \psi^i, \ a_i \ge 0, \ \sum_{i=1}^{k+1} a_i = 1, \ \pi^i \in \Pi\big\}$.

In general, convex optimization problems can be solved via the more familiar projected gradient descent algorithm. This algorithm takes a step in the negative gradient direction, $z_{t+1} = x_t - \alpha_t \nabla h(x_t)$, and then projects $z_{t+1}$ back onto $K$ to obtain $x_{t+1}$. However, in some cases computing this projection may be computationally hard. In our case, projecting onto $K$ is challenging since it has $|A|^{|S|}$ vertices (the feature expectations of the deterministic policies). Thus, computing the projection explicitly and then finding a $\pi$ whose feature expectations are close to this projection is computationally prohibitive.

The CG algorithm (Frank & Wolfe, 1956) (Algorithm 2) avoids this projection by finding a point $y_t \in K$ that has the largest correlation with the negative gradient. In AL, this step is equivalent to finding a policy whose SFs have the maximal inner product with the current negative gradient, i.e., solving an MDP whose reward vector $w$ is the negative gradient. This is a standard RL (planning) problem and can be solved efficiently, for example with policy iteration (PI). We also know that there exists at least one optimal deterministic policy for it and that PI will return a solution that is a deterministic policy (Puterman, 1984).

Algorithm 2 The CG method (Frank & Wolfe, 1956)
1: Input: a convex set $K$, a convex function $h$, a learning-rate schedule $\alpha_t$.
2: Initialization: let $x_0 \in K$
3: for $t = 1, \dots, T$ do
4:   $y_t = \arg\max_{y \in K} -\nabla h(x_{t-1}) \cdot y$
5:   $x_t = (1 - \alpha_t) x_{t-1} + \alpha_t y_t$
6: end for

For smooth functions, CG requires $O(1/\epsilon^2)$ iterations to find an $\epsilon$-optimal solution to Eq. (20). This gives a logarithmic improvement over the result of Abbeel & Ng (2004). In addition, it was shown in (Zahavy et al., 2020a) that since the optimization objective is strongly convex and the constraint set is a polytope, it is possible to use a variant of the CG algorithm known as the Away-Steps Conditional Gradient (ASCG) (Wolfe, 1970). ASCG attains a linear rate of convergence when the set is a polytope (Guélat & Marcotte, 1986; Garber & Hazan, 2013; Jaggi, 2013), i.e., it converges after $O(\log(1/\epsilon))$ iterations. See (Zahavy et al., 2020a) for the exact constants and analysis.
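As a concrete illustration of Algorithm 2, the following minimal Python sketch runs the CG update with the planning step (line 4) abstracted as a best-response oracle over a finite set of candidate deterministic-policy SFs; the oracle, the random SFs, and all names here are illustrative assumptions rather than the papers' implementations.

import numpy as np

def best_response(grad, candidate_sfs):
    # Toy stand-in for the planning step (line 4 of Algorithm 2): return
    # the candidate SF with the largest inner product with -grad.
    scores = candidate_sfs @ (-grad)
    return candidate_sfs[int(np.argmax(scores))]

def cg_apprenticeship(psi_expert, candidate_sfs, T=200):
    # h(x) = 0.5 * ||x - psi_E||^2, so grad h(x) = x - psi_E.
    x = candidate_sfs[0].astype(float)   # x_0 in K
    for t in range(1, T + 1):
        grad = x - psi_expert
        y = best_response(grad, candidate_sfs)
        alpha = 2.0 / (t + 2.0)          # standard CG step-size schedule
        x = (1.0 - alpha) * x + alpha * y
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sfs = rng.random((10, 4))            # hypothetical deterministic-policy SFs
    psi_e = sfs[:3].mean(axis=0)         # an "expert" inside the polytope
    x = cg_apprenticeship(psi_e, sfs)
    print("distance to expert SFs:", np.linalg.norm(x - psi_e))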
There are some interesting relations between our problem and AL with \"no expert\", that is, solving
$$\arg\min_{\pi} \|\psi^{\pi}\|. \quad (21)$$
In terms of optimization, this problem is equivalent to Eq. (20), and the same algorithms can be used to solve it.

Both AL with \"no expert\" and our algorithm can be used to pursue the same goal: achieving good performance w.r.t. the worst-case reward. However, AL is concerned with finding a single policy, while our algorithm is explicitly designed to find a set of policies. There is no direct connection between the policies discovered by these two processes, because the intrinsic rewards maximised by each algorithm are essentially different. Another way to think about this is that, since the policy returned by AL is a mixed policy, its goal is to return a set of policies that are similar to the expert, but not diverse from one another. From a geometric perspective, the policies returned by AL are the vertices of the face of the polytope that is closest to the demonstrated SFs. Even more concretely, if the SFs of the expert are given exactly (instead of being approximated from trajectories), then the AL algorithm would return a single vertex (policy). Finally, while a mixed policy can be viewed as a composition of policies, it is not an SIP; therefore, it does not encourage diversity in the set." }, { "heading": "C REGULARIZING W", "text": "In this section we experimented with constraining the set of rewards to include only vectors $w$ whose mean is zero. Since we are using CVXPY (Diamond & Boyd, 2016) to optimize for $w$ (Eq. (8)), this only requires adding the constraint $\sum_{i=1}^{d} w_i = 0$ to the minimization problem. Note that constraining the mean to be zero does not change the overall problem qualitatively, but it does potentially increase the difference in the relative magnitudes of the elements of $w$. Since it makes the resulting $w$'s have more zero elements, i.e., makes them sparser, it can also be viewed as a method to regularize the worst-case reward. Adding this constraint increased the number of $w$'s (and corresponding policies) that made a difference to the optimal value (Definition 5). To see this, note that the green curve in Fig. 5(a) converges to the optimal value in 2 iterations while the green curve in Fig. 1(a) does so in 3 iterations. As a result, the policies discovered by the algorithm are more diverse: the SFs in Fig. 5(b) are more focused on specific items than the SFs in Fig. 1(c). In Fig. 5(c) and Fig. 5(d) we verified that this increased diversity persists when we increase the feature dimension d." } ]
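For reference, a minimal CVXPY sketch of the worst-case-reward problem of Eqs. (16)/(18), with the optional zero-mean constraint of Appendix C; the matrix Psi and the function name are illustrative assumptions.

import cvxpy as cp
import numpy as np

def worst_case_reward(Psi, zero_mean=False):
    # Psi: (n, d) matrix whose rows are the SFs psi^i of the current set.
    # Solves min_w max_i psi^i . w  s.t. ||w||_2 <= 1  (Eqs. 16/18),
    # optionally with the Appendix C constraint sum_i w_i = 0.
    n, d = Psi.shape
    w = cp.Variable(d)
    constraints = [cp.norm(w, 2) <= 1]
    if zero_mean:
        constraints.append(cp.sum(w) == 0)
    prob = cp.Problem(cp.Minimize(cp.max(Psi @ w)), constraints)
    prob.solve()
    return w.value, prob.value

if __name__ == "__main__":
    # With Psi = I (the orthogonal-basis example of Lemma 3) the optimal
    # value is -1/sqrt(d), matching the feasibility argument above.
    w, v = worst_case_reward(np.eye(4))
    print(v)  # approximately -0.5 for d = 4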
2,021
DISCOVERING A SET OF POLICIES FOR THE WORST CASE REWARD
SP:b7704e25b5f177afa8f8d85636d652b5079afc0e
[ "This paper proposes that SGD has the implicitly bias of reducing gradient variance via the phenomenon of thermophoresis that masses tend to flow from regions with higher temperature / variance of random walk to regions with lower temperature / variance of random walk. In the setup of two-layer neural networks trained by SGD for binary classification, the authors show the analogous phenomenon that the model is biased towards smaller activation rate and the norm of its second layer weight. The dependence of the rate at which this phenomenon happens on the learning rate and batch size are verified in the experiments." ]
A central ingredient in the impressive predictive performance of deep neural networks is optimization via stochastic gradient descent (SGD). While some theoretical progress has been made, the effect of SGD in neural networks is still unclear, especially during the early phase of training. Here we generalize the theory of thermophoresis from statistical mechanics and show that there exists an effective force from SGD that pushes to reduce the gradient variance in certain parameter subspaces. We study this effect in detail in a simple two-layer model, where the thermophoretic force functions to decreases the weight norm and activation rate of the units. The strength of this effect is proportional to squared learning rate and inverse batch size, and is more effective during the early phase of training when the model’s predictions are poor. Lastly we test our quantitative predictions with experiments on various models and datasets.
[]
[ { "authors": [ "Alireza Aghasi", "Afshin Abdi", "Nam Nguyen", "Justin Romberg" ], "title": "Net-trim: Convex pruning of deep neural networks with performance guarantee", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2017 }, { "authors": [ "Paschalis Bizopoulos", "Dimitrios Koutsouris" ], "title": "Sparsely activated networks", "venue": "arXiv preprint arXiv:1907.06592,", "year": 2020 }, { "authors": [ "John Chipman" ], "title": "The soret effect", "venue": "Journal of the American Chemical Society,", "year": 1926 }, { "authors": [ "Gintare Karolina Dziugaite", "Daniel M Roy" ], "title": "Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data", "venue": null, "year": 2017 }, { "authors": [ "E.D. Eastman" ], "title": "Thermodynamics of non-isothermal systems", "venue": "Journal of the American Chemical Society,", "year": 1926 }, { "authors": [ "Fartash Faghri", "David Duvenaud", "David J. Fleet", "Jimmy Ba" ], "title": "A study of gradient variance in deep learning", "venue": "arXiv preprint arXiv:2007.04532,", "year": 2020 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2019 }, { "authors": [ "Jonathan Frankle", "Gintare Karolina Dziugaite", "Daniel M. Roy", "Michael Carbin" ], "title": "The lottery ticket hypothesis at scale", "venue": "arXiv preprint arXiv:1903.01611,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Jürgen Janek", "Carsten Korte", "Alan B. Lidiard" ], "title": "Thermodiffusion in Ionic Solids —Model Experiments and Theory, pp. 146–183", "venue": null, "year": 2002 }, { "authors": [ "Stanislaw Jastrzebski", "Maciej Szymczak", "Stanislav Fort", "Devansh Arpit", "Jacek Tabor", "Kyunghyun Cho", "Krzysztof Geras" ], "title": "The break-even point on optimization trajectories of deep neural networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On large-batch training for deep learning: Generalization gap and sharp minima", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2012 }, { "authors": [ "Mark Kurtz", "Justin Kopinsky", "Rati Gelashvili", "Alexander Matveev", "John Carr", "Michael Goin", "William Leiserson", "Sage Moore", "Nir Shavit", "Dan Alistarh" ], "title": "Inducing and exploiting activation sparsity for fast neural network inference", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Werner Köhler", "Konstantin I. 
Morozov" ], "title": "The soret effect in liquid mixtures – a review", "venue": "Journal of Non-Equilibrium Thermodynamics,", "year": 2016 }, { "authors": [ "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning overparameterized neural networks via stochastic gradient descent on structured data", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2018 }, { "authors": [ "Yuanzhi Li", "Colin Wei", "Tengyu Ma" ], "title": "Towards explaining the regularization effect of initial large learning rate in training neural networks", "venue": "arXiv preprint arXiv:1907.04595,", "year": 2020 }, { "authors": [ "Ji Lin", "Yongming Rao", "Jiwen Lu", "Jie Zhou" ], "title": "Runtime neural pruning", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2017 }, { "authors": [ "K. Simonyan", "A. Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In ICLR,", "year": 2015 }, { "authors": [ "Samuel L. Smith", "Quoc V. Le" ], "title": "A bayesian perspective on generalization and stochastic gradient descent", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "H. Tyrell", "R. Colledge" ], "title": "Thermal diffusion potentials and the soret effect", "venue": "Nature, 173:264–265,", "year": 1954 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2017 }, { "authors": [ "Mingwei Wei", "David J Schwab" ], "title": "How noise affects the hessian spectrum in overparameterized neural networks", "venue": "arXiv preprint arXiv:1910.00195,", "year": 2019 }, { "authors": [ "Alois Wurger" ], "title": "Is soret equilibrium a non-equilibrium effect", "venue": "arXiv preprint arXiv:1401.7546,", "year": 2014 }, { "authors": [ "Yang You", "Igor Gitman", "Boris Ginsburg" ], "title": "Large batch training of convolutional networks", "venue": "arXiv preprint arXiv:1708.03888,,", "year": 2017 } ]
[ { "heading": null, "text": "A central ingredient in the impressive predictive performance of deep neural networks is optimization via stochastic gradient descent (SGD). While some theoretical progress has been made, the effect of SGD in neural networks is still unclear, especially during the early phase of training. Here we generalize the theory of thermophoresis from statistical mechanics and show that there exists an effective force from SGD that pushes to reduce the gradient variance in certain parameter subspaces. We study this effect in detail in a simple two-layer model, where the thermophoretic force functions to decreases the weight norm and activation rate of the units. The strength of this effect is proportional to squared learning rate and inverse batch size, and is more effective during the early phase of training when the model’s predictions are poor. Lastly we test our quantitative predictions with experiments on various models and datasets." }, { "heading": "1 INTRODUCTION", "text": "Deep neural networks have achieved remarkable success in the past decade on tasks that were out of reach prior to the era of deep learning. Yet fundamental questions remain regarding the strong performance of over-parameterized models and optimization schemes that typically involve only first-order information, such as stochastic gradient descent (SGD) and its variants.\nIn particular, optimization via SGD is known in many cases to result in models that generalize better than those trained with full-batch optimization. To explain this, much work has focused on how SGD navigates towards so-called flat minima, which tend to generalize better than sharp minima (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017). This has been argued by nonvacuous PACBayes bounds (Dziugaite & Roy, 2017) and Bayesian evidence (Smith & Le, 2018). More recently, Wei & Schwab (2019) discuss how optimization via SGD pushes models to flatter regions within a minimal valley by decreasing the trace of the Hessian.\nHowever, these perspectives apply to models towards the end of training, whereas it is known that proper treatment of hyperparameters during the early phase is vital. In particular, when training a deep network one typically starts with a large learning rate and small batch size if possible. After training has progressed, the learning rate is annealed and decreased so that the model can be further trained to better fit the training set (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016b;a; You et al., 2017; Vaswani et al., 2017). Crucially, using a small learning rate during the first phase of training usually leads to poor generalization and also result in large gradient variance practically (Jastrzebski et al., 2020; Faghri et al., 2020).\nHowever, limited theoretical work has been done to understand the effect of SGD on the early phase of training. Jastrzebski et al. (2020) argue for the existence of a “break-even” point on an SGD trajectory. This point depends strongly on the hyperparameter settings. They argue that the breakeven point with large learning rate and small batch size tends to have a smaller leading eigenvalue of the Hessian spectrum, and this eigenvalue sets an upper bound for the leading eigenvalue beyond this point. They also present experiments showing that large learning rate SGD will reduce the variance of the gradient. 
However their analysis focuses only on the leading eigenvalue of the Hessian spectrum and requires the strong assumption that the loss function in the leading eigensubspace is quadratic.\nMeanwhile Li et al. (2020) studied the simple setting of two-layer neural networks. They demonstrate that in this model, training with large learning rate in the early phase tends to result in better generalization than training with small learning rate. To explain this, they hypothesize a separation of features in the data: easy-to-generalize yet hard-to-fit features, and hard-to-generalize, easierto-fit features. They argue that a model trained with small learning rate will memorize easy-togeneralize, hard-to-fit patterns during phase one, and then generalize worse on hard-to-generalize, easier-to-fit patterns, while the opposite scenario occurs when training with large learning rate. However, this work relies heavily on the existence of these two distinct types of features in the data and the specific network architecture. Moreover, their analysis focuses mainly on learning rate instead of the effect of SGD.\nIn this paper, we study the dynamics of model parameter motion during SGD training by borrowing and generalizing the theory of thermophoresis from physics. With this framework, we show that during SGD optimization, especially during the early phase of training, the activation rate of hidden nodes is reduced as is the growth of parameter weight norm. This effect is proportional to squared learning rate and inverse batch size. Thus, thermophoresis in deep learning acts as an implicit regularization that may improve the model’s ability to generalize.\nWe first give a brief overview of the theory of thermophoresis in physics in the next section. Then we generalize this theory to models beyond physics and derive particle mass flow dynamics microscopically, demonstrating the existence of thermophoresis and its relation to relevant hyperparameters. Then we focus on a simple two-layer model to study the effect of thermophoresis in detail. Notably, we find the thermophoretic force is strongest during the early phase of training. Finally, we test our theoretical predictions with a number of experiments, finding strong agreement with the theory." }, { "heading": "2 THERMOPHORESIS IN PHYSICS", "text": "Thermophoresis, also known as the Soret effect, describes particle mass flow in response to both diffusion and a temperature gradient. The effect was first discovered in electrolyte solutions (Ludwig, 1859; Soret, 1897; Chipman, 1926). However it was discovered in other systems such as gases, colloids, and biological fluids and solid (Janek et al., 2002; Köhler & Morozov, 2016).\nThermophoresis typically refers to particle diffusion in a continuum with a temperature gradient. In one method of analysis, the non-uniform steady-state density ρ is given by the ”Soret Equilibrium” (Eastman, 1926; Tyrell & Colledge, 1954; Wurger, 2014),\n∇ρ+ ρST∇T = 0 , (1)\nwhere T is temperature and ST is called the Soret coefficient.\nIn other work by de Groot & Mazur (1962), mass flow was calculated by non-equilibrium theory. They considered two types of processes for entropy balance: a reversible process stands for the entropy transfer and an irreversible process corresponds to the entropy production, or dissipation. The resulting mass flow induced by diffusion and temperature gradient was found to be\nJ = −D∇ρ− ρDT∇T , (2)\nwhere D is the Einstein diffusion coefficient and DT is defined as thermal diffusion coefficient. 
Comparing with the steady state in Eq. 1 and setting the flow to zero, the Soret coefficient is simply
$$S_T = \frac{D_T}{D}. \quad (3)$$
The Soret coefficient can be calculated from molecular interaction potentials based on specific molecular models (Wurger, 2014)." }, { "heading": "3 THERMOPHORESIS IN GENERAL", "text": "In this section, we first study a kind of generalized random walk whose evolution equation for a particle state with coordinates $q = \{q_i\}_{i=1,\dots,n}$ is
$$q_{t+1} = q_t - \eta \gamma f(q_t, \xi), \quad (4)$$
where $f$ is a vector function, $\gamma$ and $\xi$ are random variables, and $\eta$ is a small number controlling the step size. Notice that this is a generalized inhomogeneous random walk for the particle. Before further analysis, we note that the evolution equation 4 is similar to SGD updates in machine learning, as we will show in the next section.

To isolate the effect of thermophoresis, we assume the random walk is unbiased, in which case
$$P(\gamma f(q, \xi) = a) = P(\gamma f(q, \xi) = -a) \quad (5)$$
for an arbitrary vector $a$. Thus there is no explicit force exerted on the particle. This simplification is used to demonstrate a residual thermophoretic force in the absence of a gradient; including gradients is straightforward and corresponds to an external field that creates a bias term. We also denote the probability density, which we also call the mass density, by $\rho(q)$, and define
$$g_i(q) := \sqrt{\int \gamma^2 f_i^2(q, \xi)\, d\mu(\gamma, \xi)}, \quad (6)$$
so that $\eta g_i(q)$ is the standard deviation of the random walk in the $i$-th direction.

From a position $q$, we consider a subset of coordinate indices, $U \subseteq \{1, \dots, n\}$, wherein
$$\mathrm{sign}(f_i(q, x)) = \mathrm{sign}(f_j(q, x)) \quad \text{and} \quad \partial_i g_j(q) \ge 0 \quad (7)$$
for all $i, j \in U$. We note here that indices will correspond to parameters when we study learning dynamics. The first property is necessary for our derivation; the second condition will be used at the end to conclude that each $g_i$ decreases.

In order to study the dynamics of the particle and its density function, we focus on the probability mass flow induced by the inhomogeneous random walk. We will show that there is always a flow from regions with larger $g_i(q)$ to those with smaller $g_i(q)$ for $i \in U$, which is a generalization of thermophoresis in physics.

Since $\eta \ll 1$, the movement of the particle has a mean free path of $\eta g_i(q)$ in the $i$-th direction. Therefore the random walk equation 4 becomes
$$q_i \leftarrow q_i - \eta g_i(q) \zeta_i, \quad (8)$$
where $i = 1, \dots, n$ and $\zeta_i$ is a binary random variable with $P(\zeta_i = -1) = P(\zeta_i = 1) = 0.5$. Moreover, from Eq. 7, we also have that $\zeta_i = \zeta_j$ for all $i, j \in U$.
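A minimal 1-D simulation sketch of the walk in Eq. 8 (the specific choice of g and all names here are illustrative assumptions) makes the claimed mass flow visible: although each step is unbiased, the bulk of the probability mass drifts toward the region of smaller g.

import numpy as np

# 1-D sketch of the inhomogeneous walk q <- q - eta * g(q) * zeta (Eq. 8),
# with an assumed smooth g satisfying g' >= 0. Each step is unbiased
# (zeta = +-1 with probability 0.5), yet mass accumulates where g is small.
rng = np.random.default_rng(0)
eta = 0.05

def g(q):
    return 1.0 + np.tanh(q)          # hypothetical g with nonnegative slope

q = np.zeros(50_000)                 # many independent particles
for _ in range(1_000):
    zeta = rng.choice([-1.0, 1.0], size=q.shape)
    q -= eta * g(q) * zeta

# The mean is preserved (the walk is a martingale), but the median and the
# mass fraction at q < 0 reveal the thermophoretic flow toward smaller g.
print("mean:", q.mean(), "median:", np.median(q),
      "mass at q<0:", np.mean(q < 0.0))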
Next we show that the flow projected onto the subspace $U$ is always toward smaller $g_i(q)$. Notice that although $U$ can be multi-dimensional, the degree of freedom of the particle dynamics within $U$ is 1 due to the sharing of the $\zeta$'s, and therefore the mass flow projected onto it is also 1-dimensional. For each $i \in U$, we define the average flow in this dimension as the mass that enters $q_i$ from $q_i^-$ minus the mass from the opposite direction $q_i^+$. From Eq. 8 and the assumption $\eta \ll 1$, only mass close to $q_i$ will move across $q_i$ at each step. Let the farthest mass that flows across $q_i$ in one step lie at $q_i + \Delta_i^+$ and $q_i - \Delta_i^-$, where $\Delta_i^+$ and $\Delta_i^-$ are positive; they are defined implicitly by the equations $\Delta_i^+ = \eta g_i(q + \Delta^+)$ and $\Delta_i^- = \eta g_i(q - \Delta^-)$, respectively. Notice that if the random walk were homogeneous, we would have $\Delta_i^+ = \Delta_i^-$. In our inhomogeneous case, we have $\Delta_i^+ \sim \Delta_i^- \sim \eta g_i(q)$ to leading order in $\eta$, and the next-to-leading order must be calculated to obtain the difference between $\Delta_i^+$ and $\Delta_i^-$.

Now we are ready to calculate the mass flow through $q$. The mass flow projected onto the subspace $U$ is the mass that moves through $q$ from $q + \Delta^+$ minus the mass from $q - \Delta^-$, where $\Delta_i^+$ and $\Delta_i^-$ are as above for $i \in U$. It is straightforward to show (a brief derivation can be found in Appendix A.2) that
$$\Delta_i^+ - \Delta_i^- = 2\eta^2 \sum_{j \in U} g_j(q)\, \partial_j g_i(q) + O(\eta^3). \quad (9)$$
With this, we can compute the flow density $J$ through $q$:
$$J = -\eta^2 \sqrt{\sum_{i \in U} g_i^2(q)}\, \sum_{i \in U} g_i(q)\, \partial_i \rho(q) \;-\; \eta^2 \frac{\sum_{i,j \in U} g_i(q) g_j(q)\, \partial_j g_i(q)}{\sqrt{\sum_{i \in U} g_i^2(q)}}\, \rho(q) + O(\eta^3), \quad (10)$$
where the derivation can be found in Appendix A.3. This can be understood as described in Diagram 1. Notice that this probability mass flow consists of two terms at order $\eta^2$. The first represents diffusion, and the second corresponds to our goal in this section, namely the flow due to thermophoresis. By the second property of the $g_i$ in Eq. 7, we find that the coefficient of thermophoresis (Soret coefficient), defined as
$$c := -\eta^2 \frac{\sum_{i,j \in U} g_i(q) g_j(q)\, \partial_j g_i(q)}{2\sqrt{\sum_{i \in U} g_i^2(q)}} \le 0, \quad (11\text{-}12)$$
is nonpositive. This means that there is an effective force exerted on a particle at position $q$ towards the smaller-variance regime (by analogy, the colder area). The coefficient is proportional to $\eta^2$." }, { "heading": "4 MATHEMATICAL MODEL", "text": "" }, { "heading": "4.1 TWO-LAYER MODEL", "text": "To study the physics behind SGD optimization in detail, we consider the simple setting of one-hidden-layer neural networks. The network is a function $f : \mathbb{R}^M \to \mathbb{R}$ parameterized as follows:
$$f(x; V, W, b) = V \sigma(Wx + b) = \sum_{i=1}^{N} V_i\, \sigma\Big(\sum_{j=1}^{M} W_{ij} x_j + b_i\Big).$$
We also write $f(x)$ for simplicity. The network has a scalar output, which is widely used in regression and binary classification. $x$ is the network input of dimension $M$; $W$ and $b$ are the weights and biases of the first layer, of dimensions $N \times M$ and $N$ respectively, where $N$ is the number of hidden nodes in the hidden layer; and $\sigma$ is the ReLU activation function, $\sigma(a) = \max(0, a)$.

The dataset is drawn i.i.d. from the data distribution, $\{(x, y) \mid (x, y) \sim D(x, y)\}$. In this paper we consider two cases: either $x_i \ge 0$ (as is usual in convolutional neural networks or intermediate layers) or $x_i \sim N(0, 1)$ (often the case when the data are normalized). Here $y \in Y$ and we denote the marginal distribution of $y$ by $D_Y$. Finally, we have the loss function $L : \mathbb{R} \times Y \to \mathbb{R}^+$." }, { "heading": "4.2 TRAINING", "text": "We consider optimization via SGD, where the gradient of the loss on a batch of size $|B|$ is given by
$$\nabla L_B(V, W, b) = \frac{1}{|B|} \sum_{i=1}^{|B|} \nabla_f L(f(x_i), y_i)\, \nabla f(x_i). \quad (13)$$
In our two-layer model, we have
$$\nabla_{V_i} f(x) = \sigma\Big(\sum_{j=1}^{M} W_{ij} x_j + b_i\Big), \quad \nabla_{W_{ij}} f(x) = V_i x_j\, \sigma'\Big(\sum_{k=1}^{M} W_{ik} x_k + b_i\Big), \quad \nabla_{b_i} f(x) = V_i\, \sigma'\Big(\sum_{k=1}^{M} W_{ik} x_k + b_i\Big).$$
For an input vector $x$, we call the hidden node $i$ activated when $\sigma'(\sum_{k} W_{ik} x_k + b_i) = 1$, or equivalently $\sum_{k} W_{ik} x_k + b_i > 0$. We thus define the activation rate of the network to be
$$\overline{\sigma'} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_x\, \sigma'\Big(\sum_{k=1}^{M} W_{ik} x_k + b_i\Big). \quad (14)$$
This is an important concept to which we will return.

Henceforth, we drop the index $i$, since the dynamical equations are invariant with respect to the node index, and write $V := V_i$, $W_j := W_{ij}$ and $b := b_i$ by abuse of notation. We also denote
$$h_v(V, W, b) := \mathbb{E}_x[\nabla_V f(x)]^2, \quad h_w(V, W, b) := \mathbb{E}_x[\nabla_W f(x)]^2, \quad h_b(V, W, b) := \mathbb{E}_x[\nabla_b f(x)]^2,$$
where $\mathbb{E}_x$ denotes the average over the input $x$. We have the following property for the functions $h$:

Property 4.1.
Given $W$, if $V_1^2 \le V_2^2$ and $b_1 \le b_2$, we have
$$h_v(V_1, W, b_1) \le h_v(V_2, W, b_2), \quad h_w(V_1, W, b_1) \le h_w(V_2, W, b_2), \quad h_b(V_1, W, b_1) \le h_b(V_2, W, b_2), \quad \overline{\sigma'}(V_1, W, b_1) \le \overline{\sigma'}(V_2, W, b_2).$$
Here we define $a \le b$ as $\min(b - a) \ge 0$.

It is straightforward to see the following:

Property 4.2. In the case $x_i \ge 0$, if $V_1^2 \le V_2^2$, $W_1 \le W_2$ and $b_1 \le b_2$, we have
$$h_v(V_1, W_1, b_1) \le h_v(V_2, W_2, b_2), \quad h_w(V_1, W_1, b_1) \le h_w(V_2, W_2, b_2), \quad h_b(V_1, W_1, b_1) \le h_b(V_2, W_2, b_2), \quad \overline{\sigma'}(V_1, W_1, b_1) \le \overline{\sigma'}(V_2, W_2, b_2).$$

In our analysis, we focus for simplicity on binary classification tasks, where the loss is typically the binary cross-entropy: $L(f, y) = y \ln p(f) + (1 - y) \ln(1 - p(f))$ with $p(f) = 1/(1 + \exp(f))$. We thus have
$$\nabla_f L(f, y) = p(f) - y. \quad (15)$$
Substituting into Eq. 13, the mini-batch gradient becomes
$$\nabla L_B(V, W, b) = \frac{1}{|B|} \sum_{i=1}^{|B|} (p_i - y_i)\, \nabla f(x_i). \quad (16)$$
Our results also hold straightforwardly for the squared error." }, { "heading": "5 THERMOPHORESIS IN DEEP LEARNING", "text": "In this section, we show that the parameters of the one-hidden-layer model and their dynamics approximately satisfy the criteria of the previous section, that the biases are pushed negative and $V^2$ is suppressed during training, and that both effects are proportional to the squared learning rate $\eta^2$ and the inverse batch size $1/|B|$.

The gradient that drives model training is defined in Eq. 16. Because the training samples are i.i.d., the variance of the gradient is
$$\mathrm{var}\big[\nabla L_B(V, W, b)\big] = \mathrm{var}\Big[\frac{1}{|B|} \sum_{i=1}^{|B|} (p_i - y_i)\, \nabla f(x_i)\Big] \quad (17)$$
$$= \frac{1}{|B|}\, \mathrm{var}\big[(p - y)\, \nabla f(x)\big]. \quad (18)$$
The gradient has two components: $p - y$, corresponding to $\gamma$ in Eq. 4, and $\nabla f(x)$, corresponding to $f(q, \xi)$. We assume that the dataset is unbiased, in which case $P(y = 0) = P(y = 1) = 0.5$ and $P(p - y = a) = P(p - y = -a)$, and that $p - y$ and $\nabla f(x)$ are independent during the first period of training, given that the dataset is complex and cannot be learned by a linear model. It is then straightforward to see that Eq. 5 is satisfied.

Next we show that $V$ and $b$ are always in the set $U$, i.e., they satisfy the conditions of Eq. 7. First, if $V_i \ge 0$, we have
$$\nabla_{V_i} f(x) = \sigma\Big(\sum_{j=1}^{M} W_{ij} x_j + b_i\Big) \ge 0 \quad (19\text{-}20)$$
and
$$\nabla_{b_i} f(x) = V_i\, \sigma'\Big(\sum_{k=1}^{M} W_{ik} x_k + b_i\Big) \ge 0. \quad (21\text{-}22)$$
Since we also have Property 4.1, the conditions in Eq. 7 are satisfied. If $V_i < 0$, we consider a coordinate transform that maps $V_i$ to $\bar{V}_i = -V_i$; it is easy to show that Eq. 7 is again satisfied after this transform.

Next we consider $W$. The gradient of $f$ with respect to $W_{ij}$ is the product of $\nabla_{b_i} f$ and $x_j$. If the $x_j$ for $j = 1, \dots, M$ are always $\ge 0$, which is usually the case in convolutional neural networks, it is easy to show that $W_{ij}$ is also in the set $U$ and that a smaller $W_{ij}$ corresponds to a smaller variance, according to Property 4.2. If $x_j \sim N(0, 1)$, on the other hand, $W$ is excluded from $U$. In the following, we only consider the case $x_i \sim N(0, 1)$, where
$$g_V(V_i, W_i, b_i) = \frac{1}{\sqrt{|B|}} \sqrt{\int \Big[(p - y)\, \sigma\Big(\sum_{j=1}^{M} W_{ij} x_j + b_i\Big)\Big]^2 d\mu(x, y)} \;:=\; \frac{1}{\sqrt{|B|}}\, \phi_1(W_i, b_i), \quad (23\text{-}24)$$
$$g_b(V_i, W_i, b_i) = \frac{1}{\sqrt{|B|}} \sqrt{\int \Big[(p - y)\, V_i\, \sigma'\Big(\sum_{j=1}^{M} W_{ij} x_j + b_i\Big)\Big]^2 d\mu(x, y)} \;:=\; \frac{V_i}{\sqrt{|B|}}\, \phi_2(W_i, b_i), \quad (25\text{-}26)$$
with $g$ defined as in Eq. 6. Inserting these into Eq. 10, we find the thermophoretic flow density to be
$$J_t = \frac{\eta^2}{|B|}\, \psi, \quad (27)$$
where $\psi = \frac{V_i \phi_1 \phi_2^2 + V_i \phi_1 \phi_2 \partial_b \phi_1 + V_i^3 \phi_2^2 \partial_b \phi_2}{2\sqrt{\phi_1^2 + V_i^2 \phi_2^2}}\, \rho$. This flow biases the model toward smaller $b_i$ and smaller $V_i$ (toward larger $V_i$ if $V_i < 0$), with a strength proportional to the squared learning rate $\eta^2$ and the inverse batch size.
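A small numeric sketch (all sizes, and the modeling of p − y as an independent ±0.5 coin, are illustrative assumptions) estimates φ₁ and φ₂ of Eqs. 23–26 by Monte Carlo and exhibits the monotonicity in b asserted by Property 4.1:

import numpy as np

rng = np.random.default_rng(0)
M = 100                                   # assumed input dimension
W = rng.normal(size=M) / np.sqrt(M)       # one hidden unit's first-layer weights

def phi_estimates(b, n=50_000):
    # Monte-Carlo estimates of phi_1(W, b) and phi_2(W, b) from Eqs. 23-26,
    # with x ~ N(0, I) and (p - y) modeled as an independent +-0.5 coin,
    # so E[(p - y)^2] = 0.25.
    x = rng.normal(size=(n, M))
    pre = x @ W + b                       # pre-activations W.x + b
    phi1 = np.sqrt(0.25 * np.mean(np.maximum(pre, 0.0) ** 2))
    phi2 = np.sqrt(0.25 * np.mean(pre > 0.0))
    return phi1, phi2

for b in (-1.0, 0.0, 1.0):
    print(b, phi_estimates(b))            # both estimates grow with b

The per-parameter noise scales $g_V = \phi_1/\sqrt{|B|}$ and $g_b = V\phi_2/\sqrt{|B|}$ then follow directly, making the $1/|B|$ dependence in Eq. 27 explicit.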
We also note that $\psi$ can be bounded by a function multiplied by the scalar $\int (p - y)^2\, d\mu(x, y)$. This scalar measures the L2 distance between the model predictions and the sample labels, and it decreases on average during training as the predictions improve. Thus thermophoresis is more effective during the early phase of training.

Therefore there exists an effective force that pushes to decrease the model's activation rate, defined in Eq. 14, and to reduce the weight norm of the second layer. The strength of this force scales as
$$F \propto \frac{\eta^2}{|B|}. \quad (28)$$
In Li & Liang (2018), Theorem 4.1 establishes a linear relation between the learning rate and the number of training iterations needed to reach a target training error, for small learning rates. This implies that if one uses a learning rate $k$ times larger, the model requires $k$ times fewer optimization steps for the same training performance. Together with our results, this implies the following: for the same model and initialization, comparing two optimization schemes with $\eta_1 \le \eta_2$, each achieving a given training error, the activation rate for scheme 1 will be at least as large as that for scheme 2, i.e., $\overline{\sigma}_1 \ge \overline{\sigma}_2$. Similarly, denoting the weight norms for schemes 1 and 2 by $v_1$ and $v_2$, we have $v_1 \ge v_2$.

Model sparsity can mean two different things: sparsity of the weights, and the frequency with which units are activated, called the activation rate. Intuitively, a sparser model has a smaller capacity (Bizopoulos & Koutsouris, 2020; Kurtz et al., 2020; Aghasi et al., 2017; Lin et al., 2017). Furthermore, certain forms of model pruning have been shown to improve generalization (Frankle & Carbin, 2019; Frankle et al., 2020). Therefore one might surmise that a smaller activation rate in general correlates with generalization. Moreover, in Appendix A.5 we construct an upper bound on the Hessian norm, and this upper bound depends monotonically on the activation rate and the weight norm. This also sheds light on the connection between sparsity, weight norm, and generalization.

Our theory can also be generalized beyond two-layer models: we have shown that there exists an effective force from SGD in deep neural networks that reduces the gradient variance, and we have derived its quantitative properties." }, { "heading": "6 EXPERIMENTS", "text": "The essential result of the previous section is that there exists an effective force from SGD, analogous to thermophoresis, that pushes to decrease the gradient variance and, in one-hidden-layer neural networks, decreases the model's activation rate and reduces the weight norm of the second layer. The strength of the force is proportional to the squared learning rate and the inverse batch size. In this section, we present experiments to test these results; further experiments can be found in the appendix.

First we consider a one-hidden-layer model with input dimension 100 and 100 hidden units. The input data $x$ are distributed as $N(0, I)$, where $I$ is the identity matrix, and the labels are chosen uniformly at random from $\{0, 1\}$. The batch size is set to 1 and the learning rate is varied from 0.025 to 0.1. We calculate the activation rate and the L2 norm of the vector $V$ after each training iteration. The results for the activation rate are shown in the first row of Fig. 2. The leftmost plot shows the activation rate as a function of the true iteration count on the x-axis; the activation rate decreases during training, and the decrease is more rapid with larger learning rates. In the middle plot we rescale the x-axis by a factor proportional to the learning rate $\eta$ (for example, if the raw iteration number for $\eta = 0.05$ is 1000 and the rescaled iteration number is also 1000, then the rescaled iteration number 1000 for $\eta = 0.1$ corresponds to a true iteration number of 500). This rescaling offsets the difference in movement caused by the difference in learning rates.
It is clear that even after this rescaling we still observe that larger learning rates decrease the activation rate faster. Finally, in the rightmost plot we rescale the x-axis by a factor proportional to the squared learning rate $\eta^2$. We see that all trajectories now overlap, which matches our prediction in the previous section that the rate of decrease is proportional to $\eta^2$.

We next test our results for deep neural networks beyond the two-layer model. Instead of the activation rate and the weight norm, we plot the gradient variance as predicted by our theory. The network architectures are a 6-layer fully-connected network with hidden layers of size 100, and a 6-layer convolutional network with 10 channels, kernel size 5x5 and stride 1, except for the final fully-connected output layer. The results are shown in the second and third rows of Fig. 2, respectively.

Further experiments can be found in Appendix A.6, where we show that the evolution of the weight norm, the scaling with batch size, and other results are consistent with our theoretical predictions. We also study other models and other datasets, including CIFAR10." }, { "heading": "7 CONCLUSION", "text": "In this paper we generalized the theory of thermophoresis from statistical mechanics and showed that there exists an effective thermophoretic force from SGD that pushes to reduce the gradient variance. We studied this effect in detail for a simple two-layer model, where the thermophoretic force serves to decrease the weight norm and the activation rate of the units. We found that the strength of this effect is proportional to the square of the learning rate, inversely proportional to the batch size, and more effective during the early phase of training when the model's predictions are poor. We found good agreement between our predictions and experiments on various models and datasets." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 PROOF OF PROPERTY 4.1", "text": "Proof. By definition, we have
$$\sigma\Big(\sum_{j=1}^{M} W_j x_j + b_1\Big) \le \sigma\Big(\sum_{j=1}^{M} W_j x_j + b_2\Big), \quad (29)$$
$$\sigma'\Big(\sum_{j=1}^{M} W_j x_j + b_1\Big) \le \sigma'\Big(\sum_{j=1}^{M} W_j x_j + b_2\Big). \quad (30)$$
For $h_v$,
$$h_v(V_1, W, b_1) = \mathbb{E}_x\, g_v^2(x, V_1, W, b_1) = \mathbb{E}_x\, \sigma^2\Big(\sum_j W_j x_j + b_1\Big) \le \mathbb{E}_x\, \sigma^2\Big(\sum_j W_j x_j + b_2\Big) = h_v(V_2, W, b_2).$$
Similarly, we have
$$h_{w_i}(V_1, W, b_1) = \mathbb{E}_x\, V_1^2 x_i^2\, \sigma'\Big(\sum_k W_k x_k + b_1\Big) \le \mathbb{E}_x\, V_2^2 x_i^2\, \sigma'\Big(\sum_k W_k x_k + b_1\Big) \le \mathbb{E}_x\, V_2^2 x_i^2\, \sigma'\Big(\sum_k W_k x_k + b_2\Big) = h_{w_i}(V_2, W, b_2).$$
Clearly the inequality also holds for $h_b$." }, { "heading": "A.2 DERIVATION OF EQ. 9", "text": "$$\Delta_i^+ - \Delta_i^- = \eta g_i(q + \Delta^+) - \eta g_i(q - \Delta^-) \quad (31)$$
$$= \eta \sum_{j \in U} (\Delta_j^+ + \Delta_j^-)\, \partial_j g_i(q) + O(\eta \Delta^2) \quad (32)$$
$$= 2\eta^2 \sum_{j \in U} g_j(q)\, \partial_j g_i(q) + O(\eta^3). \quad (33)$$" }, { "heading": "A.3 DERIVATION OF EQ. 10", "text": "$$J = -\tfrac{1}{2}|\Delta^+|\rho(q + \Delta^+) + \tfrac{1}{2}|\Delta^-|\rho(q - \Delta^-)$$
$$= \tfrac{1}{2}|\Delta^+|\big[\rho(q - \Delta^-) - \rho(q + \Delta^+)\big] + \tfrac{1}{2}\big(|\Delta^-| - |\Delta^+|\big)\rho(q - \Delta^-)$$
$$= -\tfrac{1}{2}|\Delta^+|\,|\Delta^+ + \Delta^-|\, \frac{\rho(q + \Delta^+) - \rho(q - \Delta^-)}{|\Delta^+ + \Delta^-|} - \tfrac{1}{2}\frac{|\Delta^+|^2 - |\Delta^-|^2}{|\Delta^+| + |\Delta^-|}\rho(q - \Delta^-)$$
$$\approx -\tfrac{1}{2}|\Delta^+|(\Delta^+ + \Delta^-) \cdot \nabla\rho(q) - \tfrac{1}{2}\frac{\sum_{i \in U}(\Delta_i^+ + \Delta_i^-)(\Delta_i^+ - \Delta_i^-)}{|\Delta^+| + |\Delta^-|}\rho(q - \Delta^-)$$
$$= -\eta^2 \sqrt{\sum_{i \in U} g_i^2(q)}\, \sum_{i \in U} g_i(q)\, \partial_i\rho(q) - \eta^2 \frac{\sum_{i,j \in U} g_i(q) g_j(q)\, \partial_j g_i(q)}{\sqrt{\sum_{i \in U} g_i^2(q)}}\, \rho(q) + O(\eta^3).$$" }, { "heading": "A.4 SANITY CHECK OF GENERALIZED THEORY", "text": "If $|U| = 1$ and $g_i(q) = g_i(q_i)$, the model reduces to the aforementioned physics model and the Soret coefficient reduces to
$$c = \frac{\eta^2}{2}\, g(q)\, g'(q) \quad (34) \;=\; \Big[\Big(\frac{\eta g(q)}{2}\Big)^2\Big]' \quad (35) \;\approx\; \nabla T, \quad (36)$$
where $T$ is the effective temperature in the model. This result is consistent with the thermophoresis model in physics.
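As an additional check, a tiny numeric sketch (the specific g is an illustrative assumption) solves the implicit definitions of Δ⁺ and Δ⁻ in one dimension by fixed-point iteration and confirms the expansion in Eq. 9:

import numpy as np

# Numeric check of Eq. 9 in one dimension: solve the implicit definitions
# Delta+ = eta*g(q + Delta+), Delta- = eta*g(q - Delta-) by fixed-point
# iteration and compare Delta+ - Delta- with the prediction 2*eta^2*g*g'.
g = lambda q: 1.0 + 0.3 * np.sin(q)        # assumed smooth positive g
gp = lambda q: 0.3 * np.cos(q)             # its derivative
eta, q = 1e-3, 0.7

dp = dm = eta * g(q)
for _ in range(50):                         # fixed-point iterations
    dp = eta * g(q + dp)
    dm = eta * g(q - dm)

print(dp - dm, 2 * eta**2 * g(q) * gp(q))   # agree up to O(eta^3)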
" }, { "heading": "A.5 SPARSITY, WEIGHT NORM AND THEIR RELATION TO GENERALIZATION", "text": "In this section, we demonstrate how sparsity is related to the Hessian norm. We first write the model's probabilistic prediction on a $C$-class classification task as
$$p_k^{\mu} = \frac{\exp z_k^{\mu}}{\sum_{l=1}^{C} \exp z_l^{\mu}}, \quad (37)$$
where $k$ indexes the class, $\mu$ is the data index, $z$ is the model output, and $C$ is the total number of categories. We consider the cross-entropy loss
$$L(w) = -\frac{1}{B} \sum_{\mu=1}^{B} \sum_{k=1}^{C} y_k^{\mu} \log p_k^{\mu}, \quad (38)$$
where $y$ denotes the sample labels and $p$ the model's probability prediction, as above. We denote the loss for an individual sample by
$$L^{\mu} = -\sum_{k=1}^{C} y_k^{\mu} \log p_k^{\mu}. \quad (39)$$
The gradient with respect to the model output is
$$(\nabla_z L^{\mu})_k = -y_k^{\mu} + p_k^{\mu}, \quad (40)$$
and it is easy to show that the Hessian with respect to the output is
$$(\nabla_z^2 L^{\mu})_{kl} = \delta_{kl}\, p_k^{\mu} - p_k^{\mu} p_l^{\mu}. \quad (41)$$
Therefore the Hessian with respect to the model parameters is
$$H^{\mu} = \nabla_w^2 L(z(w)) \quad (42) \;=\; \nabla_w(\nabla_z L \cdot \nabla_w z) \quad (43) \;=\; (\nabla_w z)(\nabla_z^2 L)(\nabla_w z) + \nabla_z L\, \nabla_w^2 z \quad (44) \;\approx\; (\nabla_w z^{\mu})_{ij} (\nabla_z^2 L^{\mu})_{jk} (\nabla_w z^{\mu})_{kl}. \quad (45)$$
To study the spectrum of the Hessian, we calculate the trace:
$$\mathrm{Tr}(H^{\mu}) \approx \mathrm{Tr}\big((\nabla_w z^{\mu})(\nabla_z^2 L^{\mu})(\nabla_w z^{\mu})^T\big) \quad (46) \;=\; \mathrm{Tr}\big((\nabla_z^2 L^{\mu})(\nabla_w z^{\mu})^T(\nabla_w z^{\mu})\big) \quad (47) \;=\; \mathrm{Tr}(P K) \quad (48) \;\le\; \mathrm{Tr}(P)\, \mathrm{Tr}(K), \quad (49)$$
where
$$P = \nabla_z^2 L^{\mu} \quad (50), \qquad K_{\mu\nu} = \sum_{l} \sum_{ij} \Big(\frac{\partial z_{\mu}}{\partial w_{ij}^l}\Big)\Big(\frac{\partial z_{\nu}}{\partial w_{ij}^l}\Big). \quad (51)$$
The trace of $K$ can then be calculated by the chain rule:
$$\mathrm{Tr}(K) = \sum_{\mu} \sum_{l} \sum_{ij} \Big(\frac{\partial z_{\mu}}{\partial w_{ij}^l}\Big)^2 \quad (52) \;=\; \sum_{l} \Big(\sum_{i} (\delta_i^l[\mu])^2 \sum_{j} (h_j^{l-1})^2\Big), \quad (53)$$
where $\delta$ and $h$ carry the backward and forward information respectively; they are defined as
$$\delta_i^l[\mu] = \delta_{\mu n_L} W^L_{n_L n_{L-1}} \sigma' \cdots W^{l+1}_{n_{l+2}\, i}\, \sigma', \quad (54)$$
$$h_j^{l-1} = \sigma\, W^{l-1}_{j n_l}\, \sigma \cdots W^1_{n_1 n_0}\, x_{n_0}. \quad (55)$$
It can further be shown that
$$\sum_{i} (\delta_i^l[\mu])^2 = \mathrm{Tr}\big(W^L \sigma' \cdots W^{l+1} \sigma' \sigma' W^{l+1} \cdots \sigma' \bar{W}^L\big) \quad (56\text{-}57) \;\le\; \prod_{n=l+1}^{L} \mathrm{Tr}(\sigma' W^n W^n \sigma'), \quad (58)$$
as well as
$$\sum_{j} (h_j^{l-1})^2 \le \|X\|^2 \prod_{n=1}^{l-1} \mathrm{Tr}(W^n \sigma \sigma W^n). \quad (59)$$
Together with the previous calculations and the definition of $K$, we have
$$\mathrm{Tr}(K) \le \|X\|^2 \sum_{l} \prod_{n=1}^{l-1} \mathrm{Tr}(W^n \sigma \sigma W^n) \prod_{n=l+1}^{L} \mathrm{Tr}(\sigma' W^n W^n \sigma') \quad (60) \;=\; \|X\|^2 \sum_{l} \prod_{n=1}^{l-1} \|\sigma W^n\|_F^2 \prod_{n=l+1}^{L} \|W^n \sigma'\|_F^2. \quad (61)$$
Finally, we derive an upper bound for the trace of the Hessian:
$$\mathrm{Tr}(H^{\mu}) \le \mathrm{Tr}(P)\, \|X\|^2 \sum_{l} \prod_{n=1}^{l-1} \|\sigma W^n\|_F^2 \prod_{n=l+1}^{L} \|W^n \sigma'\|_F^2. \quad (62)$$
Notice that the activation rate and the weight norm control the magnitudes of $\|\sigma W^n\|_F^2$ and $\|W^n \sigma'\|_F^2$. Therefore a smaller activation rate and a smaller weight norm lead to a tighter upper bound on the Hessian trace, and thus indicate a smaller matrix norm. This analysis connects sparsity with the Hessian norm, specifically with the Hessian trace." }, { "heading": "A.6 MORE EXPERIMENTS", "text": "In this section, we design extensive experiments to test the obtained results. First, we consider binary classification with the BCE loss. The model has the form
$$f(x) = V \sigma(Wx + b). \quad (63)$$
The input dimension is 10 and the number of hidden nodes is also 10. $x$ is distributed as $N(0, I)$, where $I$ is the identity matrix, and the label is chosen uniformly at random from $\{0, 1\}$. For the first hyperparameter setting, the batch size is 1 and the learning rate varies from 0.025 to 0.1. We calculate the activation rate and the L2 norm of the vector $V$ after each training iteration. The results for the activation rate and the weight norm are shown in Fig. 3 and Fig. 4, respectively. Both figures contain three plots. The leftmost plots use the raw iteration count as the x-axis; both the activation rate and the weight norm decrease in all cases.
Additionally, the rate of decrease is larger with larger learning rates. The middle plots use a rescaled iteration count as the x-axis, with a rescaling factor proportional to the learning rate $\eta$ (for example, if the raw iteration number for $\eta = 0.05$ is 1000 and the rescaled iteration number is also 1000, then the rescaled iteration number 1000 for $\eta = 0.1$ corresponds to a true iteration number of 500). This rescaling offsets the difference in movement caused by the difference in learning rates. It is clear that, even after this rescaling, we still observe differences in the activation rate and the weight norm across learning rates. Lastly, the rightmost plots use a rescaled iteration count with a factor proportional to the squared learning rate $\eta^2$. All trajectories then overlap with each other, matching our prediction in the previous section that the rate of decrease is proportional to $\eta^2$.

For the second hyperparameter setting, the learning rate is fixed at 0.05 and the batch size varies from 1 to 3. Again we calculate the activation rate and the L2 norm of the vector $V$ after each training iteration. The results for the activation rate and the weight norm are shown in Fig. 5 and Fig. 6, respectively. We observe tendencies similar to those of the previous hyperparameter setting: both the activation rate and the weight norm decrease in all cases, while a difference in the rate of decrease appears in the left plots due to the difference in batch size. This difference can be offset by rescaling the x-axis by a factor proportional to $1/|B|$, which yields the right plots, consistent with our theoretical prediction.

Subsequently we consider the second case discussed in the previous section. This is also binary classification with the BCE loss; the model, however, has the form
$$f(x) = V \sigma(Wx). \quad (64)$$
The input dimension is 10 and the number of hidden nodes is also 10. $x$ is distributed uniformly between 0 and 1 as $U(0, 1)$ and the label is chosen uniformly at random from $\{0, 1\}$. The first setting is the same as the first setting of the previous experiment. The results for the activation rate and the weight norm are shown in Fig. 7 and Fig. 8, respectively. As in the previous experiment, both figures contain three plots: the leftmost, middle, and rightmost plots use as x-axis the raw iteration count, the iteration count rescaled by a factor proportional to $\eta$, and the iteration count rescaled by a factor proportional to $\eta^2$, respectively. The fact that all trajectories overlap with each other matches our prediction in the previous section that the rate of decrease is proportional to $\eta^2$.

The second setting is likewise analogous to the second setting of the previous experiment: we fix the learning rate at 0.05 and vary the batch size from 1 to 3. The results for the activation rate and the weight norm are shown in Fig. 9 and Fig. 10, respectively. Both the activation rate and the weight norm decrease in all cases, while a difference in the rate of decrease appears in the left plots due to the difference in batch size. This difference can be offset by rescaling the x-axis by a factor proportional to $1/|B|$, which yields the right plots, consistent with our theoretical prediction.

Furthermore, we use the real image dataset CIFAR10 for the next experiment instead of artificial data. The model in this experiment has one hidden layer with 300 hidden nodes. We first fix the batch size at 1000 and vary the learning rate $\eta$. The resulting decreases in the activation rate and the weight norm are shown in Fig. 11 and Fig. 12, respectively.
We then fix the learning rate at 0.02 and vary the batch size. The resulting decreases in the activation rate and the weight norm are shown in Fig. 13 and Fig. 14, respectively. The results match our theoretical predictions; we omit a detailed analysis here as it parallels the discussion of the previous experiments." } ]
2,020
null
SP:f0bf4f7a726d20e1ebf8d45033a61e3034ede044
[ "This paper applies federated learning to steering wheel prediction for autonomous driving. \"Federated learning\" in this draft mainly refers to an on-device distributed training algorithm where each edge device hosts its private data and performs local updates (model training) and send the updates back to a central server to aggregate. More specifically, this paper uses the most well-known algorithm in federated learning, FedAvg (McMahan et al. 2017). " ]
With the development of computation capability in devices, companies are eager to utilize ML/DL methods to improve their service quality. However, with traditional Machine Learning approaches, companies need to build up a powerful data center to collect data and perform centralized model training, which turns out to be expensive and inefficient. Federated Learning has been introduced to solve this challenge. Because of its characteristics such as model-only exchange and parallel training, the technique can not only preserve user data privacy but also accelerate model training speed. In this paper, we introduce an approach to end-to-end on-device Machine Learning by utilizing Federated Learning. We validate our approach with an important industrial use case, the wheel steering angle prediction in the field of autonomous driving. Our results show that Federated Learning can significantly improve the quality of local edge models and reach the same accuracy level as compared to the traditional centralized Machine Learning approach without its negative effects. Furthermore, Federated Learning can accelerate model training speed and reduce the communication overhead, which proves that this approach has great strength when deploying ML/DL components to real-world embedded systems.
[]
[ { "authors": [ "Mariusz Bojarski", "Davide Del Testa", "Daniel Dworakowski", "Bernhard Firner", "Beat Flepp", "Prasoon Goyal", "Lawrence D Jackel", "Mathew Monfort", "Urs Muller", "Jiakai Zhang" ], "title": "End to end learning for self-driving cars", "venue": "arXiv preprint arXiv:1604.07316,", "year": 2016 }, { "authors": [ "Jan Bosch", "Ivica Crnkovic", "Helena Holmström Olsson" ], "title": "Engineering ai systems: A research agenda", "venue": "arXiv preprint arXiv:2001.07522,", "year": 2020 }, { "authors": [ "Shuyang Du", "Haoli Guo", "Andrew Simpson" ], "title": "Self-driving car steering angle prediction based on image recognition", "venue": "arXiv preprint arXiv:1912.05440,", "year": 2019 }, { "authors": [ "Hesham M Eraqi", "Mohamed N Moustafa", "Jens Honer" ], "title": "End-to-end deep learning for steering autonomous vehicles considering temporal dependencies", "venue": "arXiv preprint arXiv:1710.03804,", "year": 2017 }, { "authors": [ "Gunnar Farnebäck" ], "title": "Two-frame motion estimation based on polynomial expansion", "venue": "In Scandinavian conference on Image analysis,", "year": 2003 }, { "authors": [ "Nelson Fernandez" ], "title": "Two-stream convolutional networks for end-to-end learning of self-driving cars", "venue": "arXiv preprint arXiv:1811.05785,", "year": 2018 }, { "authors": [ "Andrew Hard", "Kanishka Rao", "Rajiv Mathews", "Swaroop Ramaswamy", "Françoise Beaufays", "Sean Augenstein", "Hubert Eichner", "Chloé Kiddon", "Daniel Ramage" ], "title": "Federated learning for mobile keyboard prediction", "venue": "arXiv preprint arXiv:1811.03604,", "year": 2018 }, { "authors": [ "Berthold KP Horn", "Brian G Schunck" ], "title": "Determining optical flow", "venue": "In Techniques and Applications of Image Understanding,", "year": 1981 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Jakub Konečnỳ", "H Brendan McMahan", "Daniel Ramage", "Peter Richtárik" ], "title": "Federated optimization: Distributed machine learning for on-device intelligence", "venue": "arXiv preprint arXiv:1610.02527,", "year": 2016 }, { "authors": [ "Xiang Li", "Kaixuan Huang", "Wenhao Yang", "Shusen Wang", "Zhihua Zhang" ], "title": "On the convergence of fedavg on non-iid data", "venue": null, "year": 1907 }, { "authors": [ "Lucy Ellen Lwakatare", "Aiswarya Raj", "Jan Bosch", "Helena Holmström Olsson", "Ivica Crnkovic" ], "title": "A taxonomy of software engineering challenges for machine learning systems: An empirical investigation", "venue": "In International Conference on Agile Software Development,", "year": 2019 }, { "authors": [ "Alexandra L’heureux", "Katarina Grolinger", "Hany F Elyamany", "Miriam AM Capretz" ], "title": "Machine learning with big data: Challenges and approaches", "venue": "IEEE Access,", "year": 2017 }, { "authors": [ "Alexandra L’heureux", "Katarina Grolinger", "Hany F Elyamany", "Miriam AM Capretz" ], "title": "Machine learning with big data: Challenges and approaches", "venue": "IEEE Access,", "year": 2017 }, { "authors": [ "Swaroop Ramaswamy", "Rajiv Mathews", "Kanishka Rao", "Françoise Beaufays" ], "title": "Federated learning for emoji prediction in a mobile keyboard", "venue": null, "year": 1906 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Two-stream convolutional networks for action recognition in videos", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ 
"Rodolfo Valiente", "Mahdi Zaman", "Sedat Ozer", "Yaser P Fallah" ], "title": "Controlling steering angle for cooperative self-driving vehicles utilizing cnn and lstm-based deep networks", "venue": "IEEE Intelligent Vehicles Symposium (IV),", "year": 2019 }, { "authors": [ "Du Zhang", "Jeffrey JP Tsai" ], "title": "Machine learning and software engineering", "venue": "Software Quality Journal,", "year": 2003 } ]
[ { "heading": "1 INTRODUCTION", "text": "With the development of computation capability in devices, Machine Learning and Deep Learning arouse great interests by companies who are eager to utilize ML/DL methods to improve their service quality. However, with the explosive growth of data generated on edge devices, the traditional centralized Machine Learning approaches have shown its weakness, such as data communication overhead, model compatibility, training efficiency, etc. (L’heureux et al., 2017a) Figure 1 illustrate a traditional Machine Learning approach with the centralized learning framework.\nThe diagram contains four stages: 1) data collection from multiple distributed edge devices 2) model training in a central server 3) model validation based on existing testing data 4) model deployment\nto edge devices. However, the data collected from edge devices need to be transmitted to a central server and perform model training on that enormous data set, which turns out to be inefficient and expensive. In order to solve these challenges, Federated Learning has been introduced as an efficient approach which can distribute learning tasks to the edge devices and avoid massive data transmission. Furthermore, due to the characteristics of Federated Learning, on-device training becomes possible and the local model quality can be continuously improved.\nAlthough the concept of Federated Learning has significant benefits and potential in AI engineering fields, it is hard for industries and companies to build a reliable and applicable on-device Federated Learning system. Some previous research identified the challenges of deploying AI/ML components into a real-world industrial context. As defined in ”Engineering AI Systems: A Research Agenda” (Bosch et al., 2020), AI engineering refers to AI/ML-driven software development and deployment in production contexts. We found that the transition from prototype to the production-quality deployment of ML models proves to be challenging for many companies (L’heureux et al., 2017b) (Lwakatare et al., 2019).\nThe contribution of this paper is threefold. First, we utilize Federated Learning, a distributed machine learning technique, and validate it on an important industrial use case, steering wheel prediction in the field of autonomous driving, which is also a classic end-to-end learning problem. Second, we describe an end-to-end on-device Federated Learning approach to efficiently train Machine Learning models in a distributed context. Third, we empirically evaluate our approach on the real-world autonomous driving data sets. Based on our results, we demonstrate the strength of Federated Learning compared to traditional centralized learning methods.\nThe remainder of this paper is structured as follows. Section 2 we introduce the background of this study. Section 3 details our research method, including the simulation testbed, the utilized machine learning method and the evaluation metrics. Section 4 presents the end-to-end Federated Learning approach utilized in this paper. Sections 5 evaluates proposed learning approach to empirical data sets. Section 6 outlines the discussion on our observed results. Finally, Section 7 presents conclusions and future work." }, { "heading": "2 BACKGROUND", "text": "The first Federated Learning framework was proposed by Google in 2016 Konečnỳ et al. (2016), The major objective of Federated Learning is to learn a global statistical model from numerous edge devices. 
Particularly, the problem is to minimize the following finite-sum objective function (1):\n\min_w f(w), where f(w) := \sum_{i=1}^{n} λ_i f_i(w)  (1)\nHere, w represents the model parameters, n is the total number of edge devices, and f_i(w) is the local objective function of the i-th device, defined over the high-dimensional parameter tensor w. λ_i (with λ_i ≥ 0 and \sum_i λ_i = 1) gives the impact of the i-th remote device and is defined by users. This formulation is applied throughout this research.\nWith the development of cloud computing and decentralized data storage, there has been increasing interest in how to utilize these techniques to improve the Machine Learning procedure. Two classic applications were realized by Hard et al. (2018) and Ramaswamy et al. (2019), who applied Federated Learning techniques on the Google Keyboard platform to improve virtual keyboard search suggestion quality and emoji prediction. Their results show the feasibility and benefit of applying Federated Learning to train models while avoiding the transfer of user data. However, this previous research did not discuss the impact of model training time or the communication cost of deploying and training models on edge devices. Furthermore, because the system environment and the obstacles encountered differ between deployment cases, we propose an end-to-end approach and validate on-device Federated Learning in a completely different industrial scenario: steering wheel angle prediction.\nInspired by the work of Bojarski et al. (2016), we designed and developed a deep convolutional neural network to directly predict the steering wheel angle and control the steering based on the prediction. The training data are collected from single images sampled from video, and the ground truth is recorded directly from real-time human behavior. To improve the model prediction performance, a two-stream model was first proposed in Simonyan & Zisserman (2014) and applied in Fernandez (2018) due to its robustness and lower training cost compared with other networks such as 3D-CNN (Du et al., 2019), RNN (Eraqi et al., 2017) and LSTM (Valiente et al., 2019). However, previous research on this use case mainly focuses on training a model in a single vehicle. In this paper, we apply Federated Learning to accelerate model training and improve model quality by forming global knowledge across all participating edge vehicles." }, { "heading": "3 METHOD", "text": "In this research, the empirical method and learning procedure described in Zhang & Tsai (2003) were applied to make a quantitative measurement and comparison between Federated Learning and traditional centralized learning methods. In the following sections, we present the mathematical notations used in this paper, our testbed, data traces, and the convolutional neural network architecture utilized for solving the problem of steering wheel angle prediction." }, { "heading": "3.1 MATHEMATICAL NOTATIONS", "text": "We first introduce the mathematical notations that will be used in the rest of the paper:\nA_t: an image frame matrix at time t\nO_t = f(A_t, A_{t−1}): an optical-flow matrix at time t\nθ_t: the steering wheel angle at time t" }, { "heading": "3.2 DATA TRACES AND TESTBED", "text": "The datasets used in this paper are from the SullyChen collection of labeled car driving datasets, which is available on GitHub (SullyChen, 2018). 
In this collection, there are two datasets (Dataset 2017 and Dataset 2018), which record different driving information on different routes.\nDataset 2017 contains approximately 45,500 images (2.2 GB). It records a trajectory of approximately 4 km around the Rolling Hills in LA, USA, in 2017. The 2017 dataset is used for pre-training the model (the pre-trained model is used to initialize the edge models before Federated Learning).\nDataset 2018 contains approximately 63,000 images (3.1 GB). It records a trajectory of approximately 6 km along the Palos Verdes in LA. The 2018 dataset is used for end-to-end Federated Learning and model validation. To provide a fruitful evaluation, we conducted experiments with 4, 8, 16, 32 and 64 edge vehicles. The data were divided into the corresponding number of parts and distributed to the edge vehicles. In each edge vehicle, 70% of the local dataset served as the training set while the remaining 30% served as the testing set.\nIn each edge vehicle, the first 70% of the data are regarded as previously recorded driving information while the remaining 30% are future information. The models were continuously trained on the recorded information and performed prediction and validation of the steering wheel angle using the future driving data.\nTable 1 provides the hardware information for all of the servers. To simulate aggregation and edge functions, one server is adopted as the aggregation server while the rest act as edge vehicles." }, { "heading": "3.3 MACHINE LEARNING METHOD", "text": "A two-stream deep Convolutional Neural Network (CNN) (Simonyan & Zisserman, 2014; Fernandez, 2018) is utilized to perform angle prediction. Figure 2 gives detailed information about the architecture. In our implementation, each stream has two convolutional layers and a max-pooling layer. After concatenation, there are two fully-connected layers activated by the ReLU function.\nThe model contains two neural branches that consume spatial information and temporal information as the inputs of the two streams and then output the predicted steering angle. The first stream consumes 3 frames of RGB images, denoted as {A_{t−2}, A_{t−1}, A_t}. The second stream is the two-frame optical flow calculated from consecutive frames, O_{t−1} = f({A_{t−2}, A_{t−1}}) and O_t = f({A_{t−1}, A_t}). Optical flow is a common temporal representation in video streams, which captures the motion changes between two frames (Horn & Schunck, 1981). The optical flow in this paper is calculated with Gunnar Farneback’s algorithm as implemented in OpenCV (Farnebäck, 2003). Figure 3 demonstrates an example optical flow matrix produced by two consecutive image frames.\nThe process of training a local CNN is to find the model parameters that minimize the difference between the predicted angle and the ground-truth steering angle. Therefore, we choose the mean squared error as the local model training loss function:\nLoss = \frac{1}{N} \sum_{t=1}^{N} (θ_t − θ̂_t)^2  (2)\nHere, N represents the batch size, while θ_t and θ̂_t represent the ground-truth and the predicted steering wheel angle at time t.\nDuring model training in each edge vehicle, all image frames are first normalized to [−1, 1]. The batch size is 16 and the learning rate is set to 1e−5. The optimizer is Adam (Kingma & Ba, 2014), with parameters β1 = 0.6, β2 = 0.99 and ε = 1e−8."
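To make this pipeline concrete, here is a minimal sketch (not the authors' released code) of the Farneback optical flow computation and one local training step with the loss in (2); the tiny TwoStreamCNN module and the dummy tensors are hypothetical stand-ins for the network of Figure 2.

import cv2
import numpy as np
import torch
import torch.nn.functional as F

def optical_flow(frame_prev, frame_next):
    # Farneback dense optical flow O_t = f(A_{t-1}, A_t): the algorithm works
    # on grayscale images and returns an HxWx2 field of (dx, dy) motion vectors.
    g1 = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame_next, cv2.COLOR_BGR2GRAY)
    return cv2.calcOpticalFlowFarneback(g1, g2, None, pyr_scale=0.5, levels=3,
                                        winsize=15, iterations=3, poly_n=5,
                                        poly_sigma=1.2, flags=0)

class TwoStreamCNN(torch.nn.Module):
    # Drastically simplified stand-in for the two-stream network: one branch for
    # 3 stacked RGB frames (9 channels), one for 2 stacked flow fields (4 channels).
    def __init__(self):
        super().__init__()
        self.spatial = torch.nn.Conv2d(9, 8, kernel_size=3)
        self.temporal = torch.nn.Conv2d(4, 8, kernel_size=3)
        self.head = torch.nn.Linear(16, 1)

    def forward(self, frames, flows):
        s = torch.relu(self.spatial(frames)).mean(dim=(2, 3))
        t = torch.relu(self.temporal(flows)).mean(dim=(2, 3))
        return self.head(torch.cat([s, t], dim=1)).squeeze(1)

flow = optical_flow(np.zeros((66, 200, 3), np.uint8),
                    np.zeros((66, 200, 3), np.uint8))   # zero-motion example

model = TwoStreamCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5,
                             betas=(0.6, 0.99), eps=1e-8)
frames = torch.rand(16, 9, 66, 200) * 2 - 1   # a batch of 16, normalized to [-1, 1]
flows = torch.rand(16, 4, 66, 200)
angles = torch.rand(16)                       # recorded steering wheel angles
loss = F.mse_loss(model(frames, flows), angles)  # mean squared error of Eq. (2)
optimizer.zero_grad()
loss.backward()
optimizer.step()

Stacking frames and flow fields along the channel axis keeps both branches as plain 2D convolutions, which is one reason the two-stream design trains more cheaply than 3D-CNN or recurrent alternatives.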
}, { "heading": "3.4 EVALUATION METRICS AND BASELINE MODEL", "text": "In order to provide fruitful results and evaluation, we selected three metrics and two baseline models. The three metrics includes angle prediction performance, model training time and bandwidth cost:\n• Angle prediction performance: We use root mean square error (RMSE), a common metric, to measure the difference between prediction results and ground truth. The metrics can provide good estimation about the quality of trained model in each edge vehicles. • Model training time: This metric is defined as the time cost for training a model at the\nedge vehicles. The result is the average of four edge vehicles during one training round. This metric demonstrates the speed of local edge devices updating their knowledge which is crucial and important for those systems which need to quickly evolve to adapt to the rapidly-changed environment. The metrics was measured in all the vehicles by checking model deployment timestamp. • Bandwidth cost: This metric is defined as the total number of bytes transmitted during the\nwhole training procedure. This metric demonstrate the total communication resources cost of achieving an applicable CNN model.\nThe two baseline models includes model trained by applying traditional centralized learning approach and the locally trained model without model sharing:\n• Traditional Centralized Learning model (ML): This baseline model is trained under the traditional centralized learning approach. Before model training, all the data from edge vehicles are firstly collected to a single server. The hyper-parameter of this model training is the same as Federated Learning which is mentioned in section 3.3. The performance can be then compared with the model trained by Federated Learning approach.\n• Locally trained model without model sharing (Local ML): This baseline models are trained directly on each edge vehicles. However, different from Federated Learning, there will be no model exchange during the training procedure. The prediction performance can be compared with Federated Learning model to see how Federated Learning can outperform those independently trained local models." }, { "heading": "4 END-TO-END FEDERATED LEARNING", "text": "In this section, we describe the algorithm and the approach applied in this paper. In order to perform on-device end-to-end learning based on the input image frames, images are firstly stored in an external storage driver located on each edge vehicles. At the same time, the optical flow information are calculated. When triggering the training threshold, image frames and optical flow frames are fed into a convolutional neural network. The output of the network is compared to the ground truth for that image frame, which is the recorded steering wheel angle. The weights of the CNN are adjusted using back propagation to enforce the model output as close as possible to the desired output. Figure 4 illustrate the diagram of the learning procedure in a single edge vehicle.\nAfter finishing each training epoch, models in edge vehicles will also be updated to the aggregation server and form a global knowledge among other cars (Figure 5). The aggregation applied in this paper is FedAvg (Li et al., 2019), which is a commonly used Federated Learning algorithm in most of the research. 
The steps of the FedAvg algorithm are listed below:\nStep 1: Edge vehicles locally train the model; after every five local training epochs, they send the updated model to the aggregation server.\nStep 2: The central server performs aggregation by averaging all updated models to form global knowledge of all local models (a minimal sketch of this averaging step is given at the end of Section 5).\nStep 3: The aggregation server sends the aggregated result back to each edge vehicle.\nStep 4: Edge vehicles replace their local model and perform further local training by using the global model." }, { "heading": "5 EVALUATION", "text": "In this section, we present the experimental results of the presented end-to-end on-device Federated Learning approach on the use case of steering wheel angle prediction. We evaluate the system performance in three aspects (the metrics are defined in Section 3.4): (1) angle prediction performance, (2) model training time, and (3) bandwidth cost.\nFigure 6 illustrates the angle prediction performance of the model trained by Federated Learning (FL) and the locally trained model without any model exchange (Local ML). The results demonstrate that the traditional centralized trained model behaves similarly to the Federated Learning model. Moreover, compared with the independently trained model, Federated Learning provides better predictions that are much closer to the ground truth.\nNumeric results are provided in Table 2. We show detailed results with 4 vehicles participating in Federated Learning, which provides a clear view of the prediction performance in each edge vehicle. The results illustrate that in vehicles 1 and 4, the Federated Learning model outperforms the other baseline models. In vehicles 2 and 3, the Federated Learning model performs only about 1° worse than the traditional centralized learning model. Based on our results, we can summarize that the Federated Learning model provides more accurate predictions than locally, independently trained models, and that the Federated Learning model reaches the same accuracy level as the centralized learning model.\nTable 3 compares the total training time and bytes transferred between Federated Learning and the two baseline models. The total number of training epochs for all models is 100 and the model training is accelerated by an Nvidia Tesla T4 GPU. The results show that Federated Learning needs slightly more training time than the independently, locally trained model due to the model exchange time cost. However, compared with the traditional centralized learning method, the training time of Federated Learning is reduced by about 75% and about 25% of the bandwidth is saved.\nTo evaluate the impact of different numbers of learning vehicles, we perform further experiments with 8, 16, 32 and 64 participating vehicles. Table 4 gives the overall steering angle prediction error and the total training time of the Federated Learning model with different numbers of vehicles. The overall value provides an overview of the prediction performance across all test datasets belonging to all vehicles. With an increasing number of edge vehicles, the model prediction performance on the edge is further enhanced. Furthermore, the total model training time decreases linearly with the increasing number of edge vehicles. Based on our results, we can summarize that with the participation of more edge vehicles and larger input datasets, the advantages of Federated Learning become more obvious." 
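As a concrete illustration of the averaging in Step 2 of the FedAvg procedure above, the following is a minimal sketch, assuming equal client weights λ_i = 1/n; it is a plausible implementation rather than the exact code used in the experiments, and edge_models is a hypothetical list of locally trained models.

import torch

def fedavg(client_states, weights=None):
    # Parameter-wise weighted average of client state_dicts, mirroring the
    # lambda_i weights of Eq. (1); equal weights are used by default.
    n = len(client_states)
    if weights is None:
        weights = [1.0 / n] * n
    avg = {}
    for key in client_states[0]:
        stacked = torch.stack([s[key].float() for s in client_states])
        w = torch.tensor(weights).view(-1, *([1] * (stacked.dim() - 1)))
        avg[key] = (w * stacked).sum(dim=0)
    return avg

# After every five local epochs (Step 1), aggregate and broadcast back (Steps 3-4):
# global_state = fedavg([m.state_dict() for m in edge_models])
# for m in edge_models:
#     m.load_state_dict(global_state)

Non-floating-point buffers (e.g., batch-normalization counters) would need special handling in practice, which is omitted from this sketch.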
}, { "heading": "6 DISCUSSION", "text": "Based on our experiment results, end-to-end on-device Federated Learning approach has more advantages compared with commonly used centralized learning approach. Federated Learning model can achieve same level of model prediction accuracy but decrease model training time and the bandwidth cost. Furthermore, if we compared with independently local trained model, because of the model sharing mechanism, Federated Learning can form a global knowledge of the whole datasets which are belongs to different participated edge vehicles. The model quality are largely enhanced and can achieve much better results.\nDue to those advantages, there are a variety of other meaningful use cases that end-to-end on-device Federated Learning can help. The technique reported in this paper can not only be used for steering angle prediction in self-driving vehicles but also other on-device applications, such as camera sensors and motion detection, which requires continuously machine learning model training on the resource-constrained edges. Furthermore, because of the user data privacy and network bandwidth constraints, Federated Learning can be applied in those systems which need quickly-evolved model to adapt their rapidly changing environment." }, { "heading": "7 CONCLUSION", "text": "In this paper, we describe an approach to end-to-end on-device Machine Learning by utilizing Federated Learning. We validate our approach with the wheel steering angle prediction in the self-driving vehicles. Our results demonstrate the strength and advantage of the model trained under end-toend Federated Learning approach. The model can achieve the same level of prediction accuracy compared with commonly used centralized learning method but reduces training time with 75% and bandwidth cost with 25 % in our case. Note that if the number of participating devices is further increased, the reduction will be more obvious and the strength of Federated Learning will become stronger.\nIn the future we plan to validate our approach in more use cases. Also, we would like to explore more advanced neural network combined with Federated Learning method. Furthermore, we plan to find more suitable aggregation algorithms and protocols for our end-to-end on-device Federated Learning approach." } ]
2020
null
SP:475921dfd2c656b69172acf8d3ac49ecde54639d
[ "This paper proposed a self-supervised learning method of 3D shape descriptors for 3D recognition through multi-view 2D image representation learning. To represent the 3D shape, the authors first project the object to a group of 2D project images, which helps apply deep learning due to the image's matrix data format. The Unsupervised Learning of Transformation Equivariant 2D Representations by Autoencoding Variational Transformations is used for 3D shape descriptor learning, which the authors claimed as \"self-supervised\" learning. The key idea of transformation equivariant representations is directly borrowed from existing works [1][2]. The method designed is almost the same as [1] except for the encoding network." ]
3D object representation learning is a fundamental challenge in computer vision to draw inferences about the 3D world. Recent advances in deep learning have shown their efficiency in 3D object recognition, among which view-based methods have performed best so far. However, feature learning of multiple views in existing methods is mostly trained in a supervised fashion, which often requires a large amount of data labels with high cost. Hence, it is critical to learn multi-view feature representations in a self-supervised fashion. To this end, we propose a novel self-supervised learning paradigm of Multi-View Transformation Equivariant Representations (MV-TER), exploiting the equivariant transformations of a 3D object and its projected multiple views. Specifically, we perform a 3D transformation on a 3D object, and obtain multiple views before and after transformation via projection. Then, we self-train a representation learning module to capture the intrinsic 3D object representation by decoding 3D transformation parameters from the fused feature representations of multiple views before and after transformation. Experimental results demonstrate that the proposed MV-TER significantly outperforms the state-of-the-art view-based approaches in 3D object classification and retrieval tasks.
[]
[ { "authors": [ "Andrew Brock", "Theodore Lim", "James M Ritchie", "Nick Weston" ], "title": "Generative and discriminative voxel modeling with convolutional neural networks", "venue": "arXiv preprint arXiv:1608.04236,", "year": 2016 }, { "authors": [ "Ken Chatfield", "Karen Simonyan", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Return of the devil in the details: Delving deep into convolutional nets", "venue": "In British Machine Vision Conference (BMVC),", "year": 2014 }, { "authors": [ "Taco Cohen", "Max Welling" ], "title": "Group equivariant convolutional networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Sander Dieleman", "Kyle W Willett", "Joni Dambre" ], "title": "Rotation-invariant convolutional neural networks for galaxy morphology prediction", "venue": "Monthly Notices of the Royal Astronomical Society,", "year": 2015 }, { "authors": [ "Sander Dieleman", "Jeffrey De Fauw", "Koray Kavukcuoglu" ], "title": "Exploiting cyclic symmetry in convolutional neural networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Alon Faktor", "Michal Irani" ], "title": "Video segmentation by non-local consensus voting", "venue": "In British Machine Vision Conference (BMVC),", "year": 2014 }, { "authors": [ "Yifan Feng", "Zizhao Zhang", "Xibin Zhao", "Rongrong Ji", "Yue Gao" ], "title": "GVCNN: Group-view convolutional neural networks for 3D shape recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Xiang Gao", "Wei Hu", "Guo-Jun Qi" ], "title": "GraphTER: Unsupervised learning of graph transformation equivariant representations via auto-encoding node-wise transformations", "venue": "In Proceedings of IEEE/CVF Conferences on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Robert Gens", "Pedro M Domingos" ], "title": "Deep symmetry networks", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2014 }, { "authors": [ "Zhizhong Han", "Mingyang Shang", "Zhenbao Liu", "Chi-Man Vong", "Yu-Shen Liu", "Matthias Zwicker", "Junwei Han", "CL Philip Chen" ], "title": "Seqviews2seqlabels: Learning 3d global features via aggregating sequential views by rnn with attention", "venue": "IEEE Transactions on Image Processing (TIP),", "year": 2018 }, { "authors": [ "Zhizhong Han", "Honglei Lu", "Zhenbao Liu", "Chi-Man Vong", "Yu-Shen Liu", "Matthias Zwicker", "Junwei Han", "CL Philip Chen" ], "title": "3D2SeqViews: Aggregating sequential views for 3D global feature learning by CNN with hierarchical attention aggregation", "venue": "IEEE Transactions on Image Processing (TIP),", "year": 2019 }, { "authors": [ "Xinwei He", "Yang Zhou", "Zhichao Zhou", "Song Bai", "Xiang Bai" ], "title": "Triplet-center loss for multiview 3d object retrieval", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Geoffrey E Hinton", "Alex Krizhevsky", "Sida D Wang" ], "title": "Transforming auto-encoders", "venue": "In International Conference on Artificial Neural Networks (ICANN),", "year": 2011 }, { "authors": [ "Jianwen Jiang", "Di Bao", "Ziqiang Chen", "Xibin Zhao", "Yue Gao" ], "title": "Mlvcnn: Multi-loop-view convolutional neural network for 3d shape retrieval", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI),", "year": 2019 }, { "authors": [ 
"Longlong Jing", "Yingli Tian" ], "title": "Self-supervised visual feature learning with deep neural networks: A survey", "venue": "arXiv preprint arXiv:1902.06162,", "year": 2019 }, { "authors": [ "Asako Kanezaki", "Yasuyuki Matsushita", "Yoshifumi Nishida" ], "title": "Rotationnet: Joint object categorization and pose estimation using multiviews from unsupervised viewpoints", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Jyri J Kivinen", "Christopher KI Williams" ], "title": "Transformation equivariant boltzmann machines", "venue": "In International Conference on Artificial Neural Networks (ICANN),", "year": 2011 }, { "authors": [ "Roman Klokov", "Victor Lempitsky" ], "title": "Escape from cells: Deep kd-networks for the recognition of 3D point cloud models", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (CVPR),", "year": 2017 }, { "authors": [ "Alexander Kolesnikov", "Xiaohua Zhai", "Lucas Beyer" ], "title": "Revisiting self-supervised visual representation learning", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Bruno Korbar", "Du Tran", "Lorenzo Torresani" ], "title": "Cooperative learning of audio and video models from self-supervised synchronization", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2012 }, { "authors": [ "Karel Lenc", "Andrea Vedaldi" ], "title": "Understanding image representations by measuring their equivariance and equivalence", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "Jan Eric Lenssen", "Matthias Fey", "Pascal Libuschewski" ], "title": "Group equivariant capsule networks", "venue": "In Advances in Neural Information Processing Systems (NIPS),", "year": 2018 }, { "authors": [ "Daniel Maturana", "Sebastian Scherer" ], "title": "Voxnet: A 3D convolutional neural network for real-time object recognition", "venue": "In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),", "year": 2015 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Jeff Donahue", "Trevor Darrell", "Alexei A Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Charles R Qi", "Hao Su", "Matthias Nießner", "Angela Dai", "Mengyuan Yan", "Leonidas J Guibas" ], "title": "Volumetric and multi-view CNNs for object classification on 3D data", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Charles R Qi", "Hao Su", "Kaichun Mo", "Leonidas J Guibas" ], "title": "PointNet: Deep learning on point sets for 3D classification and segmentation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Charles Ruizhongtai Qi", "Li Yi", "Hao Su", "Leonidas J Guibas" ], "title": "PointNet++: Deep hierarchical feature learning on point sets in a metric space", "venue": "In Advances in Neural Information Processing 
Systems (NIPS),", "year": 2017 }, { "authors": [ "Guo-Jun Qi" ], "title": "Learning generalized transformation equivariant representations via autoencoding transformations", "venue": "arXiv preprint arXiv:1906.08628,", "year": 2019 }, { "authors": [ "Guo-Jun Qi", "Liheng Zhang", "Chang Wen Chen", "Qi Tian" ], "title": "AVT: Unsupervised learning of transformation equivariant representations by autoencoding variational transformations", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Zhongzheng Ren", "Yong Jae Lee" ], "title": "Cross-domain self-supervised multi-task feature learning using synthetic imagery", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Nawid Sayed", "Biagio Brattoli", "Björn Ommer" ], "title": "Cross and learn: Cross-modal self-supervision", "venue": "In German Conference on Pattern Recognition,", "year": 2018 }, { "authors": [ "Uwe Schmidt", "Stefan Roth" ], "title": "Learning rotation-aware features: From invariant priors to equivariant descriptors", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2012 }, { "authors": [ "Henrik Skibbe" ], "title": "Spherical Tensor Algebra for Biomedical Image Analysis", "venue": "PhD thesis, Verlag nicht ermittelbar,", "year": 2013 }, { "authors": [ "Kihyuk Sohn", "Honglak Lee" ], "title": "Learning invariant representations with local transformations", "venue": "In International Conference on Machine Learning (ICML),", "year": 2012 }, { "authors": [ "Nitish Srivastava", "Elman Mansimov", "Ruslan Salakhudinov" ], "title": "Unsupervised learning of video representations using lstms", "venue": "In International Conference on Machine Learning (ICML),", "year": 2015 }, { "authors": [ "Otilia Stretcu", "Marius Leordeanu" ], "title": "Multiple frames matching for object discovery in video", "venue": "In British Machine Vision Conference (BMVC),", "year": 2015 }, { "authors": [ "Hang Su", "Subhransu Maji", "Evangelos Kalogerakis", "Erik Learned-Miller" ], "title": "Multi-view convolutional neural networks for 3D shape recognition", "venue": "In Proceedings of the IEEE international conference on computer vision (CVPR),", "year": 2015 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "Jiayu Wang", "Wengang Zhou", "Guo-Jun Qi", "Zhongqian Fu", "Qi Tian", "Houqiang Li" ], "title": "Transformation GAN for unsupervised image synthesis and representation learning", "venue": "In Proceedings of IEEE/CVF Conferences on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Xin Wei", "Ruixuan Yu", "Jian Sun" ], "title": "View-gcn: View-based graph convolutional network for 3d shape analysis", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Zhirong Wu", "Shuran Song", "Aditya Khosla", "Fisher Yu", "Linguang Zhang", "Xiaoou Tang", "Jianxiong Xiao" ], "title": "3d shapenets: A deep representation for volumetric shapes", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 
2015 }, { "authors": [ "Ze Yang", "Liwei Wang" ], "title": "Learning relationships for multi-view 3D object recognition", "venue": "In Proceedings of the IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Tan Yu", "Jingjing Meng", "Junsong Yuan" ], "title": "Multi-view harmonized bilinear network for 3D object recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Liheng Zhang", "Guo-Jun Qi", "Liqiang Wang", "Jiebo Luo" ], "title": "AET vs. AED: Unsupervised representation learning by auto-encoding transformations rather than data", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros" ], "title": "Colorful image colorization", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "3D object representation has become increasingly prominent for a wide range of applications, such as 3D object recognition and retrieval (Maturana & Scherer, 2015; Qi et al., 2016; Brock et al., 2016; Qi et al., 2017a;b; Klokov & Lempitsky, 2017; Su et al., 2015; Feng et al., 2018; Yu et al., 2018; Yang & Wang, 2019). Recent advances in Convolutional Neural Network (CNN) based methods have shown their success in 3D object recognition and retrieval (Su et al., 2015; Feng et al., 2018; Yu et al., 2018; Yang & Wang, 2019). One important family of methods are view-based methods, which project a 3D object into multiple views and learn compact 3D representation by fusing the feature maps of these views for downstream tasks. Feature learning of multiple views in existing approaches are mostly trained in a supervised fashion, hinging on a large amount of data labels that prevents the wide applicability. Hence, self-supervised learning is in demand to alleviate the dependencies on labels by exploring unlabeled data for the training of multi-view feature representations in an unsupervised or (semi-)supervised fashion.\nMany attempts have been made to explore self-supervisory signals at various levels of visual structures for representation learning. The self-supervised learning framework requires only unlabeled data in order to formulate a pretext learning task (Kolesnikov et al., 2019), where a target objective can be computed without any supervision. These pretext tasks can be summarized into four categories (Jing & Tian, 2019): generation-based (Zhang et al., 2016; Pathak et al., 2016; Srivastava et al., 2015), context-based, free semantic label-based (Faktor & Irani, 2014; Stretcu & Leordeanu, 2015; Ren & Jae Lee, 2018), and cross modal-based (Sayed et al., 2018; Korbar et al., 2018). Among them, context-based pretext tasks include representation learning from image transformations, which is well connected with transformation equivariant representations as they transform equivalently as the transformed images.\nTransformation Equivariant Representation learning assumes that representations equivarying to transformations are able to encode the intrinsic structures of data such that the transformations can be reconstructed from the representations before and after transformations (Qi, 2019). Learning\ntransformation equivariant representations has been advocated in Hinton’s seminal work on learning transformation capsules (Hinton et al., 2011). Following this, a variety of approaches have been proposed to learn transformation equivariant representations (Kivinen & Williams, 2011; Sohn & Lee, 2012; Schmidt & Roth, 2012; Skibbe, 2013; Lenc & Vedaldi, 2015; Gens & Domingos, 2014; Dieleman et al., 2015; 2016; Zhang et al., 2019; Qi et al., 2019; Gao et al., 2020; Wang et al., 2020). Nevertheless, these works focus on transformation equivariant representation learning of a single modality, such as 2D images or 3D point clouds.\nIn this paper, we propose to learn Multi-View Transformation Equivariant Representations (MVTER) by decoding the 3D transformations from multiple 2D views. This is inspired by the equivariant transformations of a 3D object and its projected multiple 2D views. That is, when we perform 3D transformations on a 3D object, the 2D views projected from the 3D object via fixed viewpoints will transform equivariantly. 
In contrast to previous works, where 2D/3D transformations are decoded from a single original image/point cloud and its transformed counterpart, we exploit the equivariant transformations of a 3D object and the projected 2D views. We propose to decode 3D transformations from multiple views of a 3D object before and after transformation, which is taken as self-supervisory regularization to enforce the learning of an intrinsic 3D representation. By estimating 3D transformations from the fused feature representations of multiple original views and those of the equivariantly transformed counterparts from the same viewpoints, we enable the accurate learning of 3D object representations even with a limited amount of labels.\nSpecifically, we first perform a 3D transformation on a 3D object (e.g., point clouds, meshes), and render the original and transformed 3D objects into multiple 2D views with a fixed camera setup. Then, we feed these views into a representation learning module to infer representations of the multiple views before and after transformation respectively. A decoder is set up to predict the applied 3D transformation from the fused representations of multiple views before and after transformation. We formulate multi-view transformation equivariant representation learning as a regularizer along with the loss of a specific task (e.g., classification) to train the entire network end-to-end. Experimental results demonstrate that the proposed method significantly outperforms the state-of-the-art view-based models in 3D object classification and retrieval tasks.\nOur main contributions are summarized as follows.\n• We propose Multi-View Transformation Equivariant Representations (MV-TER) to learn 3D object representations from multiple 2D views that transform equivariantly with the 3D transformation in a self-supervised fashion.\n• We formalize the MV-TER as a self-supervisory regularizer to learn the 3D object representations by decoding the 3D transformation from fused features of projected multiple views before and after the 3D transformation of the object.\n• Experiments demonstrate that the proposed method outperforms the state-of-the-art view-based methods in 3D object classification and retrieval tasks in a self-supervised fashion." }, { "heading": "2 RELATED WORKS", "text": "In this section, we review previous works on transformation equivariant representations and multi-view based neural networks." }, { "heading": "2.1 TRANSFORMATION EQUIVARIANT REPRESENTATIONS", "text": "Many approaches have been proposed to learn equivariant representations, including transforming auto-encoders (Hinton et al., 2011), equivariant Boltzmann machines (Kivinen & Williams, 2011; Sohn & Lee, 2012), equivariant descriptors (Schmidt & Roth, 2012), and equivariant filtering (Skibbe, 2013). Lenc & Vedaldi (2015) prove that the AlexNet (Krizhevsky et al., 2012) trained on ImageNet learns representations that are equivariant to flip, scaling and rotation transformations. Gens & Domingos (2014) propose an approximately equivariant convolutional architecture, which utilizes sparse and high-dimensional feature maps to deal with groups of transformations. Dieleman et al. (2015) show that rotation symmetry can be exploited in convolutional networks for effectively learning an equivariant representation. Dieleman et al. (2016) extend this work to evaluate on other computer vision tasks that have cyclic symmetry. 
Cohen & Welling (2016) propose group equivariant convolutions, which have been developed to equivary to more types of transformations. The idea of group equivariance has also been introduced to capsule nets (Lenssen et al., 2018) by ensuring the equivariance of output pose vectors to a group of transformations.\nTo generalize to generic transformations, Zhang et al. (2019) propose to learn unsupervised feature representations via Auto-Encoding Transformations (AET) by estimating transformations from the learned feature representations of both the original and transformed images. Qi et al. (2019) extend AET by introducing a variational transformation decoder, where the AET model is trained from an information-theoretic perspective by maximizing the lower bound of mutual information. Gao et al. (2020) extend transformation equivariant representations to graph data that are irregularly structured, and formalize graph transformation equivariant representation learning by auto-encoding node-wise transformations in an unsupervised manner. Wang et al. (2020) extend the AET to Generative Adversarial Networks (GANs) for unsupervised image synthesis and representation learning." }, { "heading": "2.2 MULTI-VIEW LEARNING", "text": "Recently, many view-based approaches have been proposed for 3D object learning. These methods project 3D objects (e.g., point clouds, meshes) into multiple views and extract view-wise features respectively via CNNs, and then fuse these features as the descriptor of the 3D object. Su et al. (2015) first propose a multi-view convolutional neural network (MVCNN) to learn a compact descriptor of an object from multiple views, which fuses view-wise features via a max pooling layer. Qi et al. (2016) introduce a new multi-resolution component into MVCNNs, and improve the classification performance. However, max pooling only retains the maximum elements from views, which leads to information loss. To address this problem, many subsequent works have been proposed to fuse multiple view-wise features into an informative descriptor for 3D objects. Feng et al. (2018) propose a group-view convolutional neural network (GVCNN) framework, which produces a compact descriptor from multiple views using a grouping strategy. Yu et al. (2018) propose a multi-view harmonized bilinear network (MHBN), which learns 3D object representations by aggregating local convolutional features through the proposed bilinear pooling. To take advantage of the spatial relationship among views, Han et al. (2018) and Han et al. (2019) propose to aggregate the global features of sequential views via attention-based RNNs and CNNs, respectively. Kanezaki et al. (2018) propose to learn global features by treating pose labels as latent variables, which are optimized to self-align in an unsupervised manner. Yang & Wang (2019) propose a relation network to connect corresponding regions from different viewpoints and reinforce the information of individual views. Jiang et al. (2019) propose a Multi-Loop-View Convolutional Neural Network (MLVCNN) for 3D object retrieval by introducing a novel loop normalization to generate loop-level features. Wei et al. (2020) design a view-based GCN framework to aggregate multi-view features by investigating the relations of views." }, { "heading": "3 MV-TER: THE PROPOSED METHOD", "text": "In this section, we first define the multi-view equivariant transformation in Section 3.1. Then we formulate the MV-TER model and introduce the MV-TER framework in Section 3.2 and Section 3.4, respectively." 
}, { "heading": "3.1 MULTI-VIEW EQUIVARIANT TRANSFORMATION", "text": "2D views are projections of a 3D object from various viewpoints, which transform in an equivariant manner as the 3D object transforms. Formally, given a 3D object M ∈ Rn×3 consisting of n points and a 3D transformation distribution T , we sample a transformation t ∼ T and apply it to M:\nM̃ = t(M). (1)\nWe project M onto 2D views from m viewpoints, denoted as V = {V1, ...,Vm}, i.e.,\nVi = pi(M), (2)\nwhere pi : R3 7→ R2 is a projection function for the ith view. Subsequent to the transformation on M, the m views transform equivariantly, leading to Ṽ = {Ṽ1, ..., Ṽm}. We have\nṼi = pi\n( M̃ ) = pi (t(M)) = fi,t (Vi) , i = 1, ...,m, (3)\nwhere fi,t’s are 2D transformations that are equivariant under the same 3D transformation t. Though Vi and Ṽi are projected along the same viewpoint i (i.e., the same camera setup), they are projections of the original 3D object and its transformed counterpart, thus demonstrating different perspectives of the same 3D object. Our goal is to learn the representations of 3D objects from their multiple 2D views by estimating the 3D transformation t as a pretext task from sampled multiple views before and after the transformation, i.e., V and Ṽ ." }, { "heading": "3.2 THE FORMULATION", "text": "Considering the ith views {Vi, Ṽi} before and after a transformation t, a function E(·) is transformation equivariant if it satisfies\nE(Ṽi) = E(fi,t(Vi)) = ρ(t)E(Vi), (4)\nwhere ρ(t) is a homomorphism of transformation t in the representation space.\nWe aim to train a shared representation module E(·) that learns equivariant representations of multiple views. In the setting of self-supervised learning, we formulate MV-TER as a regularizer along with the (semi-)supervised loss of a specific task to train the entire network. Given a neural network with learnable parameters Θ, the network is trained end-to-end by minimizing the weighted sum of two loss functions: 1) the loss of a specific task `task (e.g., a cross-entropy loss in 3D object classification); and 2) the MV-TER loss that is the expectation of estimation error `M(t, t̂) over each sample M given a distribution of 3D objectsM and each transformation t ∼ T :\nmin Θ `task + λ E t∼T E M∼M `M(t, t̂). (5)\n`M(t, t̂) is the mean squared error (MSE) between the estimated transformation t̂ and the ground truth t. λ is a weighting parameter to strike a balance between the loss of a specific task and the MV-TER loss. Here, the loss `task can be taken over all the data labels (fully-supervised) or partial labels (semi-supervised). In (5), t̂ is decoded as a function of V and Ṽ in multiple views as defined in (2) and (3), and we will present two schemes to decode t̂ in the next subsection." }, { "heading": "3.3 TWO TRANSFORMATION DECODING SCHEMES", "text": "Fusion Scheme. We propose two schemes to decode the transformation t in (5) from the feature representations of multiple views E(Vi) and E(Ṽi), i = 1, ...,m. The first scheme is to decode from fused representations of multiple views (before and after transformations). Suppose the neural network extracts features of Vi and Ṽi from a representation learning module E(·), and estimates the 3D transformation from both features via a transformation decoding module D(·), then we have\nt̂ = D [ F (E(V1), ..., E(Vm)) , F ( E(Ṽ1), ..., E(Ṽm) )] , (6)\nwhere F (·) is a function of feature fusion.\nAverage Scheme. 
In the second decoding scheme, we estimate the transformation t̂ from each view before and after transformation and then take the average of the estimates. The idea is that each view captures the projected 3D structures under a transformation. This essentially models a 3D object from different perspectives, from which the underlying 3D transformation can be revealed. By averaging the estimated transformations across multiple views, a reasonable estimation of the 3D transformation can be made.\nThis actually pushes the model to learn a good 3D representation from individual 2D views, and leads to an estimation t̂_i from the i-th view:\nt̂_i = D(E(V_i), E(Ṽ_i)), i = 1, ..., m.  (7)\nThe final decoded 3D transformation is taken as the expectation of the t̂_i's:\nt̂ = \frac{1}{m} \sum_{i=1}^{m} t̂_i.  (8)\nHence, we update the parameters Θ in the representation learning module E(·) and the transformation decoding module D(·) iteratively by backward propagation of the regularized loss in (5). Interestingly, the second decoding scheme can reach an even better performance than the first scheme in experiments. This should not be surprising, since our ultimate goal is to enable multi-view learning by fusing the representations of individual 2D views to reveal the target 3D objects. The second scheme follows this motivation by pushing each view to encode as much information as possible about the 3D transformation, as implied by multi-view learning." }, { "heading": "3.4 THE ALGORITHM", "text": "Given a 3D object M, we randomly draw a transformation t ∼ T and apply it to M to obtain a transformed M̃. Then we have m views V = {V_1, ..., V_m} by projecting M to 2D views. Accordingly, the views after the 3D transformation are Ṽ = {Ṽ_1, ..., Ṽ_m}. To learn the applied 3D transformation t, we design an end-to-end architecture as illustrated in Figure 1 for the fusion decoding scheme, while the architecture of the average decoding scheme is presented in Appendix A. We choose existing CNN models as the representation learning module E(·) (e.g., AlexNet (Krizhevsky et al., 2012), GoogLeNet (Szegedy et al., 2015)), which extract the representation of each view separately. The learned feature representations are fed into a fusion module and a transformation decoding module D(·), respectively. The fusion module fuses the features of multiple views into the overall 3D object representation, e.g., by a view-wise max-pooling layer (Su et al., 2015) or a group pooling layer (Feng et al., 2018). The fused feature serves as the general descriptor of the 3D object for the subsequent downstream learning tasks (e.g., classification and retrieval). The transformation decoding module D(·) estimates the 3D transformation parameters from the feature representations of multiple views. Next, we discuss the representation learning module and the transformation decoding module in detail." }, { "heading": "3.4.1 REPRESENTATION LEARNING MODULE", "text": "The representation learning module E(·) takes the original 2D views V and their transformed counterparts Ṽ as the input. E(·) learns feature representations of V and Ṽ through a Siamese encoder network with shared weights. Specifically, we employ the feature learning layers of a pre-trained CNN model as the backbone. Then, we obtain the features of each view before and after transformation." 
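As an illustration, the Siamese encoder just described can be combined with the average decoding scheme of Eqs. (7)-(8) as in the following hedged PyTorch-style sketch; the feature dimension of 1024, the 3-parameter rotation output, and the encoder argument (a truncated pre-trained backbone) are assumptions rather than the exact implementation.

import torch
import torch.nn as nn

class MVTERAverageDecoder(nn.Module):
    # E(.) is one weight-shared encoder applied to every view (Siamese network);
    # D(.) is a linear decoder applied to each view pair, as in Eq. (7).
    def __init__(self, encoder, feat_dim=1024, t_dim=3):
        super().__init__()
        self.encoder = encoder
        self.decoder = nn.Linear(2 * feat_dim, t_dim)

    def forward(self, views, views_t):
        # views, views_t: (B, m, C, H, W) projections rendered from the same
        # m viewpoints before and after the 3D transformation.
        m = views.shape[1]
        estimates = []
        for i in range(m):
            f = self.encoder(views[:, i])        # E(V_i)
            f_t = self.encoder(views_t[:, i])    # E(V~_i)
            estimates.append(self.decoder(torch.cat([f, f_t], dim=1)))  # t^_i
        # Eq. (8): average the per-view estimates into the final decoded t^.
        return torch.stack(estimates, dim=0).mean(dim=0)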
}, { "heading": "3.4.2 TRANSFORMATION DECODING MODULE", "text": "To estimate the 3D transformation t, we concatenate extracted features of multiple views before and after transformation at feature channel, which are then fed into the transformation decoder. The decoder consists of several linear layers to aggregate the representations of multiple views for\nthe prediction of the 3D transformation. As discussed in Section 3.2, we have two strategies for decoding the transformation parameters. We can decode from the fused representations of multiple views before and after transformation as in (6), or from each pair of original and equivariantly transformed views {Vi, Ṽi} to take average for final estimation as in (8). Based on the loss in (5), t is decoded by minimizing the mean squared error (MSE) between the ground truth and estimated transformation parameters. We will show the estimated 3D transformations of the two decoding schemes in Appendix C." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we evaluate the proposed MV-TER model on two representative downstream tasks: 3D object classification and retrieval." }, { "heading": "4.1 DATASET", "text": "We conduct experiments on the ModelNet40 dataset (Wu et al., 2015). This dataset contains 12, 311 CAD models from 40 categories. We follow the standard training and testing split settings, i.e., 9, 843 models are used for training and 2, 468 models are for testing. To acquire projected 2D views, we follow the experimental settings of MVCNN (Su et al., 2015) to render multiple views of each 3D object. Here, 12 virtual cameras are employed to capture views with an interval angle of 30 degree. Next, we employ rotation as our 3D transformations on objects and perform a random rotation with three parameters all in range [−180◦, 180◦] on the entire 3D object. We also render the views of the transformed 3D object using the same settings as the original 3D object. The rendered multiple views before and after 3D transformations are taken as the input to our method." }, { "heading": "4.2 3D OBJECT CLASSIFICATION", "text": "In this task, we employ the MV-TER as a self-supervisory regularizer to two competitive multi-view based 3D object classification methods: MVCNN (Su et al., 2015) and GVCNN (Feng et al., 2018), which are referred to as MV-TER (MVCNN) and MV-TER (GVCNN). Further, we implement with two transformation decoding schemes as discussed in Section 3.3, including the fusion scheme and average scheme. Then our model has four variants as presented in Table 1.\nImplementation Details: We deploy GoogLeNet (Szegedy et al., 2015) as our backbone CNN as in GVCNN (Feng et al., 2018). The backbone GoogLeNet is pre-trained on ImageNet1K dataset.\nWe remove the last linear layer as the Siamese representation learning module to extract features for each view. Subsequent to the representation learning module, we employ one linear layer as the transformation decoding module. The output feature representations of the Siamese network first go through a channel-wise concatenation, which are then fed into the transformation decoding module to estimate the transformation parameters. The entire network is trained via the SGD optimizer with a batch size of 24. The momentum and weight decay rate are set to 0.9 and 10−4, respectively. The initial learning rate is 0.001, which then decays by a factor of 0.5 for every 10 epochs. The weighting parameter λ in (5) is set to 1.0. 
Also note that MVCNN in Table 1 has two variants with different backbones: MVCNN (Su et al., 2015) and MVCNN (GoogLeNet) (Feng et al., 2018). MVCNN (Su et al., 2015) uses VGG-M (Chatfield et al., 2014) as the backbone, while MVCNN (GoogLeNet), implemented in (Feng et al., 2018), employs GoogLeNet (Szegedy et al., 2015). In addition, RotationNet (Kanezaki et al., 2018) and View-GCN (Wei et al., 2020) are set up with 12 views taken by the default camera system for a fair comparison.\nExperimental Results: As listed in Table 1, MV-TER (MVCNN), average and MV-TER (GVCNN), average achieve classification accuracies of 95.5% and 97.0% respectively, which outperform the state-of-the-art View-GCN (Wei et al., 2020). Also, MV-TER (MVCNN), average outperforms its baseline MVCNN (GoogLeNet) by 3.3%, while MV-TER (GVCNN), average outperforms the baseline GVCNN (GoogLeNet) by 4.4%, which demonstrates the effectiveness of our proposed MV-TER as a self-supervisory regularizer." }, { "heading": "4.3 3D OBJECT RETRIEVAL", "text": "In this task, we directly employ the fused feature representations of MV-TER (MVCNN) and MV-TER (GVCNN) as the 3D object descriptor for retrieval. We denote by F_X and F_Y the 3D object descriptors of two 3D objects X and Y respectively, and use the Euclidean distance between them for retrieval. The distance metric is defined as\ndist(X, Y) = ‖F_X − F_Y‖_2.  (9)\nWe take the mean average precision (mAP) on retrieval as the evaluation metric, and present the comparison results in the last column of Table 1. For MVCNN and GVCNN, low-rank Mahalanobis metric learning (Su et al., 2015) is applied to boost the retrieval performance. In comparison, we train our MV-TER model without the low-rank Mahalanobis metric learning, but still achieve better retrieval performance, which validates the superiority of our feature representation learning for 3D objects. Further, we apply the Triplet Center Loss (He et al., 2018) to our MV-TER. With Center Loss, our model further achieves an average gain of 3.3% in mAP. As presented in the last column of Table 1, MV-TER (GVCNN), average and MV-TER (GVCNN), fusion achieve mAPs of 91.5% and 91.1% respectively, which is comparable to MLVCNN with Center Loss (Jiang et al., 2019), while we only take 12 views as input instead of 36 views." }, { "heading": "4.4 ABLATION STUDIES", "text": "" }, { "heading": "4.4.1 ON THE NUMBER OF VIEWS", "text": "We quantitatively evaluate the influence of the number of views on the classification task. Specifically, we randomly choose {8, 12} views from all the views as the input to train MV-TER (GVCNN), average respectively, leading to two learned networks. Then, we randomly select {2, 4, 8, 12} views from all the testing views to evaluate the classification accuracy of the two networks respectively, as reported in Table 2. We see that we consistently outperform GVCNN with different numbers of training views and testing views. In particular, when the number of testing views reaches the extreme of two views for multi-view learning, our MV-TER model is still able to achieve classification accuracies of 91.9% and 91.2%, which outperform GVCNN by a large margin." }, { "heading": "4.4.2 ON DIFFERENT LABELING RATES", "text": "We adopt six different label rates in the set {0.01, 0.02, 0.03, 0.04, 0.05, 0.10} to train four models for comparison: MVCNN (AlexNet), GVCNN (AlexNet), MV-TER (MVCNN), average and MV-TER (GVCNN), average. 
When training MVCNN (AlexNet) and GVCNN (AlexNet), we only use a small amount of labeled data to minimize the cross-entropy loss for training, and then employ all the test data for evaluation. When training MV-TER (MVCNN), average and MV-TER (GVCNN), average, we adopt all the data (labeled and unlabeled) to predict the 3D transformations without the
}, { "heading": "C EVALUATION OF THE 3D TRANSFORMATION ESTIMATION", "text": "Further, to intuitively interpret the estimated 3D transformations from the proposed fusion and average decoding schemes, we visualize the multiple views projected from 3D objects Car and Bowl with the estimated 3D transformations applied. In Figure 5(a) and Figure 5(b), the first, second and fourth rows demonstrate the projected views from the 3D object with the same 3D transformation: the ground truth, the estimation from the fusion scheme and the estimation from the average scheme. In the third row, each view is the result of each individually estimated 3D transformation t̂i as in (7), i.e., view-wise transformations. Note that each column is rendered under the same viewpoint. We see that our MV-TER model estimates more accurate 3D transformations via the average scheme, which is consistent with the objective results.\nMoreover, Figure 6 shows the transformation estimation error on the ModelNet40 dataset under the average scheme. The horizontal axis is the index of the training epoch, while the vertical axis refers to the mean squared error. We observe that the MV-TER loss decreases rapidly in the first 40\nepochs. Until the 60th epoch, the mean squared error basically converges to a very small number, thus validating the effectiveness of our model in the transformation estimation.\nD VISUALIZATION OF FEATURE MAPS\nWe visualize the feature maps of multiple views projected from 3D objects before and after transformation in Figure 7 for the same category and 8 for different categories. We see that the feature maps of projected multiple views transform equivariantly with the input views. In Figure 7, the feature maps from the same category are similar. In contrast, in Figure 8, although the 3D objects from two different categories are similar, their feature maps are discriminative. This shows the robustness and effectiveness of the learned descriptor.\ntv stand" } ]
2020
null
SP:38e84c7dbf88091348af8b84192d8383c4b37b5b
[ "of paper: the authors explore adding a soft structural attention constraint to BERT, by penalizing attention weights that are substantially different from a head–dependent \"adjacency\" matrix derived from dependency parses. BERT is then fine-tuned with and without (\"domain-finetuned\") this constraint on corpus data for which fMRI recordings from participants during reading are available. A linear classifier from the final layer of BERT's embedding (mean-pooled) is then learned to the fMRI data. Within this pipeline, domain-finetuned models are not an improvement over unfinetuned BERT, but fine-tuning with the structural attention constraint improves decoding to fMRI data, especially for word-level data (the Wehbe2014 dataset)." ]
Neuroscientists evaluate deep neural networks for natural language processing as possible candidate models for how language is processed in the brain. These models are often trained without explicit linguistic supervision, but have been shown to learn some linguistic structure in the absence of such supervision (Manning et al., 2020), potentially questioning the relevance of symbolic linguistic theories in modeling such cognitive processes (Warstadt & Bowman, 2020). We evaluate across two fMRI datasets whether language models align better with brain recordings if their attention is biased by annotations from syntactic or semantic formalisms. Using structure from dependency or minimal recursion semantics annotations, we find alignments improve significantly for one of the datasets. For the other dataset, the results are more mixed. We present an extensive analysis of these results. Our proposed approach enables the evaluation of more targeted hypotheses about the composition of meaning in the brain, expanding the range of possible scientific inferences a neuroscientist could make, and opens up new opportunities for cross-pollination between computational neuroscience and linguistics.
[]
[ { "authors": [ "Omri Abend", "Ari Rappoport" ], "title": "Universal conceptual cognitive annotation (UCCA). In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": null, "year": 2013 }, { "authors": [ "Samira Abnar", "Lisa Beinborn", "Rochelle Choenni", "Willem Zuidema" ], "title": "Blackbox meets blackbox: Representational similarity and stability analysis of neural language models and brains", "venue": null, "year": 1906 }, { "authors": [ "Lasha Abzianidze", "Johan Bos" ], "title": "Towards universal semantic tagging", "venue": "arXiv preprint arXiv:1709.10381,", "year": 2017 }, { "authors": [ "Andrew James Anderson", "Jeffrey R Binder", "Leonardo Fernandino", "Colin J Humphries", "Lisa L Conant", "Mario Aguilar", "Xixi Wang", "Donias Doko", "Rajeev DS Raizada" ], "title": "Predicting neural activity patterns associated with sentences using a neurobiologically motivated model of semantic representation", "venue": "Cerebral Cortex,", "year": 2017 }, { "authors": [ "Laura Banarescu", "Claire Bonial", "Shu Cai", "Madalina Georgescu", "Kira Griffitt", "Ulf Hermjakob", "Kevin Knight", "Philipp Koehn", "Martha Palmer", "Nathan Schneider" ], "title": "Abstract meaning representation for sembanking", "venue": "In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse,", "year": 2013 }, { "authors": [ "Yonatan Belinkov", "Lluı́s Màrquez", "Hassan Sajjad", "Nadir Durrani", "Fahim Dalvi", "James Glass" ], "title": "Evaluating layers of representation in neural machine translation on part-of-speech and semantic tagging", "venue": "tasks. arXiv preprint arXiv:1801.07772,", "year": 2018 }, { "authors": [ "Johan Bos", "Valerio Basile", "Kilian Evang", "Noortje J Venhuizen", "Johannes Bjerva" ], "title": "The groningen meaning bank", "venue": "In Handbook of linguistic annotation,", "year": 2017 }, { "authors": [ "Jonathan R Brennan", "John T Hale" ], "title": "Hierarchical structure guides rapid linguistic predictions during naturalistic listening", "venue": "PloS one,", "year": 2019 }, { "authors": [ "Jonathan R Brennan", "Edward P Stabler", "Sarah E Van Wagenen", "Wen-Ming Luh", "John T Hale" ], "title": "Abstract linguistic structure correlates with temporal activity during naturalistic comprehension", "venue": "Brain and language,", "year": 2016 }, { "authors": [ "Emanuele Bugliarello", "Naoaki Okazaki" ], "title": "Enhancing machine translation with dependency-aware self-attention", "venue": "arXiv preprint arXiv:1909.03149,", "year": 2019 }, { "authors": [ "Charlotte Caucheteux", "Jean-Rémi King" ], "title": "Language processing in brains and deep neural networks: computational convergence and its limits", "venue": "BioRxiv,", "year": 2020 }, { "authors": [ "Gavin C Cawley", "Nicola LC Talbot" ], "title": "On over-fitting in model selection and subsequent selection bias in performance evaluation", "venue": "The Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Wanxiang Che", "Longxu Dou", "Yang Xu", "Yuxuan Wang", "Yijia Liu", "Ting Liu" ], "title": "Hit-scir at mrp 2019: A unified pipeline for meaning representation parsing via efficient training and effective encoding", "venue": "In Proceedings of the Shared Task on Cross-Framework Meaning Representation Parsing at the 2019 Conference on Natural Language Learning,", "year": 2019 }, { "authors": [ "Yun-Nung Chen", "Dilek Hakkani-Tur", "Gokhan Tur", "Asli Celikyilmaz", "Jianfeng Gao", "Li Deng" ], "title": "Knowledge as 
a teacher: Knowledge-guided structural attention networks", "venue": "arXiv preprint arXiv:1609.03286,", "year": 2016 }, { "authors": [ "Ann Copestake", "Dan Flickinger", "Carl Pollard", "Ivan A Sag" ], "title": "Minimal recursion semantics: An introduction", "venue": "Research on language and computation,", "year": 2005 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Georgiana Dinu", "Angeliki Lazaridou", "Marco Baroni" ], "title": "Improving zero-shot learning by mitigating the hubness problem", "venue": "arXiv preprint arXiv:1412.6568,", "year": 2014 }, { "authors": [ "Robert M.W. Dixon" ], "title": "Basic Linguistic Theory", "venue": null, "year": 2012 }, { "authors": [ "Roman Feldbauer", "Maximilian Leodolter", "Claudia Plant", "Arthur Flexer" ], "title": "Fast approximate hubness reduction for large high-dimensional data", "venue": "IEEE International Conference on Big Knowledge (ICBK),", "year": 2018 }, { "authors": [ "Dan Flickinger", "Stephan Oepen", "Emily M Bender" ], "title": "Sustainable development and refinement of complex linguistic annotations at scale", "venue": "In Handbook of Linguistic Annotation,", "year": 2017 }, { "authors": [ "Stefan L Frank", "Leun J Otten", "Giulia Galli", "Gabriella Vigliocco" ], "title": "The erp response to the amount of information conveyed by words in sentences", "venue": "Brain and language,", "year": 2015 }, { "authors": [ "Jon Gauthier", "Anna Ivanova" ], "title": "Does the brain represent words? an evaluation of brain decoding studies of language understanding", "venue": "arXiv preprint arXiv:1806.00591,", "year": 2018 }, { "authors": [ "Jon Gauthier", "Roger Levy" ], "title": "Linking artificial and human neural representations of language", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),", "year": 2019 }, { "authors": [ "Yoav Goldberg" ], "title": "Assessing bert's syntactic abilities", "venue": "arXiv preprint arXiv:1901.05287,", "year": 2019 }, { "authors": [ "Jan Hajic", "Eva Hajicová", "Jarmila Panevová", "Petr Sgall", "Ondrej Bojar", "Silvie Cinková", "Eva Fučíková", "Marie Mikulová", "Petr Pajas", "Jan Popelka" ], "title": "Announcing prague czech-english dependency treebank 2.0", "venue": "In LREC,", "year": 2012 }, { "authors": [ "Daniel Hershcovich", "Omri Abend", "Ari Rappoport" ], "title": "A transition-based directed acyclic graph parser for UCCA", "venue": "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),", "year": 2017 }, { "authors": [ "Daniel Hershcovich", "Miryam de Lhoneux", "Artur Kulmizev", "Elham Pejhan", "Joakim Nivre" ], "title": "Køpsala: Transition-based graph parsing via efficient training and effective encoding", "venue": "In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies,", "year": 2020 }, { "authors": [ "Michael E Houle" ], "title": "Inlierness, outlierness, hubness and discriminability: an extreme-value-theoretic foundation", "venue": "National Institute of Informatics Technical Report NII-2015-002E, Tokyo, Japan,", "year": 2015 }, { "authors": [ "Angelina Ivanova", "Stephan Oepen", "Lilja Øvrelid", "Dan Flickinger" ], "title": "Who did what to whom? a contrastive study of syntacto-semantic dependencies", "venue": "In Proceedings of the Sixth Linguistic Annotation Workshop,", "year": 2012 }, { "authors": [ "Shailee Jain", "Alexander Huth" ], "title": "Incorporating context into language encoding models for fmri", "venue": "bioRxiv,", "year": 2018 }, { "authors": [ "Hans Kamp", "Uwe Reyle" ], "title": "From discourse to logic: introduction to modeltheoretic semantics of natural language, formal logic and discourse representation theory", "venue": "Studies in linguistics and philosophy,", "year": 1993 }, { "authors": [ "Simon Kornblith", "Mohammad Norouzi", "Honglak Lee", "Geoffrey Hinton" ], "title": "Similarity of neural network representations revisited", "venue": "arXiv preprint arXiv:1905.00414,", "year": 2019 }, { "authors": [ "Nikolaus Kriegeskorte", "Marieke Mur", "Peter A Bandettini" ], "title": "Representational similarity analysis – connecting the branches of systems neuroscience", "venue": "Frontiers in systems neuroscience,", "year": 2008 }, { "authors": [ "Nelson F Liu", "Matt Gardner", "Yonatan Belinkov", "Matthew E Peters", "Noah A Smith" ], "title": "Linguistic knowledge and transferability of contextual representations", "venue": null, "year": 2019 }, { "authors": [ "Alessandro Lopopolo", "Stefan L Frank", "Antal Van den Bosch", "Roel M Willems" ], "title": "Using stochastic language models (slm) to map lexical, syntactic, and phonological information processing in the brain", "venue": "PloS one,", "year": 2017 }, { "authors": [ "Christopher D Manning", "Kevin Clark", "John Hewitt", "Urvashi Khandelwal", "Omer Levy" ], "title": "Emergent linguistic structure in artificial neural networks trained by self-supervision", "venue": "Proceedings of the National Academy of Sciences,", "year": 2020 }, { "authors": [ "Rebecca Marvin", "Tal Linzen" ], "title": "Targeted syntactic evaluation of language models", "venue": "Proceedings of the Society for Computation in Linguistics,", "year": 2019 }, { "authors": [ "Tom M Mitchell", "Svetlana V Shinkareva", "Andrew Carlson", "Kai-Min Chang", "Vicente L Malave", "Robert A Mason", "Marcel Adam Just" ], "title": "Predicting human brain activity associated with the meanings of nouns", "venue": null, "year": 2008 }, { "authors": [ "Joakim Nivre", "Marie-Catherine De Marneffe", "Filip Ginter", "Yoav Goldberg", "Jan Hajic", "Christopher D Manning", "Ryan McDonald", "Slav Petrov", "Sampo Pyysalo", "Natalia Silveira" ], "title": "Universal dependencies v1: A multilingual treebank collection", "venue": "In Proceedings of the Tenth International Conference on Language Resources and Evaluation 
(LREC’16)", "year": 2016 }, { "authors": [ "Joakim Nivre", "Marie-Catherine de Marneffe", "Filip Ginter", "Jan Hajič", "Christopher D. Manning", "Sampo Pyysalo", "Sebastian Schuster", "Francis Tyers", "Daniel Zeman" ], "title": "Universal Dependencies v2: An evergrowing multilingual treebank collection", "venue": "In Proceedings of The 12th Language Resources and Evaluation Conference,", "year": 2020 }, { "authors": [ "Stephan Oepen", "Marco Kuhlmann", "Yusuke Miyao", "Daniel Zeman", "Dan Flickinger", "Jan Hajič", "Angelina Ivanova", "Yi Zhang" ], "title": "SemEval 2014 task 8: Broad-coverage semantic dependency parsing", "venue": "In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014),", "year": 2014 }, { "authors": [ "Stephan Oepen", "Marco Kuhlmann", "Yusuke Miyao", "Daniel Zeman", "Silvie Cinková", "Dan Flickinger", "Jan Hajič", "Zdeňka Urešová" ], "title": "SemEval 2015 task 18: Broad-coverage semantic dependency parsing", "venue": "In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015),", "year": 2015 }, { "authors": [ "Stephan Oepen", "Marco Kuhlmann", "Yusuke Miyao", "Daniel Zeman", "Silvie Cinková", "Dan Flickinger", "Jan Hajic", "Angelina Ivanova", "Zdenka Urešová" ], "title": "Semantic dependency parsing (sdp) graph banks release 1.0 ldc2016t10", "venue": "Web Download,", "year": 2016 }, { "authors": [ "Francisco Pereira", "Bin Lou", "Brianna Pritchett", "Samuel Ritter", "Samuel J Gershman", "Nancy Kanwisher", "Matthew Botvinick", "Evelina Fedorenko" ], "title": "Toward a universal decoder of linguistic meaning from brain activation", "venue": "Nature communications,", "year": 2018 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Martin Schrimpf", "Idan Blank", "Greta Tuckute", "Carina Kauf", "Eghbal A. 
Hosseini", "Nancy Kanwisher", "Joshua Tenenbaum", "Evelina Fedorenko" ], "title": "Artificial neural networks accurately predict language processing in the brain", "venue": "bioRxiv,", "year": 2020 }, { "authors": [ "Dan Schwartz", "Mariya Toneva", "Leila Wehbe" ], "title": "Inducing brain-relevant bias in natural language processing models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Petr Sgall", "Eva Hajicová", "Jarmila Panevová" ], "title": "The meaning of the sentence and its semantic and pragmatic aspects", "venue": null, "year": 1986 }, { "authors": [ "Natalia Silveira", "Timothy Dozat", "Marie-Catherine De Marneffe", "Samuel R Bowman", "Miriam Connor", "John Bauer", "Christopher D Manning" ], "title": "A gold standard dependency corpus for english", "venue": "In LREC,", "year": 2014 }, { "authors": [ "Emma Strubell", "Andrew McCallum" ], "title": "Syntax helps elmo understand semantics: Is syntax still relevant in a deep neural architecture for srl", "venue": "arXiv preprint arXiv:1811.04773,", "year": 2018 }, { "authors": [ "Emma Strubell", "Patrick Verga", "Daniel Andor", "David Weiss", "Andrew McCallum" ], "title": "Linguisticallyinformed self-attention for semantic role labeling", "venue": "In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing,", "year": 2018 }, { "authors": [ "Mariya Toneva", "Leila Wehbe" ], "title": "Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel Bowman" ], "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "venue": "In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP,", "year": 2018 }, { "authors": [ "Shaonan Wang", "Jiajun Zhang", "Haiyan Wang", "Nan Lin", "Chengqing Zong" ], "title": "Fine-grained neural decoding with distributed word representations", "venue": "Information Sciences,", "year": 2020 }, { "authors": [ "Alex Warstadt", "Samuel R Bowman" ], "title": "Can neural networks acquire a structural bias from raw linguistic data", "venue": "arXiv preprint arXiv:2007.06761,", "year": 2020 }, { "authors": [ "Leila Wehbe", "Ashish Vaswani", "Kevin Knight", "Tom M. 
Mitchell" ], "title": "Aligning context-based statistical models of language with brain activity during reading", "venue": "In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "Frank Wilcoxon" ], "title": "Individual comparisons by ranking methods", "venue": "In Breakthroughs in statistics,", "year": 1992 }, { "authors": [ "Yonghui Wu", "Mike Schuster", "Zhifeng Chen", "Quoc V Le", "Mohammad Norouzi", "Wolfgang Macherey", "Maxim Krikun", "Yuan Cao", "Qin Gao", "Klaus Macherey" ], "title": "Google’s neural machine translation system: Bridging the gap between human and machine translation", "venue": "arXiv preprint arXiv:1609.08144,", "year": 2016 }, { "authors": [ "Yue Zhang", "Rui Wang", "Luo Si" ], "title": "Syntax-enhanced self-attention-based semantic role labeling", "venue": "arXiv preprint arXiv:1910.11204,", "year": 2019 }, { "authors": [ "Gauthier", "Levy" ], "title": "2019), which give the rank of a ground-truth sentence representation in the list of nearest neighbors of a predicted sentence representation, ordered by increasing cosine distance. This metric evaluates representations based on their support for contrasts between sentences/words which are relevant to the brain recordings", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent advances in deep neural networks for natural language processing (NLP) have generated excitement among computational neuroscientists, who aim to model how the brain processes language. These models are argued to better capture the complexity of natural language semantics than previous computational models, and are thought to represent meaning in a way that is more similar to how it is hypothesized to be represented in the human brain. For neuroscientists, these models provide possible hypotheses for how word meanings compose in the brain. Previous work has evaluated the plausibility of such candidate models by testing how well representations of text extracted from these models align with brain recordings of humans during language comprehension tasks (Wehbe et al., 2014; Jain & Huth, 2018; Gauthier & Ivanova, 2018; Gauthier & Levy, 2019; Abnar et al., 2019; Toneva & Wehbe, 2019; Schrimpf et al., 2020; Caucheteux & King, 2020), and found some correspondences.\nHowever, modern NLP models are often trained without explicit linguistic supervision (Devlin et al., 2018; Radford et al., 2019), and the observation that they nevertheless learn some linguistic structure has been used to question the relevance of symbolic linguistic theories. Whether injecting such symbolic structures into language models would lead to even better alignment with cognitive measurements, however, has not been studied. In this work, we address this gap by training BERT (§3.1) with structural bias, and evaluate its alignment with brain recordings (§3.2). Structure is derived from three formalisms—UD, DM and UCCA (§3.3)—which come from different linguistic traditions, and capture different aspects of syntax and semantics.\nOur approach, illustrated in Figure 1, allows for quantifying the brain alignment of the structurallybiased NLP models in comparison to the base models, as related to new information about linguistic structure learned by the models that is also potentially relevant to language comprehension in the brain. More specifically, in this paper, we:\n(a) Employ a fine-tuning method utilising structurally guided attention for injecting structural bias into language model (LM) representations.\n(b) Assess the representational alignment to brain activity measurements of the fine-tuned and non-fine-tuned LMs.\n(c) Further evaluate the LMs on a range of targeted syntactic probing tasks and a semantic tagging task, which allow us to uncover fine-grained information about their structuresensitive linguistic capabilities.\n(d) Present an analysis of various linguistic factors that may lead to improved or deteriorated brain alignment." }, { "heading": "2 BACKGROUND: BRAIN ACTIVITY AND NLP", "text": "Mitchell et al. (2008) first showed that there is a relationship between the co-occurrence patterns of words in text and brain activation for processing the semantics of words. Specifically, they showed that a computational model trained on co-occurrence patterns for a few verbs was able to predict fMRI activations for novel nouns. Since this paper was introduced, many works have attempted to isolate other features that enable prediction and interpretation of brain activity (Frank et al., 2015; Brennan et al., 2016; Lopopolo et al., 2017; Anderson et al., 2017; Pereira et al., 2018; Wang et al., 2020). 
Gauthier & Ivanova (2018) however, emphasize that directly optimizing for the decoding of neural representation is limiting, as it does not allow for the uncovering of the mechanisms that underlie these representations. The authors suggest that in order for us to better understand linguistic processing in the brain, we should also aim to train models that optimize for a specific linguistic task and explicitly test these against brain activity.\nFollowing this line of work, Toneva & Wehbe (2019) present experiments both predicting brain activity and evaluating representations on a set of linguistic tasks. They first show that using uniform attention in early layers of BERT (Devlin et al., 2018) instead of pretrained attention leads to better prediction of brain activity. They then use the representations of this altered model to make predictions on a range of syntactic probe tasks, which isolate different syntactic phenomena (Marvin & Linzen, 2019), finding improvements against the pretrained BERT attention. Gauthier & Levy (2019) present a series of experiments in which they fine-tune BERT on a variety of tasks including language modeling as well as some custom tasks such as scrambled language modeling and part-of-speechlanguage modeling. They then perform brain decoding, where a linear mapping is learnt from fMRI recordings to the fine-tuned BERT model activations. They find that the best mapping is obtained with the scrambled language modelling fine-tuning. Further analysis using a structural probe method\nconfirmed that the token representations from the scrambled language model performed poorly when used for reconstructing Universal Dependencies (UD; Nivre et al., 2016) parse trees.\nWhen dealing with brain activity, many confounds may lead to seemingly divergent findings, such as the size of fMRI data, the temporal resolution of fMRI, the low signal-to-noise ratio, as well as how the tasks were presented to the subjects, among many other factors. For this reason, it is essential to take sound measures for reporting results, such as cross-validating models, evaluating on unseen test sets, and conducting a thorough statistical analysis." }, { "heading": "3 APPROACH", "text": "Figure 1 shows a high-level outline of our experimental design, which aims to establish whether injecting structure derived from a variety of syntacto-semantic formalisms into neural language model representations can lead to better correspondence with human brain activation data. We utilize fMRI recordings of human subjects reading a set of texts. Representations of these texts are then derived from the activations of the language models. Following Gauthier & Levy (2019), we obtain LM representations from BERT1 for all our experiments. We apply masked language model fine-tuning with attention guided by the formalisms to incorporate structural bias into BERT’s hidden-state representations. Finally, to compute alignment between the BERT-derived representations—with and without structural bias—and the fMRI recordings, we employ the brain decoding framework, where a linear decoder is trained to predict the LM derived representation of a word or a sentence from the corresponding fMRI recordings." }, { "heading": "3.1 LM-DERIVED REPRESENTATIONS", "text": "BERT uses wordpiece tokenization, dividing the text to sub-word units. 
For a sentence S made up of P wordpieces, we perform mean-pooling over BERT's final-layer hidden states [h_1, ..., h_P], obtaining a vector representation of the sentence S_mean = (1/P) ∑_p h_p (Wu et al., 2016). In initial experiments, we found that this leads to a closer match with brain activity measurements compared to both max-pooling and the special [CLS] token, which is used by Gauthier & Levy (2019). Similarly, for a word W made up of P wordpieces, to derive word representations we apply mean-pooling over the hidden states [h_1, ..., h_P] which correspond to the wordpieces that make up W: W_mean = (1/P) ∑_p h_p. For each dataset, D_LM ∈ R^{n×d_H} denotes a matrix of n LM-derived word or sentence representations, where d_H is BERT's hidden layer dimensionality (d_H = 1024 in our experiments)." }, { "heading": "3.2 NEUROIMAGING DATASETS", "text": "We utilize two fMRI datasets, which differ in the granularity of linguistic cues to which human responses were recorded. The first, collected in Pereira et al. (2018)'s experiment 2, comprises a single brain image per entire sentence. In the second, more fine-grained dataset, recorded by Wehbe et al. (2014), each brain image corresponds to 4 words. We conduct a sentence-level analysis for the former and a word-level one for the latter.2\nPereira2018 consists of fMRI recordings from 8 subjects. The subjects were presented with stimuli comprising 96 Wikipedia-style passages written by the authors, each consisting of 4 sentences. The subjects read the sentences one by one and were instructed to think about their meaning. The resulting data for each subject consists of 384 vectors of dimension 200,000; a vector per sentence. These were reduced to 256 dimensions using PCA by Gauthier & Levy (2019). These PCA projections explain more than 95% of the variance among sentence responses within each subject. We use this reduced version in our experiments.\nWehbe2014 consists of fMRI recordings from 8 subjects as they read a chapter from Harry Potter and the Sorcerer's Stone. For the 5000-word chapter, subjects were presented with words one by one for 0.5 seconds each. An fMRI image was taken every 2 seconds; as a result, each image corresponds to 4 words. The data was further preprocessed (i.e., detrended, smoothed, trimmed) and released by Toneva & Wehbe (2019). We use this preprocessed version to conduct the word-level analysis, for which we use PCA to reduce the dimensions of the fMRI images from 25,000 to 750, explaining at least 95% of the variance for each participant.\n1Specifically: bert-large-uncased trained with whole-word masking.\n2Even though the images are recorded at the 4-gram level of granularity, a word-level analysis is applied, as in Schwartz et al. (2019)." }, { "heading": "3.3 FORMALISMS AND DATA", "text": "To inject linguistic structure into language models, we experiment with three distinct formalisms for the representation of syntactic/semantic structure, coming from different linguistic traditions and capturing different aspects of the linguistic signal: UD, DM and UCCA. An example graph for each formalism is shown in Figure 2. Although there are other important structured linguistic formalisms, including meaning representations such as AMR (Banarescu et al., 2013), DRS (Kamp & Reyle, 1993; Bos et al., 2017) and FGD (Sgall et al., 1986; Hajic et al., 2012), we select three relatively different formalisms as a somewhat representative sample. 
All three have manually annotated datasets, which we use for our experiments.\nUD (Universal Dependencies; Nivre et al., 2020) is a syntactic bi-lexical dependency framework (dependencies are denoted as arcs between words, with one word being the head and another the dependent), which represents grammatical relations according to a coarse cross-lingual scheme. For UD data, we use the English Web Treebank corpus (EWT; Silveira et al., 2014), which contains 254,830 words and 16,622 sentences, taken from five genres of web media: weblogs, newsgroups, emails, reviews, and Yahoo! answers.\nDM (DELPH-IN MRS Bi-Lexical Dependencies; Ivanova et al., 2012) is derived from the underspecified logical forms computed by the English Resource Grammar (Flickinger et al., 2017; Copestake et al., 2005), and is one of the frameworks targeted by the Semantic Dependency Parsing SemEval Shared Tasks (SDP; Oepen et al., 2014; 2015). We use the English SDP data for DM (Oepen et al., 2016), annotated on newspaper text from the Wall Street Journal (WSJ), containing 802,717 words and 35,656 sentences.\nUCCA (Universal Cognitive Conceptual Annotation; Abend & Rappoport, 2013) is based on cognitive linguistic and typological theories, primarily Basic Linguistic Theory (Dixon, 2010/2012). We use UCCA annotations over web review text from the English Web Treebank, and from English Wikipedia articles on celebrities. In total, they contain 138,268 words and 6,572 sentences. For uniformity with the other formalisms, we use a bi-lexical approximation to convert UCCA graphs, which have a hierarchical constituency-like structure, to bi-lexical graphs with edges between words. This conversion keeps about 91% of the information (Hershcovich et al., 2017)." }, { "heading": "3.4 INJECTING STRUCTURAL BIAS INTO LMS", "text": "Recent work has explored ways of modifying attention in order to incorporate structure into neural models (Chen et al., 2016; Strubell et al., 2018; Strubell & McCallum, 2018; Zhang et al., 2019;\nBugliarello & Okazaki, 2019). For instance, Strubell et al. (2018) incorporate syntactic information by training one attention head to attend to syntactic heads, and find that this leads to improvements in Semantic Role Labeling (SRL). Drawing on these approaches, we modify the BERT Masked Language Model (MLM) objective with an additional structural attention constraint. BERT_LARGE consists of 24 layers and 16 attention heads. Each attention head head_i takes as input a sequence of representations h = [h_1, ..., h_P] corresponding to the P wordpieces in the input sequence. Each representation h_p is transformed into query, key, and value vectors. The scaled dot product is computed between the query and all keys, and a softmax function is applied to obtain the attention weights. The output of head_i is a matrix O_i, corresponding to the weighted sum of the value vectors.\nFor each formalism and its corresponding corpus, we extract an adjacency matrix from each sentence's parse. For the sequence S, the adjacency matrix A_S is a matrix of size P × P, where the columns correspond to the heads in the parse tree and the rows correspond to the dependents. The matrix elements denote which tokens are connected in the parse tree, taking into account BERT's wordpiece tokenization. Edge directionality is not considered. We modify BERT to accept a matrix A_S as input, as well as S, while maintaining the original MLM objective. 
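Before the loss is specified in the next paragraph, here is a minimal sketch of assembling A_S from a parse and of a per-head penalty against it; the edge format, the symmetrization, and the function names are assumptions rather than the authors' implementation:

```python
import torch
import torch.nn.functional as F

def adjacency_matrix(edges, P):
    """Build the P x P matrix A_S from (dependent, head) wordpiece index pairs.
    Symmetrized here, one reading of 'edge directionality is not considered'."""
    A = torch.zeros(P, P)
    for dep, head in edges:
        A[dep, head] = 1.0
        A[head, dep] = 1.0
    return A

def attention_guidance_penalty(att_weights, A):
    """att_weights: (P, P) post-softmax attention map of one supervised head.
    Binary cross-entropy between the attention map and the 0/1 targets in A_S."""
    return F.binary_cross_entropy(att_weights.clamp(1e-8, 1.0), A)
```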
For each attention head head_i, we compute the binary cross-entropy loss between O_i and A_S and add that to our total loss, potentially down-weighted by a factor of α (a hyperparameter). BERT's default MLM fine-tuning hyperparameters are employed, and α is set to 0.1 based on validation set perplexity scores in initial experiments.\nStructural information can be injected into BERT in many ways, in many heads, across many layers. Because the appropriate level and extent of supervision is unknown a priori, we run various fine-tuning settings with respect to combinations of the number of layers (1, ..., 24) and attention heads (1, 3, 5, 7, 9, 11, 12) supervised via attention guidance. Layers are excluded from the bottom up (e.g., when 10 layers are supervised, it is the topmost 10); heads are chosen according to their indices (which are arbitrary). This results in a total of 168 fine-tuning settings per formalism. For each fine-tuning setting, we perform two fine-tuning runs.3 For each run r of each fine-tuning setting f, we derive a set of sentence or word representations D_fr ∈ R^{n×d_H} from each fine-tuned model using the approach described in §3.1 for obtaining D_LM, the baseline set of representations from BERT before fine-tuning. We then use development set4 embedding-space hubness—an indicator of the degree of difficulty of indexing and analysing data (Houle, 2015) which has been used to evaluate embedding space quality (Dinu et al., 2014)—as an unsupervised selection criterion for the fine-tuned models, selecting the model with the lowest degree of hubness (per formalism) according to the Robin Hood Index (Feldbauer et al., 2018). This yields three models for each of the two datasets—one per formalism—for which we present results below.\nIn addition to the approach described above, we also experiment with directly optimizing for the prediction of the formalism graphs (i.e., parsing) as a way of encoding structural information in LM representations. We find that this leads to a consistent decline in the alignment of the LMs' representations to brain recordings. Further details can be found in Appendix A." }, { "heading": "3.5 BRAIN DECODING", "text": "To measure the alignment of the different LM-derived representations to the brain activity measurements, brain decoding is performed, following the setup described in Gauthier & Levy (2019).5 For each subject i's fMRI images corresponding to a set of n sentences or words, a ridge regression model is trained to linearly map from brain activity B_i ∈ R^{n×d_B} (n = 384; d_B = 256 for Pereira2018 and n = 4369; d_B = 750 for Wehbe2014) to an LM-derived representation (D_fr or D_LM), minimizing the following loss:\nL_ifr = ‖B_i G_{i→fr} − D_fr‖_2^2 + λ ‖G_{i→fr}‖_2^2\nwhere G_{i→fr} ∈ R^{d_B×d_H} is a linear map, and λ is a hyperparameter for ridge regularization. Nested 12-fold cross-validation (Cawley & Talbot, 2010) is used for the selection of λ, training, and evaluation.\n3We find that the mean difference in brain decoding score (Pearson's r) between two runs of the same setting (across all settings) is low (0.003), indicating that random initialization does not play a major part in our results. We, therefore, do not carry out more runs.\n4For Wehbe2014: the second chapter of Harry Potter. For Pereira2018: the first 500 sentences of English Wikipedia. 
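A minimal sketch of the per-subject decoding fit of §3.5, assuming the PCA-reduced brain matrix B and the LM representations D are precomputed; the λ grid and the inner fold count are illustrative choices, not the authors' exact setup:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, KFold

def decode_subject(B, D, lambdas=(0.01, 0.1, 1.0, 10.0, 100.0)):
    """B: (n, d_B) brain activity; D: (n, d_H) LM-derived representations.
    Nested 12-fold CV: the inner search selects the ridge penalty,
    the outer folds provide held-out evaluation."""
    scores = []
    for train, test in KFold(n_splits=12).split(B):
        inner = GridSearchCV(Ridge(), {"alpha": list(lambdas)}, cv=4)
        inner.fit(B[train], D[train])
        pred = inner.predict(B[test])
        # Pearson's r between each predicted and true held-out representation
        scores += [pearsonr(p, t)[0] for p, t in zip(pred, D[test])]
    return float(np.mean(scores))
```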
5Other methods for evaluating representational correspondence, such as Representational Similarity Analysis (Kriegeskorte et al., 2008) and the Centered Kernel Alignment similarity index (Kornblith et al., 2019), were also explored but were found to be either less powerful or less consistent across subjects and datasets.\nEvaluation To evaluate the regression models, Pearson's correlation coefficient between the predicted and the corresponding held-out true sentence or word representations is computed. We find that this metric6 is consistent across subjects and across the two datasets. We run 5000 bootstrap resampling iterations and a) report the mean7 correlation coefficient (referred to as the brain decoding score/performance), b) use a paired bootstrap test to establish whether two models' mean (across stimuli) scores were drawn from populations having the same distribution8, c) apply the Wilcoxon signed rank test (Wilcoxon, 1992) to the by-subject scores to test for evidence of strength of generalization over subjects. Bonferroni correction (correcting for 3 multiple comparisons) is used to adjust for multiple hypothesis testing. See Appendix C for details." }, { "heading": "4 RESULTS", "text": "To evaluate the effect of the structurally-guided attention, we compute the brain decoding scores for the guided-attention models corresponding to each formalism and fMRI dataset and compare these scores against the brain decoding scores from two baseline models: 1) a domain-finetuned BERT (DF), which fine-tunes BERT using the regular MLM objective on the text of each formalism's training data, and 2) a pretrained BERT. We introduce the domain-finetuned baseline in order to control for any effect that fine-tuning on a specific text domain may have on the model representations. Comparing against this baseline allows us to better isolate the effect of injecting the structural bias from the possible effect of simply fine-tuning on the text domain. We further compare to a pretrained baseline in order to evaluate how the structurally-guided attention approach performs against an off-the-shelf model that is commonly used in brain-alignment experiments." }, { "heading": "4.1 PEREIRA2018", "text": "Figure 3 shows the sentence-level brain decoding performance on the Pereira2018 dataset, for the guided-attention fine-tuned models (GA) and both baseline models (domain-finetuned and pretrained). We find that the domain-finetuned baseline (shown in Figure 3 as solid lines) leads to brain decoding scores that are either lower than or not significantly different from the pretrained baseline. Specifically, for DM and UCCA, the DF baseline performs below the pretrained baseline, which suggests that simply fine-tuning on these corpora results in BERT's representations becoming less aligned with the brain activation measurements from Pereira2018. We find that all GA models outperform their respective DF baselines (for all subjects, p < 0.05).\n6Appendix B shows results for the rank-based metric reported in Gauthier & Levy (2019), which we find to strongly correspond to Pearson's correlation. This metric evaluates representations based on their support for contrasts between sentences/words which are relevant to the brain recordings. Other metrics for the evaluation of goodness of fit were found to be less consistent.\n7Across fine-tuning runs, cross-validation splits, and bootstrap iterations.\n8This is applied per subject to test for strength of evidence of generalization over sentence stimuli. 
We further find that, compared to the pretrained baselines, with p < 0.05: a) the UD GA model shows significantly better brain decoding scores for 7 out of 8 subjects, b) the DM GA model for 4 out of 8 subjects, and c) the UCCA GA model shows scores that are not significantly different from, or lower than, the pretrained baseline for all subjects. For details see Appendix C." }, { "heading": "4.2 WEHBE2014", "text": "For Wehbe2014, where the analysis is conducted on the word level, we again find that domain-finetuned models—especially the one finetuned on the UCCA domain text—achieve considerably lower brain decoding scores than the pretrained model, as shown in Figure 3. Furthermore, the guided-attention models for all three formalisms outperform both baselines by a large, significant margin (after Bonferroni correction, p < 0.0001)." }, { "heading": "5 DISCUSSION AND ANALYSIS", "text": "Overall, our results show that structural bias from syntacto-semantic formalisms can improve the ability of a linear decoder to map the BERT representations of stimuli sentences to their brain recordings. This improvement is especially clear for Wehbe2014, where token representations and not aggregated sentence representations (as in Pereira2018) are decoded, indicating that finer-grained recordings and analyses might be necessary for modelling the correlates of linguistic structure in brain imaging data. To arrive at a better understanding of the effect of the structural bias and its relationship to brain alignment, in what follows we present an analysis of the various factors which affect and interact with this relationship.\nThe effect of domain Our results suggest that the domain of the fine-tuning data and of the stimuli might play a significant role, despite having been previously overlooked: simply fine-tuning on data from different domains leads to varying degrees of alignment to brain data. To quantify this effect, we compute the average word perplexity of the stimuli from both fMRI datasets for the pretrained and DF baselines on each of the three domain datasets.9 If the domain of the corpora used for fine-tuning influences our results as hypothesized, we expect this score to be higher for the DF baselines. We find that this is indeed the case and that, for those baselines (DF), an increase in perplexity roughly corresponds to lower brain decoding scores—see detailed results in Appendix D. This finding calls attention to the necessity of accounting for domain match in work utilizing cognitive measurements and emphasizes the importance of the domain-finetuned baseline in this study.\nTargeted syntactic evaluation We evaluate all models on a range of syntactic probing tasks proposed by Marvin & Linzen (2019).10 This dataset tests the ability of models to distinguish minimal pairs of grammatical and ungrammatical sentences across a range of syntactic phenomena. 
Figure 4 shows the results for the three Wehbe2014 models across all subject-verb agreement (SVA) tasks.11 We observe that after attention-guided fine-tuning: a) the DM guided-attention model and, to a lesser extent, the UD guided-attention model have higher scores than the pretrained baseline and the domain-finetuned baselines for most SVA tasks, and b) the ranking of the models corresponds to their ranking on the brain decoding task (DM > UD > UCCA).12 Although all three formalisms annotate the subject-verb-object or predicate-argument structure necessary for solving SVA tasks, it appears that some of them do so more effectively, at least when encoded into an LM by GA.\n9Note that this is not equivalent to the commonly utilised sequence perplexity (which cannot be calculated for non-auto-regressive models) but suffices for quantifying the effect of domain shift.\n10Using the evaluation script from Goldberg (2019).\n11See Appendix F for the full set of results, both for Wehbe2014 and for Pereira2018, which show similar patterns.\n12For reflexive anaphora tasks, these trends are reversed: the models underperform the pretrained baseline and their ranking is the converse of their brain decoding scores. Reflexive anaphora are not explicitly annotated for in any of the three formalisms. We find, however, that they occur in a larger proportion of the sentences comprising the UCCA corpus (1.4%) than the UD (0.67%) or DM (0.64%) ones, indicating that domain might play a role here too.\nEffect on semantics To evaluate the impact of structural bias on the encoding of semantic information, we consider Semantic Tagging (Abzianidze & Bos, 2017), commonly used to analyse the semantics encoded in LM representations (Belinkov et al., 2018; Liu et al., 2019): tokens are labeled to reflect their semantic role in context. For each of the three guided-attention Wehbe2014 models and the pretrained model, a linear probe is trained to predict a word's semantic tag, given the contextual representation induced by the model (see Appendix E for details). For each of the three GA models, Figure 5 shows the change in test set classification F1-score,13 relative to the pretrained baseline, per coarse-grained grouping of tags.14 We find that the structural bias improves the ability to correctly recognize almost all of the semantic phenomena considered, indicating that our method for injecting linguistic structure leads to better encoding of a broad range of semantic distinctions. Furthermore, the improvements are largest for phenomena that receive special treatment in the linguistic formalisms, namely discourse markers and temporal entities. Identifying named entities is negatively impacted by GA with DM, where they are indiscriminately labeled as compounds.\nContent words and function words are treated differently by each of the formalisms: UD and UCCA encode all words, with function words having special labels, while DM only attaches content words. Our guided attention ignores edge labels (dependency relations), and so it considers UD and UCCA's attachment of function words just as meaningful as that of content words. Figure 8 in Appendix G shows a breakdown of brain decoding performance on content and function words for Wehbe2014. 
We find that: a) all GA models and the pretrained model show a higher decoding score for function words than for content words, and b) a large part of the decrease in decoding score of two of the three domain-finetuned baselines (UD and DM) compared to the pretrained model is due to content words.\n13Note that the test set consists of 263,516 instances; therefore, the margin of change in the number of instances here is considerable, e.g., 5652 × 0.6% ≈ 40 instances for the DM and UCCA models on the temporal category, which is the least frequent in the test set. See the test set category frequencies in the appendix.\n14The eight most frequent coarse-grained categories from an original set of ten are included—ordered by frequency from left to right; we exclude the UNKNOWN category because it is uninformative and the ANAPHORIC category because it shows no change from the baseline for all three models.\nCaveats The fMRI data used for both the sentence- and word-level analyses was recorded while participants read text without performing a specific task. Although we observe some correlates of linguistic structure, it is possible that uncovering more fine-grained patterns would necessitate brain data recorded while participants perform a targeted task. For future work, it would be interesting to investigate whether an analysis based on a continuous, naturalistic listening fMRI dataset (Brennan & Hale, 2019) matches the results we have obtained. Regarding the different linguistic formalisms, there are potential confounds such as domain, corpus size15, and dependency length (i.e., the distance between words attached by a relation), which depend both on the formalism and on the underlying training set text. To properly control for them, a corpus annotated for all formalisms is necessary, but such a corpus of sufficient size is not currently available.\nConclusions We propose a framework to investigate the effect of incorporating specific structural biases in language models for brain decoding. We present evidence that inducing a linguistic structure bias through fine-tuning, using attention guided according to syntacto-semantic formalisms, can improve brain decoding performance across two fMRI datasets. For each of the 3 investigated formalisms, we observed that the models that aligned most with the brain performed best at a range of subject-verb agreement syntactic tasks, suggesting that language comprehension in the brain, as captured by fMRI recordings, and the tested syntactic tasks may rely on common linguistic structure, which was partly induced by the added attention constraints. Across formalisms, we found that models with attention guided by DM and UD consistently exhibited better alignment with the brain than UCCA for both fMRI datasets. Rather than concluding that DM and UD are more cognitively plausible, controlled experiments, with fine-tuning on each annotated corpus as plain text, suggest that the text domain is an important, previously overlooked confound. Further investigation is needed using a common annotated corpus for all formalisms to draw conclusions about their relative aptness.\nOverall, our proposed approach enables the evaluation of more targeted hypotheses about the composition of meaning in the brain, and opens up new opportunities for cross-pollination between computational neuroscience and linguistics. 
To facilitate this, we make all code and data for our experiments available at: http://github.com/anonymized\n15It is interesting to note that the decoding score rank for Wehbe2014 corresponds to the fine-tuning corpus size for the GA models (DM > UD > UCCA), but not for the domain-finetuned models. A reasonable conclusion to draw from this is that dataset size might play a role in the effective learning of a structural bias." }, { "heading": "A FINE-TUNING VIA PARSING", "text": "[Table 1 residue: UD 0.277 0.186 0.159; UD 0.225 0.092 0.065.]\nResults for the models fine-tuned via parsing show divergence in brain decoding performance. Indeed, we find that as parsing performance (as measured by unlabeled undirected attachment scores (UUAS)) improves on the held-out development set, brain decoding performance declines. This finding is congruent with the results of Gauthier & Levy (2019), which show that fine-tuning on GLUE tasks (Wang et al., 2018) leads to a decline in brain decoding performance, until a ceiling point where it eventually stabilizes. In our experiments, after one epoch of fine-tuning, decoding performance is equivalent to that achieved by the pretrained model. However, with more fine-tuning, the models consistently diverge, as shown in Table 1. These results are averaged over two fine-tuning runs. Understanding the learning dynamics that lead to such divergence is an interesting avenue for future work." }, { "heading": "B MEAN/MEDIAN RANK RESULTS", "text": "Table 2 shows results for the Pearson's r metric reported in the main paper, alongside the mean and median rank metrics reported in Gauthier & Levy (2019), which give the rank of a ground-truth sentence representation in the list of nearest neighbors of a predicted sentence representation, ordered by increasing cosine distance. This metric evaluates representations based on their support for contrasts between sentences/words which are relevant to the brain recordings. The table shows that the models which have higher Pearson r scores also have a lower average ground-truth word/sentence nearest-neighbour rank, i.e., they induce representations that better support contrasts between sentences/words which are relevant to the brain recordings." }, { "heading": "C SIGNIFICANCE TESTING", "text": "Bootstrapping The bootstrapping procedure is described below. For each subject of m subjects:\n1. There are n stimuli sentences, corresponding to n fMRI recordings. A linear decoder is trained to map each recording to its corresponding LM-extracted (PRE, DF-B, GA) sentence representation. This is done using 12-fold cross-validation. This yields a predicted ‘sentence representation’ per stimulus sentence.\n2. To compensate for the small size of the dataset, which might lead to a noisy estimate of the linear decoder's performance, we now randomly resample n datapoints (with replacement) from the full n datapoints.\n3. For each resampling, our evaluation metrics (Pearson's r, mean rank, etc.) are computed between the sampled predictions and their corresponding ‘gold representations’, for all sets of LM representations. We run 5000 such iterations.\n4. This gives us 5000 such paired mean (across the n samples, that is) scores for all models.\n5. When comparing two models, e.g. GA DM vs. PRE, to test our results for strength of evidence of generalization over stimuli, we compute the proportion of these 5000 paired samples where e.g. GA DM's mean sample score is greater than PRE's. 
After Bonferroni correction for multiple hypothesis testing, this is the p-value we report. See Table 3 for these per-subject p-values for Pereira 2018. For Wehbe 2014, comparisons between each of the GA models and the pretrained baseline lead to p = 0.000 (i.e., the GA model's mean score is greater than the pretrained baseline's mean score for all 5000 sets of paired samples), for all subjects. We, therefore, do not include a similar table.\n6. We average over these 5000 samples per subject, and use these m subject means for the across-subject significance testing, which is described below.\nStrength of generalization across subjects To test our results for strength of generalization across subjects, we apply the Wilcoxon signed rank test (Wilcoxon, 1992) to the m by-subject mean scores (see above), comparing the GA models to the pretrained baselines. Since m = 8 for both datasets, the lowest possible p-value is 0.0078 (if every subject's difference score consistently favors the GA model over the baseline, or vice versa).\nIn the case of Pereira 2018: for PRE vs. GA UD we get a p-value of 0.0078 (0.0234 after Bonferroni correction); for PRE vs. GA DM we get a p-value of 0.015 (0.045 after Bonferroni correction); for\nPRE vs. GA UCCA we get a p-value of 0.0078 (0.0234 after Bonferroni correction; here PRE > GA UCCA for all subjects).\nIn the case of Wehbe 2014: all comparisons yield a p-value of 0.0078 (0.045 after Bonferroni correction), where the GA model > the pretrained baseline." }, { "heading": "D THE DOMAIN EFFECT", "text": "Table 4 shows the average word perplexity scores for the pretrained model and the domain-finetuned models for each of the three text domains on the stimuli from Pereira2018 and Wehbe2014. Scores are averaged over the words in a sentence and the sentences (stimuli) in the datasets." }, { "heading": "E SEMANTIC TAGGING", "text": "Probing details Representations for the probing task are derived as described in §3.1 for each sentence in the development and testing sets from Abzianidze & Bos (2017). The development set is employed as a training set, because it is mostly manually annotated/corrected (as opposed to the much noisier training set) and because it is already possible to train rather accurate semantic taggers, which suffice for our analysis, with a training set of that size (131,337 instances). We report results for the official test set. Table 5 shows the frequency of each semantic category we report scores for in the test set. An L2-regularised logistic regression model is utilised.\nFurther discussion We observe the largest improvements for the DISCOURSE and TEMPORAL categories. The former involves identifying subordinate, coordinate, appositional, and contrast relations. These relations are highly influenced by context, and correctly classifying them can often be contingent on longer dependencies, of which the structural bias increases ‘awareness’. The TEMPORAL category, on the other hand, consists of tags such as clocktime or time of day, which are applied to multi-word expressions, e.g., 27th December. Highlighting these dependencies by assigning more weight to the attention between their sub-parts is likely helpful for their accurate identification." }, { "heading": "F TARGETED SYNTACTIC EVALUATION SCORES", "text": "Figures 6 and 7 show the performance of the Pereira2018 and Wehbe2014 models and the four baselines for each of the syntactic categories from Marvin & Linzen (2019)." 
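As a companion to the bootstrapping procedure of Appendix C, a minimal sketch of the paired test for one subject; the vectorized resampling and the fixed seed are implementation assumptions:

```python
import numpy as np

def paired_bootstrap(scores_a, scores_b, n_iter=5000, seed=0):
    """scores_a, scores_b: per-stimulus scores of two models on the same n items.
    Returns the fraction of resamples in which model A's mean exceeds model B's;
    one minus this fraction is compared against a Bonferroni-corrected threshold."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(scores_a), np.asarray(scores_b)
    idx = rng.integers(0, len(a), size=(n_iter, len(a)))  # resample with replacement
    return float(np.mean(a[idx].mean(axis=1) > b[idx].mean(axis=1)))
```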
}, { "heading": "G CONTENT WORDS AND FUNCTION WORDS ANALYSIS", "text": "Figure 8 shows the breakdown of brain decoding accuracy by content and function words for Wehbe2014. We consider content words as words whose universal part-of-speech according to spaCy is one of the following: {ADJ, ADV, NOUN, PROPN, VERB, X, NUM}. Out of a total of 4369, 2804 are considered content words and 1835 as function words." } ]
2020
DOES INJECTING LINGUISTIC STRUCTURE INTO LANGUAGE MODELS LEAD TO BETTER ALIGNMENT WITH BRAIN RECORDINGS?
SP:1387491705ff05c21b119fad95ef3e63beaa57c9
[ "This work extends the binary Hopfield network (Demircigil et al., 2017) to continuous patterns and states. Connections are drawn between the result model to the attention layers of the transformers, the pooling operation of LSTM, similarity search, and fully connected layers. Experimental results are briefly described for analyzing the attention of Bert models, multiple instance learning, and small UCI classification tasks." ]
We introduce a modern Hopfield network with continuous states and a corresponding update rule. The new Hopfield network can store exponentially (with the dimension of the associative space) many patterns, retrieves the pattern with one update, and has exponentially small retrieval errors. It has three types of energy minima (fixed points of the update): (1) global fixed point averaging over all patterns, (2) metastable states averaging over a subset of patterns, and (3) fixed points which store a single pattern. The new update rule is equivalent to the attention mechanism used in transformers. This equivalence enables a characterization of the heads of transformer models. These heads perform in the first layers preferably global averaging and in higher layers partial averaging via metastable states. The new modern Hopfield network can be integrated into deep learning architectures as layers to allow the storage of and access to raw input data, intermediate results, or learned prototypes. These Hopfield layers enable new ways of deep learning, beyond fully-connected, convolutional, or recurrent networks, and provide pooling, memory, association, and attention mechanisms. We demonstrate the broad applicability of the Hopfield layers across various domains. Hopfield layers improved state-of-the-art on three out of four considered multiple instance learning problems as well as on immune repertoire classification with several hundreds of thousands of instances. On the UCI benchmark collections of small classification tasks, where deep learning methods typically struggle, Hopfield layers yielded a new state-of-the-art when compared to different machine learning methods. Finally, Hopfield layers achieved state-of-the-art on two drug design datasets. The implementation is available at: https://github.com/ml-jku/hopfield-layers
[ { "affiliations": [], "name": "YOU NEED" }, { "affiliations": [], "name": "Hubert Ramsauer" }, { "affiliations": [], "name": "Bernhard Schäfl" }, { "affiliations": [], "name": "Johannes Lehner" }, { "affiliations": [], "name": "Philipp Seidl" }, { "affiliations": [], "name": "Michael Widrich" }, { "affiliations": [], "name": "Thomas Adler" }, { "affiliations": [], "name": "Lukas Gruber" }, { "affiliations": [], "name": "Markus Holzleitner" }, { "affiliations": [], "name": "David Kreil" }, { "affiliations": [], "name": "Michael Kopp" }, { "affiliations": [], "name": "Günter Klambauer" }, { "affiliations": [], "name": "Johannes Brandstetter" }, { "affiliations": [], "name": "Sepp Hochreiter" } ]
[ { "authors": [ "Y. Abu-Mostafa", "J.-M" ], "title": "StJacques. Information capacity of the Hopfield model", "venue": "IEEE Transactions on Information Theory,", "year": 1985 }, { "authors": [ "R. Agrawal", "T. Imieliundefinedski", "A. Swami" ], "title": "Mining association rules between sets of items in large databases", "venue": "SIGMOD Rec.,", "year": 1993 }, { "authors": [ "R. Akbar", "P.A. Robert", "M. Pavlović", "J.R. Jeliazkov", "I. Snapkov", "A. Slabodkin", "C.R. Weber", "L. Scheffer", "E. Miho", "I.H. Haff" ], "title": "A compact vocabulary of paratope-epitope interactions enables predictability of antibody-antigen binding", "venue": "bioRxiv,", "year": 2019 }, { "authors": [ "F. Alzahrani", "A. Salem" ], "title": "Sharp bounds for the lambert w function", "venue": "Integral Transforms and Special Functions,", "year": 2018 }, { "authors": [ "S. Andrews", "I. Tsochantaridis", "T. Hofmann" ], "title": "Support vector machines for multiple-instance learning", "venue": "Advances in Neural Information Processing Systems", "year": 2003 }, { "authors": [ "J. Ba", "G.E. Hinton", "V. Mnih", "J.Z. Leibo", "C. Ionescu" ], "title": "Using fast weights to attend to the recent past", "venue": "Advances in Neural Information Processing Systems", "year": 2016 }, { "authors": [ "J. Ba", "G.E. Hinton", "V. Mnih", "J.Z. Leibo", "C. Ionescu" ], "title": "Using fast weights to attend to the recent past", "venue": null, "year": 2016 }, { "authors": [ "D. Bahdanau", "K. Cho", "Y. Bengio" ], "title": "Neural machine translation by jointly learning to align and translate", "venue": "ArXiv, 1409.0473,", "year": 2014 }, { "authors": [ "A. Banino", "A.P. Badia", "R. Köster", "M.J. Chadwick", "V. Zambaldi", "D. Hassabis", "C. Barry", "M. Botvinick", "D. Kumaran", "C. Blundell" ], "title": "MEMO: a deep network for flexible combination of episodic memories", "venue": null, "year": 2001 }, { "authors": [ "A. Barra", "M. Beccaria", "A. Fachechi" ], "title": "A new mechanical approach to handle generalized Hopfield neural networks", "venue": "Neural Networks,", "year": 2018 }, { "authors": [ "H.H. Bauschke", "P.L. Combettes" ], "title": "Convex Analysis and Monotone Operator Theory in Hilbert Spaces", "venue": "Cham: Springer International Publishing,", "year": 2017 }, { "authors": [ "S. Boyd", "L. Vandenberghe" ], "title": "Convex Optimization", "venue": null, "year": 2009 }, { "authors": [ "J.S. Brauchart", "A.B. Reznikov", "E.B. Saff", "I.H. Sloan", "Y.G. Wang", "R.S. Womersley" ], "title": "Random point sets on the sphere - hole radii, covering, and separation", "venue": "Experimental Mathematics,", "year": 2016 }, { "authors": [ "J. Bruck", "V.P. Roychowdhury" ], "title": "On the number of spurious memories in the Hopfield model", "venue": "IEEE Transactions on Information Theory,", "year": 1990 }, { "authors": [ "T. Cai", "J. Fan", "T. Jiang" ], "title": "Distributions of angles in random packing on spheres", "venue": "Journal of Machine Learning Research,", "year": 2013 }, { "authors": [ "M.-A. Carbonneau", "V. Cheplygina", "E. Granger", "G. Gagnon" ], "title": "Multiple instance learning: a survey of problem characteristics and applications", "venue": "Pattern Recognition,", "year": 2018 }, { "authors": [ "Marc-André Carbonneau", "Eric Granger", "Alexandre J. Raymond", "Ghyslain Gagnon" ], "title": "Robust multiple-instance learning ensembles using random subspace instance selection", "venue": "Pattern Recognition,", "year": 2031 }, { "authors": [ "M. Carreira-Perpiñán", "C.K.I. 
Williams" ], "title": "An isotropic Gaussian mixture can have more modes than components", "venue": "Technical Report EDI-INF-RR-0185,", "year": 2003 }, { "authors": [ "A. Carta", "A. Sperduti", "D. Bacciu" ], "title": "Encoding-based memory modules for recurrent neural networks. ArXiv", "venue": null, "year": 2001 }, { "authors": [ "T. Chen", "C. Guestrin" ], "title": "XGBoost: A scalable tree boosting system", "venue": "In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "Y. Chen", "J. Bi", "J.Z. Wang" ], "title": "MILES: Multiple-instance learning via embedded instance selection", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 1931 }, { "authors": [ "V Cheplygina", "DM Tax", "M Loog" ], "title": "Dissimilarity-based ensembles for multiple instance learning", "venue": "IEEE transactions on neural networks and learning systems,", "year": 2016 }, { "authors": [ "K. Cho", "B. vanMerriënboer", "C. Gulcehre", "D. Bahdanau", "F. Bougares", "H. Schwenk", "Y. Bengio" ], "title": "Learning phrase representations using RNN encoder–decoder for statistical machine translation", "venue": "In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2014 }, { "authors": [ "K. Clark", "M.-T. Luong", "Q.V. Le", "C.D. Manning" ], "title": "ELECTRA: Pre-training text encoders as discriminators rather than generators. ArXiv", "venue": null, "year": 2003 }, { "authors": [ "A. Crisanti", "D.J. Amit", "H. Gutfreund" ], "title": "Saturation level of the Hopfield model for neural network", "venue": "Europhysics Letters (EPL),", "year": 1986 }, { "authors": [ "I. Danihelka", "G. Wayne", "B. Uria", "N. Kalchbrenner", "A. Graves" ], "title": "Associative long short-term memory", "venue": "Proceedings of The 33rd International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "M. Daniluk", "T. Rocktäschel", "J. Welbl", "S. Riedel" ], "title": "Frustratingly short attention spans in neural language modeling", "venue": "ICRL", "year": 2017 }, { "authors": [ "M. Demircigil", "J. Heusel", "M. Löwe", "S. Upgang", "F. Vermet" ], "title": "On a model of associative memory with huge storage capacity", "venue": "Journal of Statistical Physics,", "year": 2017 }, { "authors": [ "J. Devlin", "M.-W. Chang", "K. Lee", "K. Toutanova" ], "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "venue": null, "year": 2018 }, { "authors": [ "J. Devlin", "M.-W. Chang", "K. Lee", "K. Toutanova" ], "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association", "year": 2019 }, { "authors": [ "T.G. Dietterich", "R.H. Lathrop", "T. Lozano-Pérez" ], "title": "Solving the multiple instance problem with axis-parallel rectangles", "venue": "Artificial Intelligence,", "year": 1997 }, { "authors": [ "R.O. Emerson", "W.S. DeWitt", "M. Vignali", "J. Gravley", "J.K. Hu", "E.J. Osborne", "C. Desmarais", "M. Klinger", "C.S. Carlson", "J.A. Hansen" ], "title": "Immunosequencing identifies signatures of cytomegalovirus exposure history and HLA-mediated effects on the T cell repertoire", "venue": "Nature Genetics,", "year": 2017 }, { "authors": [ "M. Fernández-Delgado", "E. Cernadas", "S. Barro", "D. 
Amorim" ], "title": "Do we need hundreds of classifiers to solve real world classification problems", "venue": "The Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "V. Folli", "M. Leonetti", "G. Ruocco" ], "title": "On the maximum storage capacity of the Hopfield model", "venue": "Frontiers in Computational Neuroscience,", "year": 2017 }, { "authors": [ "B. Gao", "L. Pavel" ], "title": "On the properties of the softmax function with application in game theory and reinforcement learning", "venue": null, "year": 2017 }, { "authors": [ "D.J.H. Garling" ], "title": "Analysis on Polish Spaces and an Introduction to Optimal Transportation. London Mathematical Society Student Texts", "venue": null, "year": 2157 }, { "authors": [ "J. Gilmer", "S.S. Schoenholz", "P.F. Riley", "O. Vinyals", "G.E. Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": "In Proceedings of the 34th International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "N. Guttenberg", "N. Virgo", "O. Witkowski", "H. Aoki", "R. Kanai" ], "title": "Permutation-equivariant neural networks applied to dynamics", "venue": "prediction. arXiv,", "year": 2016 }, { "authors": [ "J. Hertz", "A. Krogh", "R.G. Palmer" ], "title": "Introduction to the Theory of Neural Computation", "venue": "AddisonWesley Longman Publishing Co., Inc.,", "year": 1991 }, { "authors": [ "S. Hochreiter" ], "title": "Untersuchungen zu dynamischen neuronalen Netzen", "venue": "Diploma thesis, Institut für Informatik, Lehrstuhl Prof. Brauer, Technische Universität München,", "year": 1991 }, { "authors": [ "S. Hochreiter", "J. Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural Comput.,", "year": 1997 }, { "authors": [ "A. Hoorfar", "M. Hassani" ], "title": "Inequalities on the Lambertw function and hyperpower function", "venue": "Journal of Inequalities in Pure and Applied Mathematics,", "year": 2008 }, { "authors": [ "J.J. Hopfield" ], "title": "Neural networks and physical systems with emergent collective computational abilities", "venue": "Proceedings of the National Academy of Sciences,", "year": 1982 }, { "authors": [ "J.J. Hopfield" ], "title": "Neurons with graded response have collective computational properties like those of two-state neurons", "venue": "Proceedings of the National Academy of Sciences,", "year": 1984 }, { "authors": [ "M. Ilse", "J.M. Tomczak", "M. Welling" ], "title": "Attention-based deep multiple instance learning", "venue": "International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "M. Ilse", "J.M. Tomczak", "M. Welling" ], "title": "Deep multiple instance learning for digital histopathology", "venue": "In Handbook of Medical Image Computing and Computer Assisted Intervention,", "year": 2020 }, { "authors": [ "D. Jiang", "Z. Wu", "C.-Y. Hsieh", "G. Chen", "B. Liao", "Z. Wang", "C. Shen", "D. Cao", "J. Wu", "T. Hou" ], "title": "Could graph neural networks learn better molecular representation for drug discovery? a comparison study of descriptor-based and graph-based models", "venue": "Journal of Cheminformatics,", "year": 2020 }, { "authors": [ "M. Kandemir", "C. Zhang", "F.A. Hamprecht" ], "title": "Empowering multiple instance histopathology cancer diagnosis by cell graphs", "venue": "In International Conference on Medical Image Computing and Computer-Assisted Intervention,", "year": 2014 }, { "authors": [ "M.M.R. Khan", "R.B. Arif", "M.A.B. Siddique", "M. R" ], "title": "Oishe. 
Study and observation of the variation of accuracies of KNN, SVM, LMNN, ENN algorithms on eleven different datasets from UCI machine learning repository", "venue": "In 4th International Conference on Electrical Engineering and Information & Communication Technology (iCEEiCT),", "year": 2018 }, { "authors": [ "T.N. Kipf", "M. Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "ArXiv, 1609.02907,", "year": 2016 }, { "authors": [ "G. Klambauer", "T. Unterthiner", "A. Mayr", "S. Hochreiter" ], "title": "Self-normalizing neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "P. Koiran" ], "title": "Dynamics of discrete time, continuous state Hopfield networks", "venue": "Neural Computation,", "year": 1994 }, { "authors": [ "I. Korshunova", "J. Degrave", "F. Huszar", "Y. Gal", "A. Gretton", "J. Dambre" ], "title": "BRUNO: A deep recurrent model for exchangeable data", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "D. Krotov", "J.J. Hopfield" ], "title": "Dense associative memory for pattern recognition", "venue": "Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "D. Krotov", "J.J. Hopfield" ], "title": "Dense associative memory is robust to adversarial inputs", "venue": "Neural Computation,", "year": 2018 }, { "authors": [ "D. Krotov", "J.J. Hopfield" ], "title": "Large associative memory problem in neurobiology and machine learning", "venue": null, "year": 2008 }, { "authors": [ "M.G.E.Ş. Küçükaşcı" ], "title": "Baydoğan. Bag encoding strategies in multiple instance learning problems", "venue": "Information Sciences,", "year": 2018 }, { "authors": [ "M. Kuhn", "I. Letunic", "L.J. Jensen", "P. Bork" ], "title": "The SIDER database of drugs and side effects", "venue": "Nucleic Acids Research,", "year": 2016 }, { "authors": [ "T. Lipp", "S. Boyd" ], "title": "Variations and extension of the convex–concave procedure", "venue": "Optimization and Engineering,", "year": 2016 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": "arXiv preprint arXiv:1711.05101,", "year": 2017 }, { "authors": [ "O. Maron", "T. Lozano-Pérez" ], "title": "A framework for multiple-instance learning", "venue": "Advances in Neural Information Processing Systems,", "year": 1998 }, { "authors": [ "I.F. Martins", "A.L. Teixeira", "L. Pinheiro", "A.O. Falcao" ], "title": "A Bayesian approach to in silico blood-brain barrier penetration modeling", "venue": "Journal of Chemical Information and Modeling,", "year": 2012 }, { "authors": [ "C. Mazza" ], "title": "On the storage capacity of nonlinear neural networks", "venue": "Neural Networks,", "year": 1997 }, { "authors": [ "R.J. McEliece", "E.C. Posner", "E.R. Rodemich", "S.S. Venkatesh" ], "title": "The capacity of the Hopfield associative memory", "venue": "IEEE Trans. Inf. Theor.,", "year": 1987 }, { "authors": [ "R.R. Meyer" ], "title": "Sufficient conditions for the convergence of monotonic mathematical programming algorithms", "venue": "Journal of Computer and System Sciences,", "year": 1976 }, { "authors": [ "F.W.J. Olver", "D.W. Lozier", "R.F. Boisvert", "C.W. Clark" ], "title": "NIST handbook of mathematical functions", "venue": null, "year": 2010 }, { "authors": [ "A. Paszke", "S. Gross", "S. Chintala", "G. Chanan", "E. Yang", "Z. DeVito", "Z. Lin", "A. Desmaison", "L. Antiga", "A. 
Lerer" ], "title": "Automatic differentiation in PyTorch", "venue": "In Workshop in Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "A. Paszke", "S. Gross", "F. Massa", "A. Lerer", "J. Bradbury", "G. Chanan", "T. Killeen", "Z. Lin", "N. Gimelshein", "L. Antiga" ], "title": "PyTorch: An imperative style, high-performance deep learning library", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "C.R. Qi", "H. Su", "M. Kaichun", "L.J. Guibas" ], "title": "PointNet: Deep learning on point sets for 3d classification and segmentation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "C.R. Qi", "L. Yi", "H. Su", "L.J. Guibas" ], "title": "PointNet++: Deep hierarchical feature learning on point sets in a metric space", "venue": "In 31st International Conference on Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "A. Rangarajan", "S. Gold", "E. Mjolsness" ], "title": "A novel optimizing network architecture with applications", "venue": "Neural Computation,", "year": 1996 }, { "authors": [ "A. Rangarajan", "A. Yuille", "Eric E. Mjolsness" ], "title": "Convergence properties of the softassign quadratic assignment algorithm", "venue": "Neural Computation,", "year": 1999 }, { "authors": [ "S. Ravanbakhsh", "J. Schneider", "B. Poczos" ], "title": "Deep learning with sets and point", "venue": "clouds. arXiv,", "year": 2016 }, { "authors": [ "I. Schlag", "J. Schmidhuber" ], "title": "Learning to reason with third order tensor products", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "I. Schlag", "P. Smolensky", "R. Fernandez", "N. Jojic", "J. Schmidhuber", "J. Gao" ], "title": "Enhancing the transformer with explicit relational encoding for math problem solving", "venue": null, "year": 1910 }, { "authors": [ "I. Schlag", "K. Irie", "J. Schmidhuber" ], "title": "Linear transformers are secretly fast weight memory", "venue": "systems. arXiv,", "year": 2021 }, { "authors": [ "J. Schmidhuber" ], "title": "Learning to control fast-weight memories: An alternative to dynamic recurrent networks", "venue": "In Neural Computations,", "year": 1992 }, { "authors": [ "J. Schmidhuber" ], "title": "Deep learning in neural networks: An overview", "venue": "Neural Networks,", "year": 2015 }, { "authors": [ "B. Schölkopf", "A.J. Smola" ], "title": "Learning with Kernels – Support Vector Machines, Regularization, Optimization, and Beyond", "venue": null, "year": 2002 }, { "authors": [ "B.K. Sriperumbudur", "G.R. Lanckriet" ], "title": "On the convergence of the concave-convex procedure", "venue": "Advances in Neural Information Processing Systems", "year": 2009 }, { "authors": [ "S. Sukhbaatar", "A. Szlam", "J. Weston", "R. Fergus" ], "title": "End-to-end memory networks", "venue": "Advances in Neural Information Processing Systems", "year": 2015 }, { "authors": [ "R.S. Sutton", "A.G. Barto" ], "title": "Reinforcement Learning: An Introduction", "venue": null, "year": 2018 }, { "authors": [ "F. Tanaka", "S.F. Edwards" ], "title": "Analytic theory of the ground state properties of a spin glass. I. Ising spin glass", "venue": "Journal of Physics F: Metal Physics,", "year": 1980 }, { "authors": [ "Y. Tay", "D. Bahri", "D. Metzler", "D.-C. Juan", "Z. Zhao", "C. Zheng" ], "title": "Synthesizer: Rethinking selfattention in transformer models", "venue": null, "year": 2005 }, { "authors": [ "M. Toneva", "L. 
Wehbe" ], "title": "Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)", "venue": "Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "M. Toneva", "L. Wehbe" ], "title": "Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)", "venue": "arXiv, 1905.11833,", "year": 2019 }, { "authors": [ "J.J. Torres", "L. Pantic", "Hilbert H.J. Kappen" ], "title": "Storage capacity of attractor neural networks with depressing synapses", "venue": "Phys. Rev. E,", "year": 2002 }, { "authors": [ "A. Vaswani", "N. Shazeer", "N. Parmar", "J. Uszkoreit", "L. Jones", "A.N. Gomez", "L. Kaiser", "I. Polosukhin" ], "title": "Attention is all you need", "venue": "Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "A. Vaswani", "N. Shazeer", "N. Parmar", "J. Uszkoreit", "L. Jones", "A.N. Gomez", "L. Kaiser", "I. Polosukhin" ], "title": "Attention is all you", "venue": "need. ArXiv,", "year": 2017 }, { "authors": [ "M. Wainberg", "B. Alipanahi", "B.J. Frey" ], "title": "Are random forests truly the best classifiers", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "G. Wainrib", "J. Touboul" ], "title": "Topological and dynamical complexity of random neural networks", "venue": "Phys. Rev. Lett.,", "year": 2013 }, { "authors": [ "J. Wang" ], "title": "Solving the multiple-instance problem: A lazy learning approach", "venue": "In Proceedings of the 17th International Conference on Machine Learning (ICML),", "year": 2000 }, { "authors": [ "X. Wang", "Y. Yan", "P. Tang", "X. Bai", "W. Liu" ], "title": "Revisiting multiple instance neural networks", "venue": "Pattern Recognition,", "year": 2018 }, { "authors": [ "C.R. Weber", "R. Akbar", "A. Yermanos", "M. Pavlović", "I. Snapkov", "G.K. Sandve", "S.T. Reddy", "V. Greiff" ], "title": "immuneSIM: tunable multi-feature simulation of B- and T-cell receptor repertoires for immunoinformatics", "venue": "benchmarking. Bioinformatics,", "year": 2020 }, { "authors": [ "M. Widrich", "B. Schäfl", "M. Pavlović", "H. Ramsauer", "L. Gruber", "M. Holzleitner", "J. Brandstetter", "G.K. Sandve", "V. Greiff", "S. Hochreiter", "G. Klambauer" ], "title": "Modern Hopfield networks and attention for immune repertoire classification", "venue": "ArXiv, 2007.13505,", "year": 2020 }, { "authors": [ "M. Widrich", "B. Schäfl", "M. Pavlović", "H. Ramsauer", "L. Gruber", "M. Holzleitner", "J. Brandstetter", "G.K. Sandve", "V. Greiff", "S. Hochreiter", "G. Klambauer" ], "title": "Modern Hopfield networks and attention for immune repertoire classification", "venue": "In Advances in Neural Information Processing Systems. Curran Associates, Inc.,", "year": 2020 }, { "authors": [ "T. Wolf", "L. Debut", "V. Sanh", "J. Chaumond", "C. Delangue", "A. Moi", "P. Cistac", "T. Rault", "R. Louf", "M. Funtowicz", "J. Brew" ], "title": "HuggingFace’s transformers: State-of-the-art natural language processing", "venue": null, "year": 1910 }, { "authors": [ "J.C.F. Wu" ], "title": "On the convergence properties of the em algorithm", "venue": "Ann. Statist.,", "year": 1983 }, { "authors": [ "X. Wu", "X. Liu", "W. Li", "Q. Wu" ], "title": "Improved expressivity through dendritic neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Z. Wu", "B. Ramsundar", "E.N. Feinberg", "J. Gomes", "C. Geniesse", "A.S. Pappu", "K. Leswing", "V. 
Pande" ], "title": "MoleculeNet: A benchmark for molecular machine", "venue": "learning. arXiv,", "year": 2017 }, { "authors": [ "Z. Xiong", "D. Wang", "X. Liu", "F. Zhong", "X. Wan", "X. Li", "Z. Li", "X. Luo", "K. Chen", "H. Jiang", "M. Zheng" ], "title": "Pushing the boundaries of molecular representation for drug discovery with the graph attention mechanism", "venue": "Journal of Medicinal Chemistry,", "year": 2020 }, { "authors": [ "Y. Xu", "T. Fan", "M. Xu", "L. Zeng", "Y. Qiao" ], "title": "SpiderCNN: Deep learning on point sets with parameterized convolutional filters", "venue": "European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "A.L. Yuille", "A. Rangarajan" ], "title": "The concave-convex procedure (CCCP)", "venue": "Advances in Neural Information Processing Systems", "year": 2002 }, { "authors": [ "A.L. Yuille", "A. Rangarajan" ], "title": "The concave-convex procedure", "venue": "Neural Computation,", "year": 2003 }, { "authors": [ "W.I. Zangwill" ], "title": "Nonlinear programming: a unified approach. Prentice-Hall international series in management", "venue": "Englewood Cliffs, N.J.,", "year": 1969 }, { "authors": [ "S. Zhai", "W. Talbott", "M.A. Bautista", "C. Guestrin", "J.M. Susskind" ], "title": "Set distribution networks: a generative model for sets of images", "venue": null, "year": 2006 }, { "authors": [ "W. Zhang", "B. Zhou" ], "title": "Learning to update auto-associative memory in recurrent neural networks for improving sequence memorization", "venue": null, "year": 2017 }, { "authors": [ "Y. Zhu", "R. Kiros", "R.S. Zemel", "R. Salakhutdinov", "R. Urtasun", "A. Torralba", "S. Fidler" ], "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "venue": "Proceedings of the IEEE international conference on computer vision,", "year": 2015 } ]
[ { "heading": null, "text": "We introduce a modern Hopfield network with continuous states and a corresponding update rule. The new Hopfield network can store exponentially (with the dimension of the associative space) many patterns, retrieves the pattern with one update, and has exponentially small retrieval errors. It has three types of energy minima (fixed points of the update): (1) global fixed point averaging over all patterns, (2) metastable states averaging over a subset of patterns, and (3) fixed points which store a single pattern. The new update rule is equivalent to the attention mechanism used in transformers. This equivalence enables a characterization of the heads of transformer models. These heads perform in the first layers preferably global averaging and in higher layers partial averaging via metastable states. The new modern Hopfield network can be integrated into deep learning architectures as layers to allow the storage of and access to raw input data, intermediate results, or learned prototypes. These Hopfield layers enable new ways of deep learning, beyond fully-connected, convolutional, or recurrent networks, and provide pooling, memory, association, and attention mechanisms. We demonstrate the broad applicability of the Hopfield layers across various domains. Hopfield layers improved state-of-the-art on three out of four considered multiple instance learning problems as well as on immune repertoire classification with several hundreds of thousands of instances. On the UCI benchmark collections of small classification tasks, where deep learning methods typically struggle, Hopfield layers yielded a new state-ofthe-art when compared to different machine learning methods. Finally, Hopfield layers achieved state-of-the-art on two drug design datasets. The implementation is available at: https://github.com/ml-jku/hopfield-layers" }, { "heading": "1 INTRODUCTION", "text": "The deep learning community has been looking for alternatives to recurrent neural networks (RNNs) for storing information. For example, linear memory networks use a linear autoencoder for sequences as a memory (Carta et al., 2020). Additional memories for RNNs like holographic reduced representations (Danihelka et al., 2016), tensor product representations (Schlag & Schmidhuber, 2018; Schlag et al., 2019) and classical associative memories (extended to fast weight approaches) (Schmidhuber, 1992; Ba et al., 2016a;b; Zhang & Zhou, 2017; Schlag et al., 2021) have been suggested. Most approaches to new memories are based on attention. The neural Turing machine (NTM) is equipped with an external memory and an attention process (Graves et al., 2014). Memory networks (Weston et al., 2014) use an arg max attention by first mapping a query and patterns into a space and then retrieving the pattern with the largest dot product. End to end memory networks (EMN) make this attention scheme differentiable by replacing arg max through a softmax (Sukhbaatar et al., 2015a;b). EMN with dot products became very popular and implement a key-value attention (Daniluk et al., 2017) for self-attention. An enhancement of EMN is the transformer (Vaswani et al., 2017a;b) and its\nextensions (Dehghani et al., 2018). 
The transformer has had a great impact on the natural language processing (NLP) community, in particular via the BERT models (Devlin et al., 2018; 2019).\nContribution of this work: (i) introducing novel deep learning layers that are equipped with a memory via modern Hopfield networks, and (ii) introducing a novel energy function and a novel update rule for continuous modern Hopfield networks that are differentiable and typically retrieve patterns after one update. Differentiability is required for gradient descent parameter updates, and retrieval with one update is compatible with activating the layers of deep networks.\nWe suggest using modern Hopfield networks to store information or learned prototypes in different layers of neural networks. Binary Hopfield networks were introduced as associative memories that can store and retrieve patterns (Hopfield, 1982). A query pattern can retrieve the pattern to which it is most similar or an average over similar patterns. Hopfield networks may seem to be an ancient technique; however, new energy functions have improved their properties. The stability of spurious states or metastable states was considerably reduced (Barra et al., 2018). The largest and most impactful successes are reported on increasing the storage capacity of Hopfield networks. In a d-dimensional space, the standard Hopfield model can store d uncorrelated patterns without errors but only Cd/log(d) random patterns with C < 1/2 for a fixed stable pattern or C < 1/4 if all patterns are stable (McEliece et al., 1987). The same bound holds for nonlinear learning rules (Mazza, 1997). Using tricks of the trade and allowing small retrieval errors, the storage capacity is about 0.138d (Crisanti et al., 1986; Hertz et al., 1991; Torres et al., 2002). If the learning rule is not related to the Hebb rule, then up to d patterns can be stored (Abu-Mostafa & StJacques, 1985). For Hopfield networks with non-zero diagonal matrices, the storage can be increased to Cd log(d) (Folli et al., 2017). In contrast to the storage capacity, the number of energy minima (spurious states, stable states) of Hopfield networks is exponential in d (Tanaka & Edwards, 1980; Bruck & Roychowdhury, 1990; Wainrib & Touboul, 2013).\nThe standard binary Hopfield network has an energy function that can be expressed as the sum of interaction functions F with F(x) = x^2. Modern Hopfield networks, also called “dense associative memory” (DAM) models, use an energy function with interaction functions of the form F(x) = x^n and thereby achieve a storage capacity proportional to d^(n-1) (Krotov & Hopfield, 2016; 2018). The energy function of modern Hopfield networks makes them robust against adversarial attacks (Krotov & Hopfield, 2018). Modern binary Hopfield networks with energy functions based on interaction functions of the form F(x) = exp(x) even lead to a storage capacity of 2^(d/2), where all stored binary patterns are fixed points but the radius of attraction vanishes (Demircigil et al., 2017). However, in order to integrate Hopfield networks into deep learning architectures, it is necessary to make them differentiable, that is, we require continuous Hopfield networks (Hopfield, 1984; Koiran, 1994).
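As a concrete baseline for the capacity discussion above, here is a minimal NumPy sketch of the standard binary Hopfield network with the Hebb rule; the dimension, pattern count, and noise level are our own illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_patterns = 64, 5                              # well below the ~0.138*d capacity
X = rng.choice([-1, 1], size=(n_patterns, d))      # random binary patterns

# Hebbian weight matrix with zero diagonal, as in the standard model.
W = (X.T @ X) / d
np.fill_diagonal(W, 0.0)

# Retrieval: start from a corrupted query and iterate sign updates.
xi = X[0] * np.where(rng.random(d) < 0.1, -1, 1)   # flip ~10% of the bits
for _ in range(10):
    xi = np.sign(W @ xi)
    xi[xi == 0] = 1                                # resolve ties deterministically

print("pattern 0 retrieved:", bool(np.array_equal(xi, X[0])))
```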
\nTherefore, we generalize the energy function of Demircigil et al. (2017), which builds on exponential interaction functions, to continuous patterns and states and obtain a new modern Hopfield network. We also propose a new update rule which ensures global convergence to stationary points of the energy (local minima or saddle points). We prove that our new modern Hopfield network typically retrieves patterns in one update step (ε-close to the fixed point) with an exponentially low error and has a storage capacity proportional to c^((d-1)/4) (reasonable settings for c = 1.37 and c = 3.15 are given in Theorem 3). The retrieval of patterns with one update is important to integrate Hopfield networks in deep learning architectures, where layers are activated only once. Surprisingly, our new update rule is also the key-value attention as used in transformer and BERT models (see Fig. 1). Our modern Hopfield networks can be integrated as a new layer in deep learning architectures for pooling, memory, prototype learning, and attention. We test these new layers on different benchmark datasets and tasks like immune repertoire classification." }, { "heading": "2 MODERN HOPFIELD NETS WITH CONTINUOUS STATES", "text": "New energy function for continuous state Hopfield networks. In order to integrate modern Hopfield networks into deep learning architectures, we have to make them continuous. To allow for continuous states, we propose a new energy function that is a modification of the energy of modern Hopfield networks (Demircigil et al., 2017). We also propose a new update rule which can be proven to converge to stationary points of the energy (local minima or saddle points).\nWe have N stored (key) patterns x_i ∈ R^d represented by the matrix X = (x_1, ..., x_N), where M = max_i ‖x_i‖ is the largest pattern norm. The state (query) pattern is ξ ∈ R^d. For exponential interaction functions, we need the log-sum-exp function (lse) for 0 < β:\nlse(β, x) = β^(-1) log( Σ_{i=1}^N exp(β x_i) ), (1)\nwhich is convex (see appendix Eq. (461) and Lemma A22). The energy function E of the modern Hopfield networks for binary patterns x_i and a binary state pattern ξ is E = − Σ_{i=1}^N F(ξ^T x_i) (Krotov & Hopfield, 2016). Here, F(x) = x^n is the interaction function, where n = 2 gives the classical Hopfield network. The storage capacity is proportional to d^(n-1) (Krotov & Hopfield, 2016). This model was generalized by Demircigil et al. (2017) to exponential interaction functions F(x) = exp(x), which gives the energy E = −exp(lse(1, X^T ξ)). This energy leads to an exponential storage capacity of N = 2^(d/2) for binary patterns. Furthermore, with a single update, the fixed point is recovered with high probability for random patterns. However, this modern Hopfield network still has binary states.\nWe generalize this energy function to continuous-valued patterns while keeping the properties of the modern Hopfield networks like the exponential storage capacity and the extremely fast convergence (see Fig. 1). For the new energy we take the logarithm of the negative energy of modern Hopfield networks and add a quadratic term of the current state. The quadratic term ensures that the norm of the state vector ξ remains finite and the energy is bounded. Classical Hopfield networks do not require bounding the norm of their state vector, since it is binary and has fixed length. We define the novel energy function E as\nE = −lse(β, X^T ξ) + (1/2) ξ^T ξ + β^(-1) log N + (1/2) M^2. (2)\nWe have 0 ≤ E ≤ 2M^2 (see appendix Lemma A1). Using p = softmax(β X^T ξ), we define a novel update rule (see Fig. 1):\nξ^new = f(ξ) = Xp = X softmax(β X^T ξ). (3)
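Eq. (2) and Eq. (3) translate directly into code; a minimal NumPy sketch with illustrative dimensions and β (the helper names are ours). By Theorem 1 below, the printed energies are non-increasing:

```python
import numpy as np
from scipy.special import logsumexp, softmax

def energy(xi, X, beta):
    """Energy of Eq. (2); the stored patterns are the columns of X."""
    N = X.shape[1]
    M = np.max(np.linalg.norm(X, axis=0))
    return (-logsumexp(beta * X.T @ xi) / beta + 0.5 * xi @ xi
            + np.log(N) / beta + 0.5 * M**2)

def update(xi, X, beta):
    """Update rule of Eq. (3): xi_new = X softmax(beta * X^T xi)."""
    return X @ softmax(beta * X.T @ xi)

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 10))                 # 10 stored patterns in R^16
xi = X[:, 0] + 0.1 * rng.normal(size=16)      # noisy query near pattern 0

beta = 8.0
for _ in range(3):                            # typically one update suffices
    print(f"E = {energy(xi, X, beta):.4f}")
    xi = update(xi, X, beta)
print("distance to pattern 0:", np.linalg.norm(xi - X[:, 0]))
```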
\nThe next theorem states that the update rule Eq. (3) converges globally. The proof uses the Concave-Convex Procedure (CCCP) (Yuille & Rangarajan, 2002; 2003), which is equivalent to Legendre minimization algorithms (Rangarajan et al., 1996; 1999; Yuille & Rangarajan, 2003). Theorem 1. The update rule Eq. (3) converges globally: for ξ^(t+1) = f(ξ^t), the energy E(ξ^t) → E(ξ*) for t → ∞ and a fixed point ξ*.\nProof. The update rule in Eq. (3) is the CCCP for minimizing the energy E, which is the sum of the convex term (1/2) ξ^T ξ and the concave term −lse (see details in appendix Theorem 1). Theorem 2 in Yuille & Rangarajan (2002) yields the global convergence property. Also, in Theorem 2 of Sriperumbudur & Lanckriet (2009) the global convergence of CCCP is proven via a rigorous analysis using Zangwill's global convergence theory of iterative algorithms.\nThe global convergence theorem only assures that the energy E(ξ^t) → E(ξ*) for t → ∞, but not that ξ^t → ξ*. The next theorem strengthens Zangwill's global convergence theorem (Meyer, 1976) and gives convergence results similar to those known for expectation maximization (Wu, 1983). Theorem 2. For the iteration Eq. (3) we have E(ξ^t) → E(ξ*) = E* as t → ∞, for some stationary point ξ*. Furthermore, ‖ξ^(t+1) − ξ^t‖ → 0 and either {ξ^t}_{t=0}^∞ converges or, in the other case, the set of limit points of {ξ^t}_{t=0}^∞ is a connected and compact subset of L(E*), where L(a) = {ξ ∈ L | E(ξ) = a} and L is the set of stationary points of the iteration Eq. (3). If L(E*) is finite, then any sequence {ξ^t}_{t=0}^∞ generated by the iteration Eq. (3) converges to some ξ* ∈ L(E*).\nFor a proof, see appendix Theorem 2. Therefore, all the limit points of any sequence generated by the iteration Eq. (3) are stationary points (local minima or saddle points) of the energy function E. Either the iteration converges or, otherwise, the set of limit points is a connected and compact set.\nThe next theorem gives the results on the storage capacity of our new continuous state modern Hopfield network. We first define what we mean by storing and retrieving patterns using a modern Hopfield network with continuous states. Definition 1 (Pattern Stored and Retrieved). We assume that around every pattern x_i a sphere S_i is given. We say x_i is stored if there is a single fixed point x*_i ∈ S_i to which all points ξ ∈ S_i converge, and S_i ∩ S_j = ∅ for i ≠ j. We say x_i is retrieved for a given ε if the iteration (update rule) Eq. (3) gives a point x̃_i that is at least ε-close to the single fixed point x*_i ∈ S_i. The retrieval error is ‖x̃_i − x_i‖.\nAs with classical Hopfield networks, we consider patterns on the sphere, i.e. patterns with a fixed norm. For randomly chosen patterns, the number of patterns that can be stored is exponential in the dimension d of the space of the patterns (x_i ∈ R^d). Theorem 3. We assume a failure probability 0 < p ≤ 1 and randomly chosen patterns on the sphere with radius M := K√(d−1). We define a := (2/(d−1)) (1 + ln(2 β K^2 p (d−1))), b := (2 K^2 β)/5, and c := b/W_0(exp(a + ln(b))), where W_0 is the upper branch of the Lambert W function (Olver et al., 2010, (4.13)), and ensure c ≥ (2/√p)^(4/(d−1)). Then with probability 1 − p, the number of random patterns that can be stored is\nN ≥ √p c^((d−1)/4). (4)\nExamples are c ≥ 3.1546 for β = 1, K = 3, d = 20 and p = 0.001 (a + ln(b) > 1.27), and c ≥ 1.3718 for β = 1, K = 1, d = 75 and p = 0.001 (a + ln(b) < −0.94).\nFor a proof, see appendix Theorem A5.
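The constants in Theorem 3 can be evaluated numerically with SciPy's Lambert W; this sketch (helper name ours) reproduces the two settings quoted in the theorem:

```python
import numpy as np
from scipy.special import lambertw

def capacity_bound(d, K, beta, p):
    """Constants of Theorem 3 and the bound N >= sqrt(p) * c**((d-1)/4)."""
    a = 2.0 / (d - 1) * (1.0 + np.log(2.0 * beta * K**2 * p * (d - 1)))
    b = 2.0 * K**2 * beta / 5.0
    c = b / np.real(lambertw(np.exp(a + np.log(b))))  # upper branch W_0
    return c, np.sqrt(p) * c ** ((d - 1) / 4.0)

for d, K in [(20, 3), (75, 1)]:
    c, n_bound = capacity_bound(d=d, K=K, beta=1.0, p=0.001)
    print(f"d = {d}, K = {K}: c = {c:.4f}, N >= {n_bound:.3g}")
# prints c = 3.1546 for d = 20, K = 3 and c = 1.3718 for d = 75, K = 1
```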
\nThe next theorem states that the update rule typically retrieves patterns after one update. Retrieval of a pattern x_i for fixed point x*_i and query ξ is defined via an ε by ‖f(ξ) − x*_i‖ < ε, that is, the update is ε-close to the fixed point. Retrieval with one update is crucial to integrate modern Hopfield networks into deep learning architectures, where layers are activated only once. First we need the concept of the separation of a pattern. For pattern x_i we define its separation Δ_i to the other patterns by:\nΔ_i := min_{j, j≠i} (x_i^T x_i − x_i^T x_j) = x_i^T x_i − max_{j, j≠i} x_i^T x_j. (5)\nThe update rule retrieves patterns with one update for well-separated patterns, that is, patterns with large Δ_i. Theorem 4. With query ξ, after one update the distance of the new point f(ξ) to the fixed point x*_i is exponentially small in the separation Δ_i. The precise bounds, using the Jacobian J = ∂f(ξ)/∂ξ and its value J^m in the mean value theorem, are:\n‖f(ξ) − x*_i‖ ≤ ‖J^m‖_2 ‖ξ − x*_i‖, (6)\n‖J^m‖_2 ≤ 2 β N M^2 (N − 1) exp(−β (Δ_i − 2 max{‖ξ − x_i‖, ‖x*_i − x_i‖} M)). (7)\nFor given ε and sufficiently large Δ_i, we have ‖f(ξ) − x*_i‖ < ε, that is, retrieval with one update.\nSee proof in appendix Theorem A8.\nAt the same time, the retrieval error decreases exponentially with the separation Δ_i. Theorem 5 (Exponentially Small Retrieval Error). The retrieval error ‖f(ξ) − x_i‖ of pattern x_i is bounded by\n‖f(ξ) − x_i‖ ≤ 2 (N − 1) exp(−β (Δ_i − 2 max{‖ξ − x_i‖, ‖x*_i − x_i‖} M)) M (8)\nand for ‖x_i − x*_i‖ ≤ 1/(2 β M) together with ‖x_i − ξ‖ ≤ 1/(2 β M) by\n‖x_i − x*_i‖ ≤ 2 e (N − 1) M exp(−β Δ_i). (9)\nSee proof in appendix Theorem A9.\nMetastable states and one global fixed point. So far, we considered patterns x_i that are well separated such that the iteration converges to a fixed point which is near a pattern x_i. If no pattern x_i is well separated from the others, then the iteration converges to a global fixed point close to the arithmetic mean of the vectors. In this case the softmax vector p is close to uniform, that is, p_i = 1/N. If some vectors are similar to each other and well separated from all other vectors, then a metastable state near the similar vectors exists. Iterations that start near the metastable state converge to this metastable state, even if initialized by one of the similar patterns. For convergence proofs to one global fixed point and to metastable states see appendix Lemma A7 and Lemma A12, respectively.\nHopfield update rule is attention of the transformer. The Hopfield network update rule is the attention mechanism used in transformer and BERT models (see Fig. 1). To see this, we assume N stored (key) patterns y_i and S state (query) patterns r_i that are mapped to the Hopfield space of dimension d_k. We set x_i = W_K^T y_i, ξ_i = W_Q^T r_i, and multiply the result of our update rule with W_V. The matrices Y = (y_1, ..., y_N)^T and R = (r_1, ..., r_S)^T combine the y_i and r_i as row vectors. We define the matrices X^T = K = Y W_K, Ξ^T = Q = R W_Q, and V = Y W_K W_V = X^T W_V, where W_K ∈ R^(d_y×d_k), W_Q ∈ R^(d_r×d_k), W_V ∈ R^(d_k×d_v). If β = 1/√d_k and the softmax ∈ R^N is applied row-wise, we obtain for the update rule Eq. (3) multiplied by W_V:\nZ = softmax(1/√d_k Q K^T) V = softmax(β R W_Q W_K^T Y^T) Y W_K W_V. (10)\nThe left part of Eq. (10) is the transformer attention. In the transformer self-attention R = Y, and W_K W_V is replaced by just W_V. Besides the attention mechanism, Hopfield networks allow for other functionalities in deep network architectures, which we introduce via specific layers in the next section. The right part of Eq. (10) serves to explain these specific layers."
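The identity in Eq. (10) can be checked numerically with random matrices; a sketch with illustrative shapes:

```python
import numpy as np
from scipy.special import softmax

rng = np.random.default_rng(0)
N, S, d_y, d_r, d_k, d_v = 6, 4, 8, 8, 16, 8

Y = rng.normal(size=(N, d_y))      # stored (key) patterns as rows
R = rng.normal(size=(S, d_r))      # state (query) patterns as rows
W_K = rng.normal(size=(d_y, d_k))
W_Q = rng.normal(size=(d_r, d_k))
W_V = rng.normal(size=(d_k, d_v))

K, Q = Y @ W_K, R @ W_Q
V = Y @ W_K @ W_V                  # V = Y W_K W_V = X^T W_V
beta = 1.0 / np.sqrt(d_k)

# Hopfield update rule (Eq. 3) applied row-wise and projected with W_V ...
Z_hopfield = softmax(beta * Q @ K.T, axis=1) @ K @ W_V
# ... is exactly the transformer attention softmax(Q K^T / sqrt(d_k)) V.
Z_attention = softmax(Q @ K.T / np.sqrt(d_k), axis=1) @ V

print(np.allclose(Z_hopfield, Z_attention))   # True
```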
}, { "heading": "3 NEW HOPFIELD LAYERS FOR DEEP LEARNING", "text": "Modern Hopfield networks with continuous states can be integrated into deep learning architectures, because they are continuous and differentiable with respect to their parameters. Furthermore, they typically retrieve patterns with one update, which is conform to deep learning layers that are activated only once. For these two reasons, modern Hopfield networks can serve as specialized layers in deep networks to equip them with memories. Below, we introduce three types of Hopfield layers: Hopfield, HopfieldPooling, and HopfieldLayer. Possible applications of Hopfield layers in deep network architectures comprise:\n• multiple instance learning (MIL) (Dietterich et al., 1997),\n• processing of and learning with point sets (Qi et al., 2017a;b; Xu et al., 2018),\n• set-based and permutation invariant learning (Guttenberg et al., 2016; Ravanbakhsh et al., 2016; Zaheer et al., 2017; Korshunova et al., 2018; Ilse et al., 2018; Zhai et al., 2020),\n• attention-based learning (Vaswani et al., 2017a),\n• deep learning with associative memories (Graves et al., 2014; Weston et al., 2014; Ba et al., 2016a;b; Schlag & Schmidhuber, 2018; Schlag et al., 2019),\n• natural language processing (Devlin et al., 2018; 2019),\n• sequence analysis and time series prediction (Hochreiter, 1991; Hochreiter & Schmidhuber, 1997; Cho et al., 2014), and\n• storing and retrieving reference data, e.g. the training data, outliers, high error data points, prototypes or cluster centers, support vectors & border cases.\nHopfield network layers can substitute existing layers like pooling layers, permutation equivariant layers (Guttenberg et al., 2016; Ravanbakhsh et al., 2016), GRU (Cho et al., 2014) & LSTM (Hochreiter, 1991; Hochreiter & Schmidhuber, 1997) layers, and attention layers (Vaswani et al., 2017a;b; Bahdanau et al., 2014).\nTypes of neural networks. We consider two types of feedforward neural networks: (I) Neural networks that propagate an activation vector from the input layer to the output layer. Examples are fully-connected or convolutional neural networks. (II) Neural networks that propagate a set of vectors from the input layer to the output layer, where each layer applies the same operation to each element of the set and the output layer may summarize the set via a vector. An example is the transformer. Recurrent neural networks are networks of type (I), which are iteratively applied to a set or a sequence, where intermediate results are stored in a memory and can be reused. Modern Hopfield networks can be integrated into both types of neural network architectures and enable to equip each of their layers with associative memories. See Fig. 2.\nTypes of new Hopfield layers. We introduce three types of Hopfield layers: Hopfield, HopfieldPooling, and HopfieldLayer. The continuous modern Hopfield network results in a plethora of new deep learning architectures, since we can (a) propagate sets or single vectors, (b) propagate queries,\nstored patterns, or both, (c) learn static queries or stored patterns, (d) fill the memory by training sets, prototypes, or external data. Next, we provide three useful types of Hopfield layers. The implementation is available at: https://github.com/ml-jku/hopfield-layers\n(1) Layer Hopfield for networks that propagate sets of vectors via state (query) patterns R and stored (key) patterns Y . The layer Hopfield is the realization of formula (10). 
The memory of the Hopfield layer can be filled with sets from the input or previous layers, see Fig. 3. The memory may also be filled with a reference set; this case is covered by providing the reference set as additional input. Thus, the layer Hopfield allows the association of two sets. A prominent example of a layer that performs such an association is the transformer attention mechanism, which associates keys and queries, e.g. two point sets that have to be compared. This layer allows for different kinds of sequence-to-sequence learning, point set operations, and retrieval-based methods. The layer Hopfield with skip connections in a ResNet architecture is identical to the popular transformer and BERT models. In the experiments, we analyzed these Hopfield layers in transformer architectures. The layer Hopfield is also used in our experiments that compare machine learning methods on the small datasets of the UCI benchmark collection.\n(2) Layer HopfieldPooling for networks that propagate patterns via the stored (key) patterns Y. This layer performs a pooling or summarization of sets Y obtained from queries in previous layers or the input. The memory of the HopfieldPooling layer is filled with sets from the input or previous layers. The HopfieldPooling layer uses the queries to search for patterns in the memory, the stored set. If several patterns are similar to a particular search pattern (query), then the result is an average over these patterns. The state (query) patterns of each layer are static and can be learned. Multiple queries supply a set to the next layer, where each query corresponds to one element of the set. Thus, the layer HopfieldPooling enables fixed pattern search, pooling operations, and memories like LSTMs or GRUs. The static pattern functionality is typically needed if particular patterns must be identified in the data. A single HopfieldPooling layer allows for multiple instance learning. Static state (query) patterns together with position encoding in the keys allow for performing pooling operations. The position encoding can be two-dimensional, where standard convolutional filters can be constructed as in convolutional neural networks (CNNs). The HopfieldPooling layer can substitute pooling, averaging, LSTM, and permutation equivariant layers. See Fig. 4. The layer HopfieldPooling is used for the experiments with multiple instance learning tasks, e.g. for immune repertoire classification.\n(3) Layer HopfieldLayer for networks that propagate a vector or a set of vectors via state (query) patterns R. The queries R can be input vectors or queries that are computed from the output of previous layers. The memory of the HopfieldLayer is filled with a fixed set, which can be the training set, a reference set, a prototype set, or a learned set (a learned matrix). The stored (key) patterns are static and can be learned. If the training set is stored in the memory, then each layer constructs a new set of queries based on the query results of previous layers. The stored patterns can be initialized by the training set or a reference set and then learned, in which case they deviate from the training set. The stored patterns can be interpreted as weights from the state (query) to hidden neurons that have a softmax activation function (Krotov & Hopfield, 2020). The layer HopfieldLayer can substitute a fully connected layer, see Fig. 5.
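To make point (2) above concrete: a minimal PyTorch sketch of a HopfieldPooling-style layer with trainable static state (query) patterns; again a simplified illustration with our own names and dimensions:

```python
import torch
import torch.nn as nn

class HopfieldPoolingSketch(nn.Module):
    """Pool a variable-sized set Y into n_queries vectors via learned,
    static state (query) patterns -- a simplified HopfieldPooling."""
    def __init__(self, d_y, d_k, d_v, n_queries=1, beta=None):
        super().__init__()
        self.Q = nn.Parameter(torch.randn(n_queries, d_k))  # learned queries
        self.W_K = nn.Linear(d_y, d_k, bias=False)
        self.W_V = nn.Linear(d_k, d_v, bias=False)
        self.beta = beta if beta is not None else 1.0 / d_k ** 0.5

    def forward(self, Y):                    # Y: (batch, set_size, d_y)
        K = self.W_K(Y)
        A = torch.softmax(self.beta * self.Q @ K.transpose(-2, -1), dim=-1)
        return self.W_V(A @ K)               # (batch, n_queries, d_v)

# Compress a bag of 50 embedded instances into one fixed-sized representation,
# e.g. for multiple instance learning.
pool = HopfieldPoolingSketch(d_y=64, d_k=32, d_v=64, n_queries=1)
bag_repr = pool(torch.randn(8, 50, 64))      # (8, 1, 64)
```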
A single HopfieldLayer also allows for approaches similar to support vector machines (SVMs), approaches similar to k-nearest neighbor, approaches similar to learning vector quantization, and pattern search. For classification, the raw data y_i = (z_i, t_i) can be the concatenation of input z_i and target t_i. In this case, the matrices W_K and W_V can be designed such that inside the softmax the input z_i is used and outside the softmax the target t_i. Thus, the softmax provides a weighted average of the target vectors based on the similarity between the query and the inputs. SVM models, k-nearest neighbor, and learning vector quantization can also be considered as weighted averages of the targets. The encoder-decoder attention layers of the transformer are HopfieldLayer layers, where the memory is filled with the encoder output set. In our experiments with the drug design benchmark datasets, the layer HopfieldLayer has been applied and compared to other machine learning methods.\nAdditional functionality of new Hopfield layers. The insights about energy, convergence, and storage properties provide all new Hopfield layers with additional functionalities: i) multiple updates to control how precisely fixed points are found, without requiring additional parameters; ii) a variable β to determine the kind of fixed points, such as the size of metastable states. The variable β controls over how many patterns the average is taken. As observed in the experiments, this variable is relevant in combination with the learning rate to steer the learning dynamics. The parameter β governs the fixed point dynamics and can be learned, too; iii) controlling the storage capacity via the dimension of the associative space. The storage capacity can be relevant for tasks with a huge number of instances, as in the immune repertoire classification experiment; iv) pattern normalization controls, like the layernorm, the fixed point dynamics by the norm and shift of the patterns. For more details see Section A.6 in the appendix." }, { "heading": "4 EXPERIMENTS", "text": "We show that our proposed Hopfield layers can be applied successfully to a wide range of tasks. The tasks stem from natural language processing, multiple instance learning, a collection of small classification tasks, and drug design.\nAnalysis of transformer and BERT models. Transformer and BERT models can be implemented by the layer Hopfield. The kind of fixed point of the Hopfield net is determined by how the pattern x_i is separated from the other patterns: (a) a global fixed point: no separation of a pattern from the others; (b) a fixed point close to a single pattern: the pattern is separated from the other patterns; (c) a metastable state: some patterns are similar to each other and well separated from all other vectors. We observed that the attention heads of transformer and BERT models are predominantly in metastable states, which are categorized into four classes: (I) averaging over a very large number of patterns (very large metastable state or fixed point (a)), (II) averaging over a large number of patterns (large metastable state), (III) averaging over a medium number of patterns (medium metastable state), and (IV) averaging over a small number of patterns (small metastable state or fixed point (c)). For analyzing the metastable states, we calculated the minimal number k of softmax values required to sum up to 0.90. Hence, k indicates the size of a metastable state.
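A sketch of this measurement: for each attention distribution, k is the smallest number of largest softmax values whose sum reaches 0.90 (the attention rows below are random placeholders, not BERT activations):

```python
import numpy as np

def metastable_size(p, mass=0.90):
    """Minimal number k of largest softmax values summing to `mass`."""
    p_sorted = np.sort(p)[::-1]
    return int(np.searchsorted(np.cumsum(p_sorted), mass) + 1)

# Placeholder attention distributions over N = 64 tokens (one per sequence).
rng = np.random.default_rng(0)
attn = rng.dirichlet(alpha=np.full(64, 0.1), size=200)   # peaked rows
ks = np.array([metastable_size(row) for row in attn])
print("median k over sequences:", np.median(ks))         # the k-bar used below
```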
To determine in which of the four classes a head is mainly operating, we computed the distribution of k across sequences. Concretely, for N tokens and with k̄ as the median of the distribution, a head is classified as operating in class (I) if N/2 < k̄, as operating in class (II) if N/8 < k̄ ≤ N/2, as operating in class (III) if N/32 < k̄ ≤ N/8, and as operating in class (IV) if k̄ ≤ N/32. We analyzed pre-trained BERT models from Hugging Face Inc. (Wolf et al., 2019) according to these operating classes. In Fig. A.3 in the appendix the distribution of the pre-trained bert-base-cased model is depicted (for other models see appendix Section A.5.1.4). Operating classes (II) (large metastable states) and (IV) (small metastable states) are often observed in the middle layers. Operating class (I) (averaging over a very large number of patterns) is abundant in lower layers. Similar observations have been reported in other studies (Toneva & Wehbe, 2019a;b; Tay et al., 2020). Operating class (III) (medium metastable states) is predominant in the last layers.\nMultiple Instance Learning Datasets. For multiple instance learning (MIL) (Dietterich et al., 1997), we integrate our new Hopfield network via the layer HopfieldPooling into deep learning architectures. Recently, deep learning methods have been applied to MIL problems (Ilse et al., 2018), but the performance on many datasets still lacks improvement. Thus, MIL datasets still pose an interesting challenge, for which Hopfield layers equipped with memory are a promising approach.\n• Immune Repertoire Classification. The first MIL task is immune repertoire classification, where a deep learning architecture with HopfieldPooling (DeepRC) was used (Widrich et al., 2020a;b). Immune repertoire classification (Emerson et al., 2017) typically requires extracting a few patterns from a large set of sequences, the repertoire, that are indicative of the respective immune status. The datasets contain ≈ 300,000 instances per immune repertoire, which represents one of the largest multiple instance learning experiments ever conducted (Carbonneau et al., 2018). Most MIL methods fail due to the large number of instances. This experiment comprises real-world and simulated datasets. Simulated datasets are generated by implanting sequence motifs (Akbar et al., 2019; Weber et al., 2020) with low frequency into simulated or experimentally-observed immune receptor sequences. The performance of DeepRC was compared with other machine learning methods: (i) known motif, (ii) SVM using k-mers and the MinMax or Jaccard kernel, (iii) K-Nearest Neighbor (KNN) with k-mers, (iv) logistic regression with k-mers, (v) burden test with k-mers, and (vi) logistic multiple instance learning (lMIL). On the real-world dataset DeepRC achieved an AUC of 0.832 ± 0.022, followed by the SVM with MinMax kernel (AUC 0.825 ± 0.022) and the burden test with an AUC of 0.699 ± 0.041. Across datasets, DeepRC outperformed all competing methods with respect to average AUC (Widrich et al., 2020a;b).\n• MIL benchmark datasets. We apply Hopfield layers to further MIL datasets (Ilse et al., 2018; Küçükaşcı & Baydoğan, 2018; Cheplygina et al., 2016): Elephant, Fox and Tiger for image annotation (Andrews et al., 2003). These datasets consist of color images from the Corel dataset that have been preprocessed and segmented. An image consists of a set of segments (or blobs), each characterized by color, texture and shape descriptors. The datasets have 100 positive and 100 negative example images.
The latter have been randomly drawn from a pool of photos of other animals. Elephant comprises 1,391 instances, Fox 1,320 instances, and Tiger 1,220 instances, each with 230 features. Furthermore, we use the UCSB breast cancer classification dataset (Kandemir et al., 2014), which consists of 2,002 instances across 58 input objects. An instance represents a patch of a histopathological image of cancerous or normal tissue. The layer HopfieldPooling is used, which allows for computing a per-input-object representation by extracting an average of the instances that are indicative of one of the two classes. The input to the layer HopfieldPooling is a set of embedded instances Y. A trainable but fixed state (query) pattern Q is used for averaging over class-indicative instances. This averaging enables a compression of variable-sized bags to a fixed-sized representation that discriminates the bags. More details are given in appendix Sec. A.5.2. Our approach has set a new state-of-the-art and has outperformed other methods (Küçükaşcı & Baydoğan, 2018; Carbonneau et al., 2016) on the datasets Tiger, Elephant and UCSB Breast Cancer (see Table 1).

UCI Benchmark Collection. So far, deep learning has struggled with small datasets. However, Hopfield networks are promising for handling small datasets, since they can store the training data points or their representations to perform similarity-based, nearest neighbor, or learning vector quantization methods. Therefore, we test the Hopfield layer Hopfield on the small datasets of the UC Irvine (UCI) Machine Learning Repository that have been used to benchmark supervised learning methods (Fernández-Delgado et al., 2014; Wainberg et al., 2016; Khan et al., 2018) and also feed-forward neural networks (Klambauer et al., 2017a; Wu et al., 2018), where our Hopfield networks can exploit their memory. The 121 datasets in the collection vary strongly with respect to size, number of features, and difficulty (Fernández-Delgado et al., 2014), such that they have been divided into 75 “small datasets” with fewer than 1,000 samples and 45 “large datasets” with at least 1,000 samples in Klambauer et al. (2017a).

On the 75 small datasets, Random Forests (RFs) and Support Vector Machines (SVMs) are highly accurate, whereas on the large datasets, deep learning methods and neural networks are in the lead (Klambauer et al., 2017a;b; Wu et al., 2018). We applied a modern Hopfield network via the layer HopfieldLayer, where a self-normalizing net (SNN) maps the input vector to Y and R. The output Z of HopfieldLayer is fed into a softmax output layer. We compared our modern Hopfield networks against deep learning methods (e.g. SNNs, resnet), RFs, SVMs, boosting, bagging, and many other machine learning methods of Fernández-Delgado et al. (2014). Since multiple variants and implementations were included for each method, we used method groups and representatives as defined by Klambauer et al. (2017a). For each dataset, a ranking of the methods was calculated, which is presented in Table 2. We found that Hopfield networks outperform all other methods on the small datasets, setting a new state-of-the-art for 10 datasets. The difference is significant except for the first three runner-up methods (Wilcoxon signed rank test). See appendix Section A.5.3 for details.

Drug Design Benchmark Datasets. We test the Hopfield layer HopfieldLayer on four drug design datasets.
These datasets represent four main areas of modeling tasks in drug design, concretely the development of accurate models for predicting a) new anti-virals (HIV) by the Drug Therapeutics Program (DTP) AIDS Antiviral Screen, b) new protein inhibitors, specifically human β-secretase (BACE) inhibitors, by Subramanian et al. (2016), c) metabolic effects such as blood-brain barrier permeability (BBBP) (Martins et al., 2012), and d) side effects of a chemical compound from the Side Effect Resource (SIDER) (Kuhn et al., 2016). We applied the Hopfield layer HopfieldLayer, where the training data is used as stored patterns Y, the input vector as state pattern R, and the corresponding training labels to project the output of the Hopfield layer Y WV. Our architecture with HopfieldLayer has reached state-of-the-art for predicting side effects on SIDER (0.672 ± 0.019) as well as for predicting β-secretase inhibition on BACE (0.902 ± 0.023). For details, see Table A.5 in the appendix.

Conclusion. We have introduced a modern Hopfield network with continuous states and the corresponding new update rule. This network can store exponentially many patterns, retrieves patterns with one update, and has exponentially small retrieval errors. We analyzed the attention heads of BERT models. The new modern Hopfield networks have been integrated into deep learning architectures as layers to allow the storage of and access to raw input data, intermediate results, or learned prototypes. These Hopfield layers enable new ways of deep learning, beyond fully-connected, convolutional, or recurrent networks, and provide pooling, memory, association, and attention mechanisms. Hopfield layers that equip neural network layers with memories improved state-of-the-art in three out of four considered multiple instance learning problems, on immune repertoire classification, and on two drug design datasets. They yielded the best results among different machine learning methods on the UCI benchmark collections of small classification tasks.

ACKNOWLEDGMENTS

The ELLIS Unit Linz, the LIT AI Lab and the Institute for Machine Learning are supported by the Land Oberösterreich, LIT grants DeepToxGen (LIT-2017-3-YOU-003), and AI-SNN (LIT2018-6-YOU-214), the Medical Cognitive Computing Center (MC3), Janssen Pharmaceutica, UCB Biopharma, Merck Group, Audi.JKU Deep Learning Center, Audi Electronic Venture GmbH, TGW, Primal, S3AI (FFG-872172), Silicon Austria Labs (SAL), Anyline, FILL, EnliteAI, Google Brain, ZF Friedrichshafen AG, Robert Bosch GmbH, TÜV Austria, DCS, and the NVIDIA Corporation. IARAI is supported by Here Technologies.

A APPENDIX

This appendix consists of six sections (A.1–A.6). Section A.1 introduces the new modern Hopfield network with continuous states and its update rule. Furthermore, Section A.1 provides a thorough and profound theoretical analysis of this new Hopfield network. Section A.2 provides the mathematical background for Section A.1. Section A.3 reviews the binary modern Hopfield networks of Krotov & Hopfield. Section A.4 shows that the Hopfield update rule is the attention mechanism of the transformer. Section A.5 gives details on the experiments. Section A.6 describes the PyTorch implementation of layers based on the new Hopfield networks and how to use them.

CONTENTS OF THE APPENDIX

A.1 Continuous State Modern Hopfield Networks (A New Concept)
A.1.1 Introduction
A.1.2 New Energy Function
A.1.3 New Update Rule
A.1.4 Global Convergence of the Update Rule
A.1.5 Local Convergence of the Update Rule: Fixed Point Iteration
A.1.6 Properties of Fixed Points Near Stored Pattern
A.1.7 Learning Associations
A.1.8 Infinite Many Patterns and Forgetting Patterns
A.1.9 Number of Spurious States
A.2 Properties of Softmax, Log-Sum-Exponential, Legendre Transform, Lambert W Function
A.3 Modern Hopfield Networks: Binary States (Krotov and Hopfield)
A.3.1 Modern Hopfield Networks: Introduction
A.3.2 Energy and Update Rule for Binary Modern Hopfield Networks
A.4 Hopfield Update Rule is Attention of The Transformer
A.5 Experiments
A.5.1 Experiment 1: Attention in Transformers described by Hopfield dynamics
A.5.2 Experiment 2: Multiple Instance Learning Datasets
A.5.3 Experiment 3: Classification on Small UCI Benchmark Datasets
A.5.4 Experiment 4: Drug Design Benchmark Datasets
A.6 PyTorch Implementation of Hopfield Layers
A.6.1 Introduction
A.6.2 Functionality
A.6.3 Usage

LIST OF THEOREMS

A1 Theorem (Global Convergence (Zangwill): Energy)
A2 Theorem (Global Convergence: Stationary Points)
A3 Theorem (Storage Capacity (M=2): Placed Patterns)
A4 Theorem (Storage Capacity (M=5): Placed Patterns)
A5 Theorem (Storage Capacity (Main): Random Patterns)
A6 Theorem (Storage Capacity (d computed): Random Patterns)
A7 Theorem (Storage Capacity (expected separation): Random Patterns)
A8 Theorem (Pattern Retrieval with One Update)
A9 Theorem (Exponentially Small Retrieval Error)
A10 Theorem (Storage Capacity for Binary Modern Hopfield Nets (Demircigil et al. 2017))

LIST OF DEFINITIONS

A1 Definition (Softmax)
A2 Definition (Log-Sum-Exp Function)
A3 Definition (Convex Conjugate)
A4 Definition (Legendre Transform)
A5 Definition (Epi-Sum)
A6 Definition (Lambert Function)
LIST OF FIGURES

A.1 The three cases of fixed points
A.2 From binary Hopfield network to transformer
A.4 Ridge plots of the distribution of counts
A.5 Change of count density during training
A.6 Attentions of Gaussian averaging heads
A.7 A flowchart of the Hopfield layer

LIST OF TABLES

A.1 Results of immune repertoire classification across all datasets
A.2 Hyperparameter selection for MIL datasets
A.3 Hyperparameter selection for small UCI benchmark datasets
A.4 Hyperparameter selection for drug design datasets
A.5 Results on drug design benchmark datasets

A.1 CONTINUOUS STATE MODERN HOPFIELD NETWORKS (A NEW CONCEPT)

A.1.1 INTRODUCTION

In Section A.1 our new modern Hopfield network is introduced. In Subsection A.1.2 we present the new energy function. Then, in Subsection A.1.3, our new update rule is introduced. In Subsection A.1.4, we show that this update rule ensures global convergence: all the limit points of any sequence generated by the update rule are stationary points (local minima or saddle points) of the energy function. In Subsection A.1.5, we consider the local convergence of the update rule and see that patterns are retrieved with one update. In Subsection A.1.6, we consider the properties of the fixed points that are associated with the stored patterns. In Subsection A.1.6.1, we show that exponentially many patterns can be stored. The main result is given in Theorem A5: for random patterns on a sphere we can store and retrieve exponentially many patterns (exponentially in the dimension of the Hopfield space). Subsection A.1.6.2 reports that patterns are typically retrieved with one update step and that the retrieval error is exponentially small.

In Subsection A.1.7, we consider how associations for the new Hopfield networks can be learned. In Subsection A.1.7.2, we analyze whether the association is learned directly by a bilinear form. In Subsection A.1.7.3, we analyze whether stored patterns and query patterns are mapped to the space of the Hopfield network; to this end, we treat the architectures of the transformer and BERT. In Subsection A.1.8, we introduce a temporal component into the new Hopfield network that leads to a forgetting behavior. The forgetting allows us to treat infinite memory capacity in Subsection A.1.8.1. In Subsection A.1.8.2, we consider the controlled forgetting behavior.

In Section A.2, we provide the mathematical background that is needed for our proofs. In particular, we give lemmas on properties of the softmax, the log-sum-exponential, the Legendre transform, and the Lambert W function.

In Section A.3, we review the modern Hopfield network as introduced by Krotov and Hopfield in 2016. However, in contrast to our new Hopfield network, the Hopfield network of Krotov and Hopfield is binary, that is, a network with binary states. In Subsection A.3.1, we give an introduction to neural networks equipped with associative memories and modern Hopfield networks.
In Subsection A.3.1.1, we discuss neural networks that are enhanced by an additional external memory and by attention mechanisms. In Subsection A.3.1.2, we give an overview of the modern Hopfield networks. Finally, in Subsection A.3.2, we present the energy function and the update rule for the modern, binary Hopfield networks.

A.1.2 NEW ENERGY FUNCTION

We have patterns x_1, . . . , x_N that are represented by the matrix

X = (x_1, . . . , x_N) .   (11)

The largest norm of a pattern is

M = max_i ‖x_i‖ .   (12)

The query or state of the Hopfield network is ξ. The energy function E in the new type of Hopfield models of Krotov and Hopfield is E = − ∑_{i=1}^N F(ξ^T x_i) for binary patterns x_i and a binary state ξ with interaction function F(x) = x^n, where n = 2 gives the classical Hopfield model (Krotov & Hopfield, 2016). The storage capacity is proportional to d^{n−1} (Krotov & Hopfield, 2016). This model was generalized by Demircigil et al. (2017) to the exponential interaction function F(x) = exp(x), which gives the energy E = − exp(lse(1, X^T ξ)). This energy leads to an exponential storage capacity of N = 2^{d/2} for binary patterns. Furthermore, with a single update the fixed point is recovered with high probability. See Section A.3 for more details.

In contrast to these binary modern Hopfield networks, we focus on modern Hopfield networks with continuous states that can store continuous patterns. We generalize the energy of Demircigil et al. (2017) to continuous states while keeping the lse properties, which ensure high storage capacity and fast convergence. Our new energy E for a continuous query or state ξ is defined as

E = − lse(β, X^T ξ) + (1/2) ξ^T ξ + β^{−1} ln N + (1/2) M^2   (13)
  = − β^{−1} ln ( ∑_{i=1}^N exp(β x_i^T ξ) ) + β^{−1} ln N + (1/2) ξ^T ξ + (1/2) M^2   (14)
  = − β^{−1} ln ( (1/N) ∑_{i=1}^N exp(−(1/2) β (M^2 − ‖x_i‖^2)) exp(−(1/2) β ‖x_i − ξ‖^2) ) .   (15)

First let us collect and prove some properties of E. The next lemma gives bounds on the energy E.

Lemma A1. The energy E is nonnegative:

0 ≤ E .   (16)

For ξ in the simplex defined by the patterns, the energy E is upper bounded by:

E ≤ β^{−1} ln N + (1/2) M^2 ,   (17)
E ≤ 2 M^2 .   (18)

Proof. We start by deriving the lower bound of zero. The pattern most similar to the query or state ξ is x_ξ:

x_ξ = x_k , k = argmax_i ξ^T x_i .   (19)

We obtain

E = − β^{−1} ln ( ∑_{i=1}^N exp(β x_i^T ξ) ) + β^{−1} ln N + (1/2) ξ^T ξ + (1/2) M^2   (20)
  = − β^{−1} ln ( (1/N) ∑_{i=1}^N exp(β x_i^T ξ) ) + (1/2) ξ^T ξ + (1/2) M^2
  ≥ − β^{−1} ln ( (1/N) ∑_{i=1}^N exp(β x_i^T ξ) ) + (1/2) ξ^T ξ + (1/2) x_ξ^T x_ξ
  ≥ − β^{−1} ln ( exp(β x_ξ^T ξ) ) + (1/2) ξ^T ξ + (1/2) x_ξ^T x_ξ
  = − x_ξ^T ξ + (1/2) ξ^T ξ + (1/2) x_ξ^T x_ξ
  = (1/2) (ξ − x_ξ)^T (ξ − x_ξ) = (1/2) ‖ξ − x_ξ‖^2 ≥ 0 .

The energy is zero, and therefore the bound attained, if all x_i are equal, that is, x_i = x for all i and ξ = x.

For deriving upper bounds on the energy E, we require the query ξ to be in the simplex defined by the patterns, that is,

ξ = ∑_{i=1}^N p_i x_i , ∑_{i=1}^N p_i = 1 , ∀i : 0 ≤ p_i .   (21)

The first upper bound is

E = − β^{−1} ln ( ∑_{i=1}^N exp(β x_i^T ξ) ) + (1/2) ξ^T ξ + β^{−1} ln N + (1/2) M^2   (22)
  ≤ − ∑_{i=1}^N p_i (x_i^T ξ) + (1/2) ξ^T ξ + β^{−1} ln N + (1/2) M^2
  = − (1/2) ξ^T ξ + β^{−1} ln N + (1/2) M^2 ≤ β^{−1} ln N + (1/2) M^2 .

For the first inequality we applied Lemma A19 to −lse(β, X^T ξ) with z = p, giving

− lse(β, X^T ξ) ≤ − ∑_{i=1}^N p_i (x_i^T ξ) + β^{−1} ∑_{i=1}^N p_i ln p_i ≤ − ∑_{i=1}^N p_i (x_i^T ξ) ,   (23)

as the term involving the logarithm is non-positive.

Next we derive the second upper bound, for which we need the mean m_x of the patterns

m_x = (1/N) ∑_{i=1}^N x_i .
(24)

We obtain

E = − β^{−1} ln ( ∑_{i=1}^N exp(β x_i^T ξ) ) + (1/2) ξ^T ξ + β^{−1} ln N + (1/2) M^2   (25)
  ≤ − ∑_{i=1}^N (1/N) x_i^T ξ + (1/2) ξ^T ξ + (1/2) M^2
  = − m_x^T ξ + (1/2) ξ^T ξ + (1/2) M^2
  ≤ ‖m_x‖ ‖ξ‖ + (1/2) ‖ξ‖^2 + (1/2) M^2 ≤ 2 M^2 ,

where for the first inequality we again applied Lemma A19 with z = (1/N, . . . , 1/N) and β^{−1} ∑_i (1/N) ln(1/N) = − β^{−1} ln(N). This inequality also follows from Jensen's inequality. The second inequality uses the Cauchy-Schwarz inequality. The last inequality uses

‖ξ‖ = ‖ ∑_i p_i x_i ‖ ≤ ∑_i p_i ‖x_i‖ ≤ ∑_i p_i M = M   (26)

and

‖m_x‖ = ‖ ∑_i (1/N) x_i ‖ ≤ ∑_i (1/N) ‖x_i‖ ≤ ∑_i (1/N) M = M .   (27)

A.1.3 NEW UPDATE RULE

We now introduce an update rule for minimizing the energy function E. The new update rule is

ξ^new = X p = X softmax(β X^T ξ) ,   (28)

where we used

p = softmax(β X^T ξ) .   (29)

The new state ξ^new is in the simplex defined by the patterns, no matter what the previous state ξ was. For comparison, the synchronous update rule for the classical Hopfield network with threshold zero is

ξ^new = sgn(X X^T ξ) .   (30)

Therefore, instead of using the vector X^T ξ as in the classical Hopfield network, its softmax version softmax(β X^T ξ) is used.

In the next section (Section A.1.4) we show that the update rule Eq. (28) ensures global convergence. We show that all the limit points of any sequence generated by the update rule are the stationary points (local minima or saddle points) of the energy function E. In Section A.1.5 we consider the local convergence of the update rule Eq. (28) and see that patterns are retrieved with one update.

A.1.4 GLOBAL CONVERGENCE OF THE UPDATE RULE

We are interested in the global convergence, that is, convergence from each initial point, of the iteration

ξ^new = f(ξ) = X p = X softmax(β X^T ξ) ,   (31)

where we used

p = softmax(β X^T ξ) .   (32)

We defined the energy function

E = − lse(β, X^T ξ) + (1/2) ξ^T ξ + β^{−1} ln N + (1/2) M^2   (33)
  = − β^{−1} ln ( ∑_{i=1}^N exp(β x_i^T ξ) ) + β^{−1} ln N + (1/2) ξ^T ξ + (1/2) M^2 .   (34)

We will show that the update rule in Eq. (31) is the Concave-Convex Procedure (CCCP) for minimizing the energy E. The CCCP is proven to converge globally.

Theorem A1 (Global Convergence (Zangwill): Energy). The update rule Eq. (31) converges globally: for ξ^{t+1} = f(ξ^t), the energy E(ξ^t) → E(ξ^*) for t → ∞ and a fixed point ξ^*.

Proof. The Concave-Convex Procedure (CCCP) (Yuille & Rangarajan, 2002; 2003) minimizes a function that is the sum of a concave function and a convex function. CCCP is equivalent to Legendre minimization (Rangarajan et al., 1996; 1999) algorithms (Yuille & Rangarajan, 2003). The Jacobian of the softmax is positive semi-definite according to Lemma A22. The Jacobian of the softmax is the Hessian of the lse, therefore lse is a convex and −lse a concave function. Therefore, the energy function E(ξ) is the sum of the convex function E1(ξ) = (1/2) ξ^T ξ + C1 and the concave function E2(ξ) = −lse:

E(ξ) = E1(ξ) + E2(ξ) ,   (35)
E1(ξ) = (1/2) ξ^T ξ + β^{−1} ln N + (1/2) M^2 = (1/2) ξ^T ξ + C1 ,   (36)
E2(ξ) = − lse(β, X^T ξ) ,   (37)

where C1 does not depend on ξ.

The Concave-Convex Procedure (CCCP) (Yuille & Rangarajan, 2002; 2003) applied to E is

∇_ξ E1(ξ^{t+1}) = − ∇_ξ E2(ξ^t) ,   (38)

which is

∇_ξ ((1/2) ξ^T ξ + C1)(ξ^{t+1}) = ∇_ξ lse(β, X^T ξ^t) .   (39)

The resulting update rule is

ξ^{t+1} = X p^t = X softmax(β X^T ξ^t)   (40)

using

p^t = softmax(β X^T ξ^t) .   (41)

This is the update rule in Eq. (31).

Theorem 2 in Yuille & Rangarajan (2002) and Theorem 2 in Yuille & Rangarajan (2003) state that the update rule Eq.
(31) is guaranteed to monotonically decrease the energy E as a function of time. See also Theorem 2 in Sriperumbudur & Lanckriet (2009).\nAlthough the objective converges in all cases, it does not necessarily converge to a local minimum (Lipp & Boyd, 2016).\nHowever the convergence proof of CCCP in Yuille & Rangarajan (2002; 2003) was not as rigorous as required. In Sriperumbudur & Lanckriet (2009) a rigorous analysis of the convergence of CCCP is performed using Zangwill’s global convergence theory of iterative algorithms.\nIn Sriperumbudur & Lanckriet (2009) the minimization problem\nmin ξ E1 + E2 (42)\ns.t. c(ξ) 6 0 , d(ξ) = 0\nis considered with E1 convex, −E2 convex, c component-wise convex function, and d an affine function. The CCCP algorithm solves this minimization problem by linearization of the concave part and is defined in Sriperumbudur & Lanckriet (2009) as\nξt+1 ∈ arg min ξ E1 (ξ) + ξ T∇ξE2\n( ξt )\n(43)\ns.t. c(ξ) 6 0 , d(ξ) = 0 .\nWe define the upper bound EC on the energy: EC ( ξ, ξt ) := E1 (ξ) + E2 ( ξt ) + ( ξ − ξt )T ∇ξE2 (ξt) . (44) EC is equal to the energy E (ξt) for ξ = ξt:\nEC ( ξt, ξt ) = E1 ( ξt ) + E2 ( ξt ) = E ( ξt ) . (45)\nSince −E2 is convex, the first order characterization of convexity holds (Eq. 3.2 in Boyd & Vandenberghe (2009)):\n− E2 (ξ) ≥ − E2 ( ξt ) − ( ξ − ξt )T ∇ξE2 (ξt) , (46) that is\nE2 (ξ) 6 E2 ( ξt ) + ( ξ − ξt )T ∇ξE2 (ξt) . (47) Therefore, for ξ 6= ξt the function EC is an upper bound on the energy:\nE (ξ) 6 EC ( ξ, ξt ) = E1 (ξ) + E2 ( ξt ) + ( ξ − ξt )T ∇ξE2 (ξt) (48) = E1 (ξ) + ξ T∇ξE2 ( ξt ) + C2 ,\nwhere C2 does not depend on ξ. Since we do not have constraints, ξt+1 is defined as\nξt+1 ∈ arg min ξ\nEC ( ξ, ξt ) , (49)\nhence EC ( ξt+1, ξt ) 6 EC (ξt, ξt). Combining the inequalities gives:\nE ( ξt+1 ) 6 EC ( ξt+1, ξt ) 6 EC ( ξt, ξt ) = E ( ξt ) . (50)\nSince we do not have constraints, ξt+1 is the minimum of EC ( ξ, ξt ) = E1 (ξ) + ξ T∇ξE2 ( ξt ) + C2 (51)\nas a function of ξ.\nFor a minimum not at the border, the derivative has to be the zero vector\n∂EC (ξ, ξ t)\n∂ξ = ξ + ∇ξE2\n( ξt ) = ξ − Xsoftmax(βXT ξt) = 0 (52)\nand the Hessian must be positive semi-definite\n∂2EC (ξ, ξ t)\n∂ξ2 = I . (53)\nThe Hessian is strict positive definite everywhere, therefore the optimization problem is strict convex (if the domain is convex) and there exist only one minimum, which is a global minimum. EC can even be written as a quadratic form:\nEC ( ξ, ξt ) = 1\n2\n( ξ + ∇ξE2 ( ξt ))T ( ξ + ∇ξE2 ( ξt )) + C3 , (54)\nwhere C3 does not depend on ξ.\nTherefore, the minimum is ξt+1 = − ∇ξE2 ( ξt ) = Xsoftmax(βXT ξt) (55)\nif it is in the domain as we assume.\nUsing M = maxi ‖xi‖, ξt+1 is in the sphere S = {x | ‖x‖ 6M} which is a convex and compact set. Hence, if ξ0 ∈ S, then the iteration is a mapping from S to S. Therefore, the point-set-map defined by the iteration Eq. (55) is uniformly compact on S according to Remark 7 in Sriperumbudur & Lanckriet (2009). Theorem 2 and Theorem 4 in (Sriperumbudur & Lanckriet, 2009) states that all the limit points of the iteration Eq. (55) are stationary points. These theorems follow from Zangwill’s global convergence theorem: Convergence Theorem A, page 91 in Zangwill (1969) and page 3 in Wu (1983).\nThe global convergence theorem only assures that for the sequence ξt+1 = f(ξt) and a function Φ we have Φ(ξt) → Φ(ξ∗) for t → ∞ but not ξt → ξ∗. However, if f is strictly monotone with respect to Φ, then we can strengthen Zangwill’s global convergence theorem (Meyer, 1976). 
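As a quick empirical complement to this convergence analysis, the following minimal NumPy sketch checks the monotone decrease of the energy E under the iteration of Eq. (31) (the dimensions, the value of β, and the random patterns are illustrative assumptions, not a setup from the paper):

```python
import numpy as np

def lse(beta, z):
    # Numerically stabilized log-sum-exp: lse(beta, z) = beta^-1 ln sum_i exp(beta z_i).
    zmax = z.max()
    return (np.log(np.exp(beta * (z - zmax)).sum()) + beta * zmax) / beta

def energy(X, xi, beta):
    # E = -lse(beta, X^T xi) + (1/2) xi^T xi + beta^-1 ln N + (1/2) M^2, Eq. (33).
    N = X.shape[1]
    M = np.linalg.norm(X, axis=0).max()
    return -lse(beta, X.T @ xi) + 0.5 * xi @ xi + np.log(N) / beta + 0.5 * M**2

def update(X, xi, beta):
    # xi_new = X softmax(beta X^T xi), Eq. (31).
    a = beta * (X.T @ xi)
    p = np.exp(a - a.max())
    return X @ (p / p.sum())

rng = np.random.default_rng(1)
X = rng.standard_normal((16, 10))   # d = 16, N = 10 stored patterns (columns)
xi = rng.standard_normal(16)        # arbitrary initial state
beta = 1.0
energies = []
for _ in range(10):
    energies.append(energy(X, xi, beta))
    xi = update(X, xi, beta)
assert all(e1 <= e0 + 1e-9 for e0, e1 in zip(energies, energies[1:]))
print(np.round(energies, 4))        # a non-increasing sequence
```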
We set Φ = E and show E(ξt+1) < E(ξt) if ξt is not a stationary point of E, that is, f is strictly monotone with respect to E. The following theorem is similar to the convergence results for the expectation maximization (EM) algorithm in Wu (1983) which are given in theorems 1 to 6 in Wu (1983). The following theorem is also very similar to Theorem 8 in Sriperumbudur & Lanckriet (2009). Theorem A2 (Global Convergence: Stationary Points). For the iteration Eq. (55) we have E (ξt)→ E (ξ∗) = E∗ as t → ∞, for some stationary point ξ∗. Furthermore\n∥∥ξt+1 − ξt∥∥ → 0 and either {ξt}∞t=0 converges or, in the other case, the set of limit points of {ξt}∞t=0 is a connected and compact subset of L (E∗), where L (a) = {ξ ∈ L | E (ξ) = a} and L is the set of stationary points of the iteration Eq. (55). If L (E∗) is finite, then any sequence {ξt}∞t=0 generated by the iteration Eq. (55) converges to some ξ∗ ∈ L (E∗).\nProof. We have E (ξt) = E1 (ξt) + E2 (ξt). The gradient ∇ξE2 (ξt) = −∇ξlse(β,XT ξ) is continuous. Therefore, Eq. (51) has minimum in the sphere S, which is a convex and compact set. If ξt+1 6= ξt, then ξt was not the minimum of Eq. (48) as the derivative at ξt is not equal to zero. Eq. (53) shows that the optimization problem Eq. (48) is strict convex, hence it has only one minimum, which is a global minimum. Eq. (54) shows that the optimization problem Eq. (48) is even a quadratic form. Therefore, we have\nE ( ξt+1 ) 6 EC ( ξt+1, ξt ) < EC ( ξt, ξt ) = E ( ξt ) . (56)\nTherefore, the point-set-map defined by the iteration Eq. (55) (for definitions see (Sriperumbudur & Lanckriet, 2009)) is strictly monotonic with respect to E. Therefore, we can apply Theorem 3 in Sriperumbudur & Lanckriet (2009) or Theorem 3.1 and Corollary 3.2 in Meyer (1976), which give the statements of the theorem.\nWe showed global convergence of the iteration Eq. (31). We have shown that all the limit points of any sequence generated by the iteration Eq. (31) are the stationary points (critical points; local minima or saddle points) of the energy function E. Local maxima as stationary points are only possible if the iterations exactly hits a local maximum. However, convergence to a local maximum without being there is not possible because Eq. (56) ensures a strict decrease of the energy E. Therefore, almost sure local maxima are not obtained as stationary points. Either the iteration converges or, in the second case, the set of limit points is a connected and compact set. But what happens if ξ0 is in an -neighborhood around a local minimum ξ∗? Will the iteration Eq. (31) converge to ξ∗? What is the rate of convergence? These questions are about local convergence which will be treated in detail in next section." }, { "heading": "A.1.5 LOCAL CONVERGENCE OF THE UPDATE RULE: FIXED POINT ITERATION", "text": "For the proof of local convergence to a fixed point we will apply Banach fixed point theorem. For the rate of convergence we will rely on properties of a contraction mapping.\nA.1.5.1 General Bound on the Jacobian of the Iteration. We consider the iteration ξnew = f(ξ) = Xp = Xsoftmax(βXT ξ) (57)\nusing\np = softmax(βXT ξ) . (58)\nThe Jacobian J is symmetric and has the following form:\nJ = ∂f(ξ)\n∂ξ = β X\n( diag(p)− ppT ) XT = XJsX T , (59)\nwhere Js is Jacobian of the softmax.\nTo analyze the local convergence of the iteration, we distinguish between the following three cases (see also Fig. A.1). Here we only provide an informal discussion to give the reader some intuition. 
A rigorous formulation of the results can be found in the corresponding subsections.\na) If the patterns xi are not well separated, the iteration goes to a fixed point close to the arithmetic mean of the vectors. In this case p is close to pi = 1/N .\nb) If the patterns xi are well separated, then the iteration goes to the pattern to which the initial ξ is similar. If the initial ξ is similar to a vector xi then it will converge to a vector close to xi and p will converge to a vector close to ei.\nc) If some vectors are similar to each other but well separated from all other vectors, then a so called metastable state between the similar vectors exists. Iterations that start near the metastable state converge to this metastable state.\nLemma A2. For N patterns X = (x1, . . . ,xN ), p = softmax(βXT ξ), M = maxi ‖xi‖, and m = maxi pi(1− pi), the spectral norm of the Jacobian J of the fixed point iteration is bounded:\n‖J‖2 6 2 β ‖X‖ 2 2 m 6 2 β N M 2 m . (60)\nIf pmax = maxi pi ≥ 1− , then for the spectral norm of the Jacobian holds ‖J‖2 6 2 β N M 2 − 2 2 β N M2 < 2 β N M2 . (61)\nProof. With\np = softmax(βXT ξ) , (62)\nthe symmetric Jacobian J is\nJ = ∂f(ξ)\n∂ξ = β X\n( diag(p)− ppT ) XT = XJsX T , (63)\nwhere Js is Jacobian of the softmax.\nWith m = maxi pi(1− pi), Eq. (476) from Lemma A24 is ‖Js‖2 = β ∥∥diag(p)− ppT∥∥ 2 6 2 m β . (64)\nUsing this bound on ‖Js‖2, we obtain ‖J‖2 6 β ∥∥XT∥∥ 2 ‖Js‖2 ‖X‖2 6 2 m β ‖X‖ 2 2 . (65)\nThe spectral norm ‖.‖2 is bounded by the Frobenius norm ‖.‖F which can be expressed by the norm squared of its column vectors:\n‖X‖2 6 ‖X‖F = √∑\ni\n‖xi‖2 . (66)\nTherefore, we obtain the first statement of the lemma:\n‖J‖2 6 2 β ‖X‖ 2 2 m 6 2 β N M 2 m . (67)\nWith pmax = maxi pi ≥ 1− Eq. (480) in Lemma A24 is ‖Js‖2 6 2 β − 2 2 β < 2 β . (68)\nUsing this inequality, we obtain the second statement of the lemma:\n‖J‖2 6 2 β N M 2 − 2 2 β N M2 < 2 β N M2 . (69)\nWe now define the “separation” ∆i of a pattern xi from dataX = (x1, . . . ,xN ) here, since it has an important role for the convergence properties of the iteration. Definition 2 (Separation of Patterns). We define ∆i, i.e. the separation of pattern xi from data X = (x1, . . . ,xN ) as:\n∆i = min j,j 6=i\n( xTi xi − xTi xj ) = xTi xi − max\nj,j 6=i xTi xj . (70)\nThe pattern is separated from the other data if 0 < ∆i. Using the parallelogram identity, ∆i can also be expressed as\n∆i = min j,j 6=i\n1\n2\n( ‖xi‖2 − ‖xj‖2 + ‖xi − xj‖2 ) (71)\n= 1\n2 ‖xi‖2 −\n1 2 max j,j 6=i\n( ‖xj‖2 − ‖xi − xj‖2 ) .\nFor ‖xi‖ = ‖xj‖ we have ∆i = 1/2 minj,j 6=i ‖xi − xj‖2. Analog we say for a query ξ and data X = (x1, . . . ,xN ), that xi is least separated from ξ while being separated from other xj with j 6= i if\ni = arg max k min j,j 6=k\n( ξTxk − ξTxj ) = arg max\nk\n( ξTxk − max\nj,j 6=k ξTxj\n) (72)\n0 6 c = max k min j,j 6=k\n( ξTxk − ξTxj ) = max\nk\n( ξTxk − max\nj,j 6=k ξTxj\n) . (73)\nNext we consider the case where the iteration has only one stable fixed point.\nA.1.5.2 One Stable State: Fixed Point Near the Mean of the Patterns. We start with the case where no pattern is well separated from the others.\n•Global fixed point near the global mean: Analysis using the data center.\nWe revisit the bound on the Jacobian of the iteration by utilizing properties of pattern distributions. We begin with a probabilistic interpretation where we consider pi as the probability of selecting the vector xi. Consequently, we define expectations as Ep[f(x)] = ∑N i=1 pif(xi). 
In this setting the matrix\nX ( diag(p)− ppT ) XT (74)\nis the covariance matrix of dataX when its vectors are selected according to the probability p: X ( diag(p) − ppT ) XT = Xdiag(p)XT − XppTXT (75)\n= N∑ i=1 pi xi x T i − ( N∑ i=1 pi xi )( N∑ i=1 pi xi )T (76)\n= Ep[x x T ] − Ep[x] Ep[x]T = Varp[x] , (77)\ntherefore we have\nJ = β Varp[x] . (78)\nThe largest eigenvalue of the covariance matrix (equal to the largest singular value) is the variance in the direction of the eigenvector associated with the largest eigenvalue.\nWe define:\nmx = 1\nN N∑ i=1 xi , (79)\nmmax = max 16i6N ‖xi − mx‖2 . (80)\nmx is the arithmetic mean (the center) of the patterns. mmax is the maximal distance of the patterns to the centermx .\nThe variance of the patterns is\nVarp[x] = N∑ i=1 pi xi x T i − ( N∑ i=1 pi xi ) ( N∑ i=1 pi xi )T (81)\n= N∑ i=1 pi\n( xi −\nN∑ i=1 pixi\n) ( xi −\nN∑ i=1 pixi\n)T .\nThe maximal distance to the center mmax allows the derivation of a bound on the norm of the Jacobian.\nNext lemma gives a condition for a global fixed point. Lemma A3. The following bound on the norm ‖J‖2 of the Jacobian of the fixed point iteration f holds independent of p or the query ξ.\n‖J‖2 6 β m 2 max . (82)\nFor β m2max < 1 there exists a unique fixed point (global fixed point) of iteration f in each compact set.\nProof. In order to bound the variance we compute the vector a that minimizes\nf(a) = N∑ i=1 pi‖xi − a‖2 = N∑ i=1 pi(xi − a)T (xi − a) . (83)\nThe solution to\n∂f(a)\n∂a = 2 N∑ i=1 pi(a − xi) = 0 (84)\nis\na = N∑ i=1 pixi . (85)\nThe Hessian of f is positive definite since\n∂2f(a)\n∂a2 = 2 N∑ i=1 pi I = 2 I (86)\nand f is a convex function. Hence, the mean\nx̄ := N∑ i=1 pi xi (87)\nminimizes ∑N i=1 pi‖xi − a‖\n2. Therefore, we have N∑ i=1 pi‖xi − x̄‖2 6 N∑ i=1 pi‖xi − mx‖2 6 m2max . (88)\nLet us quickly recall that the spectral norm of an outer product of two vectors is the product of the Euclidean norms of the vectors:∥∥abT∥∥ 2 = √ λmax(baTabT ) = ‖a‖ √ λmax(bbT ) = ‖a‖ ‖b‖ , (89)\nsince bbT has eigenvector b/‖b‖ with eigenvalue ‖b‖2 and otherwise zero eigenvalues. We now bound the variance of the patterns:\n‖Varp[x]‖2 6 N∑ i=1 pi ∥∥∥(xi − x̄) (xi − x̄)T∥∥∥ 2\n(90)\n= N∑ i=1 pi‖xi − x̄‖2 6 N∑ i=1 pi‖xi − mx‖2 6 m2max .\nThe bound of the lemma on ‖J‖2 follows from Eq. (78).\nFor ‖J‖2 6 β m2max < 1 we have a contraction mapping on each compact set. Banach fixed point theorem says there is a unique fixed point in the compact set.\nNow let us further investigate the tightness of the bound on ‖Varp[x]‖2 via ‖xi − x̄‖ 2: we consider the trace, which is the sum ∑d k=1 ek of the w.l.o.g. ordered nonnegative eigenvalues ek of Varp[x] The spectral norm is equal to the largest eigenvalue e1, which is equal to the largest singular value, as we have positive semidefinite matrices. We obtain:\n‖Varp[x]‖2 = Tr ( N∑ i=1 pi (xi − x̄) (xi − x̄)T ) − d∑ k=2 ek (91)\n= N∑ i=1 piTr ( (xi − x̄) (xi − x̄)T ) − d∑ k=2 ek\n= N∑ i=1 pi‖xi − x̄‖2 − d∑ k=2 ek .\nTherefore, the tightness of the bound depends on eigenvalues which are not the largest. Hence variations which are not along the largest variation weaken the bound.\nNext we investigate the location of fixed points which existence is ensured by the global convergence stated in Theorem A2. For N patternsX = (x1, . . . ,xN ), we consider the iteration\nξnew = f(ξ) = Xp = Xsoftmax(βXT ξ) (92)\nusing\np = softmax(βXT ξ) . (93) ξnew is in the simplex of the patterns, that is, ξnew = ∑ i pixi with ∑ i pi = 1 and 0 6 pi. 
Hence, after one update ξ is in the simplex of the pattern and stays there. If the centermx is the zero vector mx = 0, that is, the data is centered, then the mean is a fixed point of the iteration. For ξ = mx = 0 we have\np = 1/N 1 (94)\nand\nξnew = 1/N X 1 = mx = ξ . (95)\nIn particular normalization methods like batch normalization would promote the mean as a fixed point.\nWe consider the differences of dot products for xi: xTi xi−xTi xj = xTi (xi−xj), for fixed pointm∗x: (m∗x)\nTxi−(m∗x)Txj = (m∗x)T (xi−xj), and for the centermx: mTxxi−mTxxj = mTx(xi−xj). Using the Cauchy-Schwarz inequality, we get∣∣ξT (xi − xj)∣∣ 6 ‖ξ‖ ‖xi − xj‖ 6 ‖ξ‖ (‖xi − mx‖ + ‖xj − mx‖) (96)\n6 2 mmax ‖ξ‖ .\nThis inequality gives: ∣∣ξT (xi − xj)∣∣ 6 2 mmax (mmax + ‖mx‖) , (97)∣∣ξT (xi − xj)∣∣ 6 2 mmax M , where we used ‖ξ − 0‖ 6 ‖ξ −mx‖ + ‖mx − 0‖, ‖ξ −mx‖ = ‖ ∑ i pixi −mx‖ 6∑\ni pi‖xi −mx‖ 6 mmax, and M = maxi ‖xi‖. In particular β ∣∣mTx(xi − xj)∣∣ 6 2 β mmax ‖mx‖ , (98)\nβ ∣∣(m∗x)T (xi − xj)∣∣ 6 2 β mmax ‖m∗x‖ 6 2 β mmax (mmax + ‖mx‖) , (99) β ∣∣xTi (xi − xj)∣∣ 6 2 β mmax ‖xi‖ 6 2 β mmax (mmax + ‖mx‖) . (100)\nLet i = arg maxj ξTxj , therefore the maximal softmax component is i. For the maximal softmax component i we have:\n[softmax(β XT ξ)]i = 1 1 + ∑ j 6=i exp(− β (ξTxi − ξTxj))\n(101)\n6 1 1 + ∑ j 6=i exp(− 2 β mmax (mmax + ‖mx‖))\n= 1\n1 + (N − 1) exp(− 2 β mmax (mmax + ‖mx‖))\n= exp(2 β mmax (mmax + ‖mx‖))\nexp(2 β mmax (mmax + ‖mx‖)) + (N − 1) 6 1/N exp(2 β mmax (mmax + ‖mx‖)) .\nAnalogously we obtain for i = arg maxjmTxxj , a bound on the maximal softmax component i if the center is put into the iteration:\n[softmax(β XTmx)]i 6 1/N exp(2 β mmax ‖mx‖) . (102)\nAnalog we obtain a bound for i = arg maxj(m∗x) Txj on the maximal softmax component i of the fixed point:\n[softmax(β XTm∗x)]i 6 1/N exp(2 β mmax ‖m∗x‖) (103) 6 1/N exp(2 β mmax (mmax + ‖mx‖)) .\nThe two important terms are mmax, the variance or spread of the data and ‖mx‖, which tells how well the data is centered. For a contraction mapping we already required βm2max < 1, therefore the first term in the exponent is 2βm2max < 2. The second term 2βmmax‖mx‖ is small if the data is centered.\n•Global fixed point near the global mean: Analysis using softmax values.\nIf ξTxi ≈ ξTxj for all i and j, then pi ≈ 1/N and we have m = maxi pi(1 − pi) < 1/N . For M 6 1/ √ 2β we obtain from Lemma A2:\n‖J‖2 < 1 . (104) The local fixed point ism∗x ≈mx = (1/N) ∑N i=1 xi with pi ≈ 1/N .\nWe now treat this case more formally. First we discuss conditions that ensure that the iteration is a contraction mapping. We consider the iteration Eq. (57) in the variable p:\npnew = g(p) = softmax(βXTXp) . (105)\nThe Jacobian is\nJ(p) = ∂g(p)\n∂p = XTX Js (106)\nwith\nJs(p new) = β ( diag(pnew) − pnew(pnew)T ) . (107)\nThe version of the mean value theorem in Lemma A32 states for Jm = ∫ 1\n0 J(λp) dλ = XTXJms with the symmetric matrix Jms = ∫ 1 0 Js(λp) dλ:\npnew = g(p) = g(0) + (Jm)Tp = g(0) + Jms X TX p = 1/N 1 + Jms X TX p . (108)\nWith m = maxi pi(1− pi), Eq. (476) from Lemma A24 is ‖Js(p)‖2 = β ∥∥diag(p)− ppT∥∥ 2 6 2 m β . (109)\nFirst observe that λpi(1− λpi) 6 pi(1− pi) for pi 6 0.5 and λ ∈ [0, 1], since pi(1− pi)− λpi(1− λpi) = (1 − λ)pi(1 − (1 + λ)pi) ≥ 0. For maxi pi 6 0.5 this observation leads to the following bound for Jms :\n‖Jms ‖2 6 2 m β . (110)\nEq. (479) in Lemma A24 states that every Js is bounded by 1/2β, therefore also the mean:\n‖Jms ‖2 6 0.5 β . 
(111)\nSince m = maxi pi(1− pi) < maxi pi = pmax, the previous bounds can be combined as follows:\n‖Jms ‖2 6 2 min{0.25, pmax} β . (112)\nConsequently,\n‖Jm‖2 6 N M 2 2 min{0.25, pmax} β , (113) where we used Eq. (170). ∥∥XTX∥∥ 2 = ∥∥XXT∥∥ 2 , therefore ∥∥XTX∥∥ 2\nis N times the maximal second moment of the data squared.\nObviously, g(p) is a contraction mapping in compact sets, where\nN M2 2 min{0.25, pmax} β < 1 . (114)\nS is the sphere around the origin 0 with radius one. For\npnew = g(p) = 1/N 1 + Jm p , (115)\nwe have ‖p‖ 6 ‖p‖1 = 1 and ‖pnew‖ 6 ‖pnew‖1 = 1. Therefore, g maps points from S into S. g is a contraction mapping for\n‖Jm‖2 6 N M 2 2 min{0.25, pmax} β = c < 1 . (116)\nAccording to Banach fixed point theorem g has a fixed point in the sphere S.\nHölder’s inequality gives:\n‖p‖2 = pTp 6 ‖p‖1‖p‖∞ = ‖p‖∞ = pmax . (117) Alternatively:\n‖p‖2 = ∑ i p2i = pmax ∑ i pi pmax pi 6 pmax ∑ i pi = pmax . (118)\nLet now S be the sphere around the origin 0 with radius 1/ √ N + √ pmax and let ‖Jm(p)‖2 6 c < 1\nfor p ∈ S. The old p is in the sphere S (p ∈ S) since pmax < √ pmax for pmax < 1. We have\n‖pnew‖ 6 1/ √ N + ‖Jm‖2 ‖p‖ 6 1/ √ N + √ pmax . (119)\nTherefore, g is a mapping from S into S and a contraction mapping. According to Banach fixed point theorem, a fixed point exists in S.\nFor the 1-norm, we use Lemma A24 and ‖p‖1 = 1 to obtain from Eq. (115): ‖pnew − 1/N 1‖1 6 ‖J\nm‖1 6 2 β m ‖X‖∞ M1 , (120) ‖pnew − 1/N 1‖1 6 ‖J\nm‖1 6 2 β m N M∞ M1 , (121) ‖pnew − 1/N 1‖1 6 ‖J m‖1 6 2 β m N M 2 , (122)\nwhere m = maxi pi(1− pi), M1 = ‖X‖1 = maxi ‖xi‖1, M = maxi ‖xi‖, ‖X‖∞ = ∥∥XT∥∥ 1 =\nmaxi ∥∥[XT ]i∥∥1 (maximal absolute row sum norm), andM∞ = maxi ‖xi‖∞. Let us quickly mention some auxiliary estimates related toXTX:∥∥XTX∥∥ 1\n= max i N∑ j=1 ∣∣xTi xj∣∣ 6 max i N∑ j=1 ‖xi‖∞ ‖xj‖1 (123)\n6 M∞ N∑ j=1 M1 = N M∞ M1 ,\nwhere the first inequaltiy is from Hölder’s inequality. We used∥∥XTX∥∥ 1\n= max i N∑ j=1 ∣∣xTi xj∣∣ 6 max i N∑ j=1 ‖xi‖ ‖xj‖ (124) 6 M N∑ j=1 M = N M2 ,\nwhere the first inequality is from Hölder’s inequality (here the same as the Cauchy-Schwarz inequality). See proof of Lemma A24 for the 1-norm bound on Js. Everything else follows from the fact that the 1-norm is sub-multiplicative as induced matrix norm.\nWe consider the minimal ‖p‖. min p ‖p‖2 (125)\ns.t. ∑ i pi = 1\n∀i : pi ≥ 0 .\nThe solution to this minimization problem is p = (1/N)1. Therefore, we have 1/ √ N 6 ‖p‖ and 1/N 6 ‖p‖2 Using Eq. (119) we obtain 1/ √ N 6 ‖pnew‖ 6 1/ √ N + √ pmax . (126)\nMoreover ‖pnew‖2 = (pnew)Tpnew = 1/N + (pnew)T Jm p 6 1/N + ‖Jm‖2 ‖p‖ (127) 6 1/N + ‖Jm‖2 , since pnew ∈ S and p ∈ S. For the fixed point, we have\n‖p∗‖2 = (p∗)Tp∗ = 1/N + (p∗)T Jm p∗ 6 1/N + ‖Jm‖2 ‖p ∗‖2 , (128)\nand hence\n1/N 6 ‖p∗‖2 6 1/N 1 1 − ‖Jm‖2 = 1/N (1 + ‖Jm‖2 1 − ‖Jm‖2 ) . (129)\nTherefore, for small ‖Jm‖2 we have p∗ ≈ (1/N)1.\nA.1.5.3 Many Stable States: Fixed Points Near Stored Patterns. We move on to the next case, where the patterns xi are well separated. In this case the iteration goes to the pattern to which the initial ξ is most similar. If the initial ξ is similar to a vector xi then it will converge to xi and p will be ei. The main ingredients are again Banach’s Theorem and estimates on the Jacobian norm.\n•Proof of a fixed point by Banach Fixed Point Theorem.\n→ Mapped Vectors Stay in a Compact Environment. We show that if xi is sufficient dissimilar to other xj then there is an compact environment of xi (a sphere) where the fixed point iteration maps this environment into itself. 
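Before the formal construction, this retrieval behavior can be illustrated numerically: starting from a query similar to a well-separated stored pattern, the iteration lands almost exactly on that pattern after a single update (a minimal sketch; the dimensions, the value of β, and the random patterns are illustrative assumptions):

```python
import numpy as np

def update(X, xi, beta):
    # One step of the fixed point iteration f(xi) = X softmax(beta X^T xi).
    a = beta * (X.T @ xi)
    p = np.exp(a - a.max())
    return X @ (p / p.sum())

rng = np.random.default_rng(2)
d, N, beta = 64, 20, 1.0
X = rng.standard_normal((d, N))              # random patterns are well separated
xi = X[:, 0] + 0.3 * rng.standard_normal(d)  # query similar to pattern x_1
for t in range(3):
    xi = update(X, xi, beta)
    print(t + 1, np.linalg.norm(xi - X[:, 0]))  # distance shrinks rapidly
# The softmax concentrates on pattern x_1, i.e. p is close to the unit vector e_1:
a = beta * (X.T @ xi)
p = np.exp(a - a.max())
p /= p.sum()
print(p.argmax(), p.max())                   # -> 0 and a value close to 1.0
```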
The idea of the proof is to define a sphere around xi for which points from the sphere are mapped by f into the sphere.\nWe first need following lemma which bounds the distance ‖xi − f(ξ)‖, where xi is the pattern that is least separated from ξ but separated from other patterns. Lemma A4. For a query ξ and data X = (x1, . . . ,xN ), there exists a xi that is least separated from ξ while being separated from other xj with j 6= i:\ni = arg max k min j,j 6=k\n( ξTxk − ξTxj ) = arg max\nk\n( ξTxk − max\nj,j 6=k ξTxj\n) (130)\n0 6 c = max k min j,j 6=k\n( ξTxk − ξTxj ) = max\nk\n( ξTxk − max\nj,j 6=k ξTxj\n) . (131)\nFor xi, the following holds: ‖xi − f(ξ)‖ 6 2 M , (132)\nwhere M = max\ni ‖xi‖ , (133)\n= (N − 1) exp(− β c) . (134)\nProof. For the softmax component i we have:\n[softmax(β XT ξ)]i = 1 1 + ∑ j 6=i exp(β (ξ Txj − ξTxi)) ≥ 1 1 + ∑ j 6=i exp(− β c) (135)\n= 1 1 + (N − 1) exp(− β c) = 1 − (N − 1) exp(− β c)\n1 + (N − 1) exp(− β c) ≥ 1 − (N − 1) exp(− β c) = 1 −\nFor softmax components k 6= i we have\n[softmax(βXT ξ)]k = exp(β (ξTxk − ξTxi)) 1 + ∑ j 6=i exp(β (ξ Txj − ξTxi)) 6 exp(− β c) = N − 1 .\n(136)\nThe iteration f can be written as\nf(ξ) = Xsoftmax(βXT ξ) = N∑ j=1 xj [softmax(βX T ξ)]j . (137)\nWe now can bound ‖xi − f(ξ)‖:\n‖xi − f(ξ)‖ = ∥∥∥∥∥∥xi − N∑ j=1 [softmax(βXT ξ)]j xj ∥∥∥∥∥∥ (138) = ∥∥∥∥∥∥(1− [softmax(βXT ξ)]i) xi − N∑\nj=1,j 6=i\n[softmax(βXT ξ)]j xj ∥∥∥∥∥∥ 6 ‖xi‖ +\nN − 1 N∑ j=1,j 6=i ‖xj‖\n6 M +\nN − 1 N∑ j=1,j 6=i M = 2 M .\nWe define ∆i, i.e. the separation of pattern xi from dataX = (x1, . . . ,xN ) as:\n∆i = min j,j 6=i\n( xTi xi − xTi xj ) = xTi xi − max\nj,j 6=i xTi xj . (139)\nThe pattern is separated from the other data if 0 < ∆i. Using the parallelogram identity, ∆i can also be expressed as\n∆i = min j,j 6=i\n1\n2\n( ‖xi‖2 − ‖xj‖2 + ‖xi − xj‖2 ) (140)\n= 1\n2 ‖xi‖2 −\n1 2 max j,j 6=i\n( ‖xj‖2 − ‖xi − xj‖2 ) .\nFor ‖xi‖ = ‖xj‖ we have ∆i = 1/2 minj,j 6=i ‖xi − xj‖2. Next we define the sphere where we want to apply Banach fixed point theorem. Definition 3 (Sphere Si). The sphere Si is defined as\nSi := { ξ | ‖ξ − xi‖ 6\n1\nβ N M\n} . (141)\nLemma A5. With ξ given, if the assumptions\nA1: ξ is inside sphere: ξ ∈ Si,\nA2: data point xi is well separated from the other data:\n∆i ≥ 2\nβ N +\n1 β ln ( 2 (N − 1) N β M2 ) (142)\nhold, then f(ξ) is inside the sphere: f(ξ) ∈ Si. Therefore, with assumption (A2), f is a mapping from Si into Si.\nProof. We need the separation ∆̃i of ξ from the data.\n∆̃i = min j,j 6=i\n( ξTxi − ξTxj ) . (143)\nUsing the Cauchy-Schwarz inequality, we obtain for 1 6 j 6 N :∣∣ξTxj − xTi xj∣∣ 6 ‖ξ − xi‖ ‖xj‖ 6 ‖ξ − xi‖M . (144) We have the lower bound\n∆̃i ≥ min j,j 6=i\n(( xTi xi − ‖ξ − xi‖M ) − ( xTi xj + ‖ξ − xi‖M )) (145)\n= − 2 ‖ξ − xi‖M + min j,j 6=i\n( xTi xi − xTi xj ) = ∆i − 2 ‖ξ − xi‖M\n≥ ∆i − 2\nβ N ,\nwhere we used the assumption (A1) of the lemma.\nFrom the proof in Lemma A4 we have\npmax = [softmax(βX T ξ)]i ≥ 1 − (N − 1) exp(− β ∆̃i) = 1 − ̃ . (146)\nLemma A4 states that\n‖xi − f(ξ)‖ 6 2 ̃ M = 2 (N − 1) exp(− β ∆̃i) M (147)\n6 2 (N − 1) exp(− β (∆i − 2\nβ N )) M .\nWe have\n‖xi − f(ξ)‖ (148)\n6 2 (N − 1) exp(− β ( 2 β N + 1 β ln ( 2 (N − 1) N β M2 ) − 2 β N )) M\n= 2 (N − 1) exp(− ln ( 2 (N − 1) N β M2 ) ) M\n= 1\nN β M ,\nwhere we used assumption (A2) of the lemma. Therefore, f(ξ) is a mapping from the sphere Si into the sphere Si: If ξ ∈ Si then f(ξ) ∈ Si.\n•Contraction mapping.\nFor applying Banach fixed point theorem we need to show that f is contraction in the compact environment Si. Lemma A6. 
Assume that\nA1:\n∆i ≥ 2\nβ N +\n1 β ln ( 2 (N − 1) N β M2 ) , (149)\nthen f is a contraction mapping in Si.\nProof. The version of the mean value theorem Lemma A32 states for Jm = ∫ 1\n0 J(λξ+(1−λ)xi) dλ:\nf(ξ) = f(xi) + J m (ξ − xi) . (150)\nTherefore\n‖f(ξ) − f(xi)‖ 6 ‖Jm‖2 ‖ξ − xi‖ . (151)\nWe define ξ̃ = λξ + (1− λ)xi for some λ ∈ [0, 1]. From the proof in Lemma A4 we have\npmax(ξ̃) = [softmax(β X T ξ̃)]i ≥ 1 − (N − 1) exp(− β ∆̃i) = 1 − ̃ , (152)\ñ = (N − 1) exp(− β ∆̃i) , (153)\n∆̃i = min j,j 6=i\n( ξ̃Txi − ξ̃Txj ) . (154)\nFirst we compute an upper bound on ̃. We need the separation ∆̃i of ξ from the data. Using the Cauchy-Schwarz inequality, we obtain for 1 6 j 6 N :∣∣∣ξ̃Txj − xTi xj∣∣∣ 6 ∥∥∥ξ̃ − xi∥∥∥ ‖xj‖ 6 ∥∥∥ξ̃ − xi∥∥∥M . (155) We have the lower bound on ∆̃i:\n∆̃i ≥ min j,j 6=i\n(( xTi xi − ∥∥∥ξ̃ − xi∥∥∥M) − (xTi xj + ∥∥∥ξ̃ − xi∥∥∥M)) (156) = − 2 ∥∥∥ξ̃ − xi∥∥∥M + min j,j 6=i ( xTi xi − xTi xj ) = ∆i − 2\n∥∥∥ξ̃ − xi∥∥∥M ≥ ∆i − 2 ‖ξ − xi‖M ,\nwhere we used ∥∥∥ξ̃ − xi∥∥∥ = λ‖ξ − xi‖ 6 ‖ξ − xi‖. From the definition of ̃ in Eq. (152) we have\ñ = (N − 1) exp(− β ∆̃i) (157) 6 (N − 1) exp (− β (∆i − 2 ‖ξ − xi‖M))\n6 (N − 1) exp ( − β ( ∆i − 2\nβ N\n)) ,\nwhere we used ξ ∈ Si, therefore ‖ξ − xi‖ 6 1β N M .\nNext we compute an lower bound on ̃. We start with an upper on ∆̃i:\n∆̃i 6 min j,j 6=i\n(( xTi xi + ∥∥∥ξ̃ − xi∥∥∥M) − (xTi xj − ∥∥∥ξ̃ − xi∥∥∥M)) (158) = 2 ∥∥∥ξ̃ − xi∥∥∥M + min j,j 6=i ( xTi xi − xTi xj ) = ∆i + 2\n∥∥∥ξ̃ − xi∥∥∥M 6 ∆i + 2 ‖ξ − xi‖M ,\nwhere we used ∥∥∥ξ̃ − xi∥∥∥ = λ‖ξ − xi‖ 6 ‖ξ − xi‖. From the definition of ̃ in Eq. (152) we have\ñ = (N − 1) exp(− β ∆̃i) (159) ≥ (N − 1) exp (− β (∆i + 2 ‖ξ − xi‖M))\n≥ (N − 1) exp ( − β ( ∆i + 2\nβ N\n)) ,\nwhere we used ξ ∈ Si, therefore ‖ξ − xi‖ 6 1β N M .\nNow we bound the Jacobian. We can assume ̃ 6 0.5 otherwise (1 − ̃) 6 0.5 in the following. From the proof of Lemma A24 we know for pmax(ξ̃) ≥ 1− ̃, then pi(ξ̃) 6 ̃ for pi(ξ̃) 6= pmax(ξ̃).\nTherefore, pi(ξ̃)(1 − pi(ξ̃)) 6 m 6 ̃(1 − ̃) for all i. Next we use the derived upper and lower bound on ̃ in previous Eq. (61) in Lemma A2:∥∥∥J(ξ̃)∥∥∥\n2 6 2 β N M2 ̃ − 2 ̃2 β N M2 (160) 6 2 β N M2 (N − 1) exp ( − β ( ∆i − 2\nβ N\n)) −\n2 (N − 1)2 exp ( − 2 β ( ∆i + 2\nβ N\n)) β N M2 .\nThe bound Eq. (160) holds for the mean Jm, too, since it averages over J(ξ̃):\n‖Jm‖2 6 2 β N M 2 (N − 1) exp\n( − β ( ∆i − 2\nβ N\n)) − (161)\n2 (N − 1)2 exp ( − 2 β ( ∆i + 2\nβ N\n)) β N M2 .\nThe assumption of the lemma is\n∆i ≥ 2\nβ N +\n1 β ln ( 2 (N − 1) N β M2 ) , (162)\nThis is\n∆i − 2 β N ≥ 1 β ln ( 2 (N − 1) N β M2 ) , (163)\nTherefore, the spectral norm ‖J‖2 can be bounded by: ‖Jm‖2 6 2 β (N − 1) exp ( − β 1 β ln ( 2 (N − 1) N β M2 )) N M2 − (164)\n2 (N − 1)2 exp ( − 2 β ( ∆i + 2\nβ N\n)) β N M2\n= 2 β (N − 1) 1 2 (N − 1) N β M2 N M2 −\n2 (N − 1)2 exp ( − 2 β ( ∆i + 2\nβ N\n)) β N M2\n= 1 − 2 (N − 1)2 exp ( − 2 β ( ∆i + 2\nβ N\n)) β N M2 < 1 .\nTherefore, f is a contraction mapping in Si.\n•Banach Fixed Point Theorem. Now we have all ingredients to apply Banach fixed point theorem.\nLemma A7. Assume that\nA1:\n∆i ≥ 2\nβ N +\n1 β ln ( 2 (N − 1) N β M2 ) , (165)\nthen f has a fixed point in Si.\nProof. We use Banach fixed point theorem: Lemma A5 says that f maps from Si into Si. Lemma A6 says that f is a contraction mapping in Si.\n•Contraction mapping with a fixed point.\nWe have shown that a fixed point exists. We want to know how fast the iteration converges to the fixed point. Let x∗i be the fixed point of the iteration f in the sphere Si. 
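The contraction rate near such a fixed point can also be inspected numerically via the Jacobian J = β X (diag(p) − p p^T) X^T (a minimal sketch under illustrative assumptions; the fixed point x_i^* is approximated by iterating f from the pattern x_1 until convergence):

```python
import numpy as np

def softmax(a):
    p = np.exp(a - a.max())
    return p / p.sum()

def f(X, xi, beta):
    # The fixed point iteration f(xi) = X softmax(beta X^T xi).
    return X @ softmax(beta * (X.T @ xi))

rng = np.random.default_rng(3)
d, N, beta = 64, 20, 1.0
X = rng.standard_normal((d, N))
# Approximate the fixed point near pattern x_1 by iterating f until convergence.
x_star = X[:, 0]
for _ in range(50):
    x_star = f(X, x_star, beta)
# Spectral norm of the Jacobian J = beta X (diag(p) - p p^T) X^T at the fixed point.
p = softmax(beta * (X.T @ x_star))
J = beta * X @ (np.diag(p) - np.outer(p, p)) @ X.T
print(np.linalg.norm(J, 2))   # far below 1: a strong contraction
# The empirical contraction factor for a nearby query agrees with this bound:
xi = x_star + 1e-3 * rng.standard_normal(d)
print(np.linalg.norm(f(X, xi, beta) - x_star) / np.linalg.norm(xi - x_star))
```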
Using the mean value theorem Lemma A32, we have with Jm = ∫ 1 0\nJ(λξ + (1− λ)x∗i ) dλ: ‖f(ξ) − x∗i ‖ = ‖f(ξ) − f(x∗i )‖ 6 ‖Jm‖2 ‖ξ − x ∗ i ‖ (166)\nAccording to Lemma A24, if pmax = maxi pi ≥ 1− for all x̃ = λξ+ (1− λ)x∗i , then the spectral norm of the Jacobian is bounded by\n‖Js(x̃)‖2 < 2 β . (167) The norm of Jacobian at x̃ is bounded\n‖J(x̃)‖2 6 2 β ‖X‖ 2 2 6 2 β NM 2 . (168) We used that the spectral norm ‖.‖2 is bounded by the Frobenius norm ‖.‖F which can be expressed by the norm squared of its column vectors:\n‖X‖2 6 ‖X‖F = √∑\ni\n‖xi‖2 . (169)\nTherefore ‖X‖22 6 N M\n2 . (170) The norm of Jacobian of the fixed point iteration is bounded\n‖Jm‖2 6 2 β ‖X‖ 2 2 6 2 β NM 2 . (171)\nThe separation of pattern xi from dataX = (x1, . . . ,xN ) is ∆i = min\nj,j 6=i\n( xTi xi − xTi xj ) = xTi xi − max\nj,j 6=i xTi xj . (172)\nWe need the separation ∆̃i of x̃ = λξ + (1− λ)x∗i from the data: ∆̃i = min\nj,j 6=i\n( x̃Txi − x̃Txj ) . (173)\nWe compute a lower bound on ∆̃i. Using the Cauchy-Schwarz inequality, we obtain for 1 6 j 6 N :∣∣x̃Txj − xTi xj∣∣ 6 ‖x̃ − xi‖ ‖xj‖ 6 ‖x̃ − xi‖M . (174) We have the lower bound\n∆̃i ≥ min j,j 6=i\n(( xTi xi − ‖x̃ − xi‖M ) − ( xTi xj + ‖x̃ − xi‖M )) (175)\n= − 2 ‖x̃ − xi‖M + min j,j 6=i\n( xTi xi − xTi xj ) = ∆i − 2 ‖x̃ − xi‖M .\nSince ‖x̃ − xi‖ = ‖λξ + (1− λ)x∗i − xi‖ (176)\n6 λ ‖ξ − xi‖ + (1− λ) ‖x∗i − xi‖ 6 max{‖ξ − xi‖, ‖x∗i − xi‖} ,\nwe have ∆̃i ≥ ∆i − 2 max{‖ξ − xi‖, ‖x∗i − xi‖}M . (177)\nFor the softmax component i we have:\n[softmax(β XT ξ̃)]i = 1 1 + ∑ j 6=i exp(β (ξ̃ Txj − ξ̃Txi)) (178)\n≥ 1 1 + ∑ j 6=i exp(− β (∆i − 2 max{‖ξ − xi‖, ‖x∗i − xi‖}M))\n= 1\n1 + (N − 1) exp(− β (∆i − 2 max{‖ξ − xi‖, ‖x∗i − xi‖}M))\n= 1 − (N − 1) exp(− β (∆i − 2 max{‖ξ − xi‖, ‖x ∗ i − xi‖}M)) 1 + (N − 1) exp(− β (∆i − 2 max{‖ξ − xi‖, ‖x∗i − xi‖}M)) ≥ 1 − (N − 1) exp(− β (∆i − 2 max{‖ξ − xi‖, ‖x∗i − xi‖}M)) = 1 − .\nTherefore\n= (N − 1) exp(− β (∆i − 2 max{‖ξ − xi‖, ‖x∗i − xi‖}M)) . (179)\nWe can bound the spectral norm of the Jacobian, which upper bounds the Lipschitz constant:\n‖Jm‖2 6 2 β N M 2 (N − 1) exp(− β (∆i − 2 max{‖ξ − xi‖, ‖x∗i − xi‖}M)) . (180)\nFor a contraction mapping we require\n‖Jm‖2 < 1 , (181) which can be ensured by\n2 β NM2 (N − 1) exp(− β (∆i − 2 max{‖ξ − xi‖, ‖x∗i − xi‖}M)) < 1 . (182) Solving this inequality for ∆i gives\n∆i > 2 max{‖ξ − xi‖, ‖x∗i − xi‖}M + 1 β ln ( 2 (N − 1) N β M2 ) . (183)\nIn an environment around x∗i in which Eq. (183) holds, f is a contraction mapping and every point converges under the iteration f to x∗i when the iteration stays in the environment. After every iteration the mapped point f(ξ) is closer to the fixed point x∗i than the original point xi:\n‖f(ξ) − x∗i ‖ 6 ‖Jm‖2 ‖ξ − x ∗ i ‖ < ‖ξ − x∗i ‖ . (184)\nUsing\n‖f(ξ) − x∗i ‖ 6 ‖Jm‖2 ‖ξ − x ∗ i ‖ 6 ‖Jm‖2 ‖ξ − f(ξ)‖ + ‖J m‖2 ‖f(ξ) − x ∗ i ‖ , (185)\nwe obtain\n‖f(ξ) − x∗i ‖ 6 ‖Jm‖2\n1 − ‖Jm‖2 ‖ξ − f(ξ)‖ . (186)\nFor large ∆i the iteration is close to the fixed point even after one update. This has been confirmed in several experiments.\nA.1.5.4 Metastable States: Fixed Points Near Mean of Similar Patterns. The proof concept is the same as for a single pattern but now for the arithmetic mean of similar patterns.\n•Bound on the Jacobian.\nThe Jacobian of the fixed point iteration is J = β X ( diag(p)− ppT ) XT = XJsX T . (187)\nIf we consider pi as the probability of selecting the vector xi, then we can define expectations as Ep[f(x)] = ∑N i=1 pif(xi). 
In this setting the matrix\nX ( diag(p)− ppT ) XT (188)\nis the covariance matrix of dataX when its vectors are selected according to the probability p: X ( diag(p) − ppT ) XT = Xdiag(p)XT − XppTXT (189)\n= N∑ i=1 pi xi x T i − ( N∑ i=1 pi xi )( N∑ i=1 pi xi )T (190)\n= Ep[x x T ] − Ep[x] Ep[x]T = Varp[x] , (191)\ntherefore we have\nJ = β Varp[x] . (192)\nWe now elaborate more on this interpretation as variance. Specifically the singular values of J (or in other words: the covariance) should be reasonably small. The singular values are the key to ensure convergence of the iteration Eq. (57). Next we present some thoughts.\n1. It’s clear that the largest eigenvalue of the covariance matrix (equal to the largest singular value) is the variance in the direction of the eigenvector associated with the largest eigenvalue.\n2. Furthermore the variance goes to zero as one pi goes to one, since only one pattern is chosen and there is no variance.\n3. The variance is reasonable small if all patterns are chosen with equal probability.\n4. The variance is small if few similar patterns are chosen with high probability. If the patterns are sufficient similar, then the spectral norm of the covariance matrix is smaller than one.\nThe first three issues have already been adressed. Now we focus on the last one in greater detail. We assume that the first l patterns are much more probable (and similar to one another) than the other patterns. Therefore, we define:\nM := max i ‖xi‖ , (193)\nγ = N∑ i=l+1 pi 6 , (194)\n1− γ = l∑ i=1 pi ≥ 1 − , (195)\np̃i := pi\n1− γ 6 pi/(1− ) , (196)\nl∑ i=1 p̃i = 1 , (197)\nmx = 1\nl l∑ i=1 xi , (198)\nmmax = max 16i6l\n‖xi − mx‖ . (199)\nM is an upper bound on the Euclidean norm of the patterns, which are vectors. is an upper bound on the probability γ of not choosing one of the first l patterns, while 1 − is a lower bound the probability (1 − γ) of choosing one of the first l patterns. mx is the arithmetic mean (the center) of the first l patterns. mmax is the maximal distance of the patterns to the center mx . p̃ is the probability p normalized for the first l patterns.\nThe variance of the first l patterns is\nVarp̃[x1:l] = l∑ i=1 p̃i xi x T i − ( l∑ i=1 p̃i xi ) ( l∑ i=1 p̃i xi )T (200)\n= l∑ i=1 p̃i\n( xi −\nl∑ i=1 p̃ixi\n) ( xi −\nl∑ i=1 p̃ixi\n)T .\nLemma A8. With the definitions in Eq. (193) to Eq. (200), the following bounds on the norm ‖J‖2 of the Jacobian of the fixed point iteration hold. The γ-bound for ‖J‖2 is\n‖J‖2 6 β ( (1− γ) m2max + γ 2 (2 − γ) M2 ) (201)\nand the -bound for ‖J‖2 is:\n‖J‖2 6 β ( m2max + 2 (2 − ) M2 ) . (202)\nProof. 
The variance Varp̃[x1:l] can be expressed as: (1− γ) Varp̃[x1:l] = l∑ i=1 pi ( xi − 1 1− γ l∑ i=1 pi xi ) ( xi − 1 1− γ l∑ i=1 pi xi )T (203)\n= l∑ i=1 pi xi x T i − ( l∑ i=1 pi xi ) 1 1− γ ( l∑ i=1 pi xi )T − 1 1− γ ( l∑ i=1 pi xi ) ( l∑ i=1 pi xi )T\n+\n∑l i=1 pi (1− γ)2 ( l∑ i=1 pi xi ) ( l∑ i=1 pi xi )T = l∑ i=1 pi xi x T i − 1 1− γ ( l∑ i=1 pi xi ) ( l∑ i=1 pi xi )T\n= l∑ i=1 pi xi x T i − ( l∑ i=1 pi xi ) ( l∑ i=1 pi xi )T + ( 1 − 1 1− γ ) ( l∑ i=1 pi xi ) ( l∑ i=1 pi xi )T\n= l∑ i=1 pi xi x T i − ( l∑ i=1 pi xi ) ( l∑ i=1 pi xi )T − γ 1− γ ( l∑ i=1 pi xi ) ( l∑ i=1 pi xi )T .\nTherefore, we have\nl∑ i=1 pi xi x T i − ( l∑ i=1 pi xi ) ( l∑ i=1 pi xi )T (204)\n= (1− γ) Varp̃[x1:l] + γ\n1− γ ( l∑ i=1 pi xi ) ( l∑ i=1 pi xi )T .\nWe now can reformulate the Jacobian J:\nJ = β ( l∑ i=1 pi xi x T i + N∑ i=l+1 pi xi x T i (205)\n− ( l∑ i=1 pi xi + N∑ i=l+1 pi xi )( l∑ i=1 pi xi + N∑ i=l+1 pi xi )T = β\n l∑ i=1 pi xi x T i − ( l∑ i=1 pi xi ) ( l∑ i=1 pi xi )T\n+ N∑ i=l+1 pi xi x T i −\n( N∑\ni=l+1\npi xi\n) ( N∑\ni=l+1\npi xi\n)T\n− ( l∑ i=1 pi xi ) ( N∑ i=l+1 pi xi )T − ( N∑ i=l+1 pi xi )( l∑ i=1 pi xi )T = β (1− γ) Varp̃[x1:l] + γ 1− γ ( l∑ i=1 pi xi ) ( l∑ i=1 pi xi )T\n+ N∑ i=l+1 pi xi x T i −\n( N∑\ni=l+1\npi xi\n) ( N∑\ni=l+1\npi xi\n)T\n− ( l∑ i=1 pi xi ) ( N∑ i=l+1 pi xi )T − ( N∑ i=l+1 pi xi )( l∑ i=1 pi xi )T .\nThe spectral norm of an outer product of two vectors is the product of the Euclidean norms of the vectors: ∥∥abT∥∥ 2 = √ λmax(baTabT ) = ‖a‖ √ λmax(bbT ) = ‖a‖ ‖b‖ , (206)\nsince bbT has eigenvector b/‖b‖ with eigenvalue ‖b‖2 and otherwise zero eigenvalues. We now bound the norms of some matrices and vectors:∥∥∥∥∥ l∑ i=1 pi xi ∥∥∥∥∥ 6 l∑ i=1\npi ‖xi‖ 6 (1− γ) M , (207)∥∥∥∥∥ N∑\ni=l+1\npi xi ∥∥∥∥∥ 6 N∑\ni=l+1 pi ‖xi‖ 6 γ M , (208)∥∥∥∥∥ N∑\ni=l+1\npi xi x T i ∥∥∥∥∥ 2 6 N∑ i=l+1 pi ∥∥xi xTi ∥∥2 = N∑ i=l+1 pi ‖xi‖2 6 N∑ i=l+1 pi M 2 = γ M2 . (209)\nIn order to bound the variance of the first l patterns, we compute the vector a that minimizes\nf(a) = l∑ i=1 pi‖xi − a‖2 = l∑ i=1 pi(xi − a)T (xi − a) . (210)\nThe solution to\n∂f(a)\n∂a = 2 N∑ i=1 pi(a − xi) = 0 (211)\nis\na = N∑ i=1 pixi . (212)\nThe Hessian of f is positive definite since\n∂2f(a)\n∂a2 = 2 N∑ i=1 pi I = 2 I (213)\nand f is a convex function. Hence, the mean\nx̄ := N∑ i=1 pi xi (214)\nminimizes ∑N i=1 pi‖xi − a‖ 2. Therefore, we have\nl∑ i=1 pi‖xi − x̄‖2 6 l∑ i=1 pi‖xi − mx‖2 6 (1 − γ) m2max . (215)\nWe now bound the variance on the first l patterns:\n(1− γ) ‖Varp̃[x1:l]‖2 6 l∑ i=1 pi ∥∥∥(xi − x̄) (xi − x̄)T∥∥∥ 2\n(216)\n= l∑ i=1 pi‖xi − x̄‖2 6 l∑ i=1 pi‖xi − mx‖2 6 (1 − γ) m2max .\nWe obtain for the spectral norm of J:\n‖J‖2 6 β ( (1− γ) ‖Varp̃[x1:l]‖2 (217)\n+ γ\n1− γ ∥∥∥∥∥∥ ( l∑ i=1 pi xi ) ( l∑ i=1 pi xi )T∥∥∥∥∥∥ 2\n+ ∥∥∥∥∥ N∑\ni=l+1\npi xi x T i ∥∥∥∥∥ 2 + ∥∥∥∥∥∥ ( N∑ i=l+1 pi xi ) ( N∑ i=l+1 pi xi )T∥∥∥∥∥∥ 2\n+ ∥∥∥∥∥∥ ( l∑ i=1 pi xi ) ( N∑ i=l+1 pi xi )T∥∥∥∥∥∥ 2 + ∥∥∥∥∥∥ ( N∑ i=l+1 pi xi )( l∑ i=1 pi xi )T∥∥∥∥∥∥ 2 6 β ( (1− γ) ‖Varp̃[x1:l]‖2 + γ (1− γ) M 2 + γ M2 + γ2 M2 +\nγ (1− γ) M2 + γ (1− γ) M2 )\n= β ( (1− γ) ‖Varp̃[x1:l]‖2 + γ 2 (2 − γ) M 2 ) .\nCombining the previous two estimates immediately leads to Eq. (201).\nThe function h(x) = x2(2− x) has the derivative h′(x) = 4(1− x). Therefore, h(x) is monotone increasing for x < 1. For 0 6 γ 6 < 1, we can immediately deduce that γ2(2− γ) 6 2(2− ). Since is larger than γ, we obtain the following -bound for ‖J‖2:\n‖J‖2 6 β ( m2max + 2 (2 − ) M2 ) . (218)\nWe revisit the bound on (1− γ) Varp̃[x1:l]. The trace ∑d k=1 ek is the sum of the eigenvalues ek. 
The spectral norm is equal to the largest eigenvalue e1, that is, the largest singular value. We obtain:\n‖Varp̃[x1:l]‖2 = Tr ( l∑ i=1 pi (xi − x̄) (xi − x̄)T ) − d∑ k=2 ek (219)\n= l∑ i=1 piTr ( (xi − x̄) (xi − x̄)T ) − d∑ k=2 ek\n= l∑ i=1 pi‖xi − x̄‖2 − d∑ k=2 ek .\nTherefore, the tightness of the bound depends on eigenvalues which are not the largest. That is variations which are not along the strongest variation weaken the bound.\n•Proof of a fixed point by Banach Fixed Point Theorem.\nWithout restricting the generality, we assume that the first l patterns are much more probable (and similar to one another) than the other patterns. Therefore, we define:\nM := max i ‖xi‖ , (220)\nγ = N∑ i=l+1 pi 6 , (221)\n1− γ = l∑ i=1 pi ≥ 1 − , (222)\np̃i := pi\n1− γ 6 pi/(1− ) , (223)\nl∑ i=1 p̃i = 1 , (224)\nmx = 1\nl l∑ i=1 xi , (225)\nmmax = max 16i6l\n‖xi − mx‖ . (226)\nM is an upper bound on the Euclidean norm of the patterns, which are vectors. is an upper bound on the probability γ of not choosing one of the first l patterns, while 1 − is a lower bound the probability (1 − γ) of choosing one of the first l patterns. mx is the arithmetic mean (the center) of the first l patterns. mmax is the maximal distance of the patterns to the center mx . p̃ is the probability p normalized for the first l patterns.\n•Mapped vectors stay in a compact environment. We show that ifmx is sufficient dissimilar to other xj with l < j then there is an compact environment ofmx (a sphere) where the fixed point iteration maps this environment into itself. The idea of the proof is to define a sphere aroundmx for which the points from the sphere are mapped by f into the sphere.\nWe first need following lemma which bounds the distance ‖mx − f(ξ)‖ of a ξ which is close to mx. Lemma A9. For a query ξ and dataX = (x1, . . . ,xN ), we define\n0 6 c = min j,l<j\n( ξTmx − ξTxj ) = ξTmx − max\nj,l<j ξTxj . (227)" }, { "heading": "The following holds:", "text": "‖mx − f(ξ)‖ 6 mmax + 2 γ M 6 mmax + 2 M , (228) where\nM = max i ‖xi‖ , (229)\n= (N − l) exp(− β c) . (230)\nProof. Let s = arg maxj,j6l ξTxj , therefore ξTmx = 1l ∑l i=1 ξ Txi 6 1l ∑l i=1 ξ Txs = ξ Txs. For softmax components j with l < j we have\n[softmax(βXT ξ)]j = exp(β (ξTxj − ξTxs)) 1 + ∑ k,k 6=s exp(β (ξ Txk − ξTxs)) 6 exp(− β c) = N − l ,\n(231)\nsince ξTxs − ξTxj ≥ ξTmx − ξTxj for each j with l < j, therefore ξTxs − ξTxj ≥ c The iteration f can be written as\nf(ξ) = Xsoftmax(βXT ξ) = N∑ j=1 xj [softmax(βX T ξ)]j . (232)\nWe set pi = [softmax(βXT ξ)]i, therefore ∑l i=1 pi = 1 − γ ≥ 1 − and ∑N i=l+1 pi = γ 6 . Therefore∥∥∥∥∥∥mx − l∑\nj=1\npj 1− γ xj ∥∥∥∥∥∥ 2 = ∥∥∥∥∥∥ l∑\nj=1\npj 1− γ (mx − xj) ∥∥∥∥∥∥ 2\n(233)\n= l∑ j=1,k=1 pj 1− γ pk 1− γ (mx − xj)T (mx − xk)\n= 1\n2 l∑ j=1,k=1 pj 1− γ pk 1− γ ( ‖mx − xj‖2 + ‖mx − xk‖2 − ‖xj − xk‖2 )\n= l∑ j=1 pj 1− γ ‖mx − xj‖2 − 1 2 l∑ j=1,k=1 pj 1− γ pk 1− γ ‖xj − xk‖2 6 l∑\nj=1\npj 1− γ ‖mx − xj‖2 6 m2max .\nIt follows that ∥∥∥∥∥∥mx − l∑\nj=1\npj 1− γ xj ∥∥∥∥∥∥ 6 mmax (234) We now can bound ‖mx − f(ξ)‖:\n‖mx − f(ξ)‖ = ∥∥∥∥∥∥mx − N∑ j=1 pj xj ∥∥∥∥∥∥ (235) = ∥∥∥∥∥∥mx − l∑\nj=1\npj xj − N∑\nj=l+1\npj xj ∥∥∥∥∥∥ = ∥∥∥∥∥∥mx − l∑\nj=1\npj 1− γ xj + γ 1− γ l∑ j=1 pj xj − N∑ j=l+1 pj xj ∥∥∥∥∥∥ 6 ∥∥∥∥∥∥mx − l∑\nj=1\npj 1− γ xj ∥∥∥∥∥∥ + γ1− γ ∥∥∥∥∥∥ l∑ j=1 pj xj ∥∥∥∥∥∥ + ∥∥∥∥∥∥ N∑ j=l+1 pj xj ∥∥∥∥∥∥ 6 ∥∥∥∥∥∥mx − l∑\nj=1\npj 1− γ xj ∥∥∥∥∥∥ + γ1− γ l∑\nj=1\npj M + N∑ j=l+1 pj M\n6 ∥∥∥∥∥∥mx − l∑\nj=1\npj 1− γ xj ∥∥∥∥∥∥ + 2 γ M 6 mmax + 2 γ M 6 mmax + 2 M ,\nwhere we applied Eq. (233) in the penultimate inequality. 
This is the statement of the lemma.\nThe separation of the center (the arithmetic mean)mx of the first l from dataX = (xl+1, . . . ,xN ) is ∆m, defined as\n∆m = min j,l<j\n( mTxmx − mTxxj ) = mTxmx − max\nj,l<j mTxxj . (236)\nThe center is separated from the other data xj with l < j if 0 < ∆m. By the same arguments as in Eq. (140), ∆m can also be expressed as\n∆m = min j,l<j\n1\n2\n( ‖mx‖2 − ‖xj‖2 + ‖mx − xj‖2 ) (237)\n= 1\n2 ‖mx‖2 −\n1 2 max j,l<j\n( ‖xj‖2 − ‖mx − xj‖2 ) .\nFor ‖mx‖ = ‖xj‖ we have ∆m = 1/2 minj,l<j ‖mx − xj‖2. Next we define the sphere where we want to apply Banach fixed point theorem. Definition 4 (Sphere Sm). The sphere Sm is defined as\nSm := { ξ | ‖ξ − mx‖ 6\n1\nβ mmax\n} . (238)\nLemma A10. With ξ given, if the assumptions\nA1: ξ is inside sphere: ξ ∈ Sm,\nA2: the centermx is well separated from other data xj with l < j:\n∆m ≥ 2 M β mmax − 1 β ln\n( 1 − β m2max\n2 β (N − l) M max{mmax , 2 M}\n) , (239)\nA3: the distance mmax of similar patterns to the center is sufficient small:\nβ m2max 6 1 (240)\nhold, then f(ξ) ∈ Sm. Therefore, under conditions (A2) and (A3), f is a mapping from Sm into Sm.\nProof. We need the separation ∆̃m of ξ from the rest of the data, which is the last N − l data points X = (xl+1, . . . ,xN ).\n∆̃m = min j,l<j\n( ξTmx − ξTxj ) . (241)\nUsing the Cauchy-Schwarz inequality, we obtain for l + 1 6 j 6 N :∣∣ξTxj − mTxxj∣∣ 6 ‖ξ − mx‖ ‖xj‖ 6 ‖ξ − mx‖M . (242) We have the lower bound\n∆̃m ≥ min j,l<j\n(( mTxmx − ‖ξ − mx‖M ) − ( mTxxj + ‖ξ − mx‖M )) (243)\n= − 2 ‖ξ − mx‖M + min j,l<j\n( mTxmx − mTxxj ) = ∆m − 2 ‖ξ − mx‖M\n≥ ∆m − 2 M\nβ mmax ,\nwhere we used the assumption (A1) of the lemma.\nFrom the proof in Lemma A9 we have l∑ i=1 pi ≥ 1 − (N − l) exp(− β ∆̃m) = 1 − ̃ , (244)\nN∑ i=l+1 pi 6 (N − l) exp(− β ∆̃m) = ̃ . (245)\nLemma A9 states that\n‖mx − f(ξ)‖ 6 mmax + 2 ̃ M (246) 6 mmax + 2 (N − l) exp(− β ∆̃m) M .\n6 mmax + 2 (N − l) exp(− β (∆m − 2 M\nβ mmax )) M .\nTherefore, we have ‖mx − f(ξ)‖ 6 mmax + 2 (N − l) exp ( − β (∆m − 2 M\nβ mmax )\n) M (247)\n6 mmax + 2 (N − l) exp ( − β ( 2 M\nβ mmax −\n1 β ln\n( 1 − β m2max\n2 β (N − l) M max{mmax , 2 M}\n) − 2 M\nβ mmax\n)) M\n= mmax + 2 (N − l) 1 − β m2max\n2 β (N − l) M max{mmax , 2 M} M\n6 mmax + 1 − β m2max β mmax = 1 β mmax ,\nwhere we used assumption (A2) of the lemma. Therefore, f(ξ) is a mapping from the sphere Sm into the sphere Sm.\nmmax = max 16i6l\n‖xi −mx‖ (248)\n= max 16i6l ∥∥∥∥∥∥xi − 1/l l∑\nj=1\nxj ∥∥∥∥∥∥ (249) = max\n16i6l ∥∥∥∥∥∥1/l l∑\nj=1\n(xi − xj) ∥∥∥∥∥∥ (250) 6 max\n16i,j6l ‖xi − xj‖ (251)\n6 max 16i6l ‖xi‖+ max 16j6l ‖xi‖ (252)\n6 2M (253)\n•Contraction mapping.\nFor applying Banach fixed point theorem we need to show that f is contraction in the compact environment Sm. Lemma A11. Assume that\nA1:\n∆m ≥ 2 M β mmax − 1 β ln\n( 1 − β m2max\n2 β (N − l) M max{mmax , 2 M}\n) , (254)\nand\nA2:\nβ m2max 6 1 , (255)\nthen f is a contraction mapping in Sm. Proof. The version of the mean value theorem Lemma A32 states for the symmetric Jm = ∫ 1\n0 J(λξ+\n(1− λ)mx) dλ: f(ξ) = f(mx) + J\nm (ξ − mx) . (256) In complete analogy to Lemma A6, we get:\n‖f(ξ) − f(mx)‖ 6 ‖Jm‖2 ‖ξ − mx‖ . (257)\nWe define ξ̃ = λξ + (1− λ)mx for some λ ∈ [0, 1]. We need the separation ∆̃m of ξ̃ from the rest of the data, which is the last N − l data pointsX = (xl+1, . . . ,xN ).\n∆̃m = min j,l<j\n( ξ̃Tmx − ξ̃Txj ) . (258)\nFrom the proof in Lemma A9 we have\ñ = (N − l) exp(− β ∆̃m) , (259) l∑ i=1 pi(ξ̃) ≥ 1 − (N − l) exp(− β ∆̃m) = 1 − ̃ , (260)\nN∑ i=l+1 pi(ξ̃) 6 (N − l) exp(− β ∆̃m) = ̃ . 
(261)\nWe first compute an upper bound on ̃. Using the Cauchy-Schwarz inequality, we obtain for l + 1 6 j 6 N : ∣∣∣ξ̃Txj − mTxxj∣∣∣ 6 ∥∥∥ξ̃ − mx∥∥∥ ‖xj‖ 6 ∥∥∥ξ̃ − mx∥∥∥M . (262) We have the lower bound on ∆̃m:\n∆̃m ≥ min j,l<j\n(( mTxmx − ∥∥∥ξ̃ − mx∥∥∥M) − (mTxxj + ∥∥∥ξ̃ − mx∥∥∥M)) (263) = − 2 ∥∥∥ξ̃ − mx∥∥∥M + min j,l<j ( mTxmx − mTxxj ) = ∆m − 2\n∥∥∥ξ̃ − mx∥∥∥M ≥ ∆m − 2 ‖ξ − mx‖M .\nwhere we used ∥∥∥ξ̃ −mx∥∥∥ = λ‖ξ −mx‖ 6 ‖ξ −mx‖. We obtain the upper bound on ̃:\ñ 6 (N − l) exp (− β (∆m − 2 ‖ξ − mx‖M)) (264) 6 (N − l) exp ( − β ( ∆m − 2 M\nβ mmax\n)) .\nwhere we used that in the sphere Si holds:\n‖ξ − mx‖ 6 1\nβ mmax , (265)\ntherefore\n2 ‖ξ − mx‖M 6 2 M\nβ mmax . (266)\nNext we compute a lower bound on ̃ and to this end start with the upper bound on ∆̃m using the same arguments as in Eq. (158) in combination with Eq. (266).\n∆̃m ≥ min j,l<j\n(( mTxmx + ∥∥∥ξ̃ − mx∥∥∥M) − (mTxxj − ∥∥∥ξ̃ − mx∥∥∥M)) (267) = 2 ∥∥∥ξ̃ − mx∥∥∥M + min j,l<j ( mTxmx − mTxxj ) = ∆m + 2\n∥∥∥ξ̃ − mx∥∥∥M ≥ ∆m + 2 ‖ξ − mx‖M .\nwhere we used ∥∥∥ξ̃ −mx∥∥∥ = λ‖ξ −mx‖ 6 ‖ξ −mx‖. We obtain the lower bound on ̃:\ñ ≥ (N − l) exp ( − β ( ∆m + 2 M\nβ mmax\n)) , (268)\nwhere we used that in the sphere Si holds:\n‖ξ − mx‖ 6 1\nβ mmax , (269)\ntherefore\n2 ‖ξ − mx‖M 6 2 M\nβ mmax . (270)\nFrom Lemma A8 we have∥∥∥J(ξ̃)∥∥∥ 2 6 β ( m2max + ̃ 2 (2 − ̃) M2 ) (271)\n= β ( m2max + ̃4 M 2 − 2 ̃2 M2 )\n6 β ( m2max + (N − l) exp ( − β ( ∆m − 2 M\nβ mmax\n)) 4 M2 −\n2 (N − l)2 exp ( − 2 β ( ∆m + 2 M\nβ mmax\n)) M2 ) .\nThe bound Eq. (271) holds for the mean Jm, too, since it averages over J(ξ̃): ‖Jm‖2 6 β ( m2max + (N − l) exp ( − β ( ∆m − 2 M\nβ mmax\n)) 4 M2 − (272)\n2 (N − l)2 exp ( − 2 β ( ∆m + 2 M\nβ mmax\n)) M2 ) .\nThe assumption of the lemma is\n∆m ≥ 2 M β mmax − 1 β ln\n( 1 − β m2max\n2 β (N − l) M max{mmax , 2 M}\n) , (273)\nTherefore, we have\n∆m − 2 M β mmax ≥ − 1 β ln\n( 1 − β m2max\n2 β (N − l) M max{mmax , 2 M}\n) . (274)\nTherefore, the spectral norm ‖Jm‖2 can be bounded by: ‖Jm‖2 6 (275)\nβ ( m2max + (N − l) exp ( − β ( − 1 β ln ( 1 − β m2max 2 β (N − l) M max{mmax , 2 M} ))) 4 M2 − 2 (N − l)2 exp ( − 2 β ( ∆m + 2 M\nβ mmax\n)) M2 )\n= β ( m2max + (N − l) exp ( ln ( 1 − β m2max\n2 β (N − l) M max{mmax , 2 M} )) 4 M2 − 2 (N − l)2 exp ( − 2 β ( ∆m + 2 M\nβ mmax\n)) M2 )\n= β ( m2max + (N − l)\n1 − β m2max 2 β (N − l) M max{mmax , 2 M} 4 M2 −\n2 (N − l)2 exp ( − 2 β ( ∆m + 2 M\nβ mmax\n)) M2 )\n= βm2max + 1 − β m2max\nmax{mmax , 2 M} 2 M − β 2 (N − l)2 exp ( − 2 β ( ∆m + 2 M\nβ mmax\n)) M2\n6 βm2max + 1 − β m2max − β 2 (N − l)2 exp ( − 2 β ( ∆m + 2 M\nβ mmax\n)) M2\n= 1 − β 2 (N − l)2 exp ( − 2 β ( ∆m + 2 M\nβ mmax\n)) M2 < 1 .\nFor the last but one inequality we used 2M 6 max{mmax, 2M}. Therefore, f is a contraction mapping in Sm.\n•Banach Fixed Point Theorem. Now we have all ingredients to apply Banach fixed point theorem. Lemma A12. Assume that\nA1:\n∆m ≥ 2 M β mmax − 1 β ln\n( 1 − β m2max\n2 β (N − l) M max{mmax , 2 M}\n) , (276)\nand\nA2:\nβ m2max 6 1 , (277)\nthen f has a fixed point in Sm.\nProof. We use Banach fixed point theorem: Lemma A10 says that f maps from the compact set Sm into the same compact set Sm. Lemma A11 says that f is a contraction mapping in Sm.\n•Contraction mapping with a fixed point.\nWe assume that the first l patterns are much more probable (and similar to one another) than the other patterns. 
Therefore, we define:
$$M := \max_i \|x_i\| \ , \tag{278}$$
$$\gamma = \sum_{i=l+1}^N p_i \le \epsilon \ , \tag{279}$$
$$1 - \gamma = \sum_{i=1}^l p_i \ge 1 - \epsilon \ , \tag{280}$$
$$\tilde{p}_i := \frac{p_i}{1-\gamma} \le p_i/(1-\epsilon) \ , \tag{281}$$
$$\sum_{i=1}^l \tilde{p}_i = 1 \ , \tag{282}$$
$$m_x = \frac{1}{l} \sum_{i=1}^l x_i \ , \tag{283}$$
$$m_{\max} = \max_{1 \le i \le l} \|x_i - m_x\| \ . \tag{284}$$
$M$ is an upper bound on the Euclidean norm of the patterns, which are vectors. $\epsilon$ is an upper bound on the probability $\gamma$ of not choosing one of the first $l$ patterns, while $1 - \epsilon$ is a lower bound on the probability $1 - \gamma$ of choosing one of the first $l$ patterns. $m_x$ is the arithmetic mean (the center) of the first $l$ patterns. $m_{\max}$ is the maximal distance of the patterns to the center $m_x$. $\tilde{p}$ is the probability $p$ normalized over the first $l$ patterns.

The variance of the first $l$ patterns is
$$\mathrm{Var}_{\tilde{p}}[x_{1:l}] = \sum_{i=1}^l \tilde{p}_i x_i x_i^T - \left( \sum_{i=1}^l \tilde{p}_i x_i \right) \left( \sum_{i=1}^l \tilde{p}_i x_i \right)^T \tag{285}$$
$$= \sum_{i=1}^l \tilde{p}_i \left( x_i - \sum_{i=1}^l \tilde{p}_i x_i \right) \left( x_i - \sum_{i=1}^l \tilde{p}_i x_i \right)^T .$$
We have shown that a fixed point exists. We now want to know how fast the iteration converges to the fixed point. Let $m_x^*$ be the fixed point of the iteration $f$ in the sphere $S_m$. Using the mean value theorem Lemma A32, we have with $J^m = \int_0^1 J(\lambda \xi + (1-\lambda) m_x^*) \, d\lambda$:
$$\|f(\xi) - m_x^*\| = \|f(\xi) - f(m_x^*)\| \le \|J^m\|_2 \, \|\xi - m_x^*\| . \tag{286}$$
According to Lemma A8, the following bounds on the norm $\|J\|_2$ of the Jacobian of the fixed point iteration hold. The $\gamma$-bound for $\|J\|_2$ is
$$\|J\|_2 \le \beta \left( (1-\gamma) m_{\max}^2 + 2 \gamma (2-\gamma) M^2 \right) , \tag{287}$$
while the $\epsilon$-bound for $\|J\|_2$ is:
$$\|J\|_2 \le \beta \left( m_{\max}^2 + 2 \epsilon (2-\epsilon) M^2 \right) . \tag{288}$$
From the last condition we require for a contraction mapping:
$$\beta \, m_{\max}^2 < 1 . \tag{289}$$
We want to see how large $\epsilon$ is. The separation of the center $m_x$ from the data $X = (x_{l+1}, \ldots, x_N)$ is
$$\Delta_m = \min_{j, l<j} \left( m_x^T m_x - m_x^T x_j \right) = m_x^T m_x - \max_{j, l<j} m_x^T x_j . \tag{290}$$
We need the separation $\tilde{\Delta}_m$ of $\tilde{x} = \lambda \xi + (1-\lambda) m_x^*$ from the data:
$$\tilde{\Delta}_m = \min_{j, l<j} \left( \tilde{x}^T m_x - \tilde{x}^T x_j \right) . \tag{291}$$
We compute a lower bound on $\tilde{\Delta}_m$. Using the Cauchy-Schwarz inequality, we obtain for $1 \le j \le N$:
$$\left| \tilde{x}^T x_j - m_x^T x_j \right| \le \|\tilde{x} - m_x\| \, \|x_j\| \le \|\tilde{x} - m_x\| \, M . \tag{292}$$
We have the lower bound
$$\tilde{\Delta}_m \ge \min_{j, l<j} \left( \left( m_x^T m_x - \|\tilde{x} - m_x\| M \right) - \left( m_x^T x_j + \|\tilde{x} - m_x\| M \right) \right) \tag{293}$$
$$= -2 \|\tilde{x} - m_x\| M + \min_{j, l<j} \left( m_x^T m_x - m_x^T x_j \right) = \Delta_m - 2 \|\tilde{x} - m_x\| M .$$
Since
$$\|\tilde{x} - m_x\| = \|\lambda \xi + (1-\lambda) m_x^* - m_x\| \tag{294}$$
$$\le \lambda \|\xi - m_x\| + (1-\lambda) \|m_x^* - m_x\| \le \max\{\|\xi - m_x\|, \|m_x^* - m_x\|\} ,$$
we have
$$\tilde{\Delta}_m \ge \Delta_m - 2 \max\{\|\xi - m_x\|, \|m_x^* - m_x\|\} M , \tag{295}$$
$$\epsilon = (N-l) \exp\left(-\beta \left( \Delta_m - 2 \max\{\|\xi - m_x\|, \|m_x^* - m_x\|\} M \right)\right) . \tag{296}$$

A.1.6 PROPERTIES OF FIXED POINTS NEAR STORED PATTERN

In Subsection A.1.5.3 many stable states that are fixed points near the stored patterns were considered. We now consider this case. In the first subsection we investigate the storage capacity if all patterns are sufficiently separated so that metastable states do not appear. In the next subsection we look into the number of updates required and the error when retrieving the stored patterns. For metastable states we can do the same analyses if each metastable state is treated as one state, like one pattern.

We see a trade-off that is known from classical Hopfield networks and for modern Hopfield networks. Small separation $\Delta_i$ of the pattern $x_i$ from the other patterns gives high storage capacity; however, the convergence speed is lower and the retrieval error higher. In contrast, large separation $\Delta_i$ of the pattern $x_i$ from the other patterns allows the retrieval of patterns with one update step and exponentially low error.

A.1.6.1 Exponentially Many Patterns can be Stored. From Subsection A.1.5.3 we need some definitions. We assume to have $N$ patterns; the separation of pattern $x_i$ from the other patterns
$\{x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_N\}$ is $\Delta_i$, defined as
$$\Delta_i = \min_{j, j \ne i} \left( x_i^T x_i - x_i^T x_j \right) = x_i^T x_i - \max_{j, j \ne i} x_i^T x_j . \tag{297}$$
The pattern is separated from the other data if $0 < \Delta_i$. The separation $\Delta_i$ can also be expressed as
$$\Delta_i = \min_{j, j \ne i} \frac{1}{2} \left( \|x_i\|^2 - \|x_j\|^2 + \|x_i - x_j\|^2 \right) \tag{298}$$
$$= \frac{1}{2} \|x_i\|^2 - \frac{1}{2} \max_{j, j \ne i} \left( \|x_j\|^2 - \|x_i - x_j\|^2 \right) .$$
For $\|x_i\| = \|x_j\|$ we have $\Delta_i = 1/2 \min_{j, j \ne i} \|x_i - x_j\|^2$. The sphere $S_i$ with center $x_i$ is defined as
$$S_i = \left\{ \xi \mid \|\xi - x_i\| \le \frac{1}{\beta N M} \right\} . \tag{299}$$
The maximal length of a pattern is $M = \max_i \|x_i\|$. We next define what we mean by storing and retrieving a pattern.
Definition 5 (Pattern Stored and Retrieved). We assume that around every pattern $x_i$ a sphere $S_i$ is given. We say $x_i$ is stored if there is a single fixed point $x_i^* \in S_i$ to which all points $\xi \in S_i$ converge, and $S_i \cap S_j = \emptyset$ for $i \ne j$. We say $x_i$ is retrieved for a given $\epsilon$ if iteration (update rule) Eq. (92) gives a point $\tilde{x}_i$ that is at least $\epsilon$-close to the single fixed point $x_i^* \in S_i$. The retrieval error is $\|\tilde{x}_i - x_i\|$.
The sphere $S_i$ around pattern $x_i$ can be any sphere and need not be the specific sphere defined in Def. 3.
For a query $\xi \in S_i$ to converge to a fixed point $x_i^* \in S_i$, we required, for the application of the Banach fixed point theorem and for ensuring a contraction mapping, the following inequality:
$$\Delta_i \ge \frac{2}{\beta N} + \frac{1}{\beta} \ln \left( 2 (N-1) N \beta M^2 \right) . \tag{300}$$
This is the assumption in Lemma A7 to ensure a fixed point in sphere $S_i$. Since replacing $(N-1)N$ by $N^2$ gives
$$\frac{2}{\beta N} + \frac{1}{\beta} \ln \left( 2 N^2 \beta M^2 \right) > \frac{2}{\beta N} + \frac{1}{\beta} \ln \left( 2 (N-1) N \beta M^2 \right) , \tag{301}$$
the inequality follows from the following master inequality:
$$\Delta_i \ge \frac{2}{\beta N} + \frac{1}{\beta} \ln \left( 2 N^2 \beta M^2 \right) . \tag{302}$$
If we assume that $S_i \cap S_j \ne \emptyset$ with $i \ne j$, then the triangle inequality with a point from the intersection gives
$$\|x_i - x_j\| \le \frac{2}{\beta N M} . \tag{303}$$
Therefore, using the Cauchy-Schwarz inequality, we have:
$$\Delta_i \le x_i^T (x_i - x_j) \le \|x_i\| \, \|x_i - x_j\| \le M \frac{2}{\beta N M} = \frac{2}{\beta N} . \tag{304}$$
The last inequality is a contradiction to Eq. (302) if we assume that
$$1 < 2 (N-1) N \beta M^2 . \tag{305}$$
With this assumption, the spheres $S_i$ and $S_j$ do not intersect. Therefore, each $x_i$ has its separate fixed point in $S_i$. We define
$$\Delta_{\min} = \min_{1 \le i \le N} \Delta_i \tag{306}$$
to obtain the master inequality
$$\Delta_{\min} \ge \frac{2}{\beta N} + \frac{1}{\beta} \ln \left( 2 N^2 \beta M^2 \right) . \tag{307}$$

• Patterns on a sphere.

For simplicity and in accordance with the results of the classical Hopfield network, we assume all patterns to be on a sphere with radius $M$:
$$\forall i : \|x_i\| = M . \tag{308}$$
Under assumption Eq. (305) we only have to show that the master inequality Eq. (307) is fulfilled for each $x_i$ to have a separate fixed point near each $x_i$.
We defined $\alpha_{ij}$ as the angle between $x_i$ and $x_j$. The minimal angle $\alpha_{\min}$ between two data points is
$$\alpha_{\min} = \min_{1 \le i < j \le N} \alpha_{ij} . \tag{309}$$
On the sphere with radius $M$ we have
$$\Delta_{\min} = \min_{1 \le i < j \le N} M^2 (1 - \cos(\alpha_{ij})) = M^2 (1 - \cos(\alpha_{\min})) , \tag{310}$$
therefore it is sufficient to show the master inequality on the sphere:
$$M^2 (1 - \cos(\alpha_{\min})) \ge \frac{2}{\beta N} + \frac{1}{\beta} \ln \left( 2 N^2 \beta M^2 \right) . \tag{311}$$
Under assumption Eq. (305) we only have to show that the master inequality Eq. (307) is fulfilled for $\Delta_{\min}$. We consider patterns on the sphere, therefore the master inequality Eq. (307) becomes Eq. (311); a short numerical probe of Eq. (311) is given below. First we show results where pattern positions on the sphere are constructed and $\Delta_{\min}$ is ensured.
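To get a feel for the master inequality Eq. (311), the following minimal numpy probe (our own illustration, not part of the original analysis; the values $d$, $N$, $\beta$ and the radius $M = 2\sqrt{d-1}$ are arbitrary stand-ins) draws random patterns on the sphere, computes $\Delta_{\min} = M^2(1 - \cos(\alpha_{\min}))$ via the maximal pairwise cosine, and compares it with the right-hand side of Eq. (311):

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, beta = 20, 50, 1.0
M = 2.0 * np.sqrt(d - 1)            # pattern radius, as used in Theorem A3 below

# random patterns on the sphere of radius M (rows of X)
X = rng.normal(size=(N, d))
X *= M / np.linalg.norm(X, axis=1, keepdims=True)

# pairwise cosines; the maximal off-diagonal cosine corresponds to alpha_min
C = (X @ X.T) / M**2
np.fill_diagonal(C, -1.0)           # exclude i == j from the maximum
max_cos = C.max()

delta_min = M**2 * (1.0 - max_cos)  # Eq. (310)
rhs = 2.0 / (beta * N) + np.log(2 * N**2 * beta * M**2) / beta

print(f"Delta_min = {delta_min:.2f}, rhs = {rhs:.2f}, "
      f"master inequality Eq. (311) satisfied: {delta_min >= rhs}")
```

For fixed $d$, raising $N$ tends to shrink $\alpha_{\min}$ while growing the right-hand side, so the inequality eventually fails; the theorems that follow quantify how large $N$ may be as an exponential function of $d$.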
Then we move on to random patterns on a sphere, where ∆min becomes a random variable.\n•Storage capacity for patterns placed on the sphere.\nNext theorem says how many patterns we can stored (fixed point with attraction basin near pattern) if we are allowed to place them on the sphere. Theorem A3 (Storage Capacity (M=2): Placed Patterns). We assume β = 1 and patterns on the sphere with radius M . If M = 2 √ d− 1 and the dimension d of the space is d ≥ 4 or if M = 1.7 √ d− 1 and the dimension d of the space is d ≥ 50, then the number of patterns N that can be stored (fixed point with attraction basin near pattern) is at least\nN = 22(d−1) . (312)\nProof. For random patterns on the sphere, we have to show that the master inequality Eq. (311) holds:\nM2(1 − cos(αmin)) ≥ 2\nβ N +\n1 β ln ( 2 N2 β M2 ) . (313)\nWe now place the patterns equidistant on the sphere where the pattern are separated by an angle αmin:\n∀i : min j,j 6=i αij = αmin , (314)\nIn a d-dimensional space we can place\nN =\n( 2π\nαmin\n)d−1 (315)\npoints on the sphere. In a spherical coordinate system a pattern differs from its most closest patterns by an angle αmin and there are d− 1 angles. Solving for αmin gives\nαmin = 2π\nN1/(d−1) . (316)\nThe number of patterns that can be stored is determined by the largest N that fulfils M2 ( 1 − cos ( 2π\nN1/(d−1)\n)) ≥ 2\nβ N +\n1 β ln ( 2 N2 β M2 ) . (317)\nWe set N = 22(d−1) and obtain for Eq. (317):\nM2 ( 1 − cos (π\n2\n)) ≥ 2\nβ 23(d−1) +\n1 β ln ( 2 β M2 ) + 1 β 4 (d− 1) ln 2 . (318)\nThis inequality is equivalent to\nβ M2 ≥ 1 22(d−1)−1\n+ ln ( 2 β M2 ) + 4 (d− 1) ln 2 . (319)\nThe last inequality can be fulfilled with M = K √ d− 1 and proper K. For β = 1, d = 4 and K = 2 the inequality is fulfilled. The left hand side minus the right hand side is 4(d− 1)− 1/22(d−1)−1 − ln(8(d−1))−4(d−1) ln 2. Its derivative with respect to d is strict positive. Therefore, the inequality holds for d ≥ 4. For β = 1, d = 50 and K = 1.7 the inequality is fulfilled. The left hand side minus the right hand side is 2.89(d− 1)− 1/22(d−1)−1 − ln(5.78(d− 1))− 4(d− 1) ln 2. Its derivative with respect to d is strict positive. Therefore, the inequality holds for d ≥ 50.\nIf we want to store considerably more patterns, then we have to increase the length of the vectors or the dimension of the space where the vectors live. The next theorem shows results for the number of patterns N with N = 23(d−1).\nTheorem A4 (Storage Capacity (M=5): Placed Patterns). We assume β = 1 and patterns on the sphere with radius M . If M = 5 √ d− 1 and the dimension d of the space is d ≥ 3 or if M = 4 √ d− 1 and the dimension d of the space is d ≥ 13, then the number of patterns N that can be stored (fixed point with attraction basin near pattern) is at least\nN = 23(d−1) . (320)\nProof. We set N = 23(d−1) and obtain for Eq. (317):\nM2 ( 1 − cos (π\n4\n)) ≥ 2\nβ 23(d−1) +\n1 β ln ( 2 β M2 ) + 1 β 6 (d− 1) ln 2 . (321)\nThis inequality is equivalent to\nβ M2 ( 1 − √ 2\n2\n) ≥ 1\n23(d−1)−1 + ln\n( 2 β M2 ) + 6 (d− 1) ln 2 . (322)\nThe last inequality can be fulfilled with M = K √ d− 1 and proper K. For β = 1, d = 13 and K = 4 the inequality is fulfilled. The left hand side minus the right hand side is 4.686292(d − 1)− 1/23(d−1)−1 − ln(32(d− 1))− 6(d− 1) ln 2. Its derivative with respect to d is strict positive. Therefore, the inequality holds for d ≥ 13. For β = 1, d = 3 and K = 5 the inequality is fulfilled. The left hand side minus the right hand side is 7.32233(d− 1)− 1/23(d−1)−1 − ln(50(d− 1))− 6(d− 1) ln 2. 
Its derivative with respect to d is strict positive. Therefore, the inequality holds for d ≥ 3.\n•Storage capacity for random patterns on the sphere.\nNext we investigate random points on the sphere. Under assumption Eq. (305) we have to show that the master inequality Eq. (311) is fulfilled for αmin, where now αmin is now a random variable. We use results on the distribution of the minimal angles between random patterns on a sphere according to Cai et al. (2013) and Brauchart et al. (2018). Theorem 2 in Cai et al. (2013) gives the distribution of the minimal angle for random patterns on the unit sphere. Proposition 3.5 in Brauchart et al. (2018) gives a lower bound on the probability of the minimal angle being larger than a given constant. We require this proposition to derive the probability of pattern having a minimal angle αmin. Proposition 3.6 in Brauchart et al. (2018) gives the expectation of the minimal angle.\nWe will prove high probability bounds for the expected storage capacity. We need the following tail-bound on αmin (the minimal angle of random patterns on a sphere):\nLemma A13 ((Brauchart et al., 2018)). Let d be the dimension of the pattern space,\nκd := 1\nd √ π\nΓ((d+ 1)/2)\nΓ(d/2) . (323)\nand δ > 0 such that κd−12 δ (d−1) 6 1. Then\nPr(N 2 d−1αmin ≥ δ) ≥ 1 − κd−1\n2 δd−1 . (324)\nProof. The statement of the lemma is Eq. (3-6) from Proposition 3.5 in Brauchart et al. (2018).\nNext we derive upper and lower bounds on the constant κd since we require them later for proving storage capacity bounds.\nLemma A14. For κd defined in Eq. (323) we have the following bounds for every d ≥ 1:\n1\nexp(1/6) √ e π d\n6 κd 6 exp(1/12)√\n2 π d < 1 . (325)\nProof. We use for x > 0 the following bound related to Stirling’s approximation formula for the gamma function, c.f. (Olver et al., 2010, (5.6.1)):\n1 < Γ(x) (2 π)− 1 2x 1 2 − x exp(x) < exp\n( 1\n12 x\n) . (326)\nUsing Stirling’s formula Eq. (326), we upper bound κd:\nκd = 1\nd √ π\nΓ((d+ 1)/2)\nΓ(d/2) <\n1\nd √ π\nexp (\n1 6(d+1)\n) exp ( − d+12 ) ( d+1 2 ) d 2\nexp ( − d2 ) ( d 2 ) d 2 − 1 2\n(327)\n= 1\nd √ π e exp\n( 1\n6(d+ 1)\n) ( 1 + 1\nd\n) d 2 √ d\n2 6\nexp (\n1 12 ) √\n2 π √ d .\nFor the first inequality, we applied Eq. (326), while for the second we used (1 + 1d ) d < e for d ≥ 1.\nNext, we lower bound κd by again applying Stirling’s formula Eq. (326):\nκd = 1\nd √ π\nΓ((d+ 1)/2)\nΓ(d/2) >\n1\nd √ π\nexp ( − d+12 ) ( d+1 2 ) d 2\nexp (\n1 6 d\n) exp ( −d2 ) ( d 2 ) d 2− 1 2\n(328)\n= 1\nd √ π e exp\n( 1\n6 d\n) (1 + 1 d ) d 2 √ d 2 ≥ 1 exp (\n1 6\n) √ e π d ,\nwhere the last inequality holds because of monotonicity of (1 + 1d ) d and using the fact that for d = 1 it takes on the value 2.\nWe require a bound on cos to bound the master inequality Eq. (311).\nLemma A15. For 0 6 x 6 π the function cos can be upper bounded by:\ncos(x) 6 1 − x 2\n5 . (329)\nProof. We use the infinite product representation of cos, c.f. (Olver et al., 2010, (4.22.2)):\ncos(x) = ∞∏ n=1 ( 1− 4 x 2 (2n− 1)2 π2 ) . (330)\nSince it holds that\n1 − 4 x 2\n(2n− 1)2 π2 6 1 (331)\nfor |x| 6 π and n ≥ 2, we can get the following upper bound on Eq. (330):\ncos(x) 6 2∏\nn=1\n( 1− 4 x 2\n(2n− 1)2π2\n) = ( 1 − 4 x 2\nπ2\n) ( 1 − 4 x 2\n9 π2\n) (332)\n= 1 − 40 x 2\n9 π2 +\n16 x4\n9 π4 6 1 − 40 x\n2\n9 π2 +\n16 x2\n9 π2\n= 1 − 24 x 2 9 π2 6 1 − x 2 5 .\nThe last but one inequality uses x 6 π, which implies x/π 6 1. Thus Eq. 
(329) is proven.\n•Exponential storage capacity: the base c as a function of the parameter β, the radius of the sphere M , the probability p, and the dimension d of the space.\nWe express the number N of stored patterns by an exponential function with base c > 1 and an exponent linear in d. We derive constraints on he base c as a function of β, the radius of the sphere M , the probability p that all patterns can be stored, and the dimension d of the space. With β > 0, K > 0, and d ≥ 2 (to ensure a sphere), the following theorem gives our main result. Theorem A5 (Storage Capacity (Main): Random Patterns). We assume a failure probability 0 < p 6 1 and randomly chosen patterns on the sphere with radius M := K √ d− 1. We define\na := 2 d− 1 (1 + ln(2 β K2 p (d− 1))) , b := 2 K\n2 β\n5 ,\nc := b\nW0(exp(a + ln(b)) , (333)\nwhere W0 is the upper branch of the Lambert W function (Olver et al., 2010, (4.13)) and ensure c ≥ (\n2 √ p\n) 4 d−1\n. (334)\nThen with probability 1− p, the number of random patterns that can be stored is\nN ≥ √p c d−1 4 . (335)\nTherefore it is proven for c ≥ 3.1546 with β = 1, K = 3, d = 20 and p = 0.001 (a+ ln(b) > 1.27) and proven for c ≥ 1.3718 with β = 1, K = 1, d = 75, and p = 0.001 (a+ ln(b) < −0.94).\nProof. We consider the probability that the master inequality Eq. (311) is fulfilled:\nPr ( M2(1 − cos(αmin))) ≥ 2\nβ N +\n1 β ln ( 2 N2 β M2 )) ≥ 1 − p . (336)\nUsing Eq. (329), we have:\n1 − cos(αmin) ≥ 1\n5 α2min . (337)\nTherefore, with probability 1− p the storage capacity is largest N that fulfills\nPr ( M2\nα2min 5 ≥ 2 β N + 1 β ln ( 2 N2 β M2 )) ≥ 1 − p . (338)\nThis inequality is equivalent to\nPr ( N 2 d−1 αmin ≥ √ 5 N 2 d−1\nM\n( 2\nβ N +\n1 β ln ( 2 N2 β M2\n)) 12) ≥ 1 − p . (339)\nWe use Eq. (324) to obtain:\nPr ( N 2 d−1 αmin ≥ √ 5 N 2 d−1\nM\n( 2\nβ N +\n1 β ln ( 2 N2 β M2\n)) 12) (340)\n≥ 1 − κd−1 2 5 d−1 2 N2 M−(d−1)\n( 2\nβ N +\n1 β ln ( 2 N2 β M2 )) d−12 .\nFor Eq. (339) to be fulfilled, it is sufficient that\nκd−1 2 5 d−1 2 N2 M−(d−1)\n( 2\nβ N +\n1 β ln ( 2 N2 βM2 )) d−12 − p 6 0 . (341)\nIf we insert the assumption Eq. (334) of the theorem into Eq. (335), then we obtain N ≥ 2. We now apply the upper bound κd−1/2 < κd−1 < 1 from Eq. (325) and the upper bound 2βN 6 1 β from N ≥ 2 to inequality Eq. (341). In the resulting inequality we insert N = √pc d−14 to check whether it is fulfilled with this special value of N and obtain:\n5 d−1 2 p c d−1 2 M−(d−1)\n( 1\nβ +\n1 β ln ( 2 p c d−1 2 βM2 )) d−12 6 p . (342)\nDividing by p, inserting M = K √ d− 1, and exponentiation of the left and right side by 2d−1 gives:\n5 c\nK2 (d− 1)\n( 1\nβ +\n1 β ln ( 2 β c d−1 2 p K2 (d− 1) )) − 1 6 0 . (343)\nAfter some algebraic manipulation, this inequality can be written as\na c + c ln(c) − b 6 0 , (344) where we used\na := 2 d− 1 (1 + ln(2 β K2 p (d− 1))) , b := 2 K\n2 β\n5 .\nWe determine the value ĉ of c which makes the inequality Eq. (344) equal to zero. We solve\na ĉ + ĉ ln(ĉ) − b = 0 (345) for ĉ:\na ĉ + ĉ ln(ĉ) − b = 0 (346) ⇔ a + ln(ĉ) = b/ĉ ⇔ a + ln(b) + ln(ĉ/b) = b/ĉ ⇔ b/ĉ + ln(b/ĉ) = a + ln(b) ⇔ b/ĉ exp(b/ĉ) = exp(a + ln(b)) ⇔ b/ĉ = W0(exp(a + ln(b)))\n⇔ ĉ = b W0(exp(a + ln(b)) ,\nwhere W0 is the upper branch of the Lambert W function (see Def. A6). Hence, the solution is\nĉ = b\nW0(exp(a + ln(b)) . (347)\nThe solution exist, since the Lambert function W0(x) (Olver et al., 2010, (4.13)) is defined for −1/e < x and we have 0 < exp(a+ ln(b). Since ĉ fulfills inequality Eq. (344) and therefore also Eq. 
(342), we have a lower bound on the storage capacity N :\nN ≥ √p ĉ d−1 4 . (348)\nNext we aim at a lower bound on c which does not use the Lambert W function (Olver et al., 2010, (4.13)). Therefore, we upper bound W0(exp(a+ ln(b)) to obtain a lower bound on c, therefore, also a lower bound on the storage capacity N . The lower bound is given in the next corollary. Corollary A1. We assume a failure probability 0 < p 6 1 and randomly chosen patterns on the sphere with radius M = K √ d− 1. We define\na := 2 d− 1 (1 + ln(2 β K2 p (d− 1))) , b := 2 K\n2 β\n5 .\nUsing the omega constant Ω ≈ 0.56714329 we set\nc = b ln ( Ω exp(a + ln(b)) + 1 Ω (1 + Ω) )−1 for a + ln(b) 6 0 ,\nb (a + ln(b))− a + ln(b) a + ln(b) + 1 for a + ln(b) > 0 (349)\nand ensure\nc ≥ (\n2 √ p\n) 4 d−1\n. (350)\nThen with probability 1− p, the number of random patterns that can be stored is\nN ≥ √p c d−1 4 . (351)\nExamples are c ≥ 3.1444 for β = 1, K = 3, d = 20 and p = 0.001 (a + ln(b) > 1.27) and c ≥ 1.2585 for β = 1 K = 1, d = 75, and p = 0.001 (a+ ln(b) < −0.94).\nProof. We lower bound the c defined in Theorem A5. According to (Hoorfar & Hassani, 2008, Theorem 2.3) we have for any real u and y > 1e :\nW0(exp(u)) 6 ln\n( exp(u) + y\n1 + ln(y)\n) . (352)\nTo upper bound W0(x) for x ∈ [0, 1], we set y = 1/W0(1) = 1/Ω = exp Ω = − 1/ ln Ω ≈ 1.76322 , (353)\nwhere the Omega constant Ω is\nΩ = (∫ ∞ −∞\ndt\n(et − t)2 + π2\n)−1 − 1 ≈ 0.56714329 . (354)\nSee for these equations the special values of the Lambert W function in Lemma A31. We have the upper bound on W0:\nW0(exp(u)) 6 ln\n( exp(u) + 1/Ω\n1 + ln(1/Ω)\n) = ln ( Ω exp(u) + 1\nΩ(1 + Ω)\n) . (355)\nAt the right hand side of interval [0, 1], we have u = 0 and exp(u) = 1 and get:\nln\n( Ω 1 + 1\nΩ(1 + Ω)\n) = ln ( 1\nΩ\n) = − ln (Ω) = Ω = W0(1) . (356)\nTherefore, the bound is tight at the right hand side of of interval [0, 1], that is for exp(u) = 1, i.e. u = 0. We have derived an bound forW0(exp(u)) with exp(u) ∈ [0, 1] or, equivalently, u ∈ [−∞, 0]. We obtain from Hoorfar & Hassani (2008, Corollary 2.6) the following bound on W0(exp(u)) for 1 < exp(u), or, equivalently 0 < u:\nW0(exp(u)) 6 u u 1 + u . (357)\nA lower bound on ĉ is obtained via the upper bounds Eq. (357) and Eq. (355) on W0 as W0 > 0. We set u = a+ ln(b) and obtain\nW0(exp(a + ln(b))) 6 ln ( Ω exp(a + ln(b)) + 1 Ω (1 + Ω) )−1 for a + ln(b) 6 0 ,\n(a + ln(b))− a + ln(b) a + ln(b) + 1 for a + ln(b) > 0 (358)\nWe insert this bound into Eq. (347), the solution for ĉ, to obtain the statement of the theorem.\n•Exponential storage capacity: the dimension d of the space as a function of the parameter β, the radius of the sphere M , and the probability p.\nWe express the number N of stored patterns by an exponential function with base c > 1 and an exponent linear in d. We derive constraints on the dimension d of the space as a function of β, the radius of the sphere M , the probability p that all patterns can be stored, and the base of the exponential storage capacity. The following theorem gives this result. Theorem A6 (Storage Capacity (d computed): Random Patterns). We assume a failure probability 0 < p 6 1 and randomly chosen patterns on the sphere with radius M = K √ d− 1. We define\na := ln(c) 2 − K\n2 β\n5 c , b := 1 + ln\n( 2 p β K2 ) ,\nd = { 1 + 1a W (a exp(−b)) for a 6= 0 , 1 + exp(−b) for a = 0 , (359)\nwhere W is the Lambert W function (Olver et al., 2010, (4.13)). For 0 < a the function W is the upper branch W0 and for a < 0 we use the lower branch W−1. 
If we ensure that\nc ≥ (\n2 √ p\n) 4 d−1\n, − 1 e 6 a exp(−b) , (360)\nthen with probability 1− p, the number of random patterns that can be stored is\nN ≥ √p c d−1 4 . (361)\nProof. We consider the probability that the master inequality Eq. (311) is fulfilled:\nPr ( M2(1 − cos(αmin))) ≥ 2\nβ N +\n1 β ln ( 2 N2 β M2 )) ≥ 1 − p . (362)\nUsing Eq. (329), we have:\n1 − cos(αmin) ≥ 1\n5 α2min . (363)\nTherefore, with probability 1− p the storage capacity is largest N that fulfills\nPr ( M2\nα2min 5 ≥ 2 β N + 1 β ln ( 2 N2 β M2 )) ≥ 1 − p . (364)\nThis inequality is equivalent to\nPr ( N 2 d−1 αmin ≥ √ 5 N 2 d−1\nM\n( 2\nβ N +\n1 β ln ( 2 N2 β M2\n)) 12) ≥ 1 − p . (365)\nWe use Eq. (324) to obtain:\nPr ( N 2 d−1 αmin ≥ √ 5 N 2 d−1\nM\n( 2\nβ N +\n1 β ln ( 2 N2 β M2\n)) 12) (366)\n≥ 1 − κd−1 2 5 d−1 2 N2 M−(d−1)\n( 2\nβ N +\n1 β ln ( 2 N2 β M2 )) d−12 .\nFor Eq. (365) to be fulfilled, it is sufficient that\nκd−1 2 5 d−1 2 N2 M−(d−1)\n( 2\nβ N +\n1 β ln ( 2 N2 βM2 )) d−12 − p 6 0 . (367)\nIf we insert the assumption Eq. (360) of the theorem into Eq. (361), then we obtain N ≥ 2. We now apply the upper bound κd−1/2 < κd−1 < 1 from Eq. (325) and the upper bound 2βN 6 1 β from N ≥ 2 to inequality Eq. (367). In the resulting inequality we insert N = √pc d−14 to check whether it is fulfilled with this special value of N and obtain:\n5 d−1 2 p c d−1 2 M−(d−1)\n( 1\nβ +\n1 β ln ( 2 p c d−1 2 βM2 )) d−12 6 p . (368)\nDividing by p, inserting M = K √ d− 1, and exponentiation of the left and right side by 2d−1 gives:\n5 c\nK2 (d− 1)\n( 1\nβ +\n1 β ln ( 2 β c d−1 2 p K2 (d− 1) )) − 1 6 0 . (369)\nThis inequality Eq. (369) can be reformulated as: 1 + ln ( 2 p β c d−1 2 K2 (d− 1) ) − (d− 1) K 2 β\n5 c 6 0 . (370)\nUsing\na := ln(c) 2 − K\n2 β\n5 c , b := 1 + ln\n( 2 p β K2 ) ,\n(371)\nwe write inequality Eq. (370) as\nln(d− 1) + a (d− 1) + b 6 0 . (372)\nWe determine the value d̂ of d which makes the inequality Eq. (372) equal to zero. We solve\nln(d̂− 1) + a (d̂− 1) + b = 0 . (373)\nfor d̂\nFor a 6= 0 we have ln(d̂− 1) + a (d̂− 1) + b = 0 (374)\n⇔ a (d̂− 1) + ln(d̂− 1) = − b ⇔ (d̂− 1) exp(a (d̂− 1)) = exp(−b) ⇔ a (d̂− 1) exp(a (d̂− 1)) = a exp(−b) ⇔ a (d̂− 1) = W (a exp(−b))\n⇔ d̂ − 1 = 1 a W (a exp(−b)) ⇔ d̂ = 1 + 1 a W (a exp(−b)) ,\nwhere W is the Lambert W function (see Def. A6). For a > 0 we have to use the upper branch W0 of the Lambert W function and for a < 0 we use the lower branch W−1 of the Lambert W function (Olver et al., 2010, (4.13)). We have to ensure that −1/e 6 a exp(−b) for a solution to exist. For a = 0 we have d̂ = 1 + exp(−b). Hence, the solution is\nd̂ = 1 + 1\na W (a exp(−b)) . (375)\nSince d̂ fulfills inequality Eq. (369) and therefore also Eq. (368), we have a lower bound on the storage capacity N :\nN ≥ √p ĉ d−1 4 . (376)\nCorollary A2. We assume a failure probability 0 < p 6 1 and randomly chosen patterns on the sphere with radius M = K √ d− 1. We define\na := ln(c) 2 − K\n2 β\n5 c , b := 1 + ln\n( 2 p β K2 ) ,\nd = 1 + 1\na (− ln(−a) + b) , (377)\nand ensure\nc ≥ (\n2 √ p\n) 4 d−1\n, − 1 e 6 a exp(−b) , a < 0 , (378)\nthen with probability 1− p, the number of random patterns that can be stored is\nN ≥ √p c d−1 4 . (379)\nSetting β = 1, K = 3, c = 2 and p = 0.001 yields d < 24.\nProof. For a < 0 the Eq. (359) from Theorem (A6) can be written as\nd = 1 + W−1(a exp(−b))\na = 1 + W−1(− exp (−(− ln(−a) + b− 1)− 1)) a\n(380)\nFrom Alzahrani & Salem (2018, Theorem 3.1) we get the following bound on W−1:\n− e e− 1 (u+ 1) < W−1(− exp(−u− 1)) < − (u+ 1) . (381)\nfor u > 0. We apply Eq. 
(381) to Eq. (380) with u = − ln(−a) + b− 1. Since a < 0 we get\nd > 1 + − ln(−a) + b\na . (382)\n•Storage capacity for the expected minimal separation instead of the probability that all patterns can be stored. In contrast to the previous paragraph, we want to argue about the storage capacity for the expected minimal separation. Therefore, we will use the following bound on the expectation of αmin (minimal angle), which gives also a bound on the expected of ∆min (minimal separation): Lemma A16 (Proposition 3.6 in Brauchart et al. (2018)). We have the following lower bound on the expectation of αmin:\nE [ N 2 d−1 αmin ] ≥ ( Γ(d2 )\n2(d− 1) √ π Γ(d−12 )\n)− 1d−1 Γ(1 + 1\nd− 1 )\nd− 1 d−1\nΓ(2 + 1d−1 ) := Cd−1. (383)\nThe bound is valid for all N ≥ 2 and d ≥ 2.\nLet us start with some preliminary estimates. First of all we need some asymptotics for the constant Cd−1 in Eq. (383):\nLemma A17. The following estimate holds for d ≥ 2:\nCd ≥ 1 − ln(d+ 1)\nd . (384)\nProof. The recursion formula for the Gamma function is (Olver et al., 2010, (5.5.1)):\nΓ(x+ 1) = x Γ(x) . (385)\nWe use Eq. (325) and the fact that d 1 d ≥ 1 for d ≥ 1 to obtain:\nCd ≥ (2 √ d) 1 d Γ(1 + 1\nd )\n(d+ 1)− 1 d\nΓ(2 + 1d ) = (2\n√ d) 1 d (d+ 1)− 1 d\n1− 1d > (d+ 1)\n1 d (386)\n= exp(−1 d ln(d+ 1)) ≥ 1 − 1 d ln(d+ 1) ,\nwhere in the last step we used the elementary inequality exp(x) ≥ 1 + x, which follows from the mean value theorem.\nThe next theorem states the number of stored patterns for the expected minimal separation.\nTheorem A7 (Storage Capacity (expected separation): Random Patterns). We assume patterns on the sphere with radius M = K √ d− 1 that are randomly chosen. Then for all values c ≥ 1 for which\n1 5 (d− 1) K2 c−1(1 − ln(d− 1) (d− 1) )2 ≥ 2\nβ c d−1 4\n+ 1 β ln ( 2 c d−1 2 β (d− 1) K2 ) (387)\nholds, the number of stored patterns for the expected minimal separation is at least\nN = c d−1 4 . (388)\nThe inequality Eq. (387) is e.g. fulfilled with β = 1, K = 3, c = 2 and d ≥ 17.\nProof. Instead of considering the probability that the master inequality Eq. (311) is fulfilled we now consider whether this inequality is fulfilled for the expected minimal distance. We consider the expectation of the minimal distance ∆min:\nE[∆min] = E[M 2(1 − cos(αmin)))] = M2(1 − E[cos(αmin))]) . (389)\nFor this expectation, the master inequality Eq. (311) becomes\nM2(1 − E[cos(αmin))]) ≥ 2\nβ N +\n1 β ln ( 2 N2 β M2 ) . (390)\nWe want to find the largest N that fulfills this inequality.\nWe apply Eq. (329) and Jensen’s inequality to deduce the following lower bound:\n1 − E[cos(αmin)] ≥ 1 5 E [ α2min ] ≥ 1 5 E[αmin] 2 . (391)\nNow we use Eq. (383) and Eq. (384) to arrive at\nE[αmin] 2 ≥ N− 4 d−1 E[N 2 d−1 αmin] 2 ≥ N− 4 d−1 C2d−1 ≥ N− 4 d−1 (1− ln(d− 1) (d− 1) )2 , (392)\nfor sufficiently large d. Thus in order to fulfill Eq. (390), it is enough to find values that satisfy Eq. (387).\nA.1.6.2 Retrieval of Patterns with One Update and Small Retrieval Error. Retrieval of a pattern xi for fixed point x∗i and query ξ is defined via an by ‖f(ξ) − x∗i ‖ < , that is, the update is -close to the fixed point. The update rule retrieves a pattern with one update for well separated patterns, that is, ∆i is large. Theorem A8 (Pattern Retrieval with One Update). With query ξ, after one update the distance of the new point f(ξ) to the fixed point x∗i is exponentially small in the separation ∆i. 
The precise bounds, using the Jacobian $J = \frac{\partial f(\xi)}{\partial \xi}$ and its value $J^m$ in the mean value theorem, are:
$$\|f(\xi) - x_i^*\| \le \|J^m\|_2 \, \|\xi - x_i^*\| , \tag{393}$$
$$\|J^m\|_2 \le 2 \beta N M^2 (N-1) \exp\left(-\beta \left( \Delta_i - 2 \max\{\|\xi - x_i\|, \|x_i^* - x_i\|\} M \right)\right) . \tag{394}$$
For given $\epsilon$ and sufficiently large $\Delta_i$, we have $\|f(\xi) - x_i^*\| < \epsilon$, that is, retrieval with one update.

Proof. From Eq. (180) we have
$$\|J^m\|_2 \le 2 \beta N M^2 (N-1) \exp\left(-\beta \left( \Delta_i - 2 \max\{\|\xi - x_i\|, \|x_i^* - x_i\|\} M \right)\right) . \tag{395}$$
After every iteration the mapped point $f(\xi)$ is closer to the fixed point $x_i^*$ than the original point $\xi$:
$$\|f(\xi) - x_i^*\| \le \|J^m\|_2 \, \|\xi - x_i^*\| . \tag{396}$$
For given $\epsilon$ and sufficiently large $\Delta_i$, we have $\|f(\xi) - x_i^*\| < \epsilon$, since $\|J^m\|_2$ goes exponentially fast to zero with increasing $\Delta_i$.

We want to estimate how large $\Delta_i$ is. For $x_i$ we have:
$$\Delta_i = \min_{j, j \ne i} \left( x_i^T x_i - x_i^T x_j \right) = x_i^T x_i - \max_{j, j \ne i} x_i^T x_j . \tag{397}$$
To estimate how large $\Delta_i$ is, assume vectors $x \in \mathbb{R}^d$ and $y \in \mathbb{R}^d$ that have standard normally distributed components. The expected value of the separation of two points with normally distributed components is
$$\mathrm{E}\left[ x^T x - x^T y \right] = \sum_{j=1}^d \mathrm{E}[x_j^2] - \sum_{j=1}^d \mathrm{E}[x_j] \, \mathrm{E}[y_j] = d . \tag{398}$$
The variance of the separation of two points with normally distributed components is
$$\mathrm{Var}\left[ x^T x - x^T y \right] = \mathrm{E}\left[ \left( x^T x - x^T y \right)^2 \right] - d^2 \tag{399}$$
$$= \sum_{j=1}^d \mathrm{E}[x_j^4] + \sum_{j=1, k=1, k \ne j}^d \mathrm{E}[x_j^2] \, \mathrm{E}[x_k^2] - 2 \sum_{j=1}^d \mathrm{E}[x_j^3] \, \mathrm{E}[y_j] - 2 \sum_{j=1, k=1, k \ne j}^d \mathrm{E}[x_j^2] \, \mathrm{E}[x_k] \, \mathrm{E}[y_k]$$
$$+ \sum_{j=1}^d \mathrm{E}[x_j^2] \, \mathrm{E}[y_j^2] + \sum_{j=1, k=1, k \ne j}^d \mathrm{E}[x_j] \, \mathrm{E}[y_j] \, \mathrm{E}[x_k] \, \mathrm{E}[y_k] - d^2$$
$$= 3d + d(d-1) + d - d^2 = 3d .$$
The expected value for the separation of two random vectors gives:
$$\|J^m\|_2 \le 2 \beta N M^2 (N-1) \exp\left(-\beta \left( d - 2 \max\{\|\xi - x_i\|, \|x_i^* - x_i\|\} M \right)\right) . \tag{400}$$
For the exponential storage we set $M = 2\sqrt{d-1}$. We see that the Lipschitz constant $\|J^m\|_2$ decreases exponentially with the dimension. Therefore, $\|f(\xi) - x_i^*\|$ is exponentially small after just one update, and hence the fixed point is well retrieved after one update.

The retrieval error decreases exponentially with the separation $\Delta_i$.

Theorem A9 (Exponentially Small Retrieval Error). The retrieval error $\|f(\xi) - x_i\|$ of pattern $x_i$ is bounded by
$$\|f(\xi) - x_i\| \le 2 (N-1) \exp\left(-\beta \left( \Delta_i - 2 \max\{\|\xi - x_i\|, \|x_i^* - x_i\|\} M \right)\right) M \tag{401}$$
and for $\|x_i - x_i^*\| \le \frac{1}{2 \beta M}$ together with $\|x_i - \xi\| \le \frac{1}{2 \beta M}$ by
$$\|x_i - x_i^*\| \le 2 e (N-1) M \exp(-\beta \Delta_i) . \tag{402}$$

Proof. We compute the retrieval error, which is just $\|f(\xi) - x_i\|$. From Lemma A4 we have
$$\|x_i - f(\xi)\| \le 2 \epsilon M , \tag{403}$$
and from Eq. (179) we have
$$\epsilon = (N-1) \exp\left(-\beta \left( \Delta_i - 2 \max\{\|\xi - x_i\|, \|x_i^* - x_i\|\} M \right)\right) . \tag{404}$$
For $\|x_i - x_i^*\| \le \frac{1}{2 \beta M}$ and $\|x_i - \xi\| \le \frac{1}{2 \beta M}$, Eq. (404) gives
$$\epsilon \le e (N-1) \exp(-\beta \Delta_i) . \tag{405}$$
Combining Eq. (403) with Eq. (404) and Eq. (405) yields Eq. (401) and Eq. (402).

A.1.7 LEARNING ASSOCIATIONS

We consider three cases of learning associations, i.e. three cases of how sets are associated. (i) None of the sets is mapped into an associative space. The raw state pattern $r_n$ is the state (query) pattern $\xi_n$, i.e. $\xi_n = r_n$, and the raw stored pattern $y_s$ is the stored pattern (key), i.e. $x_s = y_s$. (ii) Either one of the sets is mapped to the space of the other set, or an association matrix is learned. (iia) The state patterns are equal to the raw patterns, i.e. $\xi_n = r_n$, and raw stored patterns are mapped via $W$ to the space of the state patterns, i.e. $x_s = W y_s$. (iib) The stored patterns are equal to the raw patterns, i.e. $x_s = y_s$, and raw state patterns are mapped via $W$ to the space of the stored patterns, i.e. $\xi_n = W^T r_n$. (iic) The matrix $W$ is an association matrix.
We will compute the derivative of the new state pattern with respect toW , which is valid for all sub-cases (iib)–(iic). (iii) Both set of patterns are mapped in a common associative space. A raw state pattern rn is mapped byWQ to a state pattern (query) ξn, that is ξn = WQrn. A raw stored pattern ys is mapped viaWK to stored pattern (key) xs, that is xs = WKys. We will compute the derivative of the new state pattern with respect to bothWQ andWK .\nA.1.7.1 Association of Raw Patterns – No Mapping in an Associative Space. The sets are associated via their raw patterns, i.e. the raw state pattern rn is the state (query) pattern ξn, i.e. ξn = rn, and raw stored pattern ys is the stored pattern (key), i.e. xs = ys. There is no mapping in an associative space.\nThe update rule is\nξnew = X p , (406)\nwhere we used\np = softmax(β XT ξ) . (407)\nThe derivative with respect to ξ is\n∂ξnew\n∂ξ = β X\n( diag(p)− ppT ) XT (408)\nThe derivative with respect toX is\n∂aT ξnew\n∂X = a pT + β X\n( diag(p)− ppT ) (ξTa) . (409)\nThese derivatives allow to apply the chain rule if a Hopfield layer is integrated into a deep neural network." }, { "heading": "A.1.7.2 Learning an Association Matrix – Only One Set is Mapped in an Associative Space.", "text": "Only one of the sets R or Y is mapped in the space of the patterns of the other set. Case (a): the state patterns are equal to the raw patterns ξn = rn and raw stored patterns are mapped via W to the space of the state patterns, i.e. xs = Wys. Case (b): the stored patterns are equal to the raw patterns xs = ys and raw state patterns are mapped via W to the space of the stored patterns, i.e. ξn = W\nTrn. Case (c): the matrix W associates the sets R and Y . This case also includes that W T = W TKWQ, which is treated in next subsection. The next subsection focuses on a low rank approximation of W by defining the dimension dk of associative space and use the matrices W TK andWQ to defineW , or equivalently to mapR and Y into the associative space.\nFrom a mathematical point of view all these case are equal as they lead to the same update rule. Therefore, we consider in the following Case (a) with xs = Wys and ξn = rn. Still, the following formula are valid for all three cases (a)–(c).\nThe update rule is\nξnew = W Y p , (410)\nwhere we used\np = softmax(β Y TW T ξ) . (411)\nWe consider the state (query) pattern ξ with result ξnew:\nξnew = W Y p = W Y softmax(β Y TW T ξ) (412)\nFor multiple updates this update rule has to be used. However for a single update, or the last update we consider a simplified update rule.\nSince new state vector ξnew is projected by a weight matrixWV to another vector, we consider the simplified update rule:\nξnew = Y p = Y softmax(β Y TW T ξ) (413)\nThe derivative with respect toW is\n∂aT ξnew\n∂W =\n∂ξnew\n∂W\n∂aT ξnew\n∂ξnew =\n∂ξnew\n∂(W T ξ)\n∂(W T ξ)\n∂W\n∂aT ξnew\n∂ξnew . (414)\n∂ξnew\n∂(W T ξ) = β Y\n( diag(p)− ppT ) Y T (415)\n∂aT ξnew\n∂ξnew = a . (416)\nWe have the product of the 3-dimensional tensor ∂(W T ξ)\n∂W with the vector a which gives a 2- dimensional tensor, i.e. a matrix:\n∂(W T ξ)\n∂W\n∂aT ξnew\n∂ξnew =\n∂(W T ξ)\n∂W a = ξTaI . (417)\n∂aT ξnew\n∂W = β Y\n( diag(p)− ppT ) Y T (ξTa) = J (ξTa) , (418)\nwhere J is the Jacobian of the update rule defined in Eq. (59).\nTo obtain the derivative of the full update rule Eq. (412) we have to add the term\na pTY T (419)\nand include the factorW to get\n∂aT ξnew\n∂W = a pTY T + βW Y\n( diag(p)− ppT ) Y T (ξTa) (420)\n= a pTY T + W J (ξTa) ." 
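As a numerical sanity check on Eq. (420), the sketch below (ours; the dimensions are arbitrary, and the explicit outer-product arrangement of the two gradient terms is our own spelling-out of the compact notation above) compares the analytic gradient of $a^T \xi^{\mathrm{new}}$ with respect to $W$ against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)
d_xi, d_y, N, beta = 3, 5, 7, 0.8
Y = rng.normal(size=(d_y, N))     # raw stored patterns y_s as columns
W = rng.normal(size=(d_xi, d_y))  # association matrix, x_s = W y_s
xi = rng.normal(size=d_xi)        # state (query) pattern
a = rng.normal(size=d_xi)         # arbitrary vector contracting xi_new

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def g(W):
    # full update rule Eq. (412): xi_new = W Y softmax(beta Y^T W^T xi)
    return a @ (W @ Y @ softmax(beta * Y.T @ W.T @ xi))

# analytic gradient, Eq. (420), with the outer products written explicitly:
# grad = a (Y p)^T + xi (a^T W J), where J = beta Y (diag(p) - p p^T) Y^T
p = softmax(beta * Y.T @ W.T @ xi)
J = beta * Y @ (np.diag(p) - np.outer(p, p)) @ Y.T
grad = np.outer(a, Y @ p) + np.outer(xi, a @ W @ J)

# central finite differences
num, eps = np.zeros_like(W), 1e-6
for i in range(d_xi):
    for j in range(d_y):
        E = np.zeros_like(W)
        E[i, j] = eps
        num[i, j] = (g(W + E) - g(W - E)) / (2 * eps)

print("max abs deviation:", np.abs(grad - num).max())  # roughly 1e-8 or smaller
```

The agreement of both terms confirms that the chain rule through the softmax contributes exactly the Jacobian $J$ of Eq. (59), so a Hopfield layer with a learned association matrix can be trained end-to-end.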
}, { "heading": "A.1.7.3 Learning Two Association Mappings – Both Sets are Mapped in an Associative Space.", "text": "Both sets R and Y are mapped in an associative space. Every raw state pattern rn is mapped via WQ to a state pattern (query) ξn = WQrn. Every raw stored pattern ys is mapped via WK to a stored pattern (key) xs = WKys. In the last subsection we considered a single matrix W . For W T = W TKWQ we have the case of the last subsection. However in this subsection we are looking for a low rank approximation ofW . Toward this end we define the dimension dk of associative space and use the matricesW TK andWQ to map to the associative space.\nThe update rule is\nξnew = X p , (421)\nwhere we used\np = softmax(β XT ξ) . (422)\nWe consider raw state patterns rn that are mapped to state patterns ξn = WQrn with QT = Ξ = WQR and raw stored pattern ys that are mapped to stored patterns xs = WKys with KT = X = WKY . The update rule is\nξnew = WK Y p = WK Y softmax(β Y TW TKWQ r) . (423)\nSince new state vector ξnew is projected by a weight matrixWV to another vector, we consider the simplified update rule:\nξnew = Y p = Y softmax(β Y TW TKWQ r) . (424)\nFor the simplified update rule, the vector ξnew does not live in the associative space but in the space of raw stored pattern y. HoweverWK would map it to the associative space.\n•Derivative with respect toWQ. The derivative with respect toWQ is\n∂aT ξnew\n∂WQ =\n∂ξnew\n∂WQ\n∂aT ξnew\n∂ξnew =\n∂ξnew\n∂(WQ r)\n∂(WQ r)\n∂WQ\n∂aT ξnew\n∂ξnew . (425)\n∂ξnew\n∂(WQ r) = β Y\n( diag(p)− ppT ) Y TW TK (426)\n∂aT ξnew\n∂ξnew = a . (427)\nWe have the product of the 3-dimensional tensor ∂(WQr)∂WQ with the vector a which gives a 2- dimensional tensor, i.e. a matrix:\n∂(WQ r)\n∂WQ\n∂aT ξnew\n∂ξnew =\n∂(WQ r)\n∂WQ a = rTa I . (428)\n∂aT ξnew\n∂WQ = β Y\n( diag(p)− ppT ) Y T W TK (r Ta) = JW TK (r Ta) , (429)\nwhere J is the Jacobian of the update rule defined in Eq. (59).\nTo obtain the derivative of the full update rule Eq. (423) we have to include the factorWK , then get\n∂aT ξnew\n∂WQ = βWK Y\n( diag(p)− ppT ) Y T W TK (r Ta) = WK JW T K (r Ta) . (430)\n•Derivative with respect toWK . The derivative with respect toWK is\n∂aT ξnew\n∂WK =\n∂ξnew\n∂WK\n∂aT ξnew\n∂ξnew =\n∂ξnew\n∂(W TKWQ r)\n∂(W TKWQ r)\n∂WK\n∂aT ξnew\n∂ξnew . (431)\n∂ξnew\n∂(W TKWQ r) = β Y\n( diag(p)− ppT ) Y T (432)\n∂aT ξnew\n∂ξnew = a . (433)\nWe have the product of the 3-dimensional tensor ∂(Wr)∂WK with the vector awhich gives a 2-dimensional tensor, i.e. a matrix:\n∂(W TKWQ r)\n∂WK\n∂aT ξnew\n∂ξnew =\n∂(W TKWQ r)\n∂WK a = W TQ r Ta I . (434)\n∂aT ξnew\n∂WK = β Y\n( diag(p)− ppT ) Y T (W TQ r Ta) = J (W TQ r Ta) , (435)\nwhere J is the Jacobian of the update rule defined in Eq. (59).\nTo obtain the derivative of the full update rule Eq. (423) we have to add the term\na pTY T (436)\nand to include the factorWK , then get\n∂aT ξnew\n∂WK = a pTY T + βWK Y\n( diag(p)− ppT ) Y T (W TQ r Ta) (437)\n= a pTY T + WK J (W T Q r Ta) .\nA.1.8 INFINITE MANY PATTERNS AND FORGETTING PATTERNS\nIn the next subsection we show how the new Hopfield networks can be used for auto-regressive tasks by causal masking. In the following subsection, we introduce forgetting to the new Hopfield networks by adding a negative value to the softmax which is larger if the pattern was observed more in the past.\nA.1.8.1 Infinite Many Patterns. The new Hopfield networks can be used for auto-regressive tasks, that is time series prediction and similar. 
Causal masking masks out the future by a large negative value in the softmax.\nWe assume to have infinite many stored patterns (keys) x1,x2, . . . that are represented by the infinite matrix\nX = (x1,x2, . . . , ) . (438)\nThe pattern index is now a time index, that is, we observe xt at time t.\nThe pattern matrix at time t is\nXt = (x1,x2, . . . ,xt) . (439)\nThe query at time t is ξt.\nFor Mt = max16i6t ‖xt‖, the energy function at time t is Et\nEt = − lse(β,XTt ξt) + 1\n2 ξTt ξt + β\n−1 ln t + 1\n2 M2t (440)\n= − β−1 ln ( t∑ i=1 exp(βxTi ξt) ) + 1 2 ξTt ξt + β −1 ln t + 1 2 M2t . (441)\nThe update rule is\nξnewt = Xt pt = Xt softmax(β X T t ξt) , (442)\nwhere we used\npt = softmax(β X T t ξt) . (443)\nWe can use an infinite pattern matrix with an infinite softmax when using causal masking. The pattern matrix at time t is\nXt = (x1,x2, . . . ,xt,−αξt,−αξt, . . .) , (444) with the query ξt and α→∞. The energy function at time t is Et\nEt = − lse(β,XTt ξt) + 1\n2 ξTt ξt + β\n−1 ln t + 1\n2 M2t (445)\n= − β−1 ln t∑ i=1 exp(βxTi ξt) + bαc∑ i=t+1 exp(−βα‖ξt‖2) + 1 2 ξTt ξt + (446)\nβ−1 ln t + 1\n2 M2t .\nFor α→∞ and ‖ξt‖ > 0 this becomes\nEt = − lse(β,XTt ξt) + 1\n2 ξTt ξt + β\n−1 ln t + 1\n2 M2t (447)\n= − β−1 ln ( t∑ i=1 exp(βxTi ξt) ) + 1 2 ξTt ξt + β −1 ln t + 1 2 M2t . (448)\nA.1.8.2 Forgetting Patterns. We introduce forgetting to the new Hopfield networks by adding a negative value in the softmax which increases with patterns that are more in the past.\nWe assume to have infinite many patterns x1,x2, . . . that are represented by the infinite matrix\nX = (x1,x2, . . . , ) . (449)\nThe pattern index is now a time index, that is, we observe xt at time t.\nThe pattern matrix at time t is\nXt = (x1,x2, . . . ,xt) . (450)\nThe query at time t is ξt.\nThe energy function with forgetting parameter γ at time t is Et\nEt = − lse(β,XTt ξt − γ(t− 1, t− 2, . . . , 0)T ) + 1\n2 ξTt ξt + β\n−1 ln t + 1\n2 M2t (451)\n= − β−1 ln ( T∑ i=1 exp(βxTi ξt − γ(t− i)) ) + 1 2 ξTt ξt + β −1 ln t + 1 2 M2t . (452)\nThe update rule is\nξnewt = Xt pt = Xt softmax(βX T t ξt) , (453)\nwhere we used\npt = softmax(βX T t ξt) . (454)" }, { "heading": "A.1.9 NUMBER OF SPURIOUS STATES", "text": "The energy E is defined as\nE = − lse(β,XT ξ) + 1 2 ξT ξ + β−1 lnN + 1 2 M2 (455)\n= − β−1 ln ( N∑ i=1 exp(βxTi ξ) ) + β−1 lnN + 1 2 ξT ξ + 1 2 M2 . (456)\nSince the negative exponential function is strict monotonic decreasing, exp(−E) has minima, where E has maxima, and has maxima, where as has minima E.\nexp(−E) = exp(lse(β,XT ξ)) exp(− 1 2 ξT ξ) C (457)\n= ( N∑ i=1 exp(βxTi ξ) )β−1 exp(− 1 2 ξT ξ) C\n= ( N∑ i=1 exp(βxTi ξ) )β−1 ( exp(− β 1 2 ξT ξ) )β−1 C\n= ( N∑ i=1 exp(β (xTi ξ − 1 2 ξT ξ)) )β−1 C\n= ( N∑ i=1 exp( 1 2 β xTi xi − 1 2 β (ξ − xi)T (ξ − xi)) )β−1 C\n= ( N∑ i=1 λ(xi, β) G(ξ;xi, β −1 I) )β−1 C ,\nwhere C is a positive constant, λ(xi, β) = exp( 12βx T i xi) and G(ξ;xi, β −1I) is the Gaussian with mean xi and covariance matrix β−1I .\nSince C is a positive constant and xβ −1 = exp(β−1 lnx) is strict monotonic for positive x, the minima of E are the maxima of\nN∑ i=1 λ(xi, β) G(ξ;xi, β −1 I) . (458)\nIn Carreira-Perpiñán & Williams (2003) it was shown that Eq. (458) can have more than N modes, that is, more than N maxima." }, { "heading": "A.2 PROPERTIES OF SOFTMAX, LOG-SUM-EXPONENTIAL, LEGENDRE TRANSFORM, LAMBERT W FUNCTION", "text": "For β > 0, the softmax is defined as\nDefinition A1 (Softmax).\np = softmax(βx) (459)\npi = [softmax(βx)]i = exp(βxi)∑ k exp(βxk) . 
(460)\nWe also need the log-sum-exp function (lse), defined as\nDefinition A2 (Log-Sum-Exp Function).\nlse(β,x) = β−1 ln ( N∑ i=1 exp(βxi) ) . (461)\nWe can formulate the lse in another base:\nβa = β\nln a , (462)\nlse(β,x) = β−1 ln ( N∑ i=1 exp(β xi) ) (463)\n= (βa ln a) −1 ln ( N∑ i=1 exp(βa ln a xi) )\n= (βa) −1 loga ( N∑ i=1 aβa xi ) .\nIn particular, the base a = 2 can be used to speed up computations.\nNext, we give the relation between the softmax and the lse function. Lemma A18. The softmax is the gradient of the lse:\nsoftmax(βx) = ∇xlse(β,x) . (464)\nIn the next lemma we report some important properties of the lse function. Lemma A19. We define\nL := zTx − β−1 N∑ i=1 zi ln zi (465)\nwith L ≥ zTx. The lse is the maximum of L on the N -dimensional simplex D with D = {z |∑ i zi = 1, 0 6 zi}:\nlse(β,x) = max z∈D zTx − β−1 N∑ i=1 zi ln zi . (466)\nThe softmax p = softmax(βx) is the argument of the maximum of L on the N -dimensional simplex D with D = {z | ∑ i zi = 1, 0 6 zi}:\np = softmax(βx) = arg max z∈D zTx − β−1 N∑ i=1 zi ln zi . (467)\nProof. Eq. (466) is obtained from Equation (8) in Gao & Pavel (2017) and Eq. (467) from Equation (11) in Gao & Pavel (2017).\nFrom a physical point of view, the lse function represents the “free energy” in statistical thermodynamics (Gao & Pavel, 2017).\nNext we consider the Jacobian of the softmax and its properties. Lemma A20. The Jacobian Js of the softmax p = softmax(βx) is\nJs = ∂softmax(βx)\n∂x = β\n( diag(p)− ppT ) , (468)\nwhich gives the elements\n[Js]ij = { βpi(1− pi) for i = j −βpipj for i 6= j . (469)\nNext we show that Js has eigenvalue 0. Lemma A21. The Jacobian Js of the softmax function p = softmax(βx) has a zero eigenvalue with eigenvector 1.\nProof.\n[Js1]i = β pi(1− pi) − ∑ j,j 6=i pipj = β pi(1 − ∑ j pj) = 0 . (470)\nNext we show that 0 is the smallest eigenvalue of Js, therefore Js is positive semi-definite but not (strict) positive definite. Lemma A22. The Jacobian Js of the softmax p = softmax(βξ) is symmetric and positive semidefinite.\nProof. For an arbitrary z, we have\nzT ( diag(p)− ppT ) z = ∑ i piz 2 i − (∑ i pizi )2 (471)\n= (∑ i piz 2 i ) (∑ i pi ) − (∑ i pizi )2 ≥ 0 .\nThe last inequality hold true because the Cauchy-Schwarz inequality says (aTa)(bT b) ≥ (aT b)2, which is the last inequality with ai = zi √ pi and bi = √ pi. Consequently ( diag(p)− ppT ) is positive semi-definite.\nAlternatively ∑ i piz 2 i − ( ∑ i pizi)\n2 can be viewed as the expected second moment minus the mean squared which gives the variance that is larger equal to zero.\nThe Jacobian is 0 < β times a positive semi-definite matrix, which is a positive semi-definite matrix.\nMoreover, the softmax is a monotonic map, as described in the next lemma. Lemma A23. The softmax softmax(βx) is monotone for β > 0, that is,\n(softmax(βx) − softmax(βx′))T (x − x′) ≥ 0 . (472)\nProof. We use the version of mean value theorem Lemma A32 with the symmetric matrix Jms =∫ 1 0 Js(λx + (1− λ)x′) dλ:\nsoftmax(x) − softmax(x′) = Jms (x − x′) . (473)\nTherefore\n(softmax(x) − softmax(x′))T (x − x′) = (x − x′)T Jms (x − x′) ≥ 0 , (474)\nsince Jms is positive semi-definite. For all λ the Jacobians Js(λx + (1 − λ)x′) are positive semi-definite according to Lemma A22. Since\nxT Jms x = ∫ 1 0 xT Js(λx + (1− λ)x′) x dλ ≥ 0 (475)\nis an integral over positive values for every x, Jms is positive semi-definite, too.\nNext we give upper bounds on the norm of Js. Lemma A24. 
For a softmax p = softmax(βx) with m = maxi pi(1− pi), the spectral norm of the Jacobian Js of the softmax is bounded:\n‖Js‖2 6 2 m β , (476) ‖Js‖1 6 2 m β , (477) ‖Js‖∞ 6 2 m β . (478)\nIn particular everywhere holds\n‖Js‖2 6 1\n2 β . (479)\nIf pmax = maxi pi ≥ 1− ≥ 0.5, then for the spectral norm of the Jacobian holds\n‖Js‖2 6 2 β − 2 2 β < 2 β . (480)\nProof. We consider the maximum absolute column sum norm ‖A‖1 = maxj ∑ i |aij | (481)\nand the maximum absolute row sum norm ‖A‖∞ = maxi ∑ j |aij | . (482) We have forA = Js = β ( diag(p)− ppT\n) ∑ j |aij | = β pi(1− pi) + ∑ j,j 6=i pipj = β pi (1 − 2pi + ∑ j pj) (483)\n= 2 β pi (1− pi) 6 2 m β ,∑ i |aij | = β pj (1− pj) + ∑ i,i6=j pjpi = β pj (1 − 2pj + ∑ i pi) (484)\n= 2 β pj (1− pj) 6 2 m β .\nTherefore, we have\n‖Js‖1 6 2 m β , (485) ‖Js‖∞ 6 2 m β , (486)\n‖Js‖2 6 √ ‖Js‖1‖Js‖∞ 6 2 m β . (487)\nThe last inequality is a direct consequence of Hölder’s inequality.\nFor 0 6 pi 6 1, we have pi(1− pi) 6 0.25. Therefore, m 6 0.25 for all values of pi. If pmax ≥ 1 − ≥ 0.5 ( 6 0.5), then 1 − pmax 6 and for pi 6= pmax pi 6 . The derivative ∂x(1− x)/∂x = 1− 2x > 0 for x < 0.5, therefore x(1− x) increases with x for x < 0.5. Using x = 1− pmax and for pi 6= pmax x = pi, we obtain pi(1− pi) 6 (1− ) for all i. Consequently, we have m 6 (1− ).\nUsing the bounds on the norm of the Jacobian, we give some Lipschitz properties of the softmax function. Lemma A25. The softmax function p = softmax(βx) is (β/2)-Lipschitz. The softmax function p = softmax(βx) is (2βm)-Lipschitz in a convex environment U for which m = maxx∈U maxi pi(1− pi). For pmax = minx∈U maxi pi = 1− , the softmax function p = softmax(βx) is (2β )-Lipschitz. For β < 2m, the softmax p = softmax(βx) is contractive in U on which m is defined.\nProof. The version of mean value theorem Lemma A32 states for the symmetric matrix Jms =∫ 1 0 J(λx+ (1− λ)x′) dλ:\nsoftmax(x) − softmax(x′) = Jms (x − x′) . (488)\nAccording to Lemma A24 for all x̃ = λx+ (1− λ)x′)\n‖Js(x̃)‖2 6 2 m̃ β , (489)\nwhere m̃ = maxi p̃i(1 − p̃i). Since x ∈ U and x′ ∈ U we have x̃ ∈ U , since U is convex. For m = maxx∈U maxi pi(1− pi) we have m̃ 6 m for all m̃. Therefore, we have\n‖Js(x̃)‖2 6 2 m β (490)\nwhich also holds for the mean:\n‖Jms ‖2 6 2 m β . (491)\nTherefore,\n‖softmax(x) − softmax(x′)‖ 6 ‖Jms ‖2 ‖x − x ′‖ 6 2 m β ‖x − x′‖ . (492)\nFrom Lemma A24 we know m 6 1/4 globally. For pmax = minx∈U maxi pi = 1 − we have according to Lemma A24: m 6 .\nFor completeness we present a result about cocoercivity of the softmax: Lemma A26. For m = maxx∈U maxi pi(1− pi), softmax function p = softmax(βx) is 1/(2mβ)cocoercive in U , that is,\n(softmax(x) − softmax(x′))T (x − x′) ≥ 1 2 m β ‖softmax(x) − softmax(x′)‖. (493)\nIn particular the softmax function p = softmax(βx) is (2/β)-cocoercive everywhere. With pmax = minx∈U maxi pi = 1− , the softmax function p = softmax(βx) is 1/(2β )-cocoercive in U .\nProof. We apply the Baillon-Haddad theorem (e.g. Theorem 1 in Gao & Pavel (2017)) together with Lemma A25.\nFinally, we introduce the Legendre transform and use it to describe further properties of the lse. We start with the definition of the convex conjugate. Definition A3 (Convex Conjugate). The Convex Conjugate (Legendre-Fenchel transform) of a function f from a Hilbert Space X to [−∞,∞] is f∗ which is defined as\nf∗(x∗) = sup x∈X (xTx∗ − f(x)) , x∗ ∈ X (494)\nSee page 219 Def. 13.1 in Bauschke & Combettes (2017) and page 134 in Garling (2017). 
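Before moving to the Legendre transform, the Jacobian bounds above are easy to probe numerically. The following minimal sketch (ours; the sampling scale and $\beta$ are arbitrary) checks the bounds $\|J_s\|_2 \le 2 m \beta$ and $\|J_s\|_2 \le \beta/2$ of Lemma A24 at random points:

```python
import numpy as np

rng = np.random.default_rng(2)
beta, n = 1.5, 8

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

for _ in range(1000):
    x = rng.normal(scale=3.0, size=n)
    p = softmax(beta * x)
    Js = beta * (np.diag(p) - np.outer(p, p))  # Jacobian, Eq. (468)
    spec = np.linalg.norm(Js, 2)               # spectral norm
    m = np.max(p * (1 - p))
    assert spec <= 2 * m * beta + 1e-12        # Eq. (476)
    assert spec <= beta / 2 + 1e-12            # Eq. (479), since m <= 1/4

print("Lemma A24 bounds hold at all sampled points")
```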
Next we define the Legendre transform, which is a more restrictive version of the convex conjugate. Definition A4 (Legendre Transform). The Legendre transform of a convex function f from a convex set X ⊂ Rn to R (f : X → R) is f∗, which is defined as\nf∗(x∗) = sup x∈X (xTx∗ − f(x)) , x∗ ∈ X∗ , (495)\nX∗ = { x∗ ∈ Rn | sup\nx∈X (xTx∗ − f(x)) <∞\n} . (496)\nSee page 91 in Boyd & Vandenberghe (2009). Definition A5 (Epi-Sum). Let f and g be two functions from X to (−∞,∞], then the infimal convolution (or epi-sum) of f and g is\nf g : X → [−∞,∞] , x 7→ inf y∈X (f(y) + g(x− y)) (497)\nSee Def. 12.1 in Bauschke & Combettes (2017). Lemma A27. Let f and g be functions from X to (−∞,∞]. Then the following hold:\n1. Convex Conjugate of norm squared( 1\n2 ‖.‖2\n)∗ = 1\n2 ‖.‖2 . (498)\n2. Convex Conjugate of a function multiplied by scalar 0 < α ∈ R\n(α f) ∗ = α f∗(./α) . (499)\n3. Convex Conjugate of the sum of a function and a scalar β ∈ R\n(f + β) ∗ = f∗ − β . (500)\n4. Convex Conjugate of affine transformation of the arguments. LetA be a non-singular matrix and b a vector\n(f (Ax + b)) ∗ = f∗ ( A−Tx∗ ) − bTA−Tx∗ . (501)\n5. Convex Conjugate of epi-sums\n(f g)∗ = f∗ + g∗ . (502)\nProof. 1. Since h(t) := t 2\n2 is a non-negative convex function and h(t) = 0 ⇐⇒ t = 0 we have because of Proposition 11.3.3 in Garling (2017) that h (‖x‖)∗ = h∗ (‖x∗‖). Additionally, by example (a) on page 137 we get for 1 < p < ∞ and 1p + 1 q = 1 that(\n|t|p p\n)∗ = |t\n∗|q q . Putting all together we get the desired result. The same result can also be\ndeduced from page 222 Example 13.6 in Bauschke & Combettes (2017).\n2. Follows immediately from the definition since αf∗ ( x∗\nα\n) = α sup\nx∈X\n( xT x∗\nα − f(x) ) = sup x∈X (xTx∗ − αf(x)) = (αf)∗(x∗)\n3. (f + β)∗ := supx∈X ( xTx∗ − f(x)− β ) =: f∗ − β\n4.\n(f (Ax+ b)) ∗ (x∗) = sup x∈X\n( xTx∗ − f (Ax+ b) ) = sup x∈X ( (Ax+ b) T A−Tx∗ − f (Ax+ b) ) − bTA−Tx∗\n= sup y∈X\n( yTA−Tx∗ − f (y) ) − bTA−Tx∗\n= f∗ ( A−Tx∗ ) − bTA−Tx∗\n5. From Proposition 13.24 (i) in Bauschke & Combettes (2017) and Proposition 11.4.2 in Garling (2017) we get\n(f g)∗ (x∗) = sup x∈X\n( xTx∗ − inf\ny∈X (f(y)− g(x− y)) ) = sup x,y∈X ( xTx∗ − f(y)− g(x− y)\n) = sup x,y∈X (( yTx∗ − f(y) ) + ( (x− y)T x∗ − g(x− y) )) = f∗(x∗) + g∗(x∗)\nLemma A28. The Legendre transform of the lse is the negative entropy function, restricted to the probability simplex and vice versa. For the log-sum exponential\nf(x) = ln ( n∑ i=1 exp(xi) ) , (503)\nthe Legendre transform is the negative entropy function, restricted to the probability simplex:\nf∗(x∗) =\n{∑n i=1 x ∗ i ln(x ∗ i ) for 0 6 x ∗ i and ∑n i=1 x ∗ i = 1\n∞ otherwise . (504)\nFor the negative entropy function, restricted to the probability simplex:\nf(x) =\n{∑n i=1 xi ln(xi) for 0 6 xi and ∑n i=1 xi = 1\n∞ otherwise . (505)\nthe Legendre transform is the log-sum exponential\nf∗(x∗) = ln ( n∑ i=1 exp(x∗i ) ) , (506)\nProof. See page 93 Example 3.25 in Boyd & Vandenberghe (2009) and (Gao & Pavel, 2017). If f is a regular convex function (lower semi-continuous convex function), then f∗∗ = f according to page 135 Exercise 11.2.3 in Garling (2017). If f is lower semi-continuous and convex, then f∗∗ = f according to Theorem 13.37 (Fenchel-Moreau) in Bauschke & Combettes (2017). The log-sum-exponential is continuous and convex.\nLemma A29. LetXXT be non-singular and X a Hilbert space. We define X∗ = { a | 0 6 XT ( XXT )−1 a , 1TXT ( XXT )−1 a = 1 } . (507)\nand\nXv = { a | a = XT ξ , ξ ∈ X } . 
(508)\nThe Legendre transform of lse(β,XT ξ) with ξ ∈ X is( lse(β,XT ξ) )∗ (ξ∗) = (lse(β,v)) ∗ ( XT ( XXT )−1 ξ∗ ) , (509) with ξ∗ ∈ X∗ and v ∈ Xv . The domain of ( lse(β,XT ξ) )∗ is X∗.\nFurthermore we have ( lse(β,XT ξ) )∗∗ = lse(β,XT ξ) . (510)\nProof. We use the definition of the Legendre transform:( lse(β,XT ξ) )∗ (ξ∗) = sup\nξ∈X ξT ξ∗ − lse(β,XT ξ) (511)\n= sup ξ∈X\n( XT ξ )T XT ( XXT )−1 ξ∗ − lse(β,XT ξ)\n= sup v∈Xv\nvTXT ( XXT )−1 ξ∗ − lse(β,v)\n= sup v∈Xv\nvTv∗ − lse(β,v)\n= (lse(β,v)) ∗ (v∗) = (lse(β,v)) ∗ ( XT ( XXT )−1 ξ∗ ) ,\nwhere we used v∗ = XT ( XXT )−1 ξ∗.\nAccording to page 93 Example 3.25 in Boyd & Vandenberghe (2009), the equations for the maximum maxv∈Xv v Tv∗ − lse(β,v) are solvable if and only if 0 < v∗ = XT ( XXT )−1 ξ∗ and 1Tv∗ =\n1TXT ( XXT )−1 ξ∗ = 1. Therefore, we assumed ξ∗ ∈ X∗.\nThe domain of ( lse(β,XT ξ) )∗ is X∗, since on page 93 Example 3.25 in Boyd & Vandenberghe (2009) it was shown that outside X∗ the supv∈Xv v Tv∗ − lse(β,v) is not bounded.\nUsing\np = softmax(βXT ξ) , (512)\nthe Hessian of lse(β,XT ξ)\n∂2lse(β,XT ξ)\n∂ξ2 = β X\n( diag(p)− ppT ) XT (513)\nis positive semi-definite since diag(p) − ppT is positive semi-definite according to Lemma A22. Therefore, lse(β,XT ξ) is convex and continuous.\nIf f is a regular convex function (lower semi-continuous convex function), then f∗∗ = f according to page 135 Exercise 11.2.3 in Garling (2017). If f is lower semi-continuous and convex, then f∗∗ = f according to Theorem 13.37 (Fenchel-Moreau) in Bauschke & Combettes (2017). Consequently we have (\nlse(β,XT ξ) )∗∗ = lse(β,XT ξ) . (514)\nWe introduce the Lambert W function and some of its properties, since it is needed to derive bounds on the storage capacity of our new Hopfield networks. Definition A6 (Lambert Function). The LambertW function (Olver et al., 2010, (4.13)) is the inverse function of\nf(y) = yey . (515) The Lambert W function has an upper branch W0 for −1 6 y and a lower branch W−1 for y 6 −1. We use W if a formula holds for both branches. We have\nW (x) = y ⇒ yey = x . (516)\nWe present some identities for the Lambert W function (Olver et al., 2010, (4.13)): Lemma A30. Identities for the Lambert W function are\nW (x) eW (x) = x , (517) W (xex) = x , (518)\neW (x) = x\nW (x) , (519)\ne−W (x) = W (x)\nx , (520)\nenW (x) =\n( x\nW (x)\n)n , (521)\nW0 (x lnx) = lnx for x ≥ 1\ne , (522)\nW−1 (x lnx) = lnx for x 6 1\ne , (523)\nW (x) = ln x W (x) for x ≥ − 1 e , (524)\nW\n( n xn\nW (x) n−1\n) = n W (x) for n, x > 0 , (525)\nW (x) + W (y) = W ( x y ( 1\nW (x) +\n1\nW (y)\n)) for x, y > 0 , (526)\nW0\n( − lnx\nx\n) = − lnx for 0 < x 6 e , (527)\nW−1\n( − lnx\nx\n) = − lnx for x > e , (528)\ne− W (− ln x) = W (− lnx) − lnx for x 6= 1 . (529)\nWe also present some special values for the Lambert W function (Olver et al., 2010, (4.13)):" }, { "heading": "Lemma A31.", "text": "W (0) = 0 , (530) W (e) = 1 , (531)\nW ( −1 e ) = −1 , (532)\nW ( e1+e ) = e , (533)\nW (2 ln 2) = ln 2 , (534) W (1) = Ω , (535)\nW (1) = e−W (1) = ln\n( 1\nW (1)\n) = − lnW (1) , (536)\nW ( −π\n2\n) = iπ\n2 , (537)\nW (−1) ≈ −0.31813 + 1.33723i , (538)\nwhere the Omega constant Ω is\nΩ = (∫ ∞ −∞\ndt\n(et − t)2 + π2\n)−1 − 1 ≈ 0.56714329 . (539)\nWe need in some proofs a version of the mean value theorem as given in the next lemma. Lemma A32 (Mean Value Theorem). Let U ⊂ Rn be open, f : U → Rm continuously differentiable, and x ∈ U as well as h ∈ Rn vectors such that the line segment x+ th for 0 6 t 6 1 is in U . 
Then the following holds:\nf(x + h) − f(x) = (∫ 1\n0\nJ(x + t h) dt ) h , (540)\nwhere J is the Jacobian of f and the integral of the matrix is component-wise.\nProof. Let f1, . . . , fm denote the components of f and define gi : [0, 1]→ R by\ngi(t) = fi(x + t h) , (541)\nthen we obtain fi(x + h) − fi(x) = gi(1) − gi(0) = ∫ 1\n0\ng′(t) dt (542)\n∫ 1 0 n∑ j=1 ∂fi ∂xj (x + t h) hj dt = n∑ j=1 (∫ 1 0 ∂fi ∂xj (x + t h) dt ) hj .\nThe statement follows since the Jacobian J has as entries ∂fi∂xj ." }, { "heading": "A.3 MODERN HOPFIELD NETWORKS: BINARY STATES (KROTOV AND HOPFIELD)", "text": "" }, { "heading": "A.3.1 MODERN HOPFIELD NETWORKS: INTRODUCTION", "text": "A.3.1.1 Additional Memory and Attention for Neural Networks. Modern Hopfield networks may serve as additional memory for neural networks. Different approaches have been suggested to equip neural networks with an additional memory beyond recurrent connections. The neural Turing machine (NTM) is a neural network equipped with an external memory and an attention process (Graves et al., 2014). The NTM can write to the memory and can read from it. A memory network (Weston et al., 2014) consists of a memory together with the components: (1) input feature map (converts the incoming input to the internal feature representation) (2) generalization (updates old memories given the new input), (3) output feature map (produces a new output), (4) response\n(converts the output into the response format). Memory networks are generalized to an end-to-end trained model, where the arg max memory call is replaced by a differentiable softmax (Sukhbaatar et al., 2015a;b). Linear Memory Network use a linear autoencoder for sequences as a memory (Carta et al., 2020).\nTo enhance RNNs with additional associative memory like Hopfield networks have been proposed (Ba et al., 2016a;b). The associative memory stores hidden states of the RNN, retrieves stored states if they are similar to actual ones, and has a forgetting parameter. The forgetting and storing parameters of the RNN associative memory have been generalized to learned matrices (Zhang & Zhou, 2017). LSTMs with associative memory via Holographic Reduced Representations have been proposed (Danihelka et al., 2016).\nRecently most approaches to new memories are based on attention. The neural Turing machine (NTM) is equipped with an external memory and an attention process (Graves et al., 2014). End to end memory networks (EMN) make the attention scheme of memory networks (Weston et al., 2014) differentiable by replacing arg max through a softmax (Sukhbaatar et al., 2015a;b). EMN with dot products became very popular and implement a key-value attention (Daniluk et al., 2017) for self-attention. An enhancement of EMN is the transformer (Vaswani et al., 2017a;b) and its extensions (Dehghani et al., 2018). The transformer had great impact on the natural language processing (NLP) community as new records in NLP benchmarks have been achieved (Vaswani et al., 2017a;b). MEMO uses the transformer attention mechanism for reasoning over longer distances (Banino et al., 2020). Current state-of-the-art for language processing is a transformer architecture called “the Bidirectional Encoder Representations from Transformers” (BERT) (Devlin et al., 2018; 2019).\nA.3.1.2 Modern Hopfield networks: Overview. The storage capacity of classical binary Hopfield networks (Hopfield, 1982) has been shown to be very limited. 
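To make the capacity discussion that follows concrete, here is a minimal sketch of such a classical binary Hopfield network with Hebbian storage and asynchronous sign updates (the dimensions and the 10% corruption level are chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 100, 5                            # d units, N stored patterns
X = rng.choice([-1, 1], size=(N, d))     # patterns as rows

# Hebbian weight matrix with zero diagonal (Hopfield, 1982).
W = (X.T @ X).astype(float) / d
np.fill_diagonal(W, 0.0)

def retrieve(xi, sweeps=10):
    xi = xi.copy()
    for _ in range(sweeps):              # asynchronous sign updates
        for j in rng.permutation(d):
            xi[j] = 1 if W[j] @ xi >= 0 else -1
    return xi

# Corrupt 10% of the first pattern; for N well below the ~0.138 d
# capacity quoted below, the original is recovered with high probability.
noisy = X[0].copy()
noisy[rng.choice(d, size=10, replace=False)] *= -1
print(np.array_equal(retrieve(noisy), X[0]))
```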
In a d-dimensional space, the standard Hopfield model can store d uncorrelated patterns without errors but only Cd/ ln(d) random patterns with C < 1/2 for a fixed stable pattern or C < 1/4 if all patterns are stable (McEliece et al., 1987). The same bound holds for nonlinear learning rules (Mazza, 1997). Using tricks-of-trade and allowing small retrieval errors, the storage capacity is about 0.138d (Crisanti et al., 1986; Hertz et al., 1991; Torres et al., 2002). If the learning rule is not related to the Hebb rule then up to d patterns can be stored (Abu-Mostafa & StJacques, 1985). Using Hopfield networks with non-zero diagonal matrices, the storage can be increased to Cd ln(d) (Folli et al., 2017). In contrast to the storage capacity, the number of energy minima (spurious states, stable states) of Hopfield networks is exponentially in d (Tanaka & Edwards, 1980; Bruck & Roychowdhury, 1990; Wainrib & Touboul, 2013).\nRecent advances in the field of binary Hopfield networks (Hopfield, 1982) led to new properties of Hopfield networks. The stability of spurious states or metastable states was sensibly reduced by a Hamiltonian treatment for the new relativistic Hopfield model (Barra et al., 2018). Recently the storage capacity of Hopfield networks could be increased by new energy functions. Interaction functions of the form F (x) = xn lead to storage capacity of αndn−1, where αn depends on the allowed error probability (Krotov & Hopfield, 2016; 2018; Demircigil et al., 2017) (see (Krotov & Hopfield, 2018) for the non-binary case). Interaction functions of the form F (x) = xn lead to storage capacity of αn d n−1\ncn ln d for cn > 2(2n− 3)!! (Demircigil et al., 2017).\nInteraction functions of the form F (x) = exp(x) lead to exponential storage capacity of 2d/2 where all stored patterns are fixed points but the radius of attraction vanishes (Demircigil et al., 2017). It has been shown that the network converges with high probability after one update (Demircigil et al., 2017)." }, { "heading": "A.3.2 ENERGY AND UPDATE RULE FOR BINARY MODERN HOPFIELD NETWORKS", "text": "We follow (Demircigil et al., 2017) where the goal is to store a set of input data x1, . . . ,xN that are represented by the matrix\nX = (x1, . . . ,xN ) . (543)\nThe xi is pattern with binary components xij ∈ {−1,+1} for all i and j. ξ is the actual state of the units of the Hopfield model. Krotov and Hopfield (Krotov & Hopfield, 2016) defined the energy function E with the interaction function F that evaluates the dot product between patterns xi and the\nactual state ξ:\nE = − N∑ i=1 F ( ξTxi ) (544)\nwith F (a) = an, where n = 2 gives the energy function of the classical Hopfield network. This allows to store αndn−1 patterns (Krotov & Hopfield, 2016). Krotov and Hopfield (Krotov & Hopfield, 2016) suggested for minimizing this energy an asynchronous updating dynamics T = (Tj) for component ξj :\nTj(ξ) := sgn [ N∑ i=1 ( F ( xij + ∑ l 6=j xil ξl ) − F ( − xij + ∑ l 6=j xil ξl ))]\n(545)\nWhile Krotov and Hopfield used F (a) = an, Demircigil et al. (Demircigil et al., 2017) went a step further and analyzed the model with the energy function F (a) = exp(a), which leads to an exponential storage capacity of N = 2d/2. Furthermore with a single update the final pattern is recovered with high probability. These statements are given in next theorem. Theorem A10 (Storage Capacity for Binary Modern Hopfield Nets (Demircigil et al. 2017)). Consider the generalized Hopfield model with the dynamics described in Eq. 
(545) and interaction function F given by F (x) = ex. For a fixed 0 < α < ln(2)/2 let N = exp (αd) + 1 and let x1, . . . ,xN be N patterns chosen uniformly at random from {−1,+1}d. Moreover fix % ∈ [0, 1/2). For any i and any x̃i taken uniformly at random from the Hamming sphere with radius %d centered in xi, S(xi, %d), where %d is assumed to be an integer, it holds that\nPr (∃i ∃j : Tj (x̃i) 6= xij) → 0 , if α is chosen in dependence of % such that\nα < I(1− 2%)\n2 with\nI : a 7→ 1 2 ((1 + a) ln(1 + a) + (1− a) ln(1− a)) .\nProof. The proof can be found in Demircigil et al. (2017).\nThe number of patterns N = exp (αd) + 1 is exponential in the number d of components. The result Pr (∃i ∃j : Tj (x̃i) 6= xij) → 0\nmeans that one update for each component is sufficient to recover the pattern with high probability. The constraint α < I(1−2%)2 on α gives the trade-off between the radius of attraction %d and the number N = exp (αd) + 1 of pattern that can be stored.\nTheorem A10 in particular implies that Pr (∃i ∃j : Tj (xi) 6= xij) → 0\nas d→∞, i.e. with a probability converging to 1, all the patterns are fixed points of the dynamics. In this case we can have α→ I(1)2 = ln(2)/2. Krotov and Hopfield define the update dynamics Tj(ξ) in Eq. (545) via energy differences of the energy in Eq. (544). First we express the energy in Eq. (544) with F (a) = exp(a) (Demircigil et al., 2017) by the lse function. Then we use the mean value theorem to express the update dynamics Tj(ξ) in Eq. (545) by the softmax function. For simplicity, we set β = 1 in the following. There exists a v ∈ [−1, 1] with\nTj(ξ) = sgn [ − E(ξj = 1) + E(ξj = −1) ] = sgn [ exp(lse(ξj = 1)) − exp(lse(ξj = −1)) ] = sgn [ − (2ej)T∇ξE(ξj = v) ] = sgn [ exp(lse(ξj = v)) (2ej) T lse(ξj = v)\n∂ξ\n] (546)\n= sgn [ exp(lse(ξj = 1)) (2ej) TXsoftmax(XT ξ(ξj = v)) ]\n= sgn [ [Xsoftmax(XT ξ(ξj = v))]j ] = sgn [ [Xp(ξj = v)]j ] ,\nwhere ej is the Cartesian unit vector with a one at position j and zeros elsewhere, [.]j is the projection to the j-th component, and\np = softmax(XT ξ) . (547)" }, { "heading": "A.4 HOPFIELD UPDATE RULE IS ATTENTION OF THE TRANSFORMER", "text": "The Hopfield network update rule is the attention mechanism used in transformer and BERT models (see Fig. A.2). To see this, we assume N stored (key) patterns yi and S state (query) patterns ri that are mapped to the Hopfield space of dimension dk. We set xi = W TKyi, ξi = W T Q ri, and multiply the result of our update rule with WV . The matrices Y = (y1, . . . ,yN )T and R = (r1, . . . , rS)T combine the yi and ri as row vectors. We define the matrices XT = K = YWK , ΞT = Q = RWQ, and V = YWKWV = XTWV , where WK ∈ Rdy×dk ,WQ ∈ Rdr×dk ,WV ∈ Rdk×dv . If β = 1/ √ dk and softmax ∈ RN is changed to a row vector, we obtain for the update rule Eq. (3) multiplied byWV :\nsoftmax ( 1/ √ dk QK T ) V = softmax ( β RWQW T KY T ) YWKWV . (548)\nThe left part of Eq. (548) is the transformer attention. Besides the attention mechanism, Hopfield networks allow for other functionalities in deep network architectures, which we introduce via specific layers in the next section. The right part of Eq. (548) serves as starting point for these specific layers." }, { "heading": "A.5 EXPERIMENTS", "text": "" }, { "heading": "A.5.1 EXPERIMENT 1: ATTENTION IN TRANSFORMERS DESCRIBED BY HOPFIELD DYNAMICS", "text": "A.5.1.1 Analysis of operating modes of the heads of a pre-trained BERT model. We analyzed pre-trained BERT models from Hugging Face Inc. (Wolf et al., 2019) according to these operating classes. 
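As a concrete illustration of the update rule in Eq. (546) with β = 1, the following sketch (names are ours; sizes are illustrative) stores far more random binary patterns than there are dimensions and retrieves a corrupted pattern in a single update, as Theorem A10 predicts:

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 64, 500                           # N stored patterns, N >> d
X = rng.choice([-1, 1], size=(d, N))     # patterns as columns

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def update(xi, beta=1.0):
    # Eq. (546)/(547): T(xi) = sgn(X softmax(beta * X^T xi)).
    return np.sign(X @ softmax(beta * (X.T @ xi)))

# Flip 15% of the components of one stored pattern; one update suffices
# with high probability (Theorem A10).
xi = X[:, 0].copy()
xi[rng.choice(d, size=int(0.15 * d), replace=False)] *= -1
print(np.array_equal(update(xi), X[:, 0]))
```

Dropping the final sgn and keeping the softmax-weighted sum itself gives the continuous update of Eq. (547), which is exactly the attention mechanism discussed around Eq. (548).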
In Fig. A.3 in the appendix the distribution of the pre-trained bert-base-cased model is depicted (for other models see appendix Section A.5.1.4). Operating classes (II) (large metastable states) and (IV) (small metastable states) are often observed in the middle layers. Operating class (I) (averaging over a very large number of patterns) is abundant in lower layers. Similar observations have been reported in other studies (Toneva & Wehbe, 2019a;b; Tay et al., 2020). Operating class (III) (medium metastable states) is predominant in the last layers.\nA.5.1.2 Experimental Setup. Transformer architectures are known for their high computational demands. To investigate the learning dynamics of such a model and at the same time keeping training time manageable, we adopted the BERT-small setting from ELECTRA (Clark et al., 2020). It has 12 layers, 4 heads and a reduced hidden size, the sequence length is shortened from 512 to 128 tokens and the batch size is reduced from 256 to 128. Additionally, the hidden dimension is reduced from 768 to 256 and the embedding dimension is reduced from 768 to 128 (Clark et al., 2020). The training of such a BERT-small model for 1.45 million update steps takes roughly four days on a single NVIDIA V100 GPU.\nAs the code base we use the transformers repository from Hugging Face, Inc (Wolf et al., 2019). We aim to reproduce the dataset of Devlin et al. (2019) as close as possible, which consists of the English Wikipedia dataset and the Toronto BookCorpus dataset (Zhu et al., 2015). Due to recent copyright claims the later is not publicly available anymore. Therefore, the pre-training experiments use an uncased snapshot of the original BookCorpus dataset.\nA.5.1.3 Hopfield Operating Classes of Transformer and BERT Models. To better understand how operation modes in attention heads develop, we tracked the distribution of counts k (see main paper) over time in a BERT-small model. At the end of training we visualized the count distribution, grouped into four classes (see Figure A.4). The thresholds for the classes were chosen according to the thresholds of Figure 2 in the main paper. However, they are divided by a factor of 4 to adapt to the shorter sequence length of 128 compared to 512. From this plot it is clear, that the attention in heads of Class IV commit very early to the operating class of small metastable states.\nA.5.1.4 Learning Dynamics of Transformer and BERT Models. To observe this behavior in the early phase of training, we created a ridge plot of the distributions of counts k for the first 20, 000 steps (see Figure A.5 (a)). This plot shows that the attention in heads of middle layers often change the operation mode to Class IV around 9, 000 to 10, 000 steps. At the same time the second big drop in the loss occurs. The question arises whether this is functionally important or whether it is an artefact which could be even harmful. To check if the attention mechanism is still able to learn after the change in the operation mode we analyzed the gradient flow through the softmax function. For every token we calculate the Frobenius norm of the Jacobian of the softmax over multiple samples. Then, for every head we plot the distribution of the norm (see Figure A.5(b)). The gradients with respect to the weights are determined by the Jacobian J defined in Eq. (59) as can be seen in Eq. (418), Eq. (429), and Eq. (435). 
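This per-token diagnostic can be computed directly from the closed form of Lemma A20 (sketched below with β = 1; the function name is ours). Flat attention over many keys, i.e. operating class (I), yields a small Frobenius norm, whereas a small metastable state yields a much larger one, in line with Fig. A.5(b):

```python
import torch

def softmax_jacobian_fro(attn):
    """Frobenius norm of J = diag(p) - p p^T per attention row (beta = 1),
    using the closed form of Lemma A20. attn: (tokens, keys)."""
    J = torch.diag_embed(attn) - attn.unsqueeze(-1) * attn.unsqueeze(-2)
    return J.flatten(1).norm(dim=1)

# Global averaging over 512 keys (class (I)) vs. a small metastable state:
p_global = torch.full((1, 512), 1.0 / 512)
p_meta = torch.zeros(1, 512)
p_meta[0, :4] = 0.25
print(softmax_jacobian_fro(p_global))   # ~0.044: little gradient signal
print(softmax_jacobian_fro(p_meta))     # ~0.43: an order of magnitude larger
```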
We can see that the attention in heads of Class IV remain almost unchanged during the rest of the training.\nA.5.1.5 Attention Heads Replaced by Gaussian Averaging Layers. The self-attention mechanism proposed in Vaswani et al. (2017a) utilizes the softmax function to compute the coefficients of a convex combination over the embedded tokens, where the softmax is conditioned on the input. However, our analysis showed that especially in lower layers many heads perform averaging over a very large number of patterns. This suggests that at this level neither the dependency on the input nor a fine grained attention to individual positions is necessary. As an alternative to the original mechanism we propose Gaussian averaging heads which are computationally more efficient. Here, the softmax function is replaced by a discrete Gaussian kernel, where the location µ and the scale σ are learned. In detail, for a sequence length of N tokens we are given a vector of location parameters µ = (µ1, . . . , µN )T and a vector of corresponding scale parameters σ = (σ1, . . . , σN )T . We subdivide the interval [−1, 1] into N equidistant supporting points {sj}Nj=1, where\nsj = (j − 1)− 0.5 (N − 1)\n0.5 (N − 1) .\nThe attention [A]i,j from the i-th token to the j-th position is calculated as\n[A]i,j = 1\nzi exp\n{ −1\n2 (sj − µi σi )2} ,\nwhere zi normalizes the i-th row of the attention matrix A to sum up to one:\nzi = N∑ j=1 exp { −1 2 (sj − µi σi )2} .\nFor initialization we uniformly sample a location vector µ ∈ [−1, 1]N and a scale vector σ ∈ [0.75, 1.25]N per head. A simple way to consider the individual position of each token at initialization is to use the supporting points µi = si (see Figure A.6). In practice no difference to the random initialization was observed.\n•Number of parameters. Gaussian averaging heads can reduce the number of parameters significantly. For an input size ofN tokens, there are 2·N parameters per head. In contrast, a standard self-attention head with word embedding dimension dy and projection dimension dk has two weight matrices\nWQ,WK ∈ Rdk×dy , which together amount to 2 · dk · dy parameters. As a concrete example, the BERT-base model from Devlin et al. (2019) has an embedding dimension dy = 768, a projection dimension dk = 64 and a sequence length of N = 512. Compared to the Gaussian head, in this case (2 · 768 · 64)/(2 · 512) = 95.5 times more parameters are trained for the attention mechanism itself. Only for very long sequences (and given that the word embedding dimension stays the same) the dependence on N may become a disadvantage. But of course, due to the independence from the input the Gaussian averaging head is less expressive in comparison to the original attention mechanism. A recently proposed input independent replacement for self-attention is the so called Random Synthesizer (Tay et al., 2020). Here the softmax-attention is directly parametrized with an N ×N matrix. This amounts to 0.5 ·N more parameters than Gaussian averaging." }, { "heading": "A.5.2 EXPERIMENT 2: MULTIPLE INSTANCE LEARNING DATASETS.", "text": "A.5.2.1 Immune Repertoire Classification. An architecture called DeepRC, is based on our modern Hopfield networks, for immune repertoire classification and compared to other machine learning approaches. For DeepRC, we consider immune repertoires as input objects, which are represented as bags of instances. In a bag, each instance is an immune receptor sequence and each bag can contain a large number of sequences. 
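A minimal PyTorch sketch of the Gaussian averaging head defined in Section A.5.1.5 above may be useful here (the class name and the application to per-token value vectors are our assumptions; the kernel, supporting points, and random initialization follow the formulas given there):

```python
import torch
import torch.nn as nn

class GaussianAveragingHead(nn.Module):
    """Input-independent replacement for softmax attention: one learned
    location and one learned scale per token, i.e. 2N parameters per head."""
    def __init__(self, n_tokens):
        super().__init__()
        j = torch.arange(1, n_tokens + 1, dtype=torch.float32)
        # Equidistant supporting points s_j in [-1, 1] (assumes n_tokens >= 2).
        self.register_buffer("s", ((j - 1) - 0.5 * (n_tokens - 1))
                                  / (0.5 * (n_tokens - 1)))
        self.mu = nn.Parameter(torch.empty(n_tokens).uniform_(-1.0, 1.0))
        self.sigma = nn.Parameter(torch.empty(n_tokens).uniform_(0.75, 1.25))

    def forward(self, values):               # values: (n_tokens, d_v)
        # [A]_ij = exp(-((s_j - mu_i) / sigma_i)^2 / 2) / z_i
        z = (self.s.unsqueeze(0) - self.mu.unsqueeze(1)) / self.sigma.unsqueeze(1)
        w = torch.exp(-0.5 * z ** 2)
        attn = w / w.sum(dim=1, keepdim=True)
        return attn @ values

head = GaussianAveragingHead(128)
out = head(torch.randn(128, 64))             # averaged token representations
```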
At its core, DeepRC consists of a modern Hopfield network that extracts information from each repertoire. The stored patterns (keys) are representations of the immune amino acid sequences (instances) that are obtained by an 1D convolutional network with position encoding. Each state pattern (query) is static and learned via backpropagation. For details see Widrich et al. (2020a;b).\nOur new Hopfield network has been integrated into a deep learning architecture for immune repertoire classification, a massive multiple instance learning task (Widrich et al., 2020a;b). Theorem 3 states that modern Hopfield networks possess an exponential storage capacity which enables to tackle massive multiple instance learning (MIL) problems (Dietterich et al., 1997). Immune repertoire classification (Emerson et al., 2017) typically requires to extract few patterns from a large set of sequences, the repertoire, that are indicative for the respective immune status. Most MIL methods fail due the large number of instances.\nData is obtained by experimentally observed immune receptors as well as simulated sequences sequence motifs (Akbar et al., 2019; Weber et al., 2020) with low yet varying degrees of frequency are implanted. Four different categories of datasets are constructed: (a) Simulated immunosequencing data with implanted motifs, (b) immunosequencing data generated by long short-term memory (LSTM) with implanted motifs, (c) real-world immunosequencing data with implanted motifs, and (d) real-world immunosequencing data with known immune status (Emerson et al., 2017). Categories (a), (b), and (d) contain approx. 300,000 instances per immune repertoire. With over 30 billion sequences in total, this represents one of the largest multiple instance learning experiments ever conducted (Carbonneau et al., 2018). Despite the massive number of instances as well as the low frequency\nof sequences indicative of the respective immune status, deep learning architectures with modern Hopfield networks outperform all competing methods with respect to average area under the ROC curve in all four categories, (a), (b), (c) and (d) (for details see Widrich et al. (2020a)).\nWe evaluate and compare the performance of DeepRC to a set of machine learning methods that serve as baseline, were suggested, or can readily be adapted to immune repertoire classification. The methods comprise (i) known motif, which counts how often the known implanted motifs occur, (ii) Support Vector Machine (SVM) approach that uses a fixed mapping from a bag of sequences to the corresponding k-mer counts and used the MinMax and Jaccard kernel, (iii) k-Nearest Neighbor (KNN) with k-mer representation, transforming MinMax and Jaccard kernel to distances, (iv) logistic regression on the k-mer representation, (v) burden test that first identifies sequences or k-mers and then computes a burden score per individual, and (vi) logistic multiple instance learning (lMIL). On the real-world dataset DeepRC achieved an AUC of 0.832± 0.022, followed by the SVM with MinMax kernel (AUC 0.825± 0.022) and the burden test with an AUC of 0.699± 0.041. Overall on all datasets, DeepRC outperformed all competing methods with respect to average AUC (see Widrich et al. (2020a;b)).\nTable A.1 reports the average performance in the simulated immunosequencing datasets (last column) and the performance on datasets of the remaining three categories. DeepRC outperforms all competing methods with respect to average AUC. 
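The pooling at the core of DeepRC can be sketched in a few lines: a single learned, static state (query) pattern attends over the bag of instance embeddings. This toy version omits the 1D-convolutional sequence embedding with position encoding of the actual model; see Widrich et al. (2020a;b) for the real implementation:

```python
import torch
import torch.nn as nn

class StaticQueryPooling(nn.Module):
    """Compresses a variable-sized bag of instance embeddings into one
    fixed-sized vector via a learned static state (query) pattern."""
    def __init__(self, d_embed, d_k, beta=None):
        super().__init__()
        self.q = nn.Parameter(torch.randn(d_k))        # static query xi
        self.W_k = nn.Linear(d_embed, d_k, bias=False) # instance -> key
        self.beta = beta if beta is not None else d_k ** -0.5

    def forward(self, bag):                            # bag: (n_instances, d_embed)
        attn = torch.softmax(self.beta * self.W_k(bag) @ self.q, dim=0)
        return attn @ bag                              # weighted instance average

pool = StaticQueryPooling(d_embed=32, d_k=16)
repertoire_repr = pool(torch.randn(100000, 32))        # one vector per repertoire
```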
Across categories, the runner-up methods are either the SVM for MIL problems with MinMax kernel or the burden test.\nA.5.2.2 Multiple Instance Learning Benchmark Datasets. Classical benchmarking datasets comprise UCSB breast cancer classification (Kandemir et al., 2014), and the Elephant, Fox, Tiger datasets (Andrews et al., 2003).\nElephant, Fox and Tiger are MIL datasets for image annotation which comprise color images from the Corel dataset that have been preprocessed and segmented. An image consists of a set of segments (or blobs), each characterized by color, texture and shape descriptors. The datasets have 100 positive and 100 negative example images. The latter have been randomly drawn from a pool of photos of other animals. Elephant has 1391 instances and 230 features. Fox has 1320 instances and 230 features. Tiger has 1220 instances and 230 features. Furthermore, we use the UCSB breast cancer classification (Kandemir et al., 2014) dataset, which consists of 2,002 instances across 58 input objects. An instance represents a patch of a histopathological image of cancerous or normal tissue. The layer HopfieldPooling is used, which allows for computing a per-input-object representation by\nextracting an average of instances that are indicative for one of the two classes. The input to the HopfieldPooling layer is a set of embedded instances Y and a trainable but fixed state (query) pattern Q used for averaging of class-indicative instances. This averaging enables a compression of variable-sized bags to a fixed-sized representation to discriminate the bags. We performed a manual hyperparameter search on a validation set. In detail, we used the following architecture to perform the given task on the Elephant, Fox, Tiger and UCSCB breast cancer datasets: (I) we apply fully connected linear embedding layers with ReLU activation. (II) The output of this embedding serves as the input to our HopfieldPooling layer where the above described pooling operation is performed. (III) Thereafter we use ’ReLU - Linear blocks’ as the final linear output layers that perform the classification. Among other hyperparameters, different hidden layer widths (for the fully connected pre- and post-HopfieldPooling layers), learning rates and batch sizes were tried. Additionally our focus resided on the hyperparameters of the HopfieldPooling layer. Among those were the number of heads, the head dimension and the scaling factor β. All models were trained for 160 epochs using the AdamW optimizer (Loshchilov & Hutter, 2017) with exponential learning rate decay (see Table A.2), and validated by 10-fold nested cross validation repeated five times with different splits on the data sets. The reported ROC AUC scores are the average of these repetitions. As overfitting imposed quite a problem, bag dropout was applied as the regularization technique of choice." }, { "heading": "A.5.3 EXPERIMENT 3: CLASSIFICATION ON SMALL UCI BENCHMARK DATASETS", "text": "A.5.3.1 Motivation. Datasets with a small number of samples, like the UCI benchmark datasets, are particularly difficult for neural networks to generalize on. In contrast to their performance on larger datasets, they are consistently outperformed by methods like e.g. gradient boosting, random forests (RF) and support vector machines (SVMs). Finding samples or even learning prototypes that are highly indicative for the class of a sample (query) suggest the use of Hopfield networks. We applied a modern Hopfield network via the layer Hopfield. 
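This configuration, which the next paragraph describes in detail, can be sketched as follows (the class name, layer sizes, and the single SELU embedding layer are illustrative simplifications of the grid-searched architectures in Table A.3):

```python
import torch
import torch.nn as nn

class HopfieldClassifier(nn.Module):
    """Embedding net -> state pattern R; the stored patterns are the rows
    of a learned matrix W_K, whose number is a hyperparameter; the
    retrieval Z feeds the output layer."""
    def __init__(self, d_in, d_hidden, n_stored, n_classes, beta=None):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(d_in, d_hidden), nn.SELU())
        self.W_K = nn.Parameter(torch.randn(n_stored, d_hidden))
        self.out = nn.Linear(d_hidden, n_classes)
        self.beta = beta if beta is not None else d_hidden ** -0.5

    def forward(self, x):                              # x: (batch, d_in)
        r = self.embed(x)                              # state (query) patterns
        attn = torch.softmax(self.beta * r @ self.W_K.T, dim=-1)
        return self.out(attn @ self.W_K)               # logits from retrieved Z

model = HopfieldClassifier(d_in=20, d_hidden=32, n_stored=8, n_classes=2)
logits = model(torch.randn(4, 20))
```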
The input vector is mapped toR using a self-normalizing net (SNN) andWK is learned, where the dimension ofWK (the number of stored fixed pattern) is a hyperparameter. The output Z of Hopfield enters the output layer.\nA.5.3.2 Methods compared. Modern Hopfield networks via the layer Hopfield are compared to 17 groups of methods (Fernández-Delgado et al., 2014; Klambauer et al., 2017a):\n1. Support Vector Machines\n2. Random Forest\n3. Multivariate adaptive regression splines (MARS)\n4. Boosting\n5. Rule-based Methods\n6. Logistic and Multinomial Regression (LMR)\n7. Discriminant Analysis (DA)\n8. Bagging\n9. Nearest Neighbor\n10. Decision Trees\n11. Other Ensembles\n12. Neural Networks (standard NN, BatchNorm, WeighNorm, MSRAinit, LayerNorm, ResNet, Self-Normalizing Nets)\n13. Bayesian Methods\n14. Other Methods\n15. Generalized linear models (GLM)\n16. Partial Least Squares and Principal Component Regression (PLSR)\n17. Stacking (Wolpert)\nA.5.3.3 Experimental design and implementation details. As specified in the main paper, we consider 75 datasets of the UC Irvine Machine Learning Repository, which contain less than 1, 000 samples per dataset, following the dataset separation into large and small dataset in Klambauer et al. (2017a). On each dataset, we performed a grid-search to determine the best hyperparameter setting and model per dataset. The hyperparameter search-space of the grid-search is listed in Table A.3. All models were trained for 100 epochs with a mini-batch size of 4 samples using the cross entropy loss and the PyTorch SGD module for stochastic gradient descent without momentum and without weight decay or dropout. After each epoch, the model accuracy was computed on a separated validation set. Using early stopping, the model with the best validation set accuracy averaged over 16 consecutive epochs was selected as final model. This final model was then evaluated against a separated test set to determine the accuracy, as reported in Tables 2 and Table uci_detailed_results.csv in the supplemental materials.\nAs network architecture, we use {0, 1, 7} fully connected embedding layers with SELU Klambauer et al. (2017a) activation functions and {32, 128, 1024} hidden units per embedding layer. These embedding layers are followed by the layer Hopfield. The number of hidden units is also used as number of dimensions for the Hopfield association space with a number of {1, 32} heads. The layer Hopfield is followed by a mapping to the output vector, which has as dimension the number of classes. Finally, the softmax function is applied to obtain the predicted probability for a class.\nA.5.3.4 Results. We compared the performance of 25 methods based on their method rank. For this we computed the rank per method per dataset based on the accuracy on the test set, which was then averaged over all 75 datasets for each method to obtain the method rank. For the baseline methods we used the scores summarized by (Klambauer et al., 2017a)." }, { "heading": "A.5.4 EXPERIMENT 4: DRUG DESIGN BENCHMARK DATASETS", "text": "A.5.4.1 Experimental design and implementation details. We test Hopfield layers on 4 classification datasets from MoleculeNet (Wu et al., 2017), which are challenging for deep learning methods. The first dataset is HIV, which was introduced by the Drug Therapeutics Program (DTP) AIDS Antiviral Screen. The second dataset is BACE, which has IC50 measurements for binding affinities of inhibitors (molecules) to the human β-secretase 1 (BACE-1). 
The third dataset is BBBP (blood-brain barrier permeability), which stems from modeling and predicting the blood-brain barrier permeability (Martins et al., 2012). The fourth dataset is SIDER (Side Effect Resource) Kuhn et al. (2016) and contains 1427 approved drugs. These datasets represent four areas of modeling tasks in drug discovery, concretely to develop accurate models for predicting a) new anti-virals (HIV), b) new protein inhibitors (BACE), c) metabolic effects (BBBP), and d) side effects of a chemical compound (SIDER).\nWe implemented a Hopfield layer HopfieldLayer, in which we used the training-input as storedpattern Y or key, the training-label as pattern-projection YWV or value and the input as state-pattern R or query. As described in section A.6 by concatenation of input zi and target ti the matricesWK andWV can be designed such that inside the softmax the input zi is used and outside the softmax the target ti.\nAll hyperparameters were selected on separate validation sets and we selected the model with the highest validation AUC on five different random splits.\nA.5.4.2 Results. We compared the Hopfield layer Hopfieldlayer to Support Vector Machines (SVMs) (Cortes & Vapnik, 1995; Schölkopf & Smola, 2002), Extreme Gradient Boosting (XGBoost) (Chen & Guestrin, 2016), Random Forest (RF) (Breiman, 2001), Deep Neural Networks (DNNs) (LeCun et al., 2015; Schmidhuber, 2015), and to graph neural networks (GNN) like Graph Convolutional Networks (GCNs) (Kipf & Welling, 2016), Graph Attention Networks (GATs) (Velic̆ković et al., 2018), Message Passing Neural Networks (MPNNs) (Gilmer et al., 2017), and Attentive FP (Xiong et al., 2020). Our architecture with HopfieldLayer has reached state-of-theart for predicting side effects on SIDER 0.672± 0.019 as well as for predicting β-secretase BACE 0.902± 0.023. See Table A.5 for all results, where the results of other methods are taken from Jiang et al. (2020)." }, { "heading": "A.6 PYTORCH IMPLEMENTATION OF HOPFIELD LAYERS", "text": "The implementation is available at: https://github.com/ml-jku/hopfield-layers\nA.6.1 INTRODUCTION\nIn this section, we describe the implementation of Hopfield layers in PyTorch (Paszke et al., 2017; 2019) and, additionally, provide a brief usage manual. Possible applications for a Hopfield layer in a deep network architecture comprise:\n• multiple instance learning (MIL) (Dietterich et al., 1997),\n• processing of and learning with point sets (Qi et al., 2017a;b; Xu et al., 2018),\n• set-based and permutation invariant learning (Guttenberg et al., 2016; Ravanbakhsh et al., 2016; Zaheer et al., 2017; Korshunova et al., 2018; Ilse et al., 2018; Zhai et al., 2020),\n• attention-based learning (Vaswani et al., 2017a),\n• associative learning,\n• natural language processing,\n• sequence analysis and time series prediction, and\n• storing and retrieving reference or experienced data, e.g. to store training data and retrieve it by the model or to store experiences for reinforcement learning.\nThe Hopfield layer in a deep neural network architecture can implement:\n• a memory (storage) with associative retrieval (Danihelka et al., 2016; Ba et al., 2016a),\n• conditional pooling and averaging operations (Wang et al., 2018; Ilse et al., 2020),\n• combining data by associations (Agrawal et al., 1993),\n• associative credit assignment (e.g. 
Rescorla-Wagner model or value estimation) (Sutton & Barto, 2018), and\n• attention mechanisms (Vaswani et al., 2017a; Bahdanau et al., 2014).\nIn particular, a Hopfield layer can substitute attention layers in architectures of transformer and BERT models. The Hopfield layer is designed to be used as plug-in replacement for existing layers like\n• pooling layers (max-pooling or average pooling),\n• permutation equivariant layers (Guttenberg et al., 2016; Ravanbakhsh et al., 2016),\n• GRU & LSTM layers, and\n• attention layers.\nIn contrast to classical Hopfield networks, the Hopfield layer is based on the modern Hopfield networks with continuous states that have increased storage capacity, as discussed in the main paper. Like classical Hopfield networks, the dynamics of the single heads of a Hopfield layer follow a energy minimization dynamics. The energy minimization empowers our Hopfield layer with several advantages over other architectural designs like memory cells, associative memory, or attention mechanisms. For example, the Hopfield layer has more functionality than a transformer self-attention layer (Vaswani et al., 2017a) as described in Sec. A.6.2. Possible use cases are given in Sec. A.6.3. Source code will be provided under github." }, { "heading": "A.6.2 FUNCTIONALITY", "text": "Non-standard functionalities that are added by a Hopfield layer are\n• Association of two sets,\n• Multiple Updates for precise fixed points,\n• Variable Beta that determines the kind of fixed points,\n• Dimension of the associative space for controlling the storage capacity,\n• Static Patterns for fixed pattern search, and\n• Pattern Normalization to control the fixed point dynamics by norm of the patterns and shift of the patterns.\nA functional sketch of our Hopfield layer is shown in Fig. A.7.\n•Association of two sets. The Hopfield layer makes it possible to associate two sets of vectors. This general functionality allows\n• for transformer-like self-attention,\n• for decoder-encoder attention,\n• for time series prediction (maybe with positional encoding),\n• for sequence analysis,\n• for multiple instance learning,\n• for learning with point sets,\n• for combining data sources by associations,\n• for constructing a memory,\n• for averaging and pooling operations, and\n• for many more.\nThe first set of vectors consists of S raw state patterns R = (r1, . . . , rS)T with rs ∈ Rdr and the second set of vectors consists of N raw stored patterns Y = (y1, . . . ,yN )T with yi ∈ Rdy . Both the S raw state patterns and N raw stored patterns are mapped to an associative space in Rdk via the matrices WQ ∈ Rdr×dk and WK ∈ Rdy×dk , respectively. We define a matrix Q (ΞT ) of state patterns ξn = WQrn in an associative space Rdk and a matrix K (XT ) of stored patterns xi = WKys in the associative space Rdk :\nQ = ΞT = RWQ , (549)\nK = XT = Y WK . (550)\nIn the main paper, Eq. (3) defines the novel update rule:\nξnew = f(ξ) = X softmax(β XT ξ) , (551)\nFor multiple patterns, Eq. (3) becomes:\nΞnew = f(Ξ) = X softmax(β XTΞ) , (552)\nwhere Ξ = (ξ1, . . . , ξN ) is the matrix of N state (query) patterns, X is the matrix of stored (key) patterns, and Ξnew is the matrix of new state patterns, which are averages over stored patterns. A new state pattern can also be very similar to a single stored pattern, in which case we call the stored pattern to be retrieved.\nThese matrices allow to rewrite Eq. (552) as:\n(Qnew) T = KT softmax(β K QT ) . (553)\nFor β = 1/ √ dk and changing in Eq. 
(553) softmax ∈ RN to a row vector (and evaluating a row vector), we obtain:\nQnew = softmax(1/ √ dk QK T )K , (554)\nwhere Qnew is again the matrix of new state patterns. The new state patterns Ξnew are projected via WV to the result patterns Z = ΞnewWV , where WV ∈ Rdk×dv . With the pattern projection V = KWV , we obtain the update rule Eq. (10) from the main paper:\nZ = softmax(1/ √ dk QK T ) V . (555)\n•Multiple Updates. The update Eq. (553) can be iteratively applied to the initial state ξ of every Hopfield layer head. After the last update, the new states Ξnew are projected via WV to the result patterns Z = ΞnewWV . Therefore, the Hopfield layer allows multiple update steps in the forward pass without changing the number of parameters. The number of update steps can be given for every Hopfield head individually. Furthermore, it is possible to set a threshold for the number of updates of every Hopfield head based on ‖ξ − ξnew‖2. In the general case of multiple initial states Ξ, the maximum over the individual norms is taken.\n•Variable β. In the main paper, we have identified β as a crucial parameter for the fixed point dynamics of the Hopfield network, which governs the operating mode of the attention heads. In appendix, e.g. in Lemma A7 or in Eq. (102) and Eq. (103), we showed that the characteristics of the fixed points of the new modern Hopfield network are determined by: β, M (maximal pattern norm), mmax (spread of the similar patterns), and ‖mx‖ (center of the similar patterns). Low values of β induce global averaging and higher values of β metastable states. In the transformer attention, the β parameter is set to β = 1/ √ dk as in Eq. (555). The Hopfield layer, however, allows to freely choose β > 0, since the fixed point dynamics does not only depend on the dimension of the associative space dk. Additionally, β heavily influences the gradient flow to the matricesWQ andWK . Thus, finding the right β for the respective application can be crucial.\n•Variable dimension of the associative space. Theorem A5 says that the storage capacity of the modern Hopfield network grows exponentially with the dimension of the associative space. However higher dimension of the associative space also means less averaging and smaller metastable states. The dimension of the associative space trades off storage capacity against the size of metastable states, e.g. over how many pattern is averaged. In Eq. (550) and in Eq. (549), we assumed N raw state patternsR = (r1, . . . , rN )T and S raw stored patterns Y = (y1, . . . ,yS)T that are mapped to a dk-dimensional associative space via the matricesWQ ∈ Rdr×dk andWK ∈ Rdy×dk , respectively. In the associative space Rdk , we obtain the state patternsQ = ΞT = RWQ and the stored patterns K = XT = Y WK . The Hopfield view relates the dimension dk to the number of input patterns N that have to be processed. The storage capacity depends exponentially on the dimension dk (the dimension of the associative space) and the size to metastable states is governed by this dimension, too. Consequently, dk should be chosen with respect to the number N of patterns one wants to store and the desired size of metastable states, which is the number of patterns one wants to average over. For example, if the input consists of many low dimensional input patterns, it makes sense to project the patterns into a higher dimensional space to allow a proper fixed point dynamics. 
Intuitively, this coincides with the construction of a richer feature space for the patterns.\n•Static Patterns. In Eq. (550) and Eq. (549), the N raw state patterns R = (r1, . . . , rN )T and S raw stored patterns Y = (y1, . . . ,yS)T are mapped to an associative space via the matrices WQ ∈ Rdr×dk andWK ∈ Rdy×dk , which gives the state patternsQ = ΞT = RWQ and the stored patterns K = XT = Y WK . We allow for static state and static stored patterns. Static pattern means that the pattern does not depend on the network input, i.e. it is determined by the bias weights and remains constant across different network inputs. Static state patterns allow to determine whether\nparticular fixed patterns are among the stored patterns and vice versa. The static pattern functionality is typically needed if particular patterns must be identified in the data, e.g. as described for immune repertoire classification in the main paper, where a fixed dk-dimensional state vector ξ is used.\n•Pattern Normalization. In the appendix, e.g. in Lemma A7 or in Eq. (102) and Eq. (103), we showed that the characteristics of the fixed points of the new modern Hopfield network are determined by: β, M (maximal pattern norm), mmax (spread of the similar patterns), and ‖mx‖ (center of the similar patterns). We already discussed the parameter β while the spread of the similar patterns mmax is given by the data. The remaining variables M and mx that both control the fixed point dynamics are adjusted pattern normalization. M is the maximal pattern norm andmx the center of the similar patterns. Theorem A5 says that larger M allows for more patterns to be stored. However, the size of metastable states will decrease with increasing M . The vector mx says how well the (similar) patterns are centered. If the norm ‖mx‖ is large, then this leads to smaller metastable states. The two parameters M andmx are controlled by pattern normalization and determine the size and convergence properties of metastable states. These two parameters are important for creating large gradients if heads start with global averaging which has small gradient. These two parameters can shift a head towards small metastable states which have largest gradient as shown in Fig. A.5(b). We allow for three different pattern normalizations:\n• pattern normalization of the input patterns, • pattern normalization after mapping into the associative space, • no pattern normalization.\nThe default setting is a pattern normalization of the input patterns." }, { "heading": "A.6.3 USAGE", "text": "As outlined in Sec. A.6.1, there are a variety of possible use cases for the Hopfield layer, e.g. to build memory networks or transformer models. The goal of the implementation is therefore to provide an easy to use Hopfield module that can be used in a wide range of applications, be it as part of a larger architecture or as a standalone module. Consequently, the focus of the Hopfield layer interface is set on its core parameters: the association of two sets, the scaling parameter β, the maximum number of updates, the dimension of the associative space, the possible usage of static patterns, and the pattern normalization. 
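These core parameters can be collected in a single PyTorch module, sketched below. This is a simplified stand-in, not the released implementation; in particular, LayerNorm is assumed here as one concrete realization of the pattern normalization options of Section A.6.2:

```python
import torch
import torch.nn as nn

class HopfieldHead(nn.Module):
    """Single Hopfield head exposing the core parameters: beta, number of
    updates, associative dimension d_k, an optional static state pattern,
    and optional input pattern normalization."""
    def __init__(self, d_r, d_y, d_k, d_v, beta=None, max_updates=1,
                 static_query=False, normalize=True, tol=1e-4):
        super().__init__()
        self.q_static = nn.Parameter(torch.randn(1, d_k)) if static_query else None
        self.W_q = None if static_query else nn.Linear(d_r, d_k, bias=False)
        self.W_k = nn.Linear(d_y, d_k, bias=False)
        self.W_v = nn.Linear(d_k, d_v, bias=False)
        self.norm_r = nn.LayerNorm(d_r) if (normalize and not static_query) else None
        self.norm_y = nn.LayerNorm(d_y) if normalize else None
        self.beta = beta if beta is not None else d_k ** -0.5
        self.max_updates, self.tol = max_updates, tol

    def forward(self, Y, R=None):             # Y: (N, d_y) stored, R: (S, d_r) state
        if self.norm_y is not None:
            Y = self.norm_y(Y)
        K = self.W_k(Y)
        if self.q_static is not None:
            Q = self.q_static
        else:
            Q = self.W_q(self.norm_r(R) if self.norm_r is not None else R)
        for _ in range(self.max_updates):      # Eq. (553), repeated until the
            Q_new = torch.softmax(self.beta * Q @ K.T, dim=-1) @ K
            converged = (Q_new - Q).norm(dim=-1).max() < self.tol
            Q = Q_new
            if converged:                      # update norm drops below tol
                break
        return self.W_v(Q)                     # Z = Xi_new W_V

# Self-attention corresponds to R = Y with max_updates=1, beta = 1/sqrt(d_k):
head = HopfieldHead(d_r=32, d_y=32, d_k=16, d_v=32)
Y = torch.randn(50, 32)
Z = head(Y, R=Y)
```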
The integration into the PyTorch framework is built such that with all the above functionalities disabled, the “HopfieldEncoderLayer” and the “HopfieldDecoderLayer”, both extensions of the Hopfield module, can be used as a one-to-one plug-in replacement for the TransformerEncoderLayer and the TransformerDecoderLayer, respectively, of the PyTorch transformer module.\nThe Hopfield layer can be used to implement or to substitute different layers:\n• Pooling layers: We consider the Hopfield layer as a pooling layer if only one static state (query) pattern exists. Then, it is de facto a pooling over the sequence, which results from the softmax values applied on the stored patterns. Therefore, our Hopfield layer can act as a pooling layer.\n• Permutation equivariant layers: Our Hopfield layer can be used as a plug-in replacement for permutation equivariant layers. Since the Hopfield layer is an associative memory it assumes no dependency between the input patterns.\n• GRU & LSTM layers: Our Hopfield layer can be used as a plug-in replacement for GRU & LSTM layers. Optionally, for substituting GRU & LSTM layers, positional encoding might be considered.\n• Attention layers: Our Hopfield layer can act as an attention layer, where state (query) and stored (key) patterns are different, and need to be associated.\n• Finally, the extensions of the Hopfield layer are able to operate as a self-attention layer (HopfieldEncoderLayer) and as cross-attention layer (HopfieldDecoderLayer), as described in (Vaswani et al., 2017a). As such, it can be used as building block of transformer-based or general architectures." } ]
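As a closing illustration, the HopfieldLayer configuration used in the drug-design experiments of Section A.5.4 amounts to a differentiable nearest-neighbor lookup over the training set. The sketch below stores inputs and labels separately for simplicity, whereas Section A.5.4 obtains the same effect by concatenating input and target and designing WK and WV accordingly; the value of β is illustrative:

```python
import torch
import torch.nn as nn

class HopfieldLookup(nn.Module):
    """Training inputs as stored patterns (keys), training labels as
    pattern projections (values), test inputs as state patterns (queries)."""
    def __init__(self, X_train, y_train, beta=4.0):
        super().__init__()
        self.register_buffer("keys", X_train)          # (N, d)
        self.register_buffer("values", y_train)        # (N, n_out)
        self.beta = beta

    def forward(self, query):                          # query: (B, d)
        attn = torch.softmax(self.beta * query @ self.keys.T, dim=-1)
        return attn @ self.values                      # label mix of retrieved samples

X, y = torch.randn(200, 16), torch.randn(200, 1)       # toy training set
pred = HopfieldLookup(X, y)(torch.randn(8, 16))        # (8, 1)
```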
2021
null
SP:66b06c7dee715568ea863831c610f1e6ebca05be
[ "This paper proposes a technique to reduce parameter count and improve training stability for dynamic convolutions using matrix decomposition. It looks at prior work (CondConv, DyConv) which aggregate multiple convolutional kernels via an attention score, and suggests this vanilla formulation is redundant since it sums $KC$ rank-1 matrices to create a rank-$C$ residual. The technique proposed in the paper (dynamic convolution decomposition) reduces the number of parameters via dimension reduction in the intermediate space, instead summing $L^2$ rank-1 matrices where $L^2 < C$. The authors provide experimental results on MobileNetV2 and ResNet comparing to prior works, as well as several ablations and possible extensions on their approach." ]
Recent research in dynamic convolution shows a substantial performance boost for efficient CNNs, due to the adaptive aggregation of K static convolution kernels. It has two limitations: (a) it increases the number of convolutional weights by K times, and (b) the joint optimization of dynamic attention and static convolution kernels is challenging. In this paper, we revisit it from a new perspective of matrix decomposition and reveal that the key issue is that dynamic convolution applies dynamic attention over channel groups after projecting into a higher-dimensional latent space. To address this issue, we propose dynamic channel fusion to replace dynamic attention over channel groups. Dynamic channel fusion not only enables significant dimension reduction of the latent space, but also mitigates the joint optimization difficulty. As a result, our method is easier to train and requires significantly fewer parameters without sacrificing accuracy. Source code is at https://github.com/liyunsheng13/dcd.
[ { "affiliations": [], "name": "Yunsheng Li" }, { "affiliations": [], "name": "Yinpeng Chen" }, { "affiliations": [], "name": "Xiyang Dai" }, { "affiliations": [], "name": "Mengchen Liu" }, { "affiliations": [], "name": "Dongdong Chen" }, { "affiliations": [], "name": "Ye Yu" }, { "affiliations": [], "name": "Lu Yuan" }, { "affiliations": [], "name": "Zicheng Liu" }, { "affiliations": [], "name": "Mei Chen" }, { "affiliations": [], "name": "Nuno Vasconcelos" } ]
[ { "authors": [ "Han Cai", "Chuang Gan", "Song Han" ], "title": "Once for all: Train one network and specialize it for efficient", "venue": "deployment. ArXiv,", "year": 2019 }, { "authors": [ "Hanting Chen", "Yunhe Wang", "Chunjing Xu", "Boxin Shi", "Chao Xu", "Qi Tian", "Chang Xu" ], "title": "Addernet: Do we really need multiplications in deep learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020a", "year": 2020 }, { "authors": [ "Jyun-Ruei Chen", "Xijun Wang", "Zichao Guo", "X. Zhang", "J. Sun" ], "title": "Dynamic region-aware convolution", "venue": "ArXiv, abs/2003.12243,", "year": 2020 }, { "authors": [ "Yinpeng Chen", "Xiyang Dai", "Mengchen Liu", "Dongdong Chen", "Lu Yuan", "Zicheng Liu" ], "title": "Dynamic convolution: Attention over convolution kernels", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Emily L Denton", "Wojciech Zaremba", "Joan Bruna", "Yann LeCun", "Rob Fergus" ], "title": "Exploiting linear structure within convolutional networks for efficient evaluation", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Kai Han", "Yunhe Wang", "Qi Tian", "Jianyuan Guo", "Chunjing Xu", "Chang Xu" ], "title": "Ghostnet: More features from cheap operations", "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Andrew Howard", "Mark Sandler", "Grace Chu", "Liang-Chieh Chen", "Bo Chen", "Mingxing Tan", "Weijun Wang", "Yukun Zhu", "Ruoming Pang", "Vijay Vasudevan", "Quoc V. 
Le", "Hartwig Adam" ], "title": "Searching for mobilenetv3", "venue": "URL http://arxiv.org/abs/1905", "year": 1905 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Kevin Jarrett", "Koray Kavukcuoglu", "Marc’Aurelio Ranzato", "Yann LeCun" ], "title": "What is the best multistage architecture for object recognition", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2009 }, { "authors": [ "Yong-Deok Kim", "Eunhyeok Park", "Sungjoo Yoo", "Taelim Choi", "Lu Yang", "Dongjun Shin" ], "title": "Compression of deep convolutional neural networks for fast and low power mobile applications", "venue": "arXiv preprint arXiv:1511.06530,", "year": 2015 }, { "authors": [ "Jean Kossaifi", "Antoine Toisoul", "Adrian Bulat", "Yannis Panagakis", "Timothy M Hospedales", "Maja Pantic" ], "title": "Factorized higher-order cnns with an application to spatio-temporal emotion estimation", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "L.D. Lathauwer", "B.D. Moor", "J. Vandewalle" ], "title": "A multilinear singular value decomposition", "venue": "In SIAM J. Matrix Anal. Appl,", "year": 2000 }, { "authors": [ "Vadim Lebedev", "Yaroslav Ganin", "Maksim Rakhuba", "Ivan Oseledets", "Victor Lempitsky" ], "title": "Speeding-up convolutional neural networks using fine-tuned cp-decomposition", "venue": "arXiv preprint arXiv:1412.6553,", "year": 2014 }, { "authors": [ "Xiang Li", "Wenhai Wang", "Xiaolin Hu", "Jian Yang" ], "title": "Selective kernel networks", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Ningning Ma", "Xiangyu Zhang", "Hai-Tao Zheng", "Jian Sun" ], "title": "Shufflenet v2: Practical guidelines for efficient cnn architecture design", "venue": "In The European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Ningning Ma", "X. Zhang", "J. Huang", "J. Sun" ], "title": "Weightnet: Revisiting the design space of weight networks", "venue": null, "year": 2007 }, { "authors": [ "Vinod Nair", "Geoffrey E. 
Hinton" ], "title": "Rectified linear units improve restricted boltzmann machines", "venue": "In ICML,", "year": 2010 }, { "authors": [ "Anh-Huy Phan", "Konstantin Sobolev", "Konstantin Sozykin", "Dmitry Ermilov", "Julia Gusak", "Petr Tichavsky", "Valeriy Glukhov", "Ivan Oseledets", "Andrzej Cichocki" ], "title": "Stable low-rank tensor decomposition for compression of convolutional neural network", "venue": null, "year": 2008 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: A simple way to prevent neural networks from overfitting", "venue": "Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Zhuo Su", "Linpu Fang", "Wen xiong Kang", "D. Hu", "M. Pietikäinen", "Li Liu" ], "title": "Dynamic group convolution for accelerating convolutional neural networks", "venue": null, "year": 2020 }, { "authors": [ "Mingxing Tan", "Quoc Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Mingxing Tan", "Quoc V. Le" ], "title": "Mixconv: Mixed depthwise convolutional kernels", "venue": "In 30th British Machine Vision Conference", "year": 2019 }, { "authors": [ "Mingxing Tan", "Ruoming Pang", "Quoc V. Le" ], "title": "Efficientdet: Scalable and efficient object detection", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Zhi Tian", "Chunhua Shen", "Hao Chen" ], "title": "Conditional convolutions for instance segmentation", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Brandon Yang", "Gabriel Bender", "Quoc V. Le", "Jiquan Ngiam" ], "title": "Condconv: Conditionally parameterized convolutions for efficient inference", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Jiahui Yu", "Linjie Yang", "Ning Xu", "Jianchao Yang", "Thomas Huang" ], "title": "Slimmable neural networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N. Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Xiangyu Zhang", "Xinyu Zhou", "Mengxiao Lin", "Jian Sun" ], "title": "Shufflenet: An extremely efficient convolutional neural network for mobile devices", "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Daquan Zhou", "Qi-Bin Hou", "Y. Chen", "Jiashi Feng", "S. Yan" ], "title": "Rethinking bottleneck structure for efficient mobile network design", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Dynamic convolution (Yang et al., 2019; Chen et al., 2020c) has recently become popular for the implementation of light-weight networks (Howard et al., 2017; Zhang et al., 2018b). Its ability to achieve significant performance gains with negligible computational cost has motivated its adoption for multiple vision tasks (Su et al., 2020; Chen et al., 2020b; Ma et al., 2020; Tian et al., 2020). The basic idea is to aggregate multiple convolution kernels dynamically, according to an input dependent attention mechanism, into a convolution weight matrix\nW (x) = K∑ k=1 πk(x)Wk s.t. 0 ≤ πk(x) ≤ 1, K∑ k=1 πk(x) = 1, (1)\nwhere K convolution kernels {Wk} are aggregated linearly with attention scores {πk(x)}. Dynamic convolution has two main limitations: (a) lack of compactness, due to the use ofK kernels, and (b) a challenging joint optimization of attention scores {πk(x)} and static kernels {Wk}. Yang et al. (2019) proposed the use of a sigmoid layer to generate attention scores {πk(x)}, leading to a significantly large space for the convolution kernel W (x) that makes the learning of attention scores {πk(x)} difficult. Chen et al. (2020c) replaced the sigmoid layer with a softmax function to compress the kernel space. However, small attention scores πk output by the softmax make the corresponding kernels Wk difficult to learn, especially in early training epochs, slowing training convergence. To mitigate these limitations, these two methods require additional constraints. For instance, Chen et al. (2020c) uses a large temperature in the softmax function to encourage nearuniform attention.\nIn this work, we revisit the two limitations via matrix decomposition. To expose the limitations, we reformulate dynamic convolution in terms of a set of residuals, re-defining the static kernels as\nWk = W0 + ∆Wk, k ∈ {1, . . . ,K} (2)\nwhere W0 = 1K ∑K\nk=1 Wk is the average kernel and ∆Wk = Wk −W0 a residual weight matrix. Further decomposing the latter with an SVD, ∆Wk = UkSkV Tk , leads to\nW (x) = K∑\nk=1\nπk(x)W0 + K∑\nk=1\nπk(x)UkSkV T k = W0 + UΠ(x)SV T , (3)\nwhere U = [U1, . . . ,UK ], S = diag(S1, . . . ,SK), V = [V1, . . . ,VK ], and Π(x) stacks attention scores diagonally as Π(x) = diag(π1(x)I, . . . , πK(x)I), where I is an identity matrix. This decomposition, illustrated in Figure 1, shows that the dynamic behavior of W (x) is implemented by the dynamic residual UΠ(x)SV T , which projects the input x to a higher dimensional space SV Tx (fromC toKC channels), applies dynamic attention Π(x) over channel groups, and reduces the dimension back to C channels, through multiplication by U . This suggests that the limitations of vanilla dynamic convolution are due to the use of attention over channel groups, which induces a high dimensional latent space, leading to small attention values that may suppress the learning of the corresponding channels.\nTo address this issue, we propose a dynamic convolution decomposition (DCD), that replaces dynamic attention over channel groups with dynamic channel fusion. The latter is based on a full dynamic matrix Φ(x), of which each element φi,j(x) is a function of input x. As shown in Figure 1-(right), the dynamic residual is implemented as the product PΦ(x)QT of Φ(x) and two static matrices P ,Q, such that Q compresses the input into a low dimensional latent space, Φ(x) dynamically fuses the channels in this space, and P expands the number of channels to the output space. 
The key innovation is that dynamic channel fusion with Φ(x) enables a significant dimensionality reduction of the latent space (Q^T x ∈ R^L, L ≪ C). Hence the number of parameters in P, Q is significantly reduced when compared to U, V of Eq. 3, resulting in a more compact model. Dynamic channel fusion also mitigates the joint optimization challenge of vanilla dynamic convolution, as each column of P, Q is associated with multiple dynamic coefficients of Φ(x). Hence, a few dynamic coefficients of small value are not sufficient to suppress the learning of the static matrices P, Q. Experimental results show that DCD both significantly reduces the number of parameters and achieves higher accuracy than vanilla dynamic convolution, without requiring the additional constraints of (Yang et al., 2019; Chen et al., 2020c)." }, { "heading": "2 RELATED WORK", "text": "Efficient CNNs: MobileNet (Howard et al., 2017; Sandler et al., 2018; Howard et al., 2019) decomposes k × k convolution into a depthwise and a pointwise convolution. ShuffleNet (Zhang et al., 2018b; Ma et al., 2018) uses group convolution and channel shuffle to further simplify pointwise convolution. Further improvements of these architectures have been investigated recently. EfficientNet (Tan & Le, 2019a; Tan et al., 2020) finds a proper relationship between input resolution and width/depth of the network. Tan & Le (2019b) mix multiple kernel sizes in a single convolution. Chen et al. (2020a) trades massive multiplications for much cheaper additions. Han et al. (2020) applies a series of cheap linear transformations to generate ghost feature maps. Zhou et al. (2020) flips the structure of inverted residual blocks to alleviate information loss. Yu et al. (2019) and Cai et al. (2019) train one network that supports multiple sub-networks of different complexities.

Matrix Decomposition: Lebedev et al. (2014) and Denton et al. (2014) use Canonical Polyadic decomposition (CPD) of convolution kernels to speed up networks, while Kim et al. (2015) investigates Tucker decompositions for the same purpose. More recently, Kossaifi et al. (2020) combines tensor decompositions with MobileNet to design efficient higher-order networks for video tasks, while Phan et al. (2020) proposes a stable CPD to deal with degeneracies of tensor decompositions during network training. Unlike DCD, which decomposes a convolutional kernel dynamically by adapting the core matrix to the input, these works all rely on static decompositions.

Dynamic Neural Networks: Dynamic networks boost representation power by adapting parameters or activation functions to the input. Ha et al. (2017) uses a secondary network to generate parameters for the main network. Hu et al. (2018) reweights channels by squeezing global context. Li et al. (2019) adapts attention over kernels of different sizes. Dynamic convolution (Yang et al., 2019; Chen et al., 2020c) aggregates multiple convolution kernels based on attention. Ma et al. (2020) uses a grouped fully connected layer to generate convolutional weights directly. Chen et al. (2020b) extends dynamic convolution from spatially agnostic to spatially specific. Su et al. (2020) proposes dynamic group convolution that adaptively selects input channels to form groups. Tian et al. (2020) applies dynamic convolution to instance segmentation. Chen et al. (2020d) adapts the slopes and intercepts of the two linear functions in ReLU (Nair & Hinton, 2010; Jarrett et al., 2009)."
}, { "heading": "3 DYNAMIC CONVOLUTION DECOMPOSITION", "text": "In this section, we introduce the dynamic convolution decomposition proposed to address the limitations of vanilla dynamic convolution. For conciseness, we assume a kernel W with the same number of input and output channels (Cin = Cout = C) and ignore bias terms. We focus on 1 × 1 convolution in this section and generalize the procedure to k×k convolution in the following section." }, { "heading": "3.1 REVISITING VANILLA DYNAMIC CONVOLUTION", "text": "Vanilla dynamic convolution aggregates K convolution kennels {Wk} with attention scores {πk(x)} (see Eq. 1). It can be reformulated as adding a dynamic residual to a static kernel, and the dynamic residual can be further decomposed by SVD (see Eq. 3), as shown in Figure 1. This has two limitations. First, the model is not compact. Essentially, it expands the number of channels by a factor of K and applies dynamic attention over K channel groups. The dynamic residual UΠ(x)SV T is a C × C matrix, of maximum rank C, but sums KC rank-1 matrices, since\nW (x) = W0 + UΠ(x)SV T = W0 + KC∑ i=1 πdi/Ce(x)uisi,iv T i , (4)\nwhere ui is the ith column vector of matrix U , vi is the ith column vector of matrix V , si,i is the ith diagonal entry of matrix S and d·e is ceiling operator. The static basis vectors ui and vi are not shared across different rank-1 matrices (πdi/Ce(x)uisi,ivTi ). This results in model redundancy. Second, it is difficult to jointly optimize static matrices U , V and dynamic attention Π(x). This is because a small attention score πdi/Ce may suppress the learning of corresponding columns ui, vi in U and V , especially in early training epochs (as shown in Chen et al. (2020c))." }, { "heading": "3.2 DYNAMIC CHANNEL FUSION", "text": "We propose to address the limitations of the vanilla dynamic convolution with a dynamic channel fusion mechanism, implemented with a full matrix Φ(x), where each element φi,j(x) is a function of input x. Φ(x) is a L × L matrix, dynamically fusing channels in the latent space RL. The key idea is to significantly reduce dimensionality in the latent space, L C, to enable a more compact model. Dynamic convolution is implemented with dynamic channel fusion using\nW (x) = W0 + PΦ(x)Q T = W0 + L∑ i=1 L∑ j=1 piφi,j(x)q T j , (5)\nwhere Q ∈ RC×L compresses the input into a low dimensional space (QTx ∈ RL), the resulting L channels are fused dynamically by Φ(x) ∈ RL×L and expanded to the number of output channels\nby P ∈ RC×L. This is denoted as dynamic convolution decomposition (DCD). The dimension L of the latent space is constrained by L2 < C. The default value of L in this paper is empirically set to b C\n2blog2 √\nCc c, which means dividing C by 2 repeatedly until it is less than √ C.\nWith this new design, the number of static parameters is significantly reduced (i.e. LC parameters in P or Q v.s. KC2 parameters in U or V , L < √ C), resulting in a more compact model. Mathematically, the dynamic residual PΦ(x)QT sums L2 rank-1 matrices piφi,j(x)qTj , where pi is the ith column vector of P , and qj is the jth column vector of Q. The constraint L2 < C, guarantees that this number (L2) is much smaller than the counterpart (KC) of vanilla dynamic convolution (see Eq. 4). Nevertheless, due to the use of a full matrix, dynamic channel fusion Φ(x) retains the representation power needed to achieve good classification performance.\nDCD also mitigates the joint optimization difficulty. 
DCD also mitigates the joint optimization difficulty: since each column of P (or Q) is associated with multiple dynamic coefficients (e.g., pi is related to φi,1, . . . , φi,L), it is unlikely that the learning of pi is suppressed by a few dynamic coefficients of small value.

In summary, DCD performs dynamic aggregation differently from vanilla dynamic convolution. Vanilla dynamic convolution uses a shared dynamic attention mechanism to aggregate unshared static basis vectors in a high dimensional latent space. In contrast, DCD uses an unshared dynamic channel fusion mechanism to aggregate shared static basis vectors in a low dimensional latent space." }, { "heading": "3.3 MORE GENERAL FORMULATION", "text": "So far, we have focused on the dynamic residual and shown that dynamic channel fusion enables a compact implementation of dynamic convolution. We next discuss the static kernel W0. Originally, it is multiplied by a dynamic scalar Σ_k πk(x), which is canceled in Eq. 3 as the attention scores sum to one. Relaxing the constraint Σ_k πk(x) = 1 results in the more general form

W(x) = Λ(x)W0 + PΦ(x)Q^T,   (6)

where Λ(x) is a C × C diagonal matrix and λi,i(x) a function of x. In this way, Λ(x) implements channel-wise attention after the static kernel W0, generalizing Eq. 5, where Λ(x) is an identity matrix. Later, we will see that this generalization enables additional performance gains.

Relation to Squeeze-and-Excitation (SE) (Hu et al., 2018): The dynamic channel-wise attention mechanism implemented by Λ(x) is related to but different from SE. It is parallel to a convolution and shares the input with the convolution. It can be thought of as either a dynamic convolution kernel y = (Λ(x)W0)x or an input-dependent attention mechanism applied to the output feature map of the convolution y = Λ(x)(W0 x). Thus, its computational complexity is min(O(C²), O(HWC)), where H and W are the height and width of the feature map.

In contrast, SE is placed after a convolution and uses the output of the convolution as input. It can only apply channel attention on the output feature map of the convolution as y = Λ(z)z, where z = W0 x. Its computational complexity is O(HWC). Clearly, SE requires more computation than the dynamic channel-wise attention Λ(x) when the resolution of the feature map (H × W) is high." }, { "heading": "3.4 DYNAMIC CONVOLUTION DECOMPOSITION LAYER", "text": "Implementation: Figure 2 shows the diagram of a dynamic convolution decomposition (DCD) layer. It uses a light-weight dynamic branch to generate coefficients for both the dynamic channel-wise attention Λ(x) and the dynamic channel fusion Φ(x). Similar to Squeeze-and-Excitation (Hu et al., 2018), the dynamic branch first applies average pooling to the input x. This is followed by two fully connected (FC) layers with an activation layer between them. The first FC layer reduces the number of channels by r and the second expands them into C + L² outputs (C for Λ and L² for Φ). Eq. 6 is finally used to generate the convolutional weights W(x). Similarly to a static convolution, a DCD layer also includes a batch normalization and an activation (e.g., ReLU) layer.

Parameter Complexity: DCD has similar FLOPs to the vanilla dynamic convolution. Here, we focus on parameter complexity. Static convolution and vanilla dynamic convolution require C² and KC² parameters, respectively. DCD requires C², CL, and CL parameters for the static matrices W0, P and Q, respectively.
An additional (2C + L²)C/r parameters are required by the dynamic branch to generate Λ(x) and Φ(x), where r is the reduction rate of the first FC layer. The total complexity is C² + 2CL + (2C + L²)C/r. Since L is constrained as L² < C, the complexity upper bound is (1 + 3/r)C² + 2C√C. When choosing r = 16, the complexity is about 1 3/16 C² (i.e., roughly 1.19C²). This is much less than what is typical for vanilla dynamic convolution (4C² in Chen et al. (2020c) and 8C² in Yang et al. (2019)).

[Figure 3: Sparse dynamic residual, which is represented as a diagonal block matrix. Each diagonal block is decomposed separately as Pb Φb Qb^T. Note that the static kernel W0 is still a full size matrix.]

" }, { "heading": "4 EXTENSIONS OF DYNAMIC CONVOLUTION DECOMPOSITION", "text": "In this section, we extend the dynamic decomposition of 1 × 1 convolution (Eq. 6) in three ways: (a) a sparse dynamic residual, where PΦ(x)Q^T is a diagonal block matrix, (b) k × k depthwise convolution, and (c) k × k convolution. Here, k refers to the kernel size." }, { "heading": "4.1 DCD WITH SPARSE DYNAMIC RESIDUAL", "text": "The dynamic residual PΦ(x)Q^T can be further simplified into a block-diagonal matrix of blocks Pb Φb(x) Qb^T, b ∈ {1, . . . , B}, leading to

W(x) = Λ(x)W0 + ⊕_{b=1}^{B} Pb Φb(x) Qb^T,   (7)

where ⊕_{i=1}^{n} Ai = diag(A1, . . . , An). This form has Eq. 6 as a special case, where B = 1. Note that the static kernel W0 is still a full matrix and only the dynamic residual is sparse (see Figure 3). We will show later that keeping as few as 1/8 of the entries of the dynamic residual non-zero (B = 8) causes only minimal performance degradation, still significantly outperforming a static kernel.

4.2 DCD OF k × k DEPTHWISE CONVOLUTION

The weights of a k × k depthwise convolution kernel form a C × k² matrix. DCD can be generalized to such matrices by replacing in Eq. 6 the matrix Q (which squeezes the number of channels) with a matrix R (which squeezes the number of kernel elements):

W(x) = Λ(x)W0 + PΦ(x)R^T,   (8)

where W(x) and W0 are C × k² matrices, Λ(x) is a diagonal C × C matrix that implements channel-wise attention, R is a k² × Lk matrix that reduces the number of kernel elements from k² to Lk, Φ(x) is an Lk × Lk matrix that performs dynamic fusion along the Lk latent kernel elements, and P is a C × Lk weight matrix for depthwise convolution over the Lk kernel elements. The default value of Lk is ⌊k²/2⌋. Since depthwise convolution is channel separable, Φ(x) does not fuse channels, fusing instead the Lk latent kernel elements.

4.3 DCD OF k × k CONVOLUTION

Joint fusion of channels and kernel elements: A k × k convolution kernel forms a C × C × k² tensor. DCD can be generalized to such tensors by extending Eq. 6 into a tensor form (see Figure 4):

W(x) = W0 ×₂ Λ(x) + Φ(x) ×₁ Q ×₂ P ×₃ R,   (9)

where ×n refers to n-mode multiplication (Lathauwer et al., 2000), W0 is a C × C × k² tensor, Λ(x) is a diagonal C × C matrix that implements channel-wise attention, Q is a C × L matrix that reduces the number of input channels from C to L, R is a k² × Lk matrix that reduces the number of kernel elements from k² to Lk, Φ(x) is an L × L × Lk tensor that performs joint fusion of the L channels over the Lk latent kernel elements, and P is a C × L matrix that expands the number of channels from L to C. The numbers of latent channels L and latent kernel elements Lk are constrained by Lk < k² and L²Lk ≤ C.
Their default values are set empirically to Lk = ⌊k²/2⌋ and L = ⌊(C/Lk) / 2^⌊log2 √(C/Lk)⌋⌋.

Channel fusion alone: We found that the fusion of channels Φ(x) ×₁ Q is more important than the fusion of kernel elements Φ(x) ×₃ R. Therefore, we reduce Lk to 1 and increase L accordingly. R is simplified into a one-hot vector [0, . . . , 0, 1, 0, . . . , 0]^T, where the '1' is located at the center (assuming that k is an odd number). As illustrated in Figure 4-(b), the tensor of the dynamic residual Φ(x) ×₁ Q ×₂ P ×₃ R only has one non-zero slice, which is equivalent to a 1 × 1 convolution. Therefore, the DCD of a k × k convolution is essentially adding a 1 × 1 dynamic residual to a static k × k kernel." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we present the results of DCD on ImageNet classification (Deng et al., 2009). ImageNet has 1,000 classes with 1,281,167 training and 50,000 validation images. We also report ablation studies on different components of the approach.

All experiments are based on two network architectures: ResNet (He et al., 2016) and MobileNetV2 (Sandler et al., 2018). DCD is implemented on all convolutional layers of ResNet and all 1 × 1 convolutional layers of MobileNetV2. The reduction ratio r is set to 16 for ResNet and MobileNetV2 ×1.0, and to 8 for smaller models (MobileNetV2 ×0.5 and ×0.35). All models are trained by SGD with momentum 0.9. The batch size is 256 and the remaining training parameters are as follows.

ResNet: The learning rate starts at 0.1 and is divided by 10 every 30 epochs. The model is trained for 100 epochs. Dropout (Srivastava et al., 2014) of 0.1 is used only for ResNet-50.

MobileNetV2: The initial learning rate is 0.05 and decays to 0 in 300 epochs, according to a cosine function. A weight decay of 2e-5 and a dropout rate of 0.1 are also used. For MobileNetV2 ×1.0, Mixup (Zhang et al., 2018a) and label smoothing are further added to avoid overfitting.

Table 1: Different formulations of dynamic convolution decomposition on ImageNet classification.

(a) MobileNetV2 ×0.5:
Model | Params | MAdds | Top-1
W0 (static) | 2.0M | 97.0M | 65.4
ΛW0 | 2.4M | 97.4M | 68.2
W0 + PΦQ^T | 2.7M | 104.4M | 69.2
ΛW0 + PΦQ^T | 2.9M | 104.6M | 69.8

(b) ResNet-18:
Model | Params | MAdds | Top-1
W0 (static) | 11.1M | 1.81G | 70.4
ΛW0 | 11.7M | 1.81G | 71.5
W0 + PΦQ^T | 13.6M | 1.83G | 72.8
ΛW0 + PΦQ^T | 14.0M | 1.83G | 73.1" }, { "heading": "5.1 INSPECTING DIFFERENT DCD FORMULATIONS", "text": "Table 1 summarizes the influence of the different components (e.g., dynamic channel fusion Φ(x), dynamic channel-wise attention Λ(x)) of DCD on MobileNetV2 ×0.5 and ResNet-18 performance. The table shows that both dynamic components, Λ(x) and Φ(x) of Eq. 6, enhance accuracy substantially (+2.8% and +3.8% for MobileNetV2 ×0.5, +1.1% and +2.4% for ResNet-18) when compared to the static baseline. Using dynamic channel fusion only (W0 + PΦQ^T) has slightly more parameters, FLOPs, and accuracy than using dynamic channel-wise attention only (ΛW0). The combination of the two mechanisms provides additional improvement." }, { "heading": "5.2 ABLATIONS", "text": "A number of ablations were performed on MobileNetV2 ×0.5 to analyze DCD performance in terms of two questions:

1. How does the dimension (L) of the latent space affect performance?
2. How do the three DCD variants perform?

The default configuration is the general form of DCD (Eq. 6) with a full size dynamic residual (B = 1) for all pointwise convolution layers. The default latent space dimension is L = ⌊C / 2^⌊log2 √C⌋⌋.
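As a small aid, the helper below (our own sketch, not from the released code) evaluates this default rule; for example, C = 256 gives L = 16 and C = 96 gives L = 12:

```python
import math

def default_L(C: int) -> int:
    """Default latent dimension: L = floor(C / 2**floor(log2(sqrt(C))))."""
    return C // 2 ** int(math.log2(math.sqrt(C)))

assert default_L(256) == 16 and default_L(96) == 12
```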
Latent Space Dimension L: The dynamic channel fusion matrix Φ(x) has size L × L. Thus, L controls both the representation and the parameter complexity of DCD. We adjust it by applying different multipliers to the default value of L. Table 2 shows the results of MobileNetV2 ×0.5 for four multiplier values ranging from ×1.0 to ×0.25. As L decreases, fewer parameters are required and the performance degrades slowly. Even with a very low dimensional latent space (L ×0.25), DCD still outperforms the static baseline by 3.3% top-1 accuracy.

Number of Diagonal Blocks B in the Dynamic Residual: Table 3-(a) shows classification results for four values of B. The dynamic residual is a full matrix when B = 1, while only 1/8 of its entries are non-zero for B = 8. Accuracy degrades slowly as the dynamic residual becomes sparser (increasing B). The largest performance drop happens when B is changed from 1 to 2, as half of the weight matrix W(x) becomes static. However, performance is still significantly better than that of the static baseline. The fact that even the sparsest setting B = 8 outperforms the static baseline by 2.9% (from 65.4% to 68.3%) demonstrates the representation power of the dynamic residual. In all cases, the dynamic channel-wise attention Λ(x) enables additional performance gains. (A short sketch of how this block-diagonal residual can be assembled follows the summary below.)

DCD at Different Layers: Table 3-(b) shows the results of implementing DCD for three different types of layers: (a) DW: depthwise convolution (Eq. 8), (b) PW: pointwise convolution (Eq. 6), and (c) CLS: the fully connected classifier, which is a special case of pointwise convolution (the input resolution is 1 × 1). Using DCD in any type of layer improves on the performance of the static baseline (+2.9% for depthwise convolution, +4.4% for pointwise convolution, and +1.2% for the classifier). Combining DCD for both pointwise convolution and the classifier achieves the best performance (+4.8%). We notice a performance drop (from 70.2% to 70.0%) when using DCD in all three types of layers. We believe this is due to overfitting, as it has higher training accuracy.

Extension to 3 × 3 Convolution: We use ResNet-18, which stacks 16 layers of 3 × 3 convolution, to study the 3 × 3 extension of DCD (see Section 4.3). Compared to the static baseline (70.4% top-1 accuracy), DCD with joint fusion of channels and kernel elements (Eq. 9) improves top-1 accuracy (71.3%) by 0.9%. The top-1 accuracy is further improved by 1.8% (73.1%) when using DCD with channel fusion alone, which transforms the dynamic residual into a 1 × 1 convolution matrix (see Figure 4-(b)). This demonstrates that dynamic fusion is more effective across channels than across kernel elements.

Summary: Based on the ablations above, DCD should be implemented with both dynamic channel fusion Φ and dynamic channel-wise attention Λ, the default latent space dimension L, and a full size residual B = 1. DCD is recommended for pointwise convolution and classifier layers in MobileNetV2. For 3 × 3 convolutions in ResNet, DCD should be implemented with channel fusion alone. The model can be made more compact, for a slight performance drop, by (a) removing the dynamic channel-wise attention Λ, (b) reducing the latent space dimension L, (c) using a sparser dynamic residual (increasing B), and (d) implementing DCD in depthwise convolution alone."
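Complementing the ablation on B above, the sketch below assembles the block-diagonal residual of Eq. 7 for one sample. Shapes and names are illustrative assumptions of ours, not the released implementation:

```python
import torch

B_blocks, C, L = 4, 64, 8                # B diagonal blocks over C channels
Cb = C // B_blocks                       # channels handled by each block
P = [torch.randn(Cb, L) for _ in range(B_blocks)]
Q = [torch.randn(Cb, L) for _ in range(B_blocks)]
Phi = [torch.randn(L, L) for _ in range(B_blocks)]   # from the dynamic branch

residual = torch.block_diag(*[p @ f @ q.t() for p, f, q in zip(P, Phi, Q)])
assert residual.shape == (C, C)          # only 1/B of the entries can be non-zero
W0 = torch.randn(C, C)                   # the static kernel stays a full matrix
Wx = W0 + residual                       # Eq. 7 with Lambda(x) omitted for brevity
```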
}, { "heading": "5.3 MAIN RESULTS", "text": "DCD was compared to the vanilla dynamic convolution (Yang et al., 2019; Chen et al., 2020c) for MobileNetV2 and ResNet, using the settings recommended above, with the results of\nTable 41. DCD significantly reduces the number of parameters while improving the performance of both network architectures. For MobileNetV2 ×1.0, DCD only requires 50% of the parameters of (Chen et al., 2020c) and 25% of the parameters of (Yang et al., 2019). For ResNet-18, it only requires 33% of the parameters of (Chen et al., 2020c), while achieving a 0.4% gain in top-1 accuracy. Although DCD requires slightly more MAdds than (Chen et al., 2020c), the increment is negligible. These results demonstate that DCD is more compact and effective.\nFigure 5 compares DCD to DY-Conv (Chen et al., 2020c) in terms of training convergence. DY-Conv uses a large temperature in its softmax to alleviate the joint optimization difficulty and make training more efficient. Without any additional parameter tuning, DCD converges even faster than DY-Conv with a large temperature and achieves higher accuracy.\n5.4 ANALYSIS OF DYNAMIC CHANNEL FUSION\nTo validate the dynamic property, Φ(x) should have different values over different images. We measure this by averaging the variance of each entry σΦ = ∑ i,j σi,j/L\n2, where σi,j is the variance of φi,j(x), over all validation images. To compare σΦ across layers, we normalize it by the variance of the corresponding input feature map. Figure 6 shows the normalized variance σΦ across layers in MobileNetV2. Clearly, the dynamic coefficients vary more in the higher layers. We believe this is because the higher layers encode more context information, providing more clues to adapt convolution weights." }, { "heading": "5.5 INFERENCE TIME", "text": "We use a single-threaded core AMD EPYC CPU 7551P (2.0 GHz) to measure running time (in milliseconds) on MobileNetV2 ×0.5 and ×1.0. Running time is calculated by averaging the inference time of 5,000 images with batch size 1. Both static baseline and DCD are implemented in PyTorch. Compared with the static baseline, DCD consumes about 8% more MAdds (97.0M vs 104.8M) and 14% more running time (91ms vs 104ms) for MobileNetV2 ×0.5. For MobileNetV2 ×1.0, DCD consumes 9% more MAdds (300.0M vs 326.0M) and 12% more running time (146ms vs 163ms). The overhead is higher in running time than MAdds. We believe this is because the optimizations of global average pooling and fully connected layers are not as efficient as convolution. This small penalty in inference time is justified by the DCD gains of 4.8% and 3.2% top-1 accuracy over MobileNetV2 ×0.5 and ×1.0 respectively." }, { "heading": "6 CONCLUSION", "text": "In this paper, we have revisited dynamic convolution via matrix decomposition and demonstrated the limitations of dynamic attention over channel groups: it multiplies the number of parameters by K and increases the difficulty of joint optimization. We proposed a dynamic convolution decomposition to address these issues. This applies dynamic channel fusion to significantly reduce the dimensionality of the latent space, resulting in a more compact model that is easier to learn with often improved accuracy. We hope that our work provides a deeper understanding of the gains recently observed for dynamic convolution.\n1The baseline results are from the original papers. Our implementation, under the setup used for DCD, has either similar or slightly lower results, e.g. 
for MobileNetV2 ×1.0 the original paper reports 72.0%, while our implementation achieves 71.8%." } ]
2021
Revisiting Dynamic Convolution via Matrix Decomposition
SP:8f15ab1eb05ed48f21dd35a118eb299040960074
[ "In this paper, the authors study the connection between GNNs and local clustering, and find that short random-walks in GNNs have a high probability to be stuck at a local cluster. Based on this, they propose a light and scalable GNN learning framework called LCGNN, which first adopts the local clustering method PPR-Nibble to partition full graph into subgraphs, then use GNN modules on subgraphs for training and inference. The authors evaluate LCGNN on six OGB datasets, and the proposed approach outperforms the competitors on node classification and link prediction tasks." ]
Graph Neural Networks (GNNs), which benefit various real-world problems and applications, have emerged as a powerful technique for learning graph representations. The depth of a GNN model, denoted by K, restricts the receptive field of a node to its K-hop neighbors and plays a subtle role in the performance of GNNs. Recent works demonstrate how different choices of K produce a trade-off between increasing representation capacity and avoiding over-smoothing. We establish a theoretical connection between GNNs and local clustering, showing that short random-walks in GNNs have a high probability to be stuck at a local cluster. Based on the theoretical analysis, we propose Local Clustering Graph Neural Networks (LCGNN), a GNN learning paradigm that utilizes local clustering to efficiently search for small but compact subgraphs for GNN training and inference. Compared to full-batch GNNs, sampling-based GNNs and graph partition-based GNNs, LCGNN performs comparably or even better, achieving state-of-the-art results on four Open Graph Benchmark (OGB) datasets. The locality of LCGNN allows it to scale to graphs with 100M nodes and 1B edges on a single GPU.
[]
[ { "authors": [ "Reid Andersen", "Fan Chung", "Kevin Lang" ], "title": "Local graph partitioning using pagerank vectors", "venue": "In FOCS", "year": 2006 }, { "authors": [ "Lukas Biewald" ], "title": "Experiment tracking with weights and biases, 2020. Software available from wandb.com", "venue": null, "year": 2020 }, { "authors": [ "Jianfei Chen", "Jun Zhu", "Le Song" ], "title": "Stochastic training of graph convolutional networks with variance reduction", "venue": "In ICML ’18,", "year": 2018 }, { "authors": [ "Jie Chen", "Tengfei Ma", "Cao Xiao" ], "title": "Fastgcn: fast learning with graph convolutional networks via importance sampling", "venue": "ICLR ’18,", "year": 2018 }, { "authors": [ "Ming Chen", "Zhewei Wei", "Zengfeng Huang", "Bolin Ding", "Yaliang Li" ], "title": "Simple and deep graph convolutional networks", "venue": "ICML ’20,", "year": 2020 }, { "authors": [ "Wei-Lin Chiang", "Xuanqing Liu", "Si Si", "Yang Li", "Samy Bengio", "Cho-Jui Hsieh" ], "title": "Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks", "venue": "In KDD", "year": 2019 }, { "authors": [ "Fan Chung" ], "title": "The heat kernel as the pagerank of a graph", "venue": "PNAS ’07,", "year": 2007 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In NeurIPS", "year": 2016 }, { "authors": [ "Nima Dehmamy", "Albert-László Barabási", "Rose Yu" ], "title": "Understanding the representation power of graph neural networks in learning graph topology", "venue": "In NeurIPS", "year": 2019 }, { "authors": [ "Chenhui Deng", "Zhiqiang Zhao", "Yongyu Wang", "Zhiru Zhang", "Zhuo Feng" ], "title": "Graphzoom: A multilevel spectral approach for accurate and scalable graph embedding", "venue": "ICLR ’20,", "year": 2020 }, { "authors": [ "Robin Dunbar" ], "title": "Grooming, gossip, and the evolution of language", "venue": null, "year": 1998 }, { "authors": [ "Vijay Prakash Dwivedi", "Chaitanya K Joshi", "Thomas Laurent", "Yoshua Bengio", "Xavier Bresson" ], "title": "Benchmarking graph neural networks", "venue": "arXiv preprint arXiv:2003.00982,", "year": 2020 }, { "authors": [ "Kimon Fountoulakis", "David F Gleich", "Michael W Mahoney" ], "title": "A short introduction to local graph clustering methods and software", "venue": "Book of Abstracts for 7th International Conference on Complex Networks and Their Applications,", "year": 2018 }, { "authors": [ "Kimon Fountoulakis", "Farbod Roosta-Khorasani", "Julian Shun", "Xiang Cheng", "Michael W Mahoney" ], "title": "Variational perspective on local graph clustering", "venue": "Mathematical Programming,", "year": 2019 }, { "authors": [ "Michelle Girvan", "Mark EJ Newman" ], "title": "Community structure in social and biological networks", "venue": "PNAS ’02,", "year": 2002 }, { "authors": [ "Xavier Glorot", "Antoine Bordes", "Yoshua Bengio" ], "title": "Deep sparse rectifier neural networks", "venue": "In AISTATS", "year": 2011 }, { "authors": [ "Aditya Grover", "Jure Leskovec" ], "title": "node2vec: Scalable feature learning for networks", "venue": "In KDD", "year": 2016 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In NeurIPS", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR", "year": 2016 }, { 
"authors": [ "Weihua Hu", "Matthias Fey", "Marinka Zitnik", "Yuxiao Dong", "Hongyu Ren", "Bowen Liu", "Michele Catasta", "Jure Leskovec" ], "title": "Open graph benchmark: Datasets for machine learning on graphs", "venue": "NeurIPS", "year": 2020 }, { "authors": [ "Wenbing Huang", "Tong Zhang", "Yu Rong", "Junzhou Huang" ], "title": "Adaptive sampling towards fast graph representation learning", "venue": "In NeurIPS", "year": 2018 }, { "authors": [ "Jiawei Jiang", "Pin Xiao", "Lele Yu", "Xiaosen Li", "Jiefeng Cheng", "Xupeng Miao", "Zhipeng Zhang", "Bin Cui" ], "title": "Psgraph: How tencent trains extremely large-scale graphs with spark", "venue": "In ICDE", "year": 2020 }, { "authors": [ "George Karypis", "Vipin Kumar" ], "title": "A fast and high quality multilevel scheme for partitioning irregular graphs", "venue": "SIAM Journal on scientific Computing,", "year": 1998 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "ICLR ’17,", "year": 2017 }, { "authors": [ "Kyle Kloster", "David F Gleich" ], "title": "Heat kernel based community detection", "venue": "In KDD ’14,", "year": 2014 }, { "authors": [ "Isabel M Kloumann", "Jon M Kleinberg" ], "title": "Community membership identification from small seed sets", "venue": "In KDD", "year": 2014 }, { "authors": [ "Jure Leskovec", "Christos Faloutsos" ], "title": "Sampling from large graphs", "venue": "In KDD ’06,", "year": 2006 }, { "authors": [ "Jure Leskovec", "Jon Kleinberg", "Christos Faloutsos" ], "title": "Graphs over time: densification laws, shrinking diameters and possible explanations", "venue": "In KDD", "year": 2005 }, { "authors": [ "Jure Leskovec", "Kevin J Lang", "Anirban Dasgupta", "Michael W Mahoney" ], "title": "Community structure in large networks: Natural cluster sizes and the absence of large well-defined clusters", "venue": "Internet Mathematics,", "year": 2009 }, { "authors": [ "Guohao Li", "Matthias Muller", "Ali Thabet", "Bernard Ghanem" ], "title": "Deepgcns: Can gcns go as deep as cnns", "venue": "In ICCV", "year": 2019 }, { "authors": [ "Guohao Li", "Chenxin Xiong", "Ali Thabet", "Bernard Ghanem" ], "title": "Deepergcn: All you need to train deeper gcns", "venue": "arXiv preprint arXiv:2006.07739,", "year": 2020 }, { "authors": [ "Yixuan Li", "Kun He", "David Bindel", "John E Hopcroft" ], "title": "Uncovering the small community structure in large networks: A local spectral approach", "venue": "In WWW", "year": 2015 }, { "authors": [ "Meng Liu", "Hongyang Gao", "Shuiwang Ji" ], "title": "Towards deeper graph neural networks", "venue": "In KDD", "year": 2020 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "Decoupled weight decay regularization", "venue": "In ICLR", "year": 2019 }, { "authors": [ "Wenjuan Luo", "Han Zhang", "Xiaodi Yang", "Lin Bo", "Xiaoqing Yang", "Zang Li", "Xiaohu Qie", "Jieping Ye" ], "title": "Dynamic heterogeneous graph neural network for real-time event prediction", "venue": "In KDD", "year": 2020 }, { "authors": [ "Lingxiao Ma", "Zhi Yang", "Youshan Miao", "Jilong Xue", "Ming Wu", "Lidong Zhou", "Yafei Dai" ], "title": "Neugraph: parallel deep neural network computation on large graphs", "venue": "In USENIX ATC", "year": 2019 }, { "authors": [ "Mark EJ Newman" ], "title": "Modularity and community structure in networks", "venue": "PNAS ’06,", "year": 2006 }, { "authors": [ "Bryan Perozzi", "Rami Al-Rfou", "Steven Skiena" ], "title": "Deepwalk: Online learning of social representations", 
"venue": "In KDD", "year": 2014 }, { "authors": [ "Omri Puny", "Heli Ben-Hamu", "Yaron Lipman" ], "title": "From graph low-rank global attention to 2-fwl approximation", "venue": "ICML 2020 Workshop of Graph Representation Learning and Beyond,", "year": 2020 }, { "authors": [ "Yunsheng Shi", "Zhengjie Huang", "Shikun Feng", "Yu Sun" ], "title": "Masked label prediction: Unified massage passing model for semi-supervised classification", "venue": "arXiv preprint arXiv:2009.03509,", "year": 2020 }, { "authors": [ "Jiřı́ Šı́ma", "Satu Elisa Schaeffer" ], "title": "On the np-completeness of some graph cluster measures", "venue": "In International Conference on Current Trends in Theory and Practice of Computer Science,", "year": 2006 }, { "authors": [ "Daniel A Spielman", "Shang-Hua Teng" ], "title": "A local clustering algorithm for massive graphs and its application to nearly linear time graph partitioning", "venue": "SIAM Journal on computing,", "year": 2013 }, { "authors": [ "Damien Teney", "Lingqiao Liu", "Anton van Den Hengel" ], "title": "Graph-structured representations for visual question answering", "venue": "In CVPR", "year": 2017 }, { "authors": [ "Amanda L Traud", "Peter J Mucha", "Mason A Porter" ], "title": "Social structure of facebook networks", "venue": "Physica A: Statistical Mechanics and its Applications,", "year": 2012 }, { "authors": [ "Luis M Vaquero", "Felix Cuadrado", "Dionysios Logothetis", "Claudio Martella" ], "title": "Adaptive partitioning for large-scale dynamic graphs", "venue": "In ICDCS", "year": 2014 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In NeurIPS", "year": 2017 }, { "authors": [ "Minjie Wang", "Da Zheng", "Zihao Ye", "Quan Gan", "Mufei Li", "Xiang Song", "Jinjing Zhou", "Chao Ma", "Lingfan Yu", "Yu Gai", "Tianjun Xiao", "Tong He", "George Karypis", "Jinyang Li", "Zheng Zhang" ], "title": "Deep graph library: A graph-centric, highly-performant package for graph neural networks", "venue": "arXiv preprint arXiv:1909.01315,", "year": 2019 }, { "authors": [ "Duncan J Watts", "Steven H Strogatz" ], "title": "Collective dynamics of ‘small-world", "venue": null, "year": 1998 }, { "authors": [ "Joyce Jiyoung Whang", "David F Gleich", "Inderjit S Dhillon" ], "title": "Overlapping community detection using seed set expansion", "venue": "In CIKM", "year": 2013 }, { "authors": [ "Felix Wu", "Amauri Souza", "Tianyi Zhang", "Christopher Fifty", "Tao Yu", "Kilian Weinberger" ], "title": "Simplifying graph convolutional networks", "venue": "In ICML", "year": 2019 }, { "authors": [ "Keyulu Xu", "Chengtao Li", "Yonglong Tian", "Tomohiro Sonobe", "Ken-ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "Representation learning on graphs with jumping knowledge networks", "venue": null, "year": 2018 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "ICLR ’19,", "year": 2019 }, { "authors": [ "Ning Xu", "Lei Chen", "Bin Cui" ], "title": "Loggp: a log-based dynamic graph partitioning method", "venue": "VLDB ’14,", "year": 2014 }, { "authors": [ "Hao Yin", "Austin R Benson", "Jure Leskovec", "David F Gleich" ], "title": "Local higher-order graph clustering", "venue": "In KDD", "year": 2017 }, { "authors": [ "Rex Ying", "Ruining He", "Kaifeng Chen", "Pong Eksombatchai", "William L Hamilton", "Jure Leskovec" ], "title": "Graph 
convolutional neural networks for web-scale recommender systems", "venue": "In KDD", "year": 2018 }, { "authors": [ "Jiaxuan You", "Bowen Liu", "Zhitao Ying", "Vijay Pande", "Jure Leskovec" ], "title": "Graph convolutional policy network for goal-directed molecular graph generation", "venue": "In NeurIPS", "year": 2018 }, { "authors": [ "Hanqing Zeng", "Hongkuan Zhou", "Ajitesh Srivastava", "Rajgopal Kannan", "Viktor Prasanna" ], "title": "Graphsaint: Graph sampling based inductive learning method", "venue": "ICLR ’20,", "year": 2020 }, { "authors": [ "Dalong Zhang", "Xin Huang", "Ziqi Liu", "Zhiyang Hu", "Xianzheng Song", "Zhibang Ge", "Zhiqiang Zhang", "Lin Wang", "Jun Zhou", "Yuan Qi" ], "title": "Agl: a scalable system for industrial-purpose graph machine learning", "venue": "VLDB ’20,", "year": 2020 }, { "authors": [ "Jiani Zhang", "Xingjian Shi", "Junyuan Xie", "Hao Ma", "Irwin King", "Dit-Yan Yeung" ], "title": "Gaan: Gated attention networks for learning on large and spatiotemporal graphs", "venue": "UAI ’18,", "year": 2018 }, { "authors": [ "Rong Zhu", "Kun Zhao", "Hongxia Yang", "Wei Lin", "Chang Zhou", "Baole Ai", "Yong Li", "Jingren Zhou" ], "title": "Aligraph: a comprehensive graph neural network", "venue": "platform. VLDB", "year": 2019 }, { "authors": [ "Zeyuan Allen Zhu", "Silvio Lattanzi", "Vahab Mirrokni" ], "title": "A local algorithm for finding wellconnected clusters", "venue": "In ICML ’13,", "year": 2013 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recent emergence of the Graph Neural Networks (GNNs), exemplified by models like ChebyNet (Defferrard et al., 2016), GCN (Kipf & Welling, 2017), GraphSAGE (Hamilton et al., 2017), GAT (Veličković et al., 2018), and GIN (Xu et al., 2019), has drastically reshaped the landscape of the graph learning research. These methods generalize traditional deep learning algorithms to model graph-structured data by combining graph propagation and neural networks. Despite its conceptual simplicity, GNNs have reestablished the new state-of-the-art methods in various graph learning tasks, such as node classification, link prediction, and graph classification (Hu et al., 2020; Dwivedi et al., 2020), also served as key contributors to many real-world applications, such as recommendation system (Ying et al., 2018), smart transportation (Luo et al., 2020), visual question answering (Teney et al., 2017) and molecular de-novo design (You et al., 2018).\nWith the growth of real-world social and information networks (Leskovec et al., 2005), there is an urgent need to scale GNNs to massive graphs. For example, the recommendation systems in Alibaba (Zhu et al., 2019) and Pinterest (Ying et al., 2018) require training and inferring GNNs on graphs with billions of edges. Building such large-scale GNNs, however, is a notoriously expensive process. For instance, the GNN models in Pinterest are trained on a 500GB machine with 16 Tesla K80 GPUs, and served on a Hadoop cluster with 378 d2.8xlarge Amazon AWS machines.\nAlthough one may think model parameters are the main contributors to the huge resource consumption of GNNs, previous work (Ma et al., 2019) suggests the main bottleneck actually comes from the entanglement between graph propagation and neural networks, which leads to a large and irregular computation graph for GNNs. This problem is further exacerbated by the small-world phenomenon (Watts & Strogatz, 1998), i.e., even a relatively small number of graph propagation can involve full-graph computation. For example, in Facebook college graphs of John Hopkins (Traud et al., 2012), the 2-hop neighbors of node 1, as shown in Fig. 1a, covers 74.5% of the whole graph.\nA common strategy to reduce the overhead of GNNs is to make the graph smaller but may bring side effects. For instance, graph sampling techniques, such as neighborhood sampling in GraphSAGE (Hamilton et al., 2017), may lead to the high variance issue (Chen et al., 2018a). Alternatively, graph partition techniques, such as METIS (Karypis & Kumar, 1998) that adopted by Cluster-GCN (Chiang et al., 2019) and AliGraph (Zhu et al., 2019), essentially involves extra full-\ngraph computation for preprocessing. Besides reducing graph size, a recent attempt (Wu et al., 2019) to scale up GNNs is to decouple graph propagation and neural networks. However, such simplification may sacrifice certain performances.\nIn this work, we explore a theoretically guaranteed strategy, local clustering (Spielman & Teng, 2013; Andersen et al., 2006), intending to design a lightweight, effective and scalable GNN framework. We establish a connection between GNNs and local clustering theory, showing that the graph propagation in GNNs (i.e., short random-walk) has a high probability to be stuck at a local cluster (a.k.a, community), and the escaping probability is proportional to the conductance of the local cluster. 
We propose Local Clustering Graph Neural Networks (LCGNN), which utilizes local clustering algorithms to seek local and dense subgraphs (e.g., Fig. 1b) for GNN training and inference. Different from full-batch and graph partition-based methods, LCGNN does not incur full-graph processing and can be conducted locally. Compared to various baselines, LCGNN achieves state-of-the-art results on four Open Graph Benchmark (Hu et al., 2020) datasets. Moreover, the locality of LCGNN allows it to scale to graphs with 100M nodes and 1B edges on a single GPU.

The rest of the paper is organized as follows. Section 2 gives a brief background summary, followed by a survey of related works in section 3. In section 4 and section 5, we establish the connection between GNNs and local clustering, and then describe our LCGNN framework. Section 6 presents the experimental results and an ablation study. Finally, we conclude this work in section 7." }, { "heading": "2 BACKGROUND", "text": "In this section, we bring in the necessary background about graphs, graph convolutional networks (GCN), (lazy) random walks on graphs, and graph conductance.

Graph Notations The graph G = (V, E, A) consists of |V| = n nodes and |E| = m edges. A ∈ R_+^{n×n} is the adjacency matrix, where its entry A(i, j), if nonzero, denotes an edge between node i and node j with edge weight A(i, j). In this work, we assume the input graph is undirected and unweighted, and our analysis can be generalized to the weighted graph case easily. For an undirected graph, the degree matrix D ≜ diag(d(1), · · · , d(n)) is a diagonal matrix, where d(i) ≜ Σ_j A(i, j) is the degree of node i. Moreover, each node in G is associated with an F-dimensional feature vector xi ∈ R^F. The entire feature matrix X ∈ R^{n×F} is the concatenation of the node feature vectors. There are two matrices that play important roles in the design and analysis of GCN (Kipf & Welling, 2017) — the normalized graph Laplacian L ≜ D^{−1/2} A D^{−1/2} and the random walk transition probability matrix P ≜ A D^{−1}. Note that the entry P(i, j) indicates the probability that the random walk goes from node j to node i.

Graph Convolutional Networks (GCN) GCN (Kipf & Welling, 2017) initializes the node representation as the input feature matrix, H^(0) ← X, and iteratively applies a non-linear transformation and graph propagation to the node representation: H^(k) ← ReLU(L H^(k−1) W^(k)), where left-multiplying H^(k−1) by the normalized graph Laplacian L acts as the graph propagation, and right-multiplying H^(k−1) by W^(k) as well as the ReLU (Glorot et al., 2011) activation acts as the non-linear transformation. For the node classification task, a K-layer GCN predicts node labels Y with a softmax classifier: Y ← softmax(L H^(K−1) W^(K)). Taking a two-layer GCN (K = 2) as a running example, the predicted node labels Y are defined as Y ← softmax(L ReLU(L H^(0) W^(1)) W^(2)).

Lazy Random Walk In practice, many GNNs add (weighted) self-loops to the graphs (A ← A + αI, Kipf & Welling (2017); Xu et al. (2019)) or create residual connections (He et al., 2016) in neural networks (Li et al., 2019; Dehmamy et al., 2019). Such techniques can be viewed as variants of the lazy random walk on graphs — at every step, with probability 1/2 the walker stays at the current node (through a self-loop) and with probability 1/2 the walker travels to a neighbor. The transition matrix of a lazy random walk is M ≜ (I + A D^{−1})/2. In this work, we mainly consider the lazy random walk, because it has several desirable properties and it is consistent with how GNNs are built in practice.
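As a compact reference for the notation above, the following self-contained NumPy sketch (ours, purely illustrative) builds L, P and the lazy transition matrix M from a toy adjacency matrix and runs the two-layer GCN forward pass:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],              # toy undirected graph, n = 4
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)                        # node degrees
L = np.diag(d ** -0.5) @ A @ np.diag(d ** -0.5)  # L = D^-1/2 A D^-1/2
P = A @ np.diag(1.0 / d)                 # transition matrix P = A D^-1
M = (np.eye(4) + P) / 2                  # lazy random-walk matrix M

X = rng.standard_normal((4, 3))          # node features, F = 3
W1, W2 = rng.standard_normal((3, 5)), rng.standard_normal((5, 2))
H1 = np.maximum(L @ X @ W1, 0)           # ReLU(L H^(0) W^(1))
logits = L @ H1 @ W2                     # two-layer GCN, K = 2
Y = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax
```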
Graph Conductance For an undirected unweighted graph G = (V, E, A), the graph volume of any non-empty node set S ⊂ V is defined as vol(S) ≜ Σ_{i∈S} d(i), which measures the total number of edges incident from S. The conductance of a non-empty node set S ⊂ V is defined as Φ(S) ≜ (Σ_{i∈S} Σ_{j∈V−S} A(i, j)) / min(vol(S), vol(V−S)). Roughly speaking, the conductance Φ(S) is the ratio of the number of edges across S and V−S to the number of edges incident from S, measuring the clusterability of a subset S. Low conductance indicates a good cluster, because its internal connections are significantly richer than its external connections. Although it is NP-hard to minimize conductance (Šíma & Schaeffer, 2006), there have been theoretically-guaranteed approximation algorithms that identify clusters near a given node that satisfy a target conductance condition, such as Spielman & Teng (2013); Andersen et al. (2006); Chung (2007)." }, { "heading": "3 RELATED WORK", "text": "The design of scalable GNNs has attracted wide attention from the machine learning community. We review related work from three perspectives: (1) full-batch GNNs with co-design of systems and algorithms; (2) sampling-based GNNs; (3) graph partition-based GNNs.

Full-batch GNNs A full-batch GNN takes a whole graph as input for the forward and backward passes. Consequently, its computational cost is proportional to the graph size. Earlier GNN models (Kipf & Welling, 2017; Veličković et al., 2018) were evaluated on relatively small graphs, and thus can be trained in a full-batch manner. Scaling full-batch GNNs to large graphs requires the co-design of ML systems and ML algorithms (Jiang et al., 2020; Zhang et al., 2020; Ma et al., 2019). For example, NeuGraph (Ma et al., 2019) runs full-batch GNN models on a graph with 8.6M nodes and 231.6M edges on an eight-P100-GPU server. SGC (Wu et al., 2019) is another attempt at full-batch GNNs. It simplifies GCN by conducting graph propagation and classification separately and efficiently. However, such simplification may sacrifice performance in some downstream tasks.

GNNs based on Graph Sampling GraphSAGE (Hamilton et al., 2017) first proposed the idea of neighborhood sampling, and later it was applied in a real-world recommendation system by PinSAGE (Ying et al., 2018). At each GNN layer, GraphSAGE computes node representations by first down-sampling the neighborhoods and then aggregating the sampled ones. As a randomized algorithm, neighborhood sampling was further improved by FastGCN (Chen et al., 2018b), Stochastic GCN (Chen et al., 2018a) and Adaptive Sampling (Huang et al., 2018) for variance reduction. A recent work on sampling-based GNNs is GraphSAINT (Zeng et al., 2020), which samples subgraphs (Leskovec & Faloutsos, 2006) and runs a full-batch GNN on the sampled subgraphs.

GNNs based on Graph Partition Cluster-GCN (Chiang et al., 2019) is the most related work to ours. Cluster-GCN adopts a global graph partition algorithm, METIS (Karypis & Kumar, 1998), to partition the input graph into subgraphs, and runs a GNN on each subgraph. A similar idea was also proposed in AliGraph (Zhu et al., 2019). However, global graph partition algorithms involve additional whole-graph computation. Moreover, global graph partition algorithms are vulnerable to dynamic and evolving graphs (Xu et al., 2014; Vaquero et al., 2014), with nodes and edges being constantly added and removed, which are very common in real-world applications."
}, { "heading": "4 SHORT RANDOM WALK AS LOCAL CLUSTERING", "text": "Most GNNs adopt short random walks to explore a graph. For example, the default 2-layer GCN in Kipf & Welling (2017) can be viewed as enumerating all length-2 paths and aggregating them with a neural network; Hamilton et al. (2017) uses a 2-hop neighborhood sampling method, a variant of 2- hop random walk, to sample neighbors in each GraphSAGE layer. GraphSAINT (Zeng et al., 2020) samples subgraphs by 2-hop random walks and then build a full-batch GCN on them. SGC (Wu et al., 2019) conducts 2-hop feature propagation and then apply node-wise logistic regression.\nWe reveal the theoretical connection between short random walk and local clustering. To be more formal, let q(K) be the K-th step lazy random-walk distribution starting from an arbitrary node u according to transition probability matrix M , i.e., q(K) ← MK1u. We want to study the probability vector q(K) in terms of K, especially when K is small (e.g., K = 2). Due to the the small world phenomenon (Watts & Strogatz, 1998), for most social/information networks, q(K) can have O(n) non-zeros, even K is small, e.g., K = 2 or 3. However, the following theorem shows that the probability that a random walk escaping from a local cluster can be bounded by its conductance:\nTheorem 1 (Escaping Mass, Proposition 2.5 in Spielman & Teng (2013)). For all K ≥ 0 and all S ⊂ V , the probability that any K-step lazy random walk staring in S escapes S is at most KΦ(S)/2. I.e., the escaping probability satisfies q(K)(V − S) ≤ KΦ(S)/2.\nThe key point of Theomre 1 is to relate theK-th step random-walk probability to graph conductance — for a node u, suppose there exists a subset S such that (1) u ∈ S and (2) Φ(S) is small (low conductance), Theorem 1 guarantees that the probability that a lazy random walk starting from node u is very likely to be stuck at S, revealing the following facts and potential problems of existing GNNs: (1) for full-batch GNNs, although its receptive field induced by K-hop neighbors may cover the whole graph, most probability mass still concentrates around a local cluster (if exists), and the remaining probabilities (i.e., escaping mass) are small and bounded. Consequently, the computation cost of full-batch GNNs can be largely reduced; (2) Sampling-based methods can be viewed as a randomized and implicit version of finding a local clustering, however, with their sample-efficiency and variance non-guaranteed. The above facts encourage us to design local clustering-based GNNs.\nA crucial question about the above analysis is the existence of a low-conductance S for every node u (or most nodes in the graph). This is generally not true for arbitrary graphs, e.g., a complete graph. However, evidence from network science and social science agrees with our assumption. For example, (1) Many networks of interest in the sciences are found to divide naturally into communities (Girvan & Newman, 2002; Newman, 2006); (2) Real-world social networks consist of compact communities with size scale of around 100 nodes (Leskovec et al., 2009); (3) Roughly 150 individuals are the upper limit on the size of a well-functioning human community (Dunbar, 1998)." }, { "heading": "5 LOCAL CLUSTERING GRAPH NEURAL NETWORKS (LCGNN)", "text": "The analysis in section 4 lays the theoretical foundation of the design of our LCGNN framework. In the section, we formally introduce LCGNN. Roughly speaking, our framework consists of two steps. 
In the first step, for each node u ∈ V, we run local clustering to produce a local cluster Su surrounding it. In the second step, we feed the subgraph induced by Su to a GNN encoder." }, { "heading": "5.1 LOCAL CLUSTERING", "text": "Local clustering algorithms find a small cluster near given seed(s). Different from global graph partition methods, which involve full-graph computation, local clustering conducts local exploration in the graph, and its running time depends only on the size of the output cluster. Over the past two decades, many local clustering algorithms have been developed (Spielman & Teng, 2013; Andersen et al., 2006; Chung, 2007; Li et al., 2015; Kloster & Gleich, 2014; Kloumann & Kleinberg, 2014; Whang et al., 2013; Yin et al., 2017; Fountoulakis et al., 2019). In this work, we mainly focus on PPR-Nibble (Andersen et al., 2006), one of the most popular spectral-based local clustering algorithms among these methods. As its name indicates, PPR-Nibble adopts the personalized PageRank (PPR) vector for local clustering. The PPR vector $p_u$ of a node u is given by the equation $p_u = \alpha \mathbf{1}_u + (1-\alpha) P p_u$, which is the stationary distribution of the following random walk: at each step, with probability α the walker teleports back to the node u, and with probability 1 − α the walker performs a normal random-walk step. However, the PPR vector $p_u$ is dense and thus computationally expensive. Andersen et al. (2006) developed an efficient algorithm, named Approximate-PPR, to compute a sparse approximation $\tilde{p}_u$ such that $|p_u(v)/d(v) - \tilde{p}_u(v)/d(v)| \le \epsilon$ for each node v. As shown in Algorithm 1, the key idea is to gradually push probability from a residual vector r to the approximate PPR vector $\tilde{p}_u$ (Lines 4-7 of Algorithm 1). After computing $\tilde{p}_u$, a sweep procedure is adopted to extract a cluster S with small conductance Φ(S). More formally, the sweep procedure first sorts the nodes according to $D^{-1}\tilde{p}_u$ in descending order (Line 4 of Algorithm 2), then evaluates the conductance of each prefix of the sorted list and outputs the one with the smallest conductance (Line 5 of Algorithm 2).\nAlgorithm 1: Approximate-PPR.\n1 Input: Graph G = (V,E,A), seed node u, teleportation parameter α, tolerance ε;\n2 Output: An ε-approximate PPR vector $\tilde{p}_u$;\n3 $\tilde{p}_u \leftarrow 0$; $r \leftarrow \mathbf{1}_u$;\n4 while $r(v)/d(v) \ge \epsilon$ for some v ∈ V do\n5   $\rho \leftarrow r(v) - \frac{\epsilon}{2} d(v)$; $\tilde{p}_u(v) \leftarrow \tilde{p}_u(v) + \alpha\rho$; $r(v) \leftarrow \frac{\epsilon}{2} d(v)$;\n6   for each (v, w) ∈ E do\n7     $r(w) \leftarrow r(w) + \frac{A(v,w)}{d(v)}(1-\alpha)\rho$;\n8 return $\tilde{p}_u$;\nAlgorithm 2: PPR-Nibble.\n1 Input: Graph G = (V,E,A), seed node u, teleportation parameter α, tolerance ε;\n2 Output: A local cluster S ⊂ V;\n3 $\tilde{p}_u \leftarrow$ Approximate-PPR(G, u, α, ε);\n4 $\sigma_i \leftarrow$ the node with the i-th largest entry of $D^{-1}\tilde{p}_u$;\n5 return $S \leftarrow \arg\min_{S_\ell} \Phi(S_\ell)$, where $S_\ell = \{\sigma_1, \cdots, \sigma_\ell\}$;\nNote that PPR-Nibble is a local algorithm (Spielman & Teng, 2013) with theoretical guarantees: (1) the input to the algorithm is a single starting node u; (2) at each step of Approximate-PPR in Algorithm 1, it only examines nodes connected to those it has seen before. The following theorems characterize the complexity of Approximate-PPR and the error bound of PPR-Nibble, respectively.\nTheorem 2 (Lemma 2 in Andersen et al. (2006)). Algorithm 1 runs in time $O\left(\frac{1}{\epsilon\alpha}\right)$, and the number of non-zeros in $\tilde{p}_u$ satisfies $\mathrm{nnz}(\tilde{p}_u) \le \frac{1}{\epsilon\alpha}$.\nTheorem 3 (Theorem 1 in Zhu et al. (2013); Theorem 4.3 in Yin et al. (2017)). Let T ⊂ V be some unknown target cluster that we are trying to retrieve from an unweighted graph. Let η be the inverse mixing time of the random walk on the subgraph induced by T. Then there exists $T_g \subseteq T$ with $\mathrm{vol}(T_g) \ge \mathrm{vol}(T)/2$, such that for any seed $u \in T_g$, Algorithm 2 with $\alpha = \Theta(\eta)$ and $\epsilon \in \left[\frac{1}{10\,\mathrm{vol}(T)}, \frac{1}{5\,\mathrm{vol}(T)}\right]$ outputs a set S with $\Phi(S) \le \tilde{O}\left(\min\left\{\sqrt{\Phi(T)},\ \Phi(T)/\sqrt{\eta}\right\}\right)$."
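A compact Python rendering of Algorithms 1 and 2 may help; the dictionary-based graph format and the queue bookkeeping are our own simplifications, not the implementation of Fountoulakis et al. (2018) used in the experiments.

```python
from collections import defaultdict

def approximate_ppr(adj, u, alpha=0.15, eps=1e-4):
    """Push-style epsilon-approximate PPR (Algorithm 1).

    adj: dict mapping node -> list of neighbors for an unweighted graph
    (every node is assumed to have at least one neighbor).
    """
    p, r = defaultdict(float), defaultdict(float)
    r[u] = 1.0
    queue = [u]
    while queue:
        v = queue.pop()
        d_v = len(adj[v])
        if r[v] < eps * d_v:           # stale queue entry, nothing to push
            continue
        rho = r[v] - 0.5 * eps * d_v   # excess residual mass to push out
        p[v] += alpha * rho
        r[v] = 0.5 * eps * d_v
        for w in adj[v]:
            before = r[w]
            r[w] += (1.0 - alpha) * rho / d_v
            if before < eps * len(adj[w]) <= r[w]:
                queue.append(w)        # w just crossed the push threshold
    return p

def ppr_nibble(adj, u, alpha=0.15, eps=1e-4):
    """Sweep over D^{-1} p_u and keep the lowest-conductance prefix (Algorithm 2)."""
    p = approximate_ppr(adj, u, alpha, eps)
    order = sorted(p, key=lambda v: p[v] / len(adj[v]), reverse=True)
    vol_total = sum(len(adj[v]) for v in adj)
    S, vol_S, cut = set(), 0, 0
    best_S, best_phi = None, float("inf")
    for v in order:
        S.add(v)
        vol_S += len(adj[v])
        # each neighbor already in S removes a cut edge; each outside adds one
        cut += sum(-1 if w in S else 1 for w in adj[v])
        if vol_S == vol_total:         # S covers the whole graph; stop sweeping
            break
        phi = cut / min(vol_S, vol_total - vol_S)
        if phi < best_phi:
            best_S, best_phi = set(S), phi
    return best_S, best_phi
```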
}, { "heading": "5.2 LOCAL CLUSTER ENCODER", "text": "For each node u ∈ V, PPR-Nibble in Algorithm 2 produces a local cluster $S_u \subset V$ with $|S_u| \le \frac{1}{\epsilon\alpha}$. We denote by $G_u$ the subgraph induced by the cluster $S_u$, which is then encoded into a hidden representation via an encoder (usually a GNN model): $h_u \leftarrow \mathrm{ENCODER}(G_u)$. The encoded hidden representation can be further used for various graph learning tasks. For the node classification task, we predict the label of node u with a softmax classifier: $y_u \leftarrow \mathrm{softmax}(W h_u + b)$. For the link prediction task, we measure the likelihood of a link e = (u, v) by first element-wise multiplying $h_u$ and $h_v$ and then feeding the result to an MLP, i.e., $y_e \leftarrow \mathrm{MLP}(h_u \odot h_v)$. The choice of the encoder is flexible. In this work, we mainly examine four candidate encoders:\nGCN/GAT/GraphSAGE encoders Our first candidate encoders are traditional GNNs such as GCN, GAT, and GraphSAGE. We denote them as LCGNN-GCN/-GAT/-SAGE, respectively.\nTransformer Encoder We also examine a more complex and powerful encoder based on the Transformer (Vaswani et al., 2017). Our hypothesis is that the low-conductance subgraphs extracted by local clustering have such rich internal connections that we can almost treat them as complete graphs. Thus we adopt the Transformer encoder, whose attention mechanism allows dense interaction within a subgraph. We initialize the positional embeddings in the Transformer with pre-trained Node2vec embeddings of the input graph. We denote the Transformer-based encoder as LCGNN-Transformer."
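The encoder interface described above can be sketched as follows; the two-layer propagation, the mean-pooling readout, and all dimensions are illustrative assumptions, since the exact readout is not pinned down here.

```python
import torch
import torch.nn as nn

class LocalClusterEncoder(nn.Module):
    """Toy stand-in for h_u <- ENCODER(G_u): two propagation layers + mean pooling."""
    def __init__(self, in_dim=16, hid_dim=32):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, hid_dim)

    def forward(self, A_hat, X):
        # A_hat: normalized adjacency of the induced subgraph; X: node features
        h = torch.relu(self.lin1(A_hat @ X))
        h = torch.relu(self.lin2(A_hat @ h))
        return h.mean(dim=0)  # one embedding per local cluster

enc = LocalClusterEncoder()
num_classes = 10                                   # hypothetical label count
cls_head = nn.Linear(32, num_classes)              # y_u = softmax(W h_u + b)
link_head = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 1))

A_hat = torch.rand(5, 5); A_hat = A_hat / A_hat.sum(1, keepdim=True)  # fake subgraph
h_u = enc(A_hat, torch.randn(5, 16))
logits_u = cls_head(h_u)                           # node classification head
h_v = enc(A_hat, torch.randn(5, 16))               # embedding of another cluster
link_score = link_head(h_u * h_v)                  # y_e = MLP(h_u ⊙ h_v)
```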
}, { "heading": "6 EXPERIMENTS", "text": "In this section, we conduct experiments on two major graph learning tasks, node classification and link prediction. For each task, we use datasets from the Open Graph Benchmark (OGB) (Hu et al., 2020), which presents significant challenges in scalability to large-scale graphs and in out-of-distribution generalization. The dataset statistics are summarized in Table 1. Another graph task, graph classification, is not explored in our experiments because it is unnecessary to utilize local clustering for small graphs with only hundreds of nodes. The average and standard deviation of the test performance under 10 different seeds are reported in all experiments. For the local clustering algorithm, we use the software provided by Fountoulakis et al. (2018). We set α = 0.15 in Approximate-PPR and constrain the maximum cluster size to 64 or 128 in the PPR-Nibble step, i.e., the sweep procedure only examines the prefix of the first 64 (128) nodes in Algorithm 2. The detailed hyper-parameter configuration of LCGNN can be found in Appendix A.2." }, { "heading": "6.1 NODE CLASSIFICATION", "text": "Node classification datasets include products, arxiv, and papers100M at different scales. We train LCGNN on a single GPU on all three datasets. Due to space limits, the results on the arxiv dataset are reported in Appendix A.1, as relatively small datasets are not our target scenario.\nBaselines. The OGB team provides MLP, Node2vec (Grover & Leskovec, 2016), GCN (Kipf & Welling, 2017) and GraphSAGE (Hamilton et al., 2017) as the common baselines for the products and arxiv datasets. For the large-scale papers100M dataset, the OGB team only provides MLP, Node2vec, and SGC (Wu et al., 2019). Other teams and researchers also contribute numerous models to the leaderboards: for the products dataset, three GAT-based models with different mini-batch training techniques are also reported; DeeperGCN (Li et al., 2020) explores how to design and train deep GCNs; UniMP (Shi et al., 2020) is a recent model that combines feature propagation and label propagation (UniMP was submitted to the OGB leaderboard on Sep 8, 2020, one month before the ICLR 2021 deadline).\nResults. The results on the products and papers100M datasets are listed in Tables 2 and 3, respectively. On papers100M, SGC (Wu et al., 2019) is the only reported GNN model that can handle this large-scale dataset with more than 1 billion edges. SGC obtains better performance than Node2vec and MLP due to the expressive power of (simplified) graph convolution. Compared with SGC, LCGNN is trained in a semi-supervised manner and can learn the feature transformation during training. Our proposed LCGNN obtains better performance than SGC with a 2.73% absolute improvement, which shows the stronger expressiveness of our model. On the products dataset, our LCGNN (rank 2 in Table 2) obtains comparable results to other state-of-the-art GNN models. The arxiv dataset is relatively small, and well-tuned full-batch GNNs achieve the best results there; our LCGNN obtains comparable results to full-batch GNNs and better results than sampling-based GNNs (such as GAT with neighborhood sampling), as shown in Table 7 in Appendix A.1.\nAblation Study. Table 2 suggests that LCGNN-GCN and LCGNN-SAGE surpass the corresponding full-batch GCN and GraphSAGE. Furthermore, LCGNN-SAGE and LCGNN-GAT perform competitively or even better on the products dataset compared to the corresponding GraphSAGE and GAT models with other training and sampling techniques, including Neighborhood Sampling (Hamilton et al., 2017), Cluster-GCN (Chiang et al., 2019), and GraphSAINT (Zeng et al., 2020)." }, { "heading": "6.2 LINK PREDICTION", "text": "We evaluate LCGNN on three link prediction tasks: ppa, collab, and citation. We use a single GPU to train on the collab dataset and multiple GPUs to train on the ppa and citation datasets (5 GPUs for ppa and 4 GPUs for citation).\nBaselines. The OGB team provides Matrix Factorization, Node2vec (Grover & Leskovec, 2016), GCN (Kipf & Welling, 2017) and GraphSAGE (Hamilton et al., 2017) as the common baselines. For the citation dataset, GCN/SAGE-based models with three different mini-batch training techniques are also provided by the OGB team. Other researchers also contribute state-of-the-art models to the leaderboards: DeepWalk (Perozzi et al., 2014) is submitted by other researchers using DGL (Wang et al., 2019); LRGA+GCN (Puny et al., 2020) is a recently proposed model that aligns GNNs with the 2-folklore Weisfeiler-Lehman algorithm to improve their generalization.\nResults. The results on the ppa, collab, and citation datasets are listed in Tables 4, 5, and 6, respectively. We compare LCGNN with the recently developed LRGA+GCN (Puny et al., 2020) as well as the traditional baselines. On all three link prediction datasets, our proposed LCGNN achieves the best results over the state-of-the-art models, with 0.68% ∼ 2.64% absolute improvements, showing the ability of local clustering and the Transformer encoder to boost link-prediction performance.\nAblation Study. We report the results on collab when the Transformer encoder is replaced with the GCN and GraphSAGE encoders.
Compared with full-batch GCN and GraphSAGE, our LCGNN-GCN and LCGNN-SAGE obtain much better performance, which suggests the significance of graph local clustering. LCGNN-Transformer achieves better results than LCGNN-GCN and LCGNN-SAGE due to the powerful expressiveness of the Transformer encoder.\nOverall, LCGNN not only achieves four first places (ogbn-papers100M, ogbl-ppa, ogbl-collab, and ogbl-citation) and one second place (ogbn-products) on the OGB datasets, but also improves the scalability of GNN models for large-scale graphs." }, { "heading": "7 CONCLUSION", "text": "In this work, we present Local Clustering Graph Neural Networks (LCGNN), a lightweight, effective, and scalable GNN framework with theoretical guarantees. LCGNN combines local clustering algorithms and graph neural network models to achieve state-of-the-art performance on four Open Graph Benchmark (OGB) datasets. By incorporating local clustering algorithms, LCGNN can run on compact and small subgraphs without conducting full-graph computation, scaling to graphs with 100 million nodes and 1 billion edges on a single GPU. In the future, it would be interesting to try more advanced local clustering algorithms beyond PPR-Nibble. Applying LCGNN to real-world applications, such as recommendation systems, is also a promising direction." }, { "heading": "A APPENDIX", "text": "A.1 EXPERIMENTAL RESULTS\nWe report the results on the ogbn-arxiv dataset in Table 7. Some models are only evaluated on this smallest dataset, including GraphZoom (Deng et al., 2020), GaAN (Zhang et al., 2018), DAGNN (Liu et al., 2020), JKNet (Xu et al., 2018) and GCNII (Chen et al., 2020); most of these models cannot handle ogbn-products with millions of nodes. Our LCGNN models obtain results comparable to most state-of-the-art GNN models with and without mini-batch training techniques.\nA.2 EXPERIMENTAL SETUP\nA.2.1 RUNNING ENVIRONMENT\nWe run our experiments on a single machine with Intel Xeon CPUs (Platinum 8163 @ 2.50GHz), 330GB memory, and 8 NVIDIA Tesla V100 GPUs (16GB). The code is written in Python 3.6. We use PyTorch 1.5.1 on CUDA 10.1 to train our models.\nA.2.2 HYPERPARAMETER CONFIGURATION\nFor our models, the optimizer used in our experiments is AdamW (Loshchilov & Hutter, 2019) with $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon = 10^{-8}$. For LCGNN-GCN/SAGE/GAT, we use this optimizer with no warmup steps. But for LCGNN, we use the following learning rate scheduler with warmup steps, similar to the Transformer (Vaswani et al., 2017) except for an extra hyper-parameter lr_scale:\n$lr = lr\_scale \cdot d_{model}^{-0.5} \cdot \min(step\_num^{-0.5},\ step\_num \cdot warmup\_steps^{-1.5})$\nWe use the wandb (Biewald, 2020) tool to track experiments and search hyper-parameters. The final hyper-parameters used for our models are listed in Tables 8 and 9." } ]
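For reference, the warmup schedule above translates directly into code; hooking it into PyTorch's LambdaLR is one plausible way to use it, and the constants shown are illustrative.

```python
def lcgnn_lr(step_num, d_model, warmup_steps, lr_scale=1.0):
    """lr = lr_scale * d_model^-0.5 * min(step^-0.5, step * warmup^-1.5)."""
    step_num = max(step_num, 1)  # avoid 0^-0.5 on the very first step
    return lr_scale * d_model ** -0.5 * min(step_num ** -0.5,
                                            step_num * warmup_steps ** -1.5)

# One way to wire it up (with the optimizer's base lr set to 1.0 so the
# lambda returns the absolute rate); values here are illustrative:
# scheduler = torch.optim.lr_scheduler.LambdaLR(
#     optimizer, lr_lambda=lambda s: lcgnn_lr(s + 1, d_model=256, warmup_steps=4000))
```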
2020
null
SP:e9d2702f8ac6f04fd5429fc1236aa163d179a0aa
[ "The paper proposes AT-GAN (Adversarial Transfer on Generative Adversarial Net) to train an adversarial generative model that can directly produce adversarial examples. Different from previous works, the study aims to learn the distribution of adversarial examples so as to generate semantically meaningful adversaries. AT-GAN achieves this goal by first learning a generative model for real data, followed by transfer learning to obtain the desired generative model. Once trained and transferred, AT-GAN could generate adversarial examples directly for any input noise, denoted as non-constrained adversarial examples. Some experiments and visualizations show that AT-GAN can generate some diverse adversarial examples that are realistic to human perception, and yields higher attack success rates against adversarially trained models. " ]
With the rapid development of adversarial machine learning, numerous adversarial attack methods have been proposed. Typical attacks are based on a search in the neighborhood of the input image to generate a perturbed adversarial example. Since 2017, generative models have been adopted for adversarial attacks, most of which focus on generating adversarial perturbations from input noise or an input image; thus, for these works, the output is restricted by the input. A recent work targets “unrestricted adversarial examples” using a generative model, but its method is based on a search in the neighborhood of the input noise, so the output is actually still constrained by the input. In this work, we propose AT-GAN (Adversarial Transfer on Generative Adversarial Net) to train an adversarial generative model that can directly produce adversarial examples. Different from previous works, we aim to learn the distribution of adversarial examples so as to generate semantically meaningful adversaries. AT-GAN achieves this goal by first learning a generative model for real data, followed by transfer learning to obtain the desired generative model. Once trained and transferred, AT-GAN can generate adversarial examples directly and quickly for any input noise, denoted as non-constrained adversarial examples. Extensive experiments and visualizations show that AT-GAN can efficiently generate diverse adversarial examples that are realistic to human perception and yield higher attack success rates against adversarially trained models.
[]
[ { "authors": [ "Anish Athalye", "Nicholas Carlini", "David Wagner" ], "title": "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples", "venue": "International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Tao Bai", "Jun Zhao", "Jinlin Zhu", "Shoudong Han", "Jiefeng Chen", "Bo Li" ], "title": "AI-GAN: Attack-inspired generation of adversarial examples", "venue": "arXiv preprint arXiv:2002.02196,", "year": 2020 }, { "authors": [ "Shumeet Baluja", "Ian Fischer" ], "title": "Adversarial transformation networks: Learning to generate adversarial examples", "venue": "arXiv preprint arXiv:1703.09387,", "year": 2017 }, { "authors": [ "Arjun Bhagoji", "Warren He", "Bo Li", "Dawn Song" ], "title": "Exploring the Space of Black-box Attacks on Deep Neural Networks", "venue": "arXiv Preprint", "year": 2017 }, { "authors": [ "Jacob Buckman", "Aurko Roy", "Colin Raffel", "Ian Goodfellow" ], "title": "Thermometer Encoding: One Hot Way To Resist Adversarial Examples", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Yulong Cao", "Chaowei Xiao", "Dawei Yang", "Jing Fang", "Ruigang Yang", "Mingyan Liu", "Bo Li" ], "title": "Adversarial Objects Against LiDAR-Based Autonomous Driving Systems", "venue": "arXiv Preprint", "year": 2019 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards Evaluating the Robustness of Neural Networks", "venue": "IEEE Symposium on Security and Privacy,", "year": 2017 }, { "authors": [ "Xi Chen", "Yan Duan", "Rein Houthooft", "John Schulman", "Ilya Sutskever", "Pieter Abbeel" ], "title": "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets", "venue": "Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Kevin Eykholt", "Ivan Evtimov", "Earlence Fernandes", "Bo Li", "Amir Rahmati", "Chaowei Xiao", "Atul Prakash", "Tadayoshi Kohno", "Dawn Song" ], "title": "Robust Physical-World Attacks on Deep Learning Models", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "Neural Information Processing Systems,", "year": 2014 }, { "authors": [ "Ian Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and Harnessing Adversarial Examples", "venue": "International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron Courville" ], "title": "Improved Training of Wasserstein GANs", "venue": "Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Chuan Guo", "Mayank Rana", "Moustapha Cisse", "Laurens Maaten" ], "title": "Countering Adversarial Images using Input Transformations", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep Residual Learning for Image Recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Warren He", "Bo Li", "Dawn Song" ], "title": "Decision Boundary Analysis of Adversarial Examples", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Tero Karras", "Miika Aittala", 
"Janne Hellsten", "Samuli Laine", "Jaakko Lehtinen", "Timo Aila" ], "title": "Training generative adversarial networks with limited data", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Tero Karras", "Samuli Laine", "Miika Aittala", "Janne Hellsten", "Jaakko Lehtinen", "Timo Aila" ], "title": "Analyzing and improving the image quality of stylegan", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "S. Kullback", "R.A. Leibler" ], "title": "On information and sufficiency", "venue": "Ann. Math. Statist.,", "year": 1951 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial Machine Learning at Scale", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Yann LeCun", "Bernhard Boser", "John Denker", "Donnie Henderson", "Richard Howard", "Wayne Hubbard", "Lawrence Jackel" ], "title": "Backpropagation applied to handwritten zip code recognition", "venue": "Neural computation,", "year": 1989 }, { "authors": [ "Yandong Li", "Lijun Li", "Liqiang Wang", "Tong Zhang", "Boqing Gong" ], "title": "Nattack: Learning the distributions of adversarial examples for an improved black-box attack on deep neural networks", "venue": "International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Fangzhou Liao", "Ming Liang", "Yinpeng Dong", "Tianyu Pang", "Xiaolin Hu", "Jun Zhu" ], "title": "Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser", "venue": "IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Aishan Liu", "Xianglong Liu", "Jiaxin Fan", "Yuqing Ma", "Anlan Zhang", "Huiyuan Xie", "Dacheng Tao" ], "title": "Perceptual-sensitive GAN for generating adversarial patches", "venue": "In The Thirty-Third AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Yanpei Liu", "Xinyun Chen", "Chang Liu", "Dawn Song" ], "title": "Delving into Transferable Adversarial Examples and Black-box Attacks", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Ziwei Liu", "Ping Luo", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deep Learning Face Attributes in the Wild", "venue": "International Conference on Computer Vision, pp", "year": 2015 }, { "authors": [ "Laurens Maaten", "Geoffrey Hinton" ], "title": "Visualizing Data using t-SNE", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards Deep Learning Models Resistant to Adversarial Attacks", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jan Metzen", "Hendrik", "Tim Genewein", "Volker Fischer", "Bastian Bischoff" ], "title": "On Detecting Adversarial Perturbations", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Augustus Odena", "Christopher Olah", "Jonathon 
Shlens" ], "title": "Conditional Image Synthesis With Auxiliary Classifier GANs", "venue": "International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Poursaeed Omid", "Katsman Isay", "Gao Bicheng", "Serge Belongie" ], "title": "Generative adversarial perturbations", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Alec Radford", "Luke Metz", "Soumith Chintala" ], "title": "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks", "venue": "International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Konda Reddy Mopuri", "Utkarsh Ojha", "Utsav Garg", "R Venkatesh Babu" ], "title": "NAG: Network for adversary generation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Olaf Ronneberger", "Philipp Fischer", "Thomas Brox" ], "title": "U-Net: Convolutional networks for biomedical image segmentation", "venue": "In International Conference on Medical Image Computing and ComputerAssisted Intervention,", "year": 2015 }, { "authors": [ "Pouya Samangouei", "Maya Kabkab", "Rama Chellappa" ], "title": "Defense-GAN: Protecting Classifiers Against Adversarial Attacks", "venue": "Using Generative Models. International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Adi Shamir", "Itay Safran", "Eyal Ronen", "Orr Dunkelman" ], "title": "A Simple Explanation for the Existence of Adversarial Examples with Small Hamming Distance", "venue": "arXiv Preprint", "year": 2019 }, { "authors": [ "Shiwei Shen", "Guoqing Jin", "Ke Gao", "Yongdong Zhang" ], "title": "APE-GAN: Adversarial Perturbation Elimination with GAN", "venue": "IEEE International Conference on Acoustics,Speech and SP,", "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very Deep Convolutional Networks for Large-Scale Image Recognition", "venue": "International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Chuanbiao Song", "Kun He", "Liwei Wang", "John Hopcroft" ], "title": "Improving the Generalization of Adversarial Training with Domain Adaptation", "venue": "International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yang Song", "Rui Shu", "Nate Kushman", "Stefano Ermon" ], "title": "Constructing unrestricted adversarial examples with generative models", "venue": "Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Florian Tramèr", "Alexey Kurakin", "Nicolas Papernot", "Ian Goodfellow", "Dan Boneh", "Patrick McDaniel" ], "title": "Ensemble Adversarial Training: Attacks and Defenses", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Chaowei Xiao", "Bo Li", "Jun-Yan Zhu", "Warren He", "Mingyan Liu", "Dawn Song" ], "title": "Generating Adversarial Examples with Adversarial Networks", "venue": "International Joint Conferences on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms", "venue": "arXiv preprint arXive:1708.07747,", "year": 2017 }, { 
"authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "British Machine Vision Conference,", "year": 2016 }, { "authors": [ "Rand FGSM (R+FGSM" ], "title": "R+FGSM (Tramèr et al., 2018) first applies a small random perturbation on the benign image with a parameter α (α < ), then it uses FGSM to generate an adversarial example based on the perturbed image", "venue": null, "year": 2018 }, { "authors": [ "sign(∇xJ(θ", "ytrue" ], "title": "Ensemble adversarial training. Tramèr et al. (2018) propose an ensemble adversarial training method, in which DNN is trained with adversarial examples transferred from a number of fixed pre-trained models. Iterative adversarial training", "venue": "Madry et al", "year": 2018 }, { "authors": [ "Arjovsky" ], "title": "Wasserstein GAN (WGAN) that uses Wassertein distance so that the loss function has more desirable properties", "venue": "Gulrajani et al", "year": 2017 }, { "authors": [ "Chen" ], "title": "hyper-parameters. Model Architectures for AT-GAN. We first describe the neural network architectures used for AT-GAN in experiments. The abbreviations for components in the network are described in Table 4. The architecture of AC-WGAN_GP for MNIST and Fashion-MNIST is shown in Table 5 where the generator and discriminator", "venue": null, "year": 2016 }, { "authors": [ "Gulrajani" ], "title": "AC_WGAN_GP for CelebA", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "In recent years, Deep Neural Networks (DNNs) have been found vulnerable to adversarial examples (Szegedy et al., 2014), which are well-crafted samples with tiny perturbations imperceptible to humans but can fool the learning models. Despite the great success of the deep learning empowered applications, many of them are safety-critical, for example under the scenario of self-driving cars (Eykholt et al., 2018; Cao et al., 2019), raising serious concerns in academy and industry.\nNumerous works of adversarial examples have been developed on adversarial attacks (Goodfellow et al., 2015; Carlini & Wagner, 2017; Madry et al., 2018), adversarial defenses (Goodfellow et al., 2015; Kurakin et al., 2017; Song et al., 2019) and exploring the property of adversarial examples (He et al., 2018; Shamir et al., 2019). For adversarial attacks, most studies focus on the perturbation-based adversarial examples constrained by input images, which is also the generally accepted conception of adversarial examples. Generative models are also adopted recently to generate adversarial perturbations from an input noise (Reddy Mopuri et al., 2018; Omid et al., 2018) or from a given image (Xiao et al., 2018; Bai et al., 2020), and such perturbations are added to the original image to craft adversarial examples. Song et al. (2018) propose to search a neighborhood noise around the input noise of a Generative Adversarial Net (GAN) (Goodfellow et al., 2014) such that the output is an adversarial example, which they denoted as unrestricted adversarial example as there is no original image in their method. However, their output is still constrained by the input noise, and the search is time-consuming.\nIn this work, we propose an adversarial generative model called AT-GAN (Adversarial Transfer on Generative Adversarial Net), which aims to learn the distribution of adversarial examples. Unlike previous works that constrain the adversaries in the neighborhood of input image or input noise, including the prominent work of Song et al. (2018) that searches over the neighborhood of the input noise of a pre-trained GAN in order to find a noise whose output image is misclassified by the target classifier, AT-GAN is an adversarial generative model that could produce semantically meaningful\nadversarial examples directly from any input noise, and we call such examples the non-constrained adversarial examples.\nSpecifically, we first develop a normal GAN to learn the distribution of benign data so that it can produce plausible images that the classifier and a human oracle will classify in the same way. Then we transfer the pre-trained GAN into an adversarial GAN called AT-GAN that can fool the target classifier while being still well recognized by the human oracle. AT-GAN is a conditional GAN that has learned to estimate the distribution of adversarial examples for the target classifier, so AT-GAN can directly generate adversarial examples from any random noise, leading to high diversity and efficiency.\nWe implement AT-GAN by adopting AC-GAN (Odena et al., 2017) and WGAN-GP (Gulrajani et al., 2017) in the pre-training stage, then do transfer learning for the adversary generation. Here we develop AT-GAN on three benchmark datasets, namely MNIST, Fashion-MNIST and CelebA, and apply typical defense methods to compare AT-GAN with existing search-based attacks. 
Empirical results show that the non-constrained adversarial examples generated by AT-GAN yield higher attack success rates, and that state-of-the-art adversarially trained models exhibit little robustness against AT-GAN, indicating the high diversity of our adversaries. In addition, AT-GAN, as a generation-based adversarial attack, is more efficient than the search-based adversarial attacks.\nNote that any conditional GAN that can craft realistic examples could be used for the implementation of AT-GAN. For another demonstration, we adopt StyleGAN2-ada (Karras et al., 2020a) and develop AT-GAN on the CIFAR-10 benchmark dataset using the wide ResNet w32-10 (Zagoruyko & Komodakis, 2016) as the target classifier. Empirical results show that AT-GAN can produce plausible adversarial images and yields higher attack success rates on the adversarially trained models." }, { "heading": "2 PRELIMINARIES", "text": "In this section, we provide definitions of several types of adversarial examples and adversarial attacks, and give a brief overview of adversarial attacks using GANs. Other related works on typical adversarial attacks and defenses (Goodfellow et al., 2015; Madry et al., 2018; Tramèr et al., 2018), as well as some typical GANs (Goodfellow et al., 2014; Radford et al., 2016; Odena et al., 2017; Arjovsky et al., 2017; Gulrajani et al., 2017), are introduced in Appendix A." }, { "heading": "2.1 DEFINITIONS ON ADVERSARIES", "text": "Let $\mathcal{X}$ be the set of all digital images under consideration for a learning task, $\mathcal{Y} \subset \mathbb{R}$ be the output label space, and $p_z$ be an arbitrary probability distribution on $\mathbb{R}^m$ (e.g., a Gaussian distribution), where m is the dimension of the input noise. A deep learning classifier $f : \mathcal{X} \to \mathcal{Y}$ takes an image $x \in \mathcal{X}$ and predicts its label f(x). Suppose $p_x$ and $p_{adv}$ are the distributions of benign images and adversarial examples, respectively. Assume we have an oracle classifier $o : \mathcal{X} \to \mathcal{Y}$ that always predicts the correct label for any image $x \in \mathcal{X}$; we define several types of adversarial examples as follows.\nFor perturbation-based adversarial examples (Szegedy et al., 2014; Goodfellow et al., 2015; Moosavi-Dezfooli et al., 2016), tiny perturbations are added to the input images, which are imperceptible to humans but can cause the target classifier to make wrong predictions.\nDefinition 1. Perturbation-based Adversarial Examples. Given a subset (trainset or testset) of images $\mathcal{T} \subset \mathcal{X}$ and a small constant $\epsilon > 0$, the perturbation-based adversarial examples can be defined as: $A_p = \{x_{adv} \in \mathcal{X} \mid \exists x \in \mathcal{T},\ \|x - x_{adv}\|_p < \epsilon \wedge f(x_{adv}) \neq o(x_{adv}) = f(x) = o(x)\}$.\nSong et al. (2018) define a new type of adversarial examples called unrestricted adversarial examples, which is not tied to the subset (trainset or testset) of images, by adding an adversarial perturbation to the input noise of a mapping, such as a GAN, so that the output of the perturbed noise is an adversary for the target classifier.\nDefinition 2. Unrestricted Adversarial Examples. Given a mapping G from $z \sim p_z$ to $G(z, y) \sim p_\theta$, where $p_\theta$ is an approximated distribution of $p_x$, and a small constant $\epsilon > 0$, the unrestricted adversarial examples can be defined as: $A_u = \{G(z^*, y_s) \in \mathcal{X} \mid \exists z \sim p_z,\ z^* \sim p_z,\ \|z - z^*\|_p < \epsilon \wedge f(G(z^*, y_s)) \neq o(G(z^*, y_s)) = f(G(z, y_s)) = o(G(z, y_s)) = y_s\}$ where $y_s$ is the source label.\nIn this work, we train a conditional GAN to learn the distribution of adversarial examples and output the corresponding adversary directly from any input noise. To clarify the difference from Song et al. (2018), we call our generated adversaries the non-constrained adversarial examples.
Definition 3. Non-constrained Adversarial Examples. If there is a mapping $G^*$ from $z \sim p_z$ to $G^*(z, y) \sim q_\theta$, where $q_\theta$ is an approximated distribution of $p_{adv}$, the non-constrained adversarial examples can be defined as $A_n = \{G^*(z, y_s) \in \mathcal{X} \mid f(G^*(z, y_s)) \neq o(G^*(z, y_s)) = y_s\}$ where $y_s$ is the source label.\nHere we need to find a mapping $G^*$, e.g. a generative model, such that for $z \sim p_z$, $G^*(z, y)$ is an image in $\mathcal{X}$ and the output distribution is an approximated distribution of $p_{adv}$, for example using the Kullback-Leibler divergence (Kullback & Leibler, 1951): $KL(q_\theta \| p_{adv}) < \epsilon$ for a small constant $\epsilon$.\nIn summary, perturbation-based adversarial examples are based on perturbing an image $x \in \mathcal{X}$, and unrestricted adversarial examples (Song et al., 2018) perturb the input noise $z \sim p_z$ of an existing mapping G. Most perturbation-based adversarial attacks and the method of Song et al. (2018) fall into the search-based adversarial attack category.\nDefinition 4. Search-based Adversarial Attack. Given an input vector $v \in \mathcal{V}$ (either a benign image x or a random vector z), a search-based adversarial attack searches for a vector $v'$ with $\|v - v'\|_p < \epsilon$ such that $v'$ leads to an adversarial example for the target classifier.\nIn contrast, non-constrained adversarial examples are more generalized, so we need to learn a mapping $G^*$ such that for any input noise sampled from the distribution $p_z$, the output is an adversarial image. Such a mapping is called an adversarial generative model, and our method falls into the generation-based adversarial attack category.\nDefinition 5. Generation-based Adversarial Attack. Given an input vector $v \in \mathcal{V}$ (either a benign image x or a random vector z), a generation-based adversarial attack generates an adversarial perturbation or adversarial example directly from v, usually adopting generative models." }, { "heading": "2.2 GENERATIVE MODELS FOR ADVERSARIAL ATTACK", "text": "Generative models have been adopted for adversarial attacks in recent works (Baluja & Fischer, 2017). Reddy Mopuri et al. (2018) propose a Network for Adversary Generation (NAG) that models the distribution of adversarial perturbations for a target classifier, so NAG can craft adversarial perturbations from any given random noise, which are then added to a natural image to fool the target classifier. Omid et al. (2018) propose to generate universal or image-dependent adversarial perturbations using U-Net (Ronneberger et al., 2015) or a ResNet generator (He et al., 2016) from any given random noise. Xiao et al. (2018) propose AdvGAN, which takes an original image as input and generates an adversarial perturbation for the input to craft an adversarial example. Bai et al. (2020) further propose AI-GAN, which adopts projected gradient descent (PGD) (Madry et al., 2018) in the training stage to train a GAN to generate a targeted adversarial perturbation for the input image and target class. The above attack methods all fall into the generation-based adversarial attack category, and their crafted examples fall into the perturbation-based adversarial examples. Another recent work, PS-GAN (Liu et al., 2019), pre-processes an input seed patch (a small image) into an adversarial patch that is added to a natural image to craft an adversarial example, and an attention model is used to locate the attack area on the natural image.
Different from the above methods that generate adversarial perturbations or patches, Song et al. (2018) propose to search for a random noise $z^*$ around the input noise z of AC-GAN (Odena et al., 2017) such that the corresponding output of AC-GAN is an adversarial example for the target classifier. Their method falls into the search-based adversarial attack category, and their crafted examples fall into the unrestricted adversarial examples, as there is no original image in their method.\nAT-GAN falls into the generation-based adversarial attack category, and its crafted examples fall into the non-constrained adversarial examples. To clearly distinguish our work, we highlight the differences from the most related works as follows:\nNAG, AdvGAN and AI-GAN vs. AT-GAN. NAG (Reddy Mopuri et al., 2018), AdvGAN (Xiao et al., 2018) and AI-GAN (Bai et al., 2020) focus on crafting adversarial perturbations with GANs. NAG takes random noise as input and crafts image-agnostic adversarial perturbations. AdvGAN and AI-GAN both use natural images as inputs and generate the corresponding adversarial perturbations for the input image; AI-GAN uses adversarial examples generated by PGD for training. In contrast, AT-GAN does not use any natural image as input and generates adversarial examples directly from any random noise. Further, compared with AI-GAN, we do not use any adversarial examples for training.\nSong's vs. AT-GAN. Song's method (Song et al., 2018) searches over the neighborhood of the input noise of the pre-trained AC-GAN in order to find a noise whose output image is misclassified by the target classifier. They define such adversaries as unrestricted adversarial examples; however, the adversaries are still constrained by the original input noise. Their method is essentially based on search, while AT-GAN is trained as an adversarial generative model, and our output is not constrained by any neighborhood." }, { "heading": "3 AT-GAN: AN ADVERSARIAL GENERATIVE MODEL", "text": "Here we first introduce the estimation of the distribution of adversarial examples, then propose the AT-GAN framework, a generation-based adversarial attack for crafting non-constrained adversarial examples. Further analysis is provided showing that AT-GAN can learn the adversary distribution." }, { "heading": "3.1 ESTIMATING THE ADVERSARIAL DISTRIBUTION", "text": "In order to generate non-constrained adversarial examples, we need to estimate the distribution of adversarial examples $p_{adv}(x_{adv}|y_{true})$, where $y_{true}$ is the true label. Given the parameterized estimated distribution of adversarial examples $q_\theta(x|y_{true})$, we can define the estimation problem as:\n$q_{\theta^*}(x_{adv}|y_{true}) = \arg\min_{\theta \in \Omega} KL(q_\theta(x_{adv}|y_{true}) \| p_{adv}(x_{adv}|y_{true})), \quad (1)$\nwhere θ indicates the trainable parameters and Ω is the parameter space.\nIt is hard to optimize equation 1 directly, as $p_{adv}(x_{adv}|y_{true})$ is unknown. Inspired by the perturbation-based adversarial examples, as shown in Figure 1, we postulate that for each adversarial example $x_{adv}$ there exists some benign example x with $\|x - x_{adv}\|_p < \epsilon$. In other words, $p_{adv}(x_{adv}|y_{true})$ is close to $p(x|y_{true})$ to some extent, and we can obtain $p(x|y_{true})$ by Bayes' theorem, $p(x|y_{true}) = \frac{p(y_{true}|x) \cdot p(x)}{p(y_{true})}$, where $p(y_{true}|x)$, $p(x)$ and $p(y_{true})$ can be obtained directly from the trainset. Thus, we can approximately solve equation 1 in two stages: 1) fit the distribution $p_\theta$ of the benign data; 2) transfer $p_\theta$ to estimate the distribution $q_\theta$ of adversarial examples.\nSpecifically, we propose an adversarial generative model called AT-GAN to learn the distribution of adversarial examples.
The overall architecture of AT-GAN is illustrated in Figure 2. Corresponding to the above two stages, we implement AT-GAN by first training a GAN model called AC-WGAN_GP, which combines AC-GAN (Odena et al., 2017) and WGAN_GP (Gulrajani et al., 2017), to obtain a generator $G_{original}$ that learns $p_\theta$ (see Appendix B), and then transferring $G_{original}$ to attack the target classifier f for the learning of $q_\theta$. We adopt AC-GAN and WGAN-GP for the AT-GAN implementation as they can build a powerful generative model on the three evaluated datasets, and Song et al. (2018) also utilize the same combination. However, AT-GAN is not limited to these GANs, and we also implement AT-GAN using StyleGAN2-ada (Karras et al., 2020a) on a different dataset." }, { "heading": "3.2 TRANSFERRING THE GENERATOR FOR ATTACK", "text": "After the original generator $G_{original}$ is trained, we transfer it to learn the distribution of adversarial examples in order to attack the target model. As illustrated in Figure 2 (b), there are three neural networks: the original generator $G_{original}$, the attack generator $G_{attack}$ to be transferred, which is initialized with the weights of $G_{original}$, and the classifier f to be attacked. The goal of the second stage can be described as:\n$G^*_{attack} = \arg\min_{G_{attack}} \|G_{original}(z, y_s) - G_{attack}(z, y_s)\|_p \quad \text{s.t. } f(G_{attack}(z, y_s)) = y_t \neq y_s, \quad (2)$\nwhere $y_t$ denotes the target label, $\|\cdot\|_p$ denotes the $\ell_p$ norm, and we focus on p = 2 in this work. To optimize equation 2, we construct the loss function from $\mathcal{L}_1$ and $\mathcal{L}_2$, where $\mathcal{L}_1$ aims to ensure that f yields the target label $y_t$, which is fixed for the targeted attack on each category:\n$\mathcal{L}_1 = \mathbb{E}_{z \sim p_z}[H(f(G_{attack}(z, y_s)), y_t)]. \quad (3)$\nHere H(·,·) denotes the cross entropy between the two terms and $y_s$ is sampled from $\mathcal{Y}$. $\mathcal{L}_2$ aims to ensure that the adversarial generator $G_{attack}$ generates realistic examples:\n$\mathcal{L}_2 = \mathbb{E}_{z \sim p_z}[\|G_{original}(z, y_s) + \rho - G_{attack}(z, y_s)\|_p]. \quad (4)$\nHere ρ is a small uniform random noise constrained by both the $\ell_0$ and $\ell_\infty$ norms. We add ρ to constrain $G_{attack}(z, y_s)$ to lie in the neighborhood of $G_{original}(z, y_s)$ rather than be exactly the same as $G_{original}(z, y_s)$.\nThe objective function for transferring $G_{original}$ to $G_{attack}$ can be formulated as $\mathcal{L} = \langle \alpha\mathcal{L}_1, \beta\mathcal{L}_2 \rangle$, where α and β are hyper-parameters that control the training process. Note that in the case that α = 1 and β → ∞, the objective function is similar to that of the perturbation-based attacks (Goodfellow et al., 2015; Tramèr et al., 2018; Madry et al., 2018). For the untargeted attack, we can replace $y_t$ in $\mathcal{L}_1$ with the prediction label y of maximum confidence other than $y_s$, i.e., $\max_{y \neq y_s} f(y \mid G_{attack}(z, y_s))$."
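One possible rendering of a single transfer update combining equations 3 and 4 is sketched below; the target-label mapping, the noise bound on ρ, and the α/β values are illustrative assumptions, the pairwise objective ⟨αL1, βL2⟩ is collapsed here into a single weighted sum, and the z_dim attribute on the generator is hypothetical.

```python
import torch
import torch.nn.functional as F

def transfer_step(G_attack, G_original, f, opt, batch, n_cls, alpha=1.0, beta=10.0):
    """One AT-GAN transfer update: loss = alpha * L1 + beta * L2 (Eqs. 3-4)."""
    z = torch.randn(batch, G_attack.z_dim)                 # z ~ p_z
    y_s = torch.randint(0, n_cls, (batch,))                # source labels
    y_t = (y_s + 1) % n_cls                                # an illustrative fixed target map
    x_adv = G_attack(z, y_s)
    # L1: make the target classifier predict y_t (cross entropy, Eq. 3)
    l1 = F.cross_entropy(f(x_adv), y_t)
    # L2: stay close to the frozen original generator, up to a small noise rho (Eq. 4)
    with torch.no_grad():
        rho = torch.empty_like(x_adv).uniform_(-0.05, 0.05)  # assumed noise bound
        x_ref = G_original(z, y_s) + rho
    l2 = (x_ref - x_adv).flatten(1).norm(p=2, dim=1).mean()  # p = 2 as in the paper
    loss = alpha * l1 + beta * l2
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```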
}, { "heading": "3.3 THEORETICAL ANALYSIS ON AT-GAN", "text": "This subsection provides a theoretical analysis of why AT-GAN can generate non-constrained adversarial examples as realistic and diverse as real data. We will prove that, under ideal conditions, AT-GAN can estimate the distribution of adversarial examples, which is close to that of the real data.\nSuppose $p_{data}$ is the distribution of real data, and $p_g$ and $p_a$ are the distributions learned by the generator of AC-WGAN_GP and by AT-GAN, respectively. For the optimization of equation 4, $\mathcal{L}_2$ aims to constrain the image generated by $G_{attack}$ to lie in the ε-neighborhood of the output of $G_{original}$. We prove that, under the ideal condition that $\mathcal{L}_2$ guarantees $G_{attack}(z, y_s)$ to be close enough to $G_{original}(z, y_s)$ for any input noise z, the distribution of AT-GAN almost coincides with the distribution of AC-WGAN_GP. Formally, we state the result for the two distributions as follows.\nTheorem 1. Suppose $\max_{z,y} \mathcal{L}_2 < \epsilon$; then $KL(p_a \| p_g) \to 0$ as $\epsilon \to 0$.\nThe proof of Theorem 1 is in Appendix C. Samangouei et al. (2018) prove that the global optimum of WGAN is $p_g = p_{data}$, and we show that the optimum of AC-WGAN_GP has the same property. We formalize the property as follows.\nTheorem 2. The global minimum of the virtual training of AC-WGAN_GP is achieved if and only if $p_g = p_{data}$.\nThe proof of Theorem 2 is in Appendix C. According to Theorems 1 and 2, under the ideal condition we conclude $p_a \approx p_g = p_{data}$, which indicates that the distribution of non-constrained adversarial examples learned by AT-GAN is very close to that of the real data, as discussed in Section 3.1, so the non-constrained adversarial instances are as realistic and diverse as the real data." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we provide two implementations of AT-GAN to validate the effectiveness and efficiency of the proposed approach. Empirical experiments demonstrate that AT-GAN yields higher attack success rates against adversarially trained models with higher efficiency. Besides, AT-GAN learns a distribution of adversarial examples that is close to the real data distribution and generates realistic and diverse adversarial examples." }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "Datasets. We consider four standard datasets, namely MNIST (LeCun et al., 1989), Fashion-MNIST (Xiao et al., 2017) and CelebA (Liu et al., 2015) for the AT-GAN implementation using AC-GAN (Odena et al., 2017) and WGAN_GP (Gulrajani et al., 2017), and the CIFAR-10 dataset (Krizhevsky et al., 2009) for the AT-GAN implementation using StyleGAN2-ada (StyleGAN2 with adaptive discriminator augmentation) (Karras et al., 2020a). MNIST is a dataset of handwritten digits from 0 to 9. Fashion-MNIST is similar to MNIST with 10 categories of fashion clothes. CelebA contains more than 200,000 celebrity faces; we group them into female/male and focus on gender classification as in Song et al. (2018). CIFAR-10 consists of 32×32 color images in 10 classes, with 6,000 images per class. For all datasets, we normalize the pixel values into the range [0, 1].\nBaselines. We compare AT-GAN with the search-based attack methods, including Song's (Song et al., 2018) for unrestricted adversarial examples, as well as FGSM (Goodfellow et al., 2015), PGD (Madry et al., 2018) and R+FGSM (Tramèr et al., 2018) for perturbation-based adversarial examples. Note that although the perturbation-based results are not directly comparable to ours, as they are limited to small perturbations on real images, they provide a good sense of the models' robustness.\nModels. For MNIST and Fashion-MNIST, we adopt four models used in Tramèr et al. (2018), denoted as Model A to D. For CelebA, we consider three models, i.e., CNN, VGG16 (Simonyan & Zisserman, 2015) and ResNet (He et al., 2016). Details of Model A to D and the CNN are described in Table 1; the ResNet is the same as in Song et al. (2018). For CIFAR-10, we adopt the wide ResNet w32-10 (Zagoruyko & Komodakis, 2016). Details about the architectures of AT-GAN are provided in Appendix D.\nEvaluation Setup. We consider normal training and existing advanced defenses, namely adversarial training (Goodfellow et al., 2015), ensemble adversarial training (Tramèr et al., 2018) and iterative adversarial training (Madry et al., 2018). All experiments are conducted on a single Titan X GPU, and the hyper-parameters used for attacks are described in Appendix D."
}, { "heading": "4.2 EVALUATION RESULTS", "text": "For evaluation, we report the comparisons on attack success rate, attack efficiency and visualize some adversarial examples for AT-GAN and the baselines. More evaluation results on the transferability, ablation study, human evaluation, and the attack results on CIFAR-10, are provided in Appendix D." }, { "heading": "4.2.1 COMPARISON ON ATTACK SUCCESS RATE", "text": "To validate the attack effectiveness, we compare AT-GAN with the baselines under white-box setting. Since Athalye et al. (2018) show that the currently most effective defense method is adversarial training, we consider adversarially trained models as the defense models. The attack success rates are reported in Table 2.\nOn MNIST, AT-GAN achieves the highest Attack Success Rate (ASR) against the baselines on all defense models. As for normal training, AT-GAN achieves the highest ASR on Model D, and the second highest ASR of over 98% on the other models. On Fashion-MNIST, AT-GAN achieves the highest ASR on average. On CelebA, AT-GAN achieves the highest ASR on almost all the models, with two exceptions under normal training but the results of AT-GAN are close to the highest.\nIn general, AT-GAN achieves the highest attack performance above 90% on all the defense models. As AT-GAN aims to estimate the distribution of adversarial examples, adversarial training with some specific attacks has little robustness against AT-GAN, raising a new security issue for the development of more generalized adversarial training models." }, { "heading": "4.2.2 COMPARISON ON ATTACK EFFICIENCY", "text": "There are many scenarios where one needs a large amount of adversarial examples, such as adversarial training or exploring the property of adversarial examples. Thus, the efficiency of generating adversarial examples is very important, but such metric is ignored in most existing works.\nAs an adversarial generative model, once trained, AT-GAN can generate adversarial examples very quickly. Here we evaluate the efficiency of each attack method for Model A on MNIST. The average time of generating/searching 1000 adversarial examples is summarized in Table 3. Among the five attack methods, AT-GAN is the fastest as it could craft adversarial examples without target classifier and gradient calculation. Note that Song’s needs much longer time than others as it needs multiple searches and queries to generate one adversarial example. It takes about 8 minutes for transferring the generator of AT-GAN. Here we only focus on the efficiency of generating adversarial examples after AT-GAN is transferred, i.e. we have already found the generator G∗, as in such case we could generate as many adversarial examples as we need." }, { "heading": "4.2.3 VISUALIZATION ON ADVERSARIAL EXAMPLES", "text": "Since the goal of adversarial examples is to fool target neural networks but not to fool human oracle, in Figure 3 we illustrate some adversarial examples generated by different attacks for Modle A on MNIST and Fashion-MNIST, and CNN on CelebA.\nOn MNIST, AT-GAN generates slightly more realistic images than Song’s, e.g. “0” and “3”. On Fashion-MNIST and CelebA, some adversarial examples generated by Song’s method are not as realistic as AT-GAN to human perception, for example “t-shirt/top (0) ”, “sandal (5)” and some facial details. Note that Song’s method tends to distort the foreground that makes the images on MNIST more clean but some images are not realistic while AT-GAN tends to distort the background. 
As for perturbation-based attacks, their adversarial examples are not clear enough, especially on MNIST and Fashion-MNIST, due to the adversarial perturbations. There are also some unnatural samples generated by AT-GAN due to the limitations of GANs, and we hope that better generative models can address this issue. For targeted attacks, see more examples crafted by AT-GAN in Appendix D.\nIn general, AT-GAN can generate realistic and diverse adversarial examples, as equation 1 forces the generated non-constrained adversarial examples to be close to the benign examples generated by the original generator." }, { "heading": "4.3 VISUALIZATION ON ADVERSARIAL DISTRIBUTION", "text": "As discussed in Section 3.3, we provide a brief analysis showing that AT-GAN can learn a distribution of adversarial examples close to the distribution of the real image data. To verify this empirically, we randomly choose 5,000 benign images and 5,000 adversarial examples generated by different attack methods, and merge these images according to their real labels for MNIST and Fashion-MNIST. Then we use t-SNE (Maaten & Hinton, 2008) on these images to illustrate the distributions in two dimensions. t-SNE models each high-dimensional object in such a way that similar objects are modeled by nearby points and dissimilar objects are modeled by distant points with high probability. This implies that, if the adversarial examples have a distribution different from the benign data, t-SNE will not handle them well and points from different categories will overlap with each other after the dimension reduction, i.e., the results will be chaotic.\nThe results are illustrated in Figure 4. For AT-GAN, different categories are separated as in the test set, while those of the other methods are mixed with each other, especially on MNIST (top). This indicates that the distribution AT-GAN learned is indeed very close to the distribution of the real data.\nTo further validate that AT-GAN learns a distribution different from the original GAN rather than just adding some constant universal perturbation vector, in Appendix E we illustrate some instances generated by the original generator and AT-GAN for the same input. We find that for different inputs the original generator outputs different images, and the difference between the instances generated by the original generator and AT-GAN also differs across inputs, indicating that AT-GAN indeed learns a different distribution from the original GAN."
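The t-SNE check behind Figure 4 can be reproduced along the following lines; scikit-learn defaults stand in for the unreported t-SNE hyper-parameters, and the function and file names are our own.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def tsne_overlap_plot(benign, adversarial, labels_b, labels_a, out="tsne.png"):
    """Embed 5k benign + 5k adversarial images jointly with t-SNE and color
    the points by their real class, mirroring the check behind Figure 4."""
    n = len(benign) + len(adversarial)
    X = np.concatenate([benign, adversarial]).reshape(n, -1)  # flatten images
    y = np.concatenate([labels_b, labels_a])
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)
    plt.figure(figsize=(6, 6))
    plt.scatter(emb[:, 0], emb[:, 1], c=y, s=3, cmap="tab10")
    plt.axis("off")
    plt.savefig(out, dpi=150)
```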
}, { "heading": "5 CONCLUSION", "text": "In this work, we propose a generation-based adversarial attack method, called AT-GAN (Adversarial Transfer on Generative Adversarial Net), that aims to learn the distribution of adversarial examples for the target classifier. The generated adversaries are “non-constrained”, as we perform no search at all in the neighborhood of the input, and once trained AT-GAN can output adversarial examples directly for any input noise drawn from an arbitrary distribution (e.g. a Gaussian distribution). Extensive experiments and visualizations show that AT-GAN achieves the highest attack success rates against adversarially trained models and can generate diverse and realistic adversarial examples efficiently.\nOur work also suggests that adversarial training, a popular defense method based on perturbation-based adversarial examples, cannot guarantee robustness against non-constrained adversarial examples. A possible reason is that AT-GAN learns a more complete version of the adversarial example distribution, which is much more diverse than that of the perturbation-based methods.\nNote that any conditional GAN that crafts realistic examples could be used for the implementation of AT-GAN. In this work, we provide two implementations on four datasets. In future work we plan to try advanced GANs for generating high-resolution images. Our method also suggests a new direction of adversarial attack by designing an adversarial generative model directly. There are several other interesting questions related to our work that can be explored in future work. For instance, what is the distribution of adversarial examples really like? Is it a continuous or smooth manifold? How closely could we learn such a distribution through a GAN? We hope our work can inspire more research in this direction." }, { "heading": "APPENDIX", "text": "In the appendix, we provide additional related work on gradient-based adversarial attack methods, adversarial training methods and typical generative adversarial nets. Then we describe how to obtain the original generator and provide the theoretical analysis, as well as experimental details and additional results. In the end, we visualize the examples generated by the original GAN and AT-GAN." }, { "heading": "A ADDITIONAL RELATED WORK", "text": "" }, { "heading": "A.1 GRADIENT-BASED ATTACKS", "text": "Numerous adversarial attacks have been proposed in recent years (Carlini & Wagner, 2017; Liu et al., 2017; Bhagoji et al., 2017; Li et al., 2019). In this part, we introduce three typical adversarial attack methods. Here the components of all adversarial examples are clipped to [0, 1].\nFast Gradient Sign Method (FGSM). FGSM (Goodfellow et al., 2015) adds a perturbation in the gradient direction of the training loss J on the input x to generate adversarial examples:\n$x_{adv} = x + \epsilon \cdot \mathrm{sign}(\nabla_x J(\theta, x, y_{true})),$\nwhere $y_{true}$ is the true label of a sample x, θ is the model parameter, and ε specifies the $\ell_\infty$ distortion between x and $x_{adv}$.\nProjected Gradient Descent (PGD). The PGD adversary (Madry et al., 2018) is a multi-step variant of FGSM, which applies FGSM for k iterations with a step size α:\n$x_{adv}^{t+1} = \mathrm{clip}(x_{adv}^{t} + \alpha \cdot \mathrm{sign}(\nabla_x J(\theta, x_{adv}^{t}, y_{true})),\ x - \epsilon,\ x + \epsilon), \quad x_{adv}^{0} = x,\ x_{adv} = x_{adv}^{k}.$\nHere clip(x′, p, q) forces its input x′ to reside in the range [p, q].\nRand FGSM (R+FGSM). R+FGSM (Tramèr et al., 2018) first applies a small random perturbation to the benign image with a parameter α (α < ε), then uses FGSM to generate an adversarial example based on the perturbed image:\n$x_{adv} = x' + (\epsilon - \alpha) \cdot \mathrm{sign}(\nabla_{x'} J(\theta, x', y_{true})) \quad \text{where } x' = x + \alpha \cdot \mathrm{sign}(\mathcal{N}(0, I)).$"
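For completeness, minimal PyTorch sketches of FGSM and PGD as defined above, assuming inputs in [0, 1] and cross-entropy as the training loss J:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """x_adv = x + eps * sign(grad_x J), clipped to [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps, alpha, k):
    """k FGSM steps of size alpha, projected onto the l_inf eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(k):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```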
During training, the adversarial samples are calculated with respect to the current state of the network. Taking FGSM as an example, the loss function can be written as:\n$$\tilde{J}(\theta, x, y_{true}) = \alpha J_f(\theta, x, y_{true}) + (1 - \alpha) J_f\big(\theta,\; x + \epsilon \cdot \mathrm{sign}(\nabla_x J(\theta, x, y_{true})),\; y_{true}\big).$$\nEnsemble adversarial training. Tramèr et al. (2018) propose an ensemble adversarial training method, in which a DNN is trained with adversarial examples transferred from a number of fixed pre-trained models.\nIterative adversarial training. Madry et al. (2018) propose to train a DNN with adversarial examples generated by iterative methods such as PGD." }, { "heading": "A.3 GENERATIVE ADVERSARIAL NET", "text": "A Generative Adversarial Net (GAN) (Goodfellow et al., 2014) consists of two neural networks, $G$ and $D$, trained in opposition to each other. The generator $G$ is optimized to estimate the data distribution, while the discriminator $D$ aims to distinguish fake samples from $G$ and real samples from the training data. The objective of $D$ and $G$ can be formalized as a min-max value function $V(G, D)$:\n$$\min_G \max_D V(G, D) = \mathbb{E}_{x \sim p_x}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))].$$\nThe Deep Convolutional Generative Adversarial Net (DCGAN) (Radford et al., 2016) is the convolutional version of GAN, which implements GAN with convolutional networks and stabilizes the training process. The Auxiliary Classifier GAN (AC-GAN) (Odena et al., 2017) is another variant that extends GAN with class conditions via an extra classifier $C$. The objective function of AC-GAN can be formalized as follows:\n$$\min_G \max_D \min_C V(G, D, C) = \mathbb{E}_{x \sim p_x}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z, y_s)))] + \mathbb{E}_{x \sim p_x}[\log(1 - C(x, y_s))] + \mathbb{E}_{z \sim p_z}[\log(1 - C(G(z, y_s), y_s))].$$\nTo make GANs more trainable in practice, Arjovsky et al. (2017) propose the Wasserstein GAN (WGAN), which uses the Wasserstein distance so that the loss function has more desirable properties. Gulrajani et al. (2017) introduce WGAN with gradient penalty (WGAN_GP), which outperforms WGAN in practice. Its objective function is formulated as:\n$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_x}[D(x)] - \mathbb{E}_{z \sim p_z}[D(G(z))] - \lambda \mathbb{E}_{\hat{x} \sim p_{\hat{x}}}\big[(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2\big],$$\nwhere $p_{\hat{x}}$ is obtained by sampling uniformly along straight lines between pairs of points sampled from the data distribution $p_x$ and the generator distribution $p_g$." }, { "heading": "B TRAINING THE ORIGINAL GENERATOR", "text": "Figure 2 (a) illustrates the overall architecture of AC-WGAN_GP, which we use as the normal GAN. AC-WGAN_GP is the combination of AC-GAN (Odena et al., 2017) and WGAN_GP (Gulrajani et al., 2017), composed of three neural networks: a generator $G$, a discriminator $D$ and a classifier $f$. The generator $G$ takes a random noise $z$ and a source label $y_s$ as inputs and generates an image $G(z, y_s)$. It aims to generate an image $G(z, y_s)$ that is indistinguishable to the discriminator $D$ and makes the classifier $f$ output the label $y_s$. The loss function of $G$ can be formulated as:\n$$L_G = \mathbb{E}_{z \sim p_z(z)}[H(f(G(z, y_s)), y_s)] - \mathbb{E}_{z \sim p_z(z)}[D(G(z, y_s))].$$\nHere $H(a, b)$ is the cross entropy between $a$ and $b$. The discriminator $D$ takes the training data $x$ or the generated data $G(z, y_s)$ as input and tries to distinguish them. The loss function of $D$ with gradient penalty for samples $\hat{x} \sim p_{\hat{x}}$ can be formulated as:\n$$L_D = -\mathbb{E}_{x \sim p_{data}(x)}[D(x)] + \mathbb{E}_{z \sim p_z(z)}[D(G(z, y_s))] + \lambda \mathbb{E}_{\hat{x} \sim p_{\hat{x}}(\hat{x})}\big[(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2\big].$$\nThe classifier $f$ takes the training data $x$ or the generated data $G(z, y_s)$ as input and predicts the corresponding label.
The loss function is:\n$$L_f = \mathbb{E}_{x \sim p_{data}(x)}[H(f(x), y_{true})] + \mathbb{E}_{z \sim p_z(z)}[H(f(G(z, y_s)), y_s)].$$\nDifferent from AC-WGAN_GP, styleGAN2-ada (Karras et al., 2020a) trains styleGAN2 (Karras et al., 2020b) with adaptive discriminator augmentation. We obtain the network and weights from Karras et al. (2020a)." }, { "heading": "C THEORETICAL ANALYSIS OF AT-GAN", "text": "In this section, we provide proofs for the theorems in Section 3.3.\nTheorem 1. Suppose $\max_{z,y} L_2 < \epsilon$; then $KL(p_a \| p_g) \to 0$ as $\epsilon \to 0$.\nProof. We first observe that, for a distribution $p(x)$ on a space $\mathcal{X}$, we can construct another distribution $q(x)$ by selecting points $p_\epsilon(x)$ in the $\epsilon$-neighborhood of $p(x)$ for any $x \in \mathcal{X}$. Clearly, when $p_\epsilon(x)$ is close enough to $p(x)$, $q(x)$ is almost the same distribution as $p(x)$. Formally, we have the following lemma.\nLemma 1. Given two distributions $P$ and $Q$ with probability density functions $p(x)$ and $q(x)$ on a space $\mathcal{X}$, if there exists a constant $\epsilon$ satisfying $\|q(x) - p(x)\| < \epsilon$ for any $x \in \mathcal{X}$, then $KL(P \| Q) \to 0$ as $\epsilon \to 0$.\nProof. For two distributions $P$ and $Q$ with probability density functions $p(x)$ and $q(x)$, we can write $q(x) = p(x) + r(x)$ where $\|r(x)\| < \epsilon$. Then\n$$\begin{aligned} KL(P \| Q) &= \int p(x) \log \frac{p(x)}{q(x)}\, dx \\ &= \int p(x) \log p(x)\, dx - \int p(x) \log q(x)\, dx \\ &= \int (q(x) - r(x)) \log p(x)\, dx - \int (q(x) - r(x)) \log q(x)\, dx \\ &= \int q(x) \log p(x)\, dx - \int q(x) \log q(x)\, dx - \int r(x) \log p(x)\, dx + \int r(x) \log q(x)\, dx \\ &= \int r(x) \log \frac{q(x)}{p(x)}\, dx - KL(Q \| P) \;\le\; \int \log\Big(1 + \frac{\epsilon}{p(x)}\Big)\, dx. \end{aligned}$$\nClearly, as $\epsilon \to 0$ we get $\int \log(1 + \epsilon / p(x))\, dx \to 0$, which means $KL(P \| Q) \to 0$.\nWe now return to Theorem 1. For the two distributions $p_a$ and $p_g$, $\max_{y,z} L_2 < \epsilon$ implies that $\forall z \sim p_z$, $\|p_a(z, \cdot) - p_g(z, \cdot)\| < \epsilon$. By Lemma 1, $KL(p_a \| p_g) \to 0$ as $\epsilon \to 0$. This concludes the proof.\nTheorem 2. The global minimum of the virtual training of AC-WGAN_GP is achieved if and only if $p_g = p_{data}$.\nProof. To simplify the analysis, we choose a category $y$ of AC-WGAN_GP and denote by $p_g(x|y)$ and $p_{data}(x|y)$ the distribution the generator learns and the distribution of the real data, respectively. For each category, the loss function is then equivalent to that of WGAN_GP. We follow Samangouei et al. (2018) in proving this property. The WGAN_GP min-max loss is given by:\n$$\begin{aligned} \min_G \max_D V(D, G) &= \mathbb{E}_{x \sim p_{data}(x)}[D(x)] - \mathbb{E}_{z \sim p_z(z)}[D(G(z))] - \lambda \mathbb{E}_{\hat{x} \sim p_{\hat{x}}(\hat{x})}\big[(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2\big] \\ &= \int_x p_{data}(x) D(x)\, dx - \int_z p_z(z) D(G(z))\, dz - \lambda \int_{\hat{x}} p_{\hat{x}}(\hat{x}) (\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2\, d\hat{x} \\ &= \int_x [p_{data}(x) - p_g(x)] D(x)\, dx - \lambda \int_{\hat{x}} p_{\hat{x}}(\hat{x}) (\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2\, d\hat{x} \end{aligned} \quad (5)$$\nFor a fixed $G$, the optimal discriminator $D$ that maximizes $V(D, G)$ is:\n$$D^*_G(x) = \begin{cases} 1 & \text{if } p_{data}(x) \ge p_g(x) \\ 0 & \text{otherwise} \end{cases} \quad (6)$$\nAccording to equation 5 and equation 6, we get:\n$$\begin{aligned} V(D, G) &= \int_x [p_{data}(x) - p_g(x)] D(x)\, dx - \lambda \int_{\hat{x}} p_{\hat{x}}(\hat{x}) (\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2\, d\hat{x} \\ &= \int_{\{x \,|\, p_{data}(x) \ge p_g(x)\}} (p_{data}(x) - p_g(x))\, dx - \lambda \int_{\hat{x}} p_{\hat{x}}(\hat{x})\, d\hat{x} \\ &= \int_{\{x \,|\, p_{data}(x) \ge p_g(x)\}} (p_{data}(x) - p_g(x))\, dx - \lambda \end{aligned} \quad (7)$$\nLet $\mathcal{X} = \{x \,|\, p_{data}(x) \ge p_g(x)\}$. In order to minimize equation 7, we set $p_{data}(x) = p_g(x)$ for any $x \in \mathcal{X}$. Then, since both $p_g$ and $p_{data}$ integrate to 1, we get:\n$$\int_{\mathcal{X}^c} p_g(x)\, dx = \int_{\mathcal{X}^c} p_{data}(x)\, dx.$$\nHowever, this contradicts equation 6, where $p_{data}(x) < p_g(x)$ for $x \in \mathcal{X}^c$, unless $\mu(\mathcal{X}^c) = 0$ where $\mu$ is the Lebesgue measure.\nTherefore, for each category we have $p_g(x|y) = p_{data}(x|y)$, which means $p_g(x) = p_{data}(x)$ for AC-WGAN_GP.
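As an illustrative numerical check of Lemma 1 (our own sketch, not part of the original analysis), take $P = \mathcal{N}(0, 1)$ and $Q = \mathcal{N}(\delta, 1)$: the pointwise gap between the two densities is bounded by a constant multiple of $\delta$, and $KL(P \| Q) = \delta^2/2$, so shrinking the gap drives the divergence to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)   # Monte Carlo samples from P = N(0, 1)

def log_pdf(x, mu):
    # Log density of N(mu, 1).
    return -0.5 * (x - mu) ** 2 - 0.5 * np.log(2 * np.pi)

for delta in (1.0, 0.1, 0.01):
    # KL(P||Q) = E_P[log p(X) - log q(X)] with Q = N(delta, 1).
    kl = np.mean(log_pdf(x, 0.0) - log_pdf(x, delta))
    print(f"delta={delta}: KL estimate {kl:.6f} (exact {delta ** 2 / 2:.6f})")
```

The estimate vanishes as the pointwise gap between the densities shrinks, in line with the lemma."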
}, { "heading": "D ADDITIONAL DETAILS ON EXPERIMENTS", "text": "In this section, we provide more details on experimental setup, report results on transferability, do ablation study on hyper-parameters, investigate the generating capacity by human evaluation, and show details for another implementation of AT-GAN on CIFAR-10 dataset. In the end, we illustrate some non-constrained adversarial examples generated by AT-GAN on MNIST, Fashion-MNIST and CelebA for the target attack." }, { "heading": "D.1 MORE EXPERIMENTAL SETUP", "text": "We first provide more details on the experimental setup, including the model architectures and attack hyper-parameters.\nModel Architectures for AT-GAN. We first describe the neural network architectures used for AT-GAN in experiments. The abbreviations for components in the network are described in Table 4. The architecture of AC-WGAN_GP for MNIST and Fashion-MNIST is shown in Table 5 where the generator and discriminator are the same as in Chen et al. (2016), while the architecture of AC_WGAN_GP for CelebA is the same as in Gulrajani et al. (2017) and the architecture of styleGAN2-ada for CIFAR-10 is the same as in Karras et al. (2020a).\nHyper-parameters for Attacks. The hyper-parameters used in experiments for each attack method are described in Table 6 for MNIST, Fashion-MNIST and CelebA datasets. For CIFAR-10 dataset, we set = 0.03 for FGSM, = 0.03, α = 0.0075 and epochs= 20 for PGD, α = 3, β = 2 and epochs= 1, 000 for AT-GAN." }, { "heading": "D.2 TRANSFERABILITY OF AT-GAN", "text": "Another important issue for adversarial examples is the transferability across different models. To demonstrate the transferability of non-constrained adversarial examples, we use adversarial examples generated by attacking Model A (MNIST and Fashion-MNIST) and CNN (CelebA), to evaluate the attack success rates on Model C (MNIST and Fashion-MNIST) and VGG16 (CelebA). As shown in Table 7, non-constrained adversarial examples generated by AT-GAN exhibit moderate transferability." }, { "heading": "D.3 ABLATION STUDY", "text": "In this subsection, we investigate the impact of using different ρ in the loss function. As ρ could be constrained by both `0 and `∞ norm, we test various bounds, using Model A on MNIST dataset, for ρ in `0 and `∞, respectively.\nWe first fix ‖ρ‖∞ = 0.5 and try various values for ‖ρ‖0, i.e. 0, 100, 200, 300, 400 (the maximum possible value is 784 for 28*28 input). The attack success rates are in Table 8. We can observe that different values of ‖ρ‖0 only have a little impact on the attack success rates, and the performances are very close for ‖ρ‖0 = 0, 100, 200. Figure 5 further illustrates some generated adversarial examples, among which we can see that there exist some slight differences on the examples. When ‖ρ‖0 = 0, AT-GAN tends to change the foreground (body) of the digits. When we increase the value of ‖ρ‖0 (100 and 200), AT-GAN is more likely to add tiny noise to the background and the crafted examples are more realistic to humans (for instance, smoother on digit 4). But if we continue to increase ‖ρ‖0 (300 or 400), AT-GAN tends to add more noise and the quality of the generated examples decays. To have a good tradeoff on attack performance and generation quality, we set ‖ρ‖0 = 200.\nWe then fix ‖ρ‖0 = 200 and test different values for ‖ρ‖∞, i.e. 0, 0.1, 0.2, 0.3, 0.4, 0.5 (the maximum possible value is 1). The attack success rates are in Table 9. We can observe that different values of\n‖ρ‖∞ have very little impact on the attack performance. 
Figure 6 further illustrates some generated adversarial examples, among which we can see that a little more noise is added for larger $\|\rho\|_\infty$, but the differences are very small for $\|\rho\|_\infty = 0.2$ to $0.5$. We therefore simply set $\|\rho\|_\infty = 0.5$ in the experiments, but other values of $\|\rho\|_\infty$ (0.2, 0.3, 0.4) also work." }, { "heading": "D.4 HUMAN EVALUATION", "text": "To investigate the generation capacity of AT-GAN, we use the same input and randomly pick 100 images for each category of MNIST generated by AT-GAN and by the original generator, respectively. We then conduct a human evaluation to determine whether each example is realistic. The evaluation results are in Table 10. We see that adversarial examples in some categories (e.g. 2, 4) are less often judged semantically meaningful than those in other categories (e.g. 0, 1). On average, however, the generation quality is close to that of the original generator." }, { "heading": "D.5 AT-GAN ON CIFAR-10 DATASET", "text": "To further demonstrate the flexibility of AT-GAN, we implement AT-GAN on the CIFAR-10 dataset using StyleGAN2-ada (Karras et al., 2020a), a recently proposed conditional GAN. The target classifier is a wide ResNet w32-10 (Zagoruyko & Komodakis, 2016) obtained by normal training (Nor.) or by iterative adversarial training (Iter.). The attack success rates are in Table 11. On normally trained models, PGD achieves an attack success rate of 100% while AT-GAN achieves an attack success rate of 93.5%. However, the adversarially trained model exhibits little robustness against AT-GAN, which achieves an attack success rate of 73.0%. In Figure 7, we illustrate some generated adversarial examples on the CIFAR-10 dataset." }, { "heading": "D.6 AT-GAN ON TARGET ATTACK", "text": "Here we show some non-constrained adversarial examples generated by AT-GAN for the targeted attack. The results are illustrated in Figure 8 for MNIST and Fashion-MNIST, and in Figure 9 for CelebA. Instead of adding perturbations to the original images, AT-GAN transfers the generative model (GAN) so that the generated adversarial instances are not of the same shape as the initial examples (on the diagonal) generated by the original generator. Note that for CelebA, the targeted adversarial attack is equivalent to the untargeted adversarial attack as it is a binary classification task." }, { "heading": "E VISUALIZATIONS FOR THE ORIGINAL GAN AND AT-GAN", "text": "Here we provide some instances generated by the original GAN and by AT-GAN with the same input noise, together with their differences, on MNIST and Fashion-MNIST. The results are depicted in Figures 10 and 11. For different input noises, both the original GAN and AT-GAN output different instances. For each category with the same input noise, the difference between the original GAN and AT-GAN mainly concerns the main content of the image. For two different input noises, the differences between the original GAN and AT-GAN are not the same as each other, indicating that AT-GAN learns a distribution of adversarial examples different from that of the original GAN rather than just adding some universal perturbation vectors to the original GAN." } ]
2020
AT-GAN: AN ADVERSARIAL GENERATIVE MODEL
SP:02ad24f0c92d8f6203be90ff0c173036f76c9959
[ "This paper proposes an advanced masking strategy for CutMix augmentation based on the low-pass filter. The authors provide an interesting mutual information analysis for different augmentation strategies to describe their motivation. The experiments include many vision tasks (CIFAR-10, CIFAR-100, Fashion-MNIST, Tiny-ImageNet, ImageNet, Bengali datasets) and language tasks (Toxic, IMDb, Yelp)." ]
Mixed Sample Data Augmentation (MSDA) has received increasing attention in recent years, with many successful variants such as MixUp and CutMix. We analyse MSDA from an information theoretic perspective, characterising learned models in terms of how they impact the models’ perception of the data. Ultimately, our analyses allow us to decouple two complementary properties of augmentations that are useful for reasoning about MSDA. From insight on the efficacy of CutMix in particular, we subsequently propose FMix, an MSDA that uses binary masks obtained by applying a threshold to low frequency images sampled from Fourier space. FMix improves performance over MixUp and CutMix for a number of models across a range of data sets and problem settings, obtaining new state-of-the-art results on CIFAR-10 and Fashion-MNIST.
[]
[ { "authors": [ "Nicholas Carlini", "Anish Athalye", "Nicolas Papernot", "Wieland Brendel", "Jonas Rauber", "Dimitris Tsipras", "Ian Goodfellow", "Aleksander Madry", "Alexey Kurakin" ], "title": "On evaluating adversarial robustness", "venue": null, "year": 1902 }, { "authors": [ "Luigi Carratino", "Moustapha Cissé", "Rodolphe Jenatton", "Jean-Philippe Vert" ], "title": "On mixup regularization", "venue": "arXiv preprint arXiv:2006.06049,", "year": 2020 }, { "authors": [ "Olivier Chapelle", "Jason Weston", "Léon Bottou", "Vladimir Vapnik" ], "title": "Vicinal risk minimization", "venue": "In Advances in neural information processing systems,", "year": 2001 }, { "authors": [ "Nitesh V Chawla", "Kevin W Bowyer", "Lawrence O Hall", "W Philip Kegelmeyer" ], "title": "Smote: synthetic minority over-sampling technique", "venue": "Journal of artificial intelligence research,", "year": 2002 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Dandelion Mane", "Vijay Vasudevan", "Quoc V Le" ], "title": "Autoaugment: Learning augmentation strategies from data", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Jonathon Shlens", "Quoc V Le" ], "title": "Randaugment: Practical automated data augmentation with a reduced search space", "venue": "arXiv preprint arXiv:1909.13719,", "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Raphael Gontijo-Lopes", "Sylvia J Smullin", "Ekin D Cubuk", "Ethan Dyer" ], "title": "Affinity and diversity: Quantifying mechanisms of data augmentation", "venue": "arXiv preprint arXiv:2002.08973,", "year": 2020 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Hongyu Guo", "Yongyi Mao", "Richong Zhang" ], "title": "Mixup as locally linear out-of-manifold regularization", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Dongyoon Han", "Jiwhan Kim", "Junmo Kim" ], "title": "Deep pyramidal residual networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In European conference on computer vision,", "year": 2016 }, { "authors": [ "Zhuoxun He", "Lingxi Xie", "Xin Chen", "Ya Zhang", "Yanfeng Wang", "Qi Tian" ], "title": "Data augmentation revisited: Rethinking the distribution gap between clean and augmented data", "venue": null, "year": 1909 }, { "authors": [ "Dan Hendrycks", "Norman Mu", "Ekin D Cubuk", "Barret Zoph", "Justin Gilmer", "Balaji Lakshminarayanan" ], "title": "Augmix: A simple data processing method to improve robustness and uncertainty", "venue": "arXiv preprint arXiv:1912.02781,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Kevin Zhao", "Steven Basart", "Jacob Steinhardt", "Dawn Song" ], "title": "Natural adversarial examples", "venue": "arXiv preprint arXiv:1907.07174,", "year": 2019 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", 
"year": 1997 }, { "authors": [ "Matthew D Hoffman", "Matthew J Johnson" ], "title": "Elbo surgery: yet another way to carve up the variational evidence lower bound", "venue": "In Workshop in Advances in Approximate Bayesian Inference, NIPS,", "year": 2016 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Hiroshi Inoue" ], "title": "Data augmentation by pairing samples for images classification", "venue": "arXiv preprint arXiv:1801.02929,", "year": 2018 }, { "authors": [ "Armand Joulin", "Edouard Grave", "Piotr Bojanowski", "Tomas Mikolov" ], "title": "Bag of tricks for efficient text classification", "venue": "arXiv preprint arXiv:1607.01759,", "year": 2016 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Artemy Kolchinsky", "Brendan D Tracey" ], "title": "Estimating mixture entropy with pairwise distances. Entropy", "venue": null, "year": 2017 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Yann LeCun", "Yoshua Bengio" ], "title": "Convolutional networks for images, speech, and time series", "venue": "The handbook of brain theory and neural networks,", "year": 1995 }, { "authors": [ "Daojun Liang", "Feng Yang", "Tian Zhang", "Peter Yang" ], "title": "Understanding mixup training methods", "venue": "IEEE Access,", "year": 2018 }, { "authors": [ "Sungbin Lim", "Ildoo Kim", "Taesup Kim", "Chiheon Kim", "Sungwoong Kim" ], "title": "Fast autoaugment", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Andrew L Maas", "Raymond E Daly", "Peter T Pham", "Dan Huang", "Andrew Y Ng", "Christopher Potts" ], "title": "Learning word vectors for sentiment analysis. 
In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies-volume 1, pages 142–150", "venue": "Association for Computational Linguistics,", "year": 2011 }, { "authors": [ "Seyed-Mohsen Moosavi-Dezfooli", "Alhussein Fawzi", "Pascal Frossard" ], "title": "Deepfool: a simple and accurate method to fool deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Guillaume Perrault-Archambault", "Yongyi Mao", "Hongyu Guo", "Richong Zhang" ], "title": "Mixup as directional adversarial training, 2020", "venue": "URL https://openreview.net/forum?id= SkgjKR4YwH", "year": 2020 }, { "authors": [ "Joshua C Peterson", "Ruairidh M Battleday", "Thomas L Griffiths", "Olga Russakovsky" ], "title": "Human uncertainty makes classification more robust", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Charles R Qi", "Hao Su", "Kaichun Mo", "Leonidas J Guibas" ], "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Jonas Rauber", "Wieland Brendel", "Matthias Bethge" ], "title": "Foolbox: A python toolbox to benchmark the robustness of machine learning models", "venue": "In Reliable Machine Learning in the Wild Workshop, 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "Jonas Rauber", "Roland Zimmermann", "Matthias Bethge", "Wieland Brendel" ], "title": "Foolbox native: Fast adversarial attacks to benchmark the robustness of machine learning models in pytorch, tensorflow, and jax", "venue": "Journal of Open Source Software,", "year": 2020 }, { "authors": [ "Ramprasaath R Selvaraju", "Michael Cogswell", "Abhishek Das", "Ramakrishna Vedantam", "Devi Parikh", "Dhruv Batra" ], "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Philipp Singer", "Dmitry Gordeev" ], "title": "Bengali.ai handwritten grapheme classification competition: Second place solution, 2020", "venue": "URL https://www.kaggle.com/c/bengaliai-cv19/", "year": 2020 }, { "authors": [ "Cecilia Summers", "Michael J Dinneen" ], "title": "Improved mixed-example data augmentation", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2019 }, { "authors": [ "Ryo Takahashi", "Takashi Matsubara", "Kuniaki Uehara" ], "title": "Data augmentation using random image cropping and patching for deep cnns", "venue": "IEEE Transactions on Circuits and Systems for Video Technology,", "year": 2019 }, { "authors": [ "Naftali Tishby", "Noga Zaslavsky" ], "title": "Deep learning and the information bottleneck principle", "venue": "In 2015 IEEE Information Theory Workshop (ITW),", "year": 2015 }, { "authors": [ "Yuji Tokozume", "Yoshitaka Ushiku", "Tatsuya Harada" ], "title": "Learning from between-class examples for deep sound recognition", "venue": "arXiv preprint arXiv:1711.10282,", "year": 2017 }, { "authors": [ "Yuji Tokozume", "Yoshitaka Ushiku", "Tatsuya Harada" ], "title": "Between-class learning for image classification", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Hugo Touvron", "Andrea 
Vedaldi", "Matthijs Douze", "Hervé Jégou" ], "title": "Fixing the train-test resolution discrepancy", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Vladimir Vapnik" ], "title": "The nature of statistical learning theory", "venue": "Springer science & business media,", "year": 1999 }, { "authors": [ "Vikas Verma", "Alex Lamb", "Christopher Beckham", "Amir Najafi", "Ioannis Mitliagkas", "David LopezPaz", "Yoshua Bengio" ], "title": "Manifold mixup: Better representations by interpolating hidden states", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Thomas Wolf", "Lysandre Debut", "Victor Sanh", "Julien Chaumond", "Clement Delangue", "Anthony Moi", "Pierric Cistac", "Tim Rault", "R’emi Louf", "Morgan Funtowicz", "Jamie Brew" ], "title": "Huggingface’s transformers: State-of-the-art natural language processing", "venue": null, "year": 1910 }, { "authors": [ "Zhirong Wu", "Shuran Song", "Aditya Khosla", "Fisher Yu", "Linguang Zhang", "Xiaoou Tang", "Jianxiong Xiao" ], "title": "3d shapenets: A deep representation for volumetric shapes", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Han Xiao", "Kashif Rasul", "Roland Vollgraf" ], "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "venue": "arXiv preprint arXiv:1708.07747,", "year": 2017 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Yoshihiro Yamada", "Masakazu Iwamura", "Takuya Akiba", "Koichi Kise" ], "title": "Shakedrop regularization for deep residual learning", "venue": "arXiv preprint arXiv:1802.02375,", "year": 2018 }, { "authors": [ "Sangdoo Yun", "Dongyoon Han", "Seong Joon Oh", "Sanghyuk Chun", "Junsuk Choe", "Youngjoon Yoo" ], "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "venue": null, "year": 1905 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "arXiv preprint arXiv:1710.09412,", "year": 2017 }, { "authors": [ "Xiang Zhang", "Junbo Zhao", "Yann LeCun" ], "title": "Character-level convolutional networks for text classification", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Liang" ], "title": "2018) performed a number of experiments on the importance of the mixing ratio of the labels in MixUp. They concluded that when the targets are not mixed in the same proportion as the inputs the model can be regularised to the point of underfitting", "venue": null, "year": 2018 }, { "authors": [ "Summers", "Dinneen" ], "title": "2019), we considered a grey-scale version of FMix. Specifically, we explored a method which softened the edges in the mask. 
To achieve this, after sorting the low frequency image by pixel value, instead of choosing a threshold and setting one side to 1 and the other to 0, we choose an equal distance either side of the threshold and linearly value the mask between 1 and 0 for some number of pixels. The number of grey pixels is chosen", "venue": null, "year": 2019 }, { "authors": [ "Liang" ], "title": "2018) argue that linear interpolation of inputs limits the memorisation ability of the network. Gontijo-Lopes et al. (2020) proposes two measures to explain the impact of augmentation on generalisation when jointly optimised: affinity and diversity. While the former captures the shift in the data distribution as perceived by the baseline model, the latter measures the training loss when learning with augmented data. A somewhat more math", "venue": null, "year": 2020 }, { "authors": [ "Guo" ], "title": "2019), who argue that MixUp regularises the model by constraining it outside the data manifold. They point out that this could lead to reducing the space of possible hypotheses, but could also lead to generated examples contradicting original ones, degrading quality. Upon Taylor-expanding the objective, a more recent study that also focuses on MixUp motivates its success by the co-action of four different regularisation factors (Carratino", "venue": null, "year": 2019 }, { "authors": [ "Zhang" ], "title": "2019) take a statistical learning view of MSDA, basing their study on the observation that MSDA distorts the data distribution and thus does not perform VRM in the traditional sense. They subsequently propose separating features into ‘minor’ and ‘major", "venue": null, "year": 2019 }, { "authors": [ "Yun" ], "title": "preventing the model from learning about so called ‘minor’ features, then that would suggest that the underlying data distribution has been distorted, breaking the core assumption of VRM", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Recently, a plethora of approaches to Mixed Sample Data Augmentation (MSDA) have been proposed which obtain state-of-the-art results, particularly in classification tasks (Chawla et al., 2002; Zhang et al., 2017; Tokozume et al., 2017; 2018; Inoue, 2018; Yun et al., 2019; Takahashi et al., 2019; Summers and Dinneen, 2019). MSDA involves combining data samples according to some policy to create an augmented data set on which to train the model. The policies so far proposed can be broadly categorised as either combining samples with interpolation (e.g. MixUp) or masking (e.g. CutMix). Traditionally, augmentation is viewed through the framework of statistical learning as Vicinal Risk Minimisation (VRM) (Vapnik, 1999; Chapelle et al., 2001). Given some notion of the vicinity of a data point, VRM trains with vicinal samples in addition to the data points themselves. This is the motivation for MixUp (Zhang et al., 2017); to provide a new notion of vicinity based on mixing data samples. In the classical theory, validity of this technique relies on the strong assumption that the vicinal distribution precisely matches the true distribution of the data. As a result, the classical goal of augmentation is to maximally increase the data space, without changing the data distribution. Clearly, for all but the most simple augmentation strategies, the data distribution is in some way distorted. Furthermore, there may be practical implications to correcting this, as is demonstrated in Touvron et al. (2019). In light of this, three important questions arise regarding MSDA: What is good measure of the similarity between the augmented and the original data? Why is MixUp so effective when the augmented data looks so different? If the data is distorted, what impact does this have on trained models?\nTo construct a good measure of similarity, we note that the data only need be ‘perceived’ similar by the model. As such, we measure the mutual information between representations learned from the real and augmented data, thus characterising how well learning from the augmented data simulates learning from the real data. This measure clearly shows the data-level distortion of MixUp by demonstrating that learned representations are compressed in comparison to those learned from the un-augmented data. To address the efficacy of MixUp, we look to the information bottleneck theory of deep learning (Tishby and Zaslavsky, 2015). This theory uses the data processing inequality, summarised as ‘post-processing cannot increase information’, to suggest that deep networks progressively discard information about the input whilst preserving information about the targets. Through this lens, we posit that the distortion and subsequent compression induced by MixUp promotes generalisation by preventing the network from learning about highly sample-specific features in the data. Regarding the impact on trained models, and again armed with the knowledge that MixUp distorts learned functions, we show that MixUp acts as a kind of adversarial training (Good-\nfellow et al., 2014), promoting robustness to additive noise. This accords with the theoretical result of Perrault-Archambault et al. (2020) and the robustness results of Zhang et al. (2017). However, we further show that MSDA does not generally improve adversarial robustness when measured as a worst case accuracy following multiple attacks as suggested by Carlini et al. (2019). 
In contrast to our findings regarding MixUp, we show that CutMix causes learned models to retain a good knowledge of the real data, which we argue derives from the fact that individual features extracted by a convolutional model generally only derive from one of the mixed data points. At the same time CutMix limits the ability of the model to over-fit by dramatically increasing the number of observable data points, in keeping with the original intent of VRM. We go on to argue that by restricting to only masking a square region, CutMix imposes an unnecessary limitation. Indeed, it should be possible to construct an MSDA which uses masking similar to CutMix whilst increasing the data space much more dramatically. Motivated by this, we introduce FMix, a masking MSDA that uses binary masks obtained by applying a threshold to low frequency images sampled from Fourier space. Using our mutual information measure, we show that learning with FMix simulates learning from the real data even better than CutMix. We subsequently demonstrate performance of FMix for a range of models and tasks against a series of augmented baselines and other MSDA approaches. FMix obtains a new state-of-the-art performance on CIFAR-10 (Krizhevsky et al., 2009) without external data and Fashion MNIST (Xiao et al., 2017) and improves the performance of several state-of-the-art models (ResNet, SE-ResNeXt, DenseNet, WideResNet, PyramidNet, LSTM, and Bert) on a range of problems and modalities.\nIn light of our analyses, and supported by our experimental results, we go on to suggest that the compressing qualities of MixUp are most desirable when data is limited and learning from individual examples is easier. In contrast, masking MSDAs such as FMix are most valuable when data is abundant. We finally suggest that there is no reason to see the desirable properties of masking and interpolation as mutually exclusive. In light of these observations, we plot the performance of MixUp, FMix, a baseline, and a hybrid policy where we alternate between batches of MixUp and FMix, as the number of CIFAR-10 training examples is reduced. This experiment confirms our above suggestions and shows that the hybrid policy can outperform both MixUp and FMix." }, { "heading": "2 RELATED WORK", "text": "In this section, we review the fundamentals of MSDA. Let pX(x) denote the input data distribution. In general, we can define MSDA for a given mixing function, mix(X1, X2,Λ), where X1 and X2 are independent random variables on the data domain and Λ is the mixing coefficient. Synthetic minority over-sampling (Chawla et al., 2002), a predecessor to modern MSDA approaches, can be seen as a special case of the above where X1 and X2 are dependent, jointly sampled as nearest neighbours in feature space. These synthetic samples are drawn only from the minority class to be used in conjunction with the original data, addressing the problem of imbalanced data. The mixing function is linear interpolation, mix(x1, x2, λ) = λx1+(1−λ)x2, and pΛ = U(0, 1). More recently, Zhang et al. (2017), Tokozume et al. (2017), Tokozume et al. (2018) and Inoue (2018) concurrently proposed using this formulation (as MixUp, Between-Class (BC) learning, BC+ and sample pairing respectively) on the whole data set, although the choice of distribution for the mixing coefficients varies for each approach. We refer to this as interpolative MSDA, where, following Zhang et al. 
(2017), we use the symmetric Beta distribution, that is $p_\Lambda = \mathrm{Beta}(\alpha, \alpha)$.\nRecent variants adopt a binary masking approach (Yun et al., 2019; Summers and Dinneen, 2019; Takahashi et al., 2019). Let $M = \mathrm{mask}(\Lambda)$ be a random variable with $\mathrm{mask}(\lambda) \in \{0, 1\}^n$ and $\mu(\mathrm{mask}(\lambda)) = \lambda$, that is, generated masks are binary with average value equal to the mixing coefficient. The mask mixing function is $\mathrm{mix}(x_1, x_2, m) = m \odot x_1 + (1 - m) \odot x_2$, where $\odot$ denotes point-wise multiplication. A notable masking MSDA which motivates our approach is CutMix (Yun et al., 2019). CutMix is designed for two dimensional data, with $\mathrm{mask}(\lambda) \in \{0, 1\}^{w \times h}$, and uses $\mathrm{mask}(\lambda) = \mathrm{rand\_rect}(w\sqrt{1 - \lambda},\, h\sqrt{1 - \lambda})$, where $\mathrm{rand\_rect}(r_w, r_h) \in \{0, 1\}^{w \times h}$ yields a binary mask with a shaded rectangular region of size $r_w \times r_h$ at a uniform random coordinate. CutMix improves upon the performance of MixUp on a range of experiments. For the remainder of the paper we focus on the development of a better input mixing function. Appendix A provides a discussion of the importance of the mixing ratio of the labels. For the typical case of classification with a cross entropy loss, the objective function is simply the interpolation between the cross entropy against each of the ground truth targets." }, { "heading": "3 ANALYSIS", "text": "We now analyse both interpolative and masking MSDAs with a view to distinguishing their impact on learned representations. We summarise previous analyses and theories (Zhang et al., 2017; Liang et al., 2018; Guo et al., 2019; He et al., 2019; Verma et al., 2019; Yun et al., 2019) in Appendix G. For our analysis, we desire a measure which captures the extent to which learning about the augmented data simulates learning about the original data. To achieve this, we propose training unsupervised models on real data and augmented data and then measuring the mutual information, the reduction in uncertainty about one variable given knowledge of another, between the representations they learn. In particular, we propose using Variational Auto-Encoders (VAEs) (Kingma and Welling, 2013), which provide a rich depiction of the salient or compressible information in the data (Higgins et al.). Denoting the latent space of a VAE trained on the original data as $Z_X$ and on some candidate augmentation $A$ as $Z_A$, in Appendix B we show that we can obtain a tractable lower bound, $I(Z_A; \hat{X})$, and upper bound, $I(Z_A; X)$, where $\hat{X}$ is the original data as reconstructed by a baseline VAE, for the intractable quantity $I(Z_A; Z_X)$. Table 1 gives these quantities for MixUp, CutMix, and a baseline. The results show that MixUp consistently reduces the amount of information that is learned about the original data. In contrast, CutMix manages to induce greater mutual information with the data than is obtained from just training on the un-augmented data. Crucially, the results present concrete evidence that interpolative MSDA differs fundamentally from masking MSDA in how it impacts learned representations.\nHaving shown this is true for VAEs, we now wish to understand whether the finding also holds for trained classifiers. To this end, in Figure 4 in the appendix we visualise the decisions made by a classifier using Gradient-weighted Class Activation Maps (Grad-CAMs) (Selvaraju et al., 2017). Grad-CAM finds the regions in an image that contribute the most to the network’s prediction by taking the derivative of the model’s output with respect to the activation maps and weighting them according to their contribution.
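In code, this can be sketched as below (a minimal PyTorch version; `model` and `layer`, a convolutional layer within it, are placeholders rather than our exact set-up, and the statistic discussed in the following paragraph is this per-image heatmap sum averaged over the test set):

```python
import torch
import torch.nn.functional as F

def grad_cam_sum(model, layer, x):
    # Grad-CAM for the predicted class: weight each activation map by the
    # spatial mean of its gradient, combine, rectify, and sum spatially.
    acts = {}
    handle = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    logits = model(x)
    handle.remove()
    score = logits.gather(1, logits.argmax(dim=1, keepdim=True)).sum()
    grads, = torch.autograd.grad(score, acts["a"])
    weights = grads.mean(dim=(2, 3), keepdim=True)   # channel importances
    cam = F.relu((weights * acts["a"]).sum(dim=1))   # B x H x W heatmap
    return cam.flatten(start_dim=1).sum(dim=1)       # per-image heatmap sum
```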
If MixUp prevents the network from learning about highly specific features in the data we would expect more of the early features to contribute to the network output.\nClearly, it is difficult to ascertain whether this is the case from the examples in the figure, although there is some indication that it may be true. To verify empirically we compute the average sum of Grad-CAM heatmaps over the CIFAR-10 test set for 5 repeats (independently trained PreActResNet18 models). We obtain the following scores: baseline - 146±5, MixUp - 162±3, CutMix - 131±6. It is clear that on average more of the early features contribute to the decisions made by MixUp trained models and that this result is consistent across independent runs.\nFollowing on from these observations, it is now pertinent to ask whether these different representations learned from MixUp give rise to practical differences other than just improved generalisation. Since it is our assessment that models trained with MixUp have an altered ‘perception’ of the data distribution, we suggest an analysis based on adversarial attacks, which involve perturbing images outside of the perceived data distribution to alter the given classification. We perform fast gradient sign method, standard gradient descent, projected gradient descent, additive uniform noise, and DeepFool (Moosavi-Dezfooli et al., 2016) attacks over the whole CIFAR-10 test set on PreAct-ResNet18 models subject to `∞ constraints using the Foolbox library (Rauber et al., 2020; 2017). The plots for the additive uniform noise and DeepFool attacks, given in Figure 1, show that MixUp provides an improvement over CutMix and the augmented baseline in this setting. This is because MixUp acts as a form of adversarial training (Goodfellow et al., 2014), equipping the models with valid classifications for images of a similar nature to those generated by the additive noise and DeepFool attacks. In Figure 1, we additionally plot the worst case robustness following all attacks as suggested by Carlini et al. (2019). These results show that the adversarial training effect of MixUp is limited and does not correspond to a general increase in robustness. We provide an enhanced depiction of these results in Appendix C." }, { "heading": "4 FMIX: IMPROVED MASKING", "text": "Our finding is that the masking MSDA approach works because it effectively preserves the data distribution in a way that interpolative MSDAs do not, particularly in the perceptual space of a Convolutional Neural Network (CNN). We suggest that this derives from the fact that each convolutional neuron at a particular spatial position generally encodes information from only one of the inputs at a time. This could also be viewed as local consistency in the sense that elements that are close to each other in space typically derive from the same data point. To the detriment of CutMix, it would be easy for a model to learn about the augmentation since perfectly horizontal and vertical artefacts are unlikely to be a salient feature of the data. We contend that a method which retains the masking nature of CutMix but increases the space of possible shapes (removing the bias towards horizontal and vertical edges) may be able to induce an even greater knowledge of the un-augmented data in trained models as measured by our mutual information analysis. This should in turn correspond with improved accuracy. 
If we can increase the number and complexity of masks, then the space of novel features (that is, features which occur due to edges in the mask) becomes significantly larger than the space of features native to the data. As a result, it is highly unlikely that a model would be able to ‘fit’ to this information. This leads to our core motivation: to construct a masking MSDA which maximises the space of edge shapes whilst preserving local consistency.\nFor local consistency, we require masks that are predominantly made up of a single shape or contiguous region. We might think of this as trying to minimise the number of times the binary mask transitions from ‘0’ to ‘1’ or vice-versa. For our approach, we begin by sampling a low frequency grey-scale mask from Fourier space, which can then be converted to binary with a threshold. We first detail our approach for obtaining the low frequency image before discussing our approach for choosing the threshold. Let $Z$ denote a complex random variable with values on the domain $\mathcal{Z} = \mathbb{C}^{w \times h}$, with density $p_{\Re(Z)} = \mathcal{N}(0, I_{w \times h})$ and $p_{\Im(Z)} = \mathcal{N}(0, I_{w \times h})$, where $\Re$ and $\Im$ return the real and imaginary parts of their input respectively. Let $\mathrm{freq}(w, h)[i, j]$ denote the magnitude of the sample frequency corresponding to the $i, j$’th bin of the $w \times h$ discrete Fourier transform. We can apply a low pass filter to $Z$ by decaying its high frequency components. Specifically, for a given decay power $\delta$, we use\n$$\mathrm{filter}(z, \delta)[i, j] = \frac{z[i, j]}{\mathrm{freq}(w, h)[i, j]^{\delta}}. \quad (1)$$\nDefining $\mathcal{F}^{-1}$ as the inverse discrete Fourier transform, we can obtain a grey-scale image with\n$$G = \Re\big(\mathcal{F}^{-1}(\mathrm{filter}(Z, \delta))\big). \quad (2)$$\nAll that now remains is to convert the grey-scale image to a binary mask such that the mean value is some given $\lambda$. Let $\mathrm{top}(n, \mathbf{x})$ return a set containing the top $n$ elements of the input $\mathbf{x}$. Setting the top $\lambda wh$ elements of some grey-scale image $\mathbf{g}$ to have value ‘1’ and all others to have value ‘0’, we obtain a binary mask with mean $\lambda$. Specifically, we have\n$$\mathrm{mask}(\lambda, \mathbf{g})[i, j] = \begin{cases} 1, & \text{if } \mathbf{g}[i, j] \in \mathrm{top}(\lambda wh, \mathbf{g}) \\ 0, & \text{otherwise.} \end{cases} \quad (3)$$\nTo recap, we first sample a random complex tensor for which both the real and imaginary parts are independent and Gaussian. We then scale each component according to its frequency via the parameter $\delta$, such that higher values of $\delta$ correspond to increased decay of high frequency information. Next, we perform an inverse Fourier transform on the complex tensor and take the real part to obtain a grey-scale image. Finally, we set the top $\lambda$ proportion of the image to have value ‘1’ and the rest to have value ‘0’ to obtain our binary mask (a short code sketch of this procedure is given below). Although we have only considered two dimensional data here, it is generally possible to create masks with any number of dimensions. We provide some example two dimensional masks and mixed images (with $\delta = 3$ and $\lambda = 0.5$) in Figure 2. We can see that the space of artefacts is significantly increased. Furthermore, FMix achieves $I(Z_A; X) = 83.67 \pm 0.89$, $I(Z_A; \hat{X}) = 80.28 \pm 0.75$, and MSE $= 0.255 \pm 0.003$, showing that learning from FMix simulates learning from the un-augmented data to an even greater extent than CutMix." }, { "heading": "5 EXPERIMENTS", "text": "We now perform a series of experiments to compare the performance of FMix with that of MixUp, CutMix, and augmented baselines. For each problem setting and data set, we provide exposition on the results and any relevant caveats. Throughout, our approach has been to use the hyper-parameters which yield the best results in the literature for each setting.
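As promised above, the mask sampling of Section 4 can be sketched in a few lines of NumPy. This is a minimal illustration of Equations 1–3, not the implementation used for our experiments; the function name and the handling of the zero-frequency bin are choices made here for the sketch.

```python
import numpy as np

def fmix_mask(w, h, lam, delta=3.0, rng=None):
    # Sample a binary mask with mean lam by thresholding a low frequency
    # grey-scale image obtained from Fourier space (Equations 1-3).
    rng = rng or np.random.default_rng()
    z = rng.standard_normal((w, h)) + 1j * rng.standard_normal((w, h))
    fx = np.fft.fftfreq(w)[:, None]               # sample frequencies
    fy = np.fft.fftfreq(h)[None, :]
    freq = np.sqrt(fx ** 2 + fy ** 2)
    freq[0, 0] = freq[freq > 0].min()             # avoid division by zero at DC
    g = np.real(np.fft.ifft2(z / freq ** delta))  # Equations 1 and 2
    mask = np.zeros(w * h)
    mask[np.argsort(g.ravel())[::-1][: round(lam * w * h)]] = 1.0  # Equation 3
    return mask.reshape(w, h)
```

Two images are then mixed as $m \odot x_1 + (1 - m) \odot x_2$ with $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$, exactly as for the other masking MSDAs of Section 2.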
Unless otherwise stated, we use α = 1 for the distribution of λ. For FMix, we use δ = 3 since this was found to produce large artefacts with sufficient diversity. We perform an ablation of both parameters in Appendix H, reporting results for 5-fold cross validation. We perform repeats where possible and report the average performance and standard deviation after the last epoch of training. A complete discussion of the experimental set-up can be found in Appendix E along with the standard augmentations used for all models on each data set. Additional experiments on point cloud and audio classification are given in Appendix D. In all tables, we give the best result and results that are within its margin of error in bold. We discuss any cases where the results obtained by us do not match the results obtained by the authors in the accompanying text, and give the authors’ results in parentheses. Uncertainty estimates are the standard deviation over 5 repeats. Code for all experiments is given in the supplementary material.\nImage Classification We first discuss image classification results on the CIFAR-10/100 (Krizhevsky et al., 2009), Fashion MNIST (Xiao et al., 2017), and Tiny-ImageNet (Stanford, 2015) data sets. We train: PreAct-ResNet18 (He et al., 2016), WideResNet-28-10 (Zagoruyko and Komodakis, 2016), DenseNet-BC-190 (Huang et al., 2017) and PyramidNet-272-200 (Han et al., 2017). For PyramidNet, we additionally apply Fast AutoAugment (Lim et al., 2019), a successor to AutoAugment (Cubuk et al., 2019a), and ShakeDrop (Yamada et al., 2018) following Lim et al. (2019). The results in Table 2 show that FMix offers a significant improvement over the other methods on test, with the exception of the WideResNet on CIFAR-10/100 and the PreAct-ResNet on Tiny-ImageNet. In combination with PyramidNet, FMix achieves, to the best of our knowledge, a new state-of-the-art classification accuracy on CIFAR-10 without use of external data. With the addition of Fast AutoAugment, this setting bears some similarity to the recently proposed AugMix (Hendrycks et al., 2019a), which performs MixUp on heavily augmented variants of the same image. With the PreAct-ResNet18, FMix obtains a new state-of-the-art classification accuracy on Fashion MNIST. Note that Zhang et al. (2017) also performed experiments with the PreAct-ResNet18, WideResNet-28-10, and DenseNet-BC-190 on CIFAR-10 and CIFAR-100. There are some discrepancies between the authors’ results and the results obtained by our implementation. Whether any differences are significant is difficult to ascertain as no measure of deviation is provided in Zhang et al. (2017). However, since our implementation is based on the implementation from Zhang et al. (2017), and most of the differences are small, we have no reason to doubt it. We speculate that these discrepancies are simply a result of random initialisation, but could also be due to differences in reporting or training configuration.\nNext, we obtain classification results on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC2012) data set (Russakovsky et al., 2015). We train a ResNet-101 on the full data set (ImageNet), additionally evaluating on ImageNet-a (Hendrycks et al., 2019b), a set of natural adversarial examples to ImageNet models, to determine adversarial robustness. We train for 90 epochs with a batch size of 256. We perform experiments with both α = 1.0 and α = 0.2 (as this was used by Zhang et al. (2017)).
The results, given in Table 3, show that FMix was the only MSDA to provide an improvement over the baseline with these hyper-parameters. Note that MixUp obtains an accuracy of 78.5 in Zhang et al. (2017) when using a batch size of 1024. Additionally note that MixUp obtains an accuracy of 79.48 and CutMix obtains an accuracy of 79.83 in Yun et al. (2019) when training for 300 epochs. Due to hardware constraints we cannot replicate these settings and so it is\nnot known how FMix would compare. On ImageNet-a, the general finding is that MSDA gives a good improvement in robustness to adversarial examples. Interestingly, MixUp with α = 1.0 yields a lower accuracy on ImageNet but a much higher accuracy on ImageNet-a, suggesting that models trained with MixUp learn a fundamentally different function.\nFor a final experiment with image data, we use the Bengali.AI handwritten grapheme classification data set (Bengali.AI, 2020), from a recent Kaggle competition. Classifying graphemes is a multiclass problem, they consist of a root graphical form (a vowel or consonant, 168 classes) which is modified by the addition of other vowel (11 classes) or consonant (7 classes) diacritics. To correctly classify the grapheme requires classifying each of these individually, where only the root is necessarily always present. We train separate models for each sub-class, and report the individual classification accuracies and the combined accuracy (where the output is considered correct only if all three predictions are correct). We report results for 5 folds where 80% of the data is used for training and the rest for testing. We extract the region of the image which contains the grapheme and resize to 64 × 64, performing no additional augmentation. The results for these experiments, with an SE-ResNeXt-50 (Xie et al., 2017; Hu et al., 2018), are given in Table 4. FMix and CutMix both clearly offer strong improvement over the baseline and MixUp, with FMix performing significantly better than CutMix on the root and vowel classification tasks. As a result, FMix obtains a significant improvement when classifying the whole grapheme. In addition, note that FMix was used in the competition by Singer and Gordeev (2020) in their second place prize-winning solution. This was the best result obtained with MSDA.\nSentiment Analysis Although typically restricted to classification of two dimensional data, we can extend the MSDA formulation for classification of one dimensional data. In Table 5, we perform a series of experiments with MSDAs for the purpose of sentiment analysis. In order for MSDA to be effective, we group elements into batches of similar sequence length as is already a standard practice. This ensures that the mixing does not introduce multiple end tokens or other strange artefacts (as would be the case if batches were padded to a fixed length). The models used are: pretrained FastText-300d (Joulin et al., 2016) embedding followed by a simple three layer CNN (LeCun et al., 1995), the FastText embedding followed by a two layer bi-directional LSTM (Hochreiter and Schmidhuber, 1997), and pre-trained Bert (Devlin et al., 2018) provided by the HuggingFace transformers library (Wolf et al., 2019). For the LSTM and CNN models we compare MixUp and FMix with a baseline. For the Bert fine-tuning we do not compare to MixUp as the model input is a series of tokens, interpolations between which are meaningless. 
We first report results on the\nToxic Comments (Jigsaw and Google, 2018) data set, a Kaggle competition to classify text into one of 6 classes. For this data set we report the ROC-AUC metric, as this was used in the competition. Note that these results are computed over the whole test set and are therefore not comparable to the competition scores, which were computed over a subset of the test data. In this setting, both MixUp and FMix provide an improvement over the baseline, with FMix consistently providing a further improvement over MixUp. The improvement when fine-tuning Bert with FMix is outside the margin of error of the baseline, but mild in comparison to the improvement obtained in the other settings. We additionally report results on the IMDb (Maas et al., 2011), Yelp binary, and Yelp finegrained (Zhang et al., 2015) data sets. For the IMDb data set, which has one tenth of the number of examples, we found α = 0.2 to give the best results for both MSDAs. Here, MixUp provides a clear improvement over both FMix and the baseline for both models. This suggests that MixUp may perform better when there are fewer examples.\nCombining MSDAs We have established through our analysis that models trained with interpolative MSDA perform a fundamentally different function to models trained with masking. We now wish to understand whether the benefits of interpolation and masking are mutually exclusive. We therefore performed experiments with simultaneous action of multiple MSDAs, alternating their application per batch with a PreAct-ResNet18 on CIFAR-10. A combination of interpolation and masking, particularly FMix+MixUp (96.30±0.08), gives the best results, with CutMix+MixUp performing slightly worse (96.26±0.04). In contrast, combining FMix and CutMix gives worse results (95.85±0.1) than using either method on its own. For a final experiment, we note that our results suggest that interpolation performs better when there is less data available (e.g. the IMDb data set) and that masking performs better when there is more data available (e.g. ImageNet and the Bengali.AI data set). This finding is supported by our analysis since it is always easier for the model to learn specific features, and so we would naturally expect that preventing this is of greater utility, when there is less data. We confirm this empirically by varying the size of the CIFAR-10 training set and training with different MSDAs in Figure 3. Notably, the FMix+MixUp policy obtains superior performance irrespective of the amount of available data." }, { "heading": "6 CONCLUSIONS AND FUTURE WORK", "text": "In this paper we have introduced FMix, a masking MSDA that improves classification performance for a series of models, modalities, and dimensionalities. We believe the strength of masking methods resides in preserving local features and we improve upon existing approaches by increasing the number of possible mask shapes. We have verified this intuition through a novel information theoretic analysis. Our analysis shows that interpolation causes models to encode more general features, whereas masking causes models to encode the same information as when trained with the original data whilst eliminating memorisation. Our preliminary experiments suggest that combining interpolative and masking MSDA could improve performance further, although further work is needed to fully understand this phenomenon. 
Future work should also look to expand on the finding that masking MSDA works well in combination with Fast AutoAugment (Lim et al., 2019), perhaps by experimenting with similar methods like AutoAugment (Cubuk et al., 2019a) or RandAugment (Cubuk et al., 2019b). Finally, our early experiments resulted in several lines of enquiry that ultimately did not bear fruit, which we discuss further in Appendix F." }, { "heading": "A ON THE IMPORTANCE OF TARGETS", "text": "The standard formulation for classification with MSDA weights the cross entropy losses computed with each of the true labels by the corresponding input mixing ratio. It could be suggested that by mixing the targets differently, one might obtain better results. However, there are key observations from prior art which give us cause to doubt this supposition; in particular, Liang et al. (2018) performed a number of experiments on the importance of the mixing ratio of the labels in MixUp. They concluded that when the targets are not mixed in the same proportion as the inputs, the model can be regularised to the point of underfitting. However, despite this conclusion, their results show only a mild performance change even in the extreme event that targets are mixed randomly, independently of the inputs. That does not mean that the target space is always insignificant. For example, we might care about how calibrated the outputs are. Calibration is the extent to which an output ‘probability’ corresponds to the actual probability of being correct. Clearly, this is a challenging property to evaluate since we have no notion of ground truth uncertainty in the data. Peterson et al. (2019) suggest using human uncertainty as a baseline on the CIFAR-10 data set. Specifically, they introduce the CIFAR-10H data set, which consists of human soft-labels for the CIFAR-10 test set, i.e. the distribution resulting from many different humans labelling each image. We evaluate a series of CIFAR-10 pretrained PreAct-ResNet18 models on CIFAR-10H in Table 6. The metric used is the relative entropy of the model outputs with respect to the soft-labels. The results show that the masking MSDA approaches induce a notion of uncertainty that is more similar to that of human observers. An important weakness of this claim derives from the cross entropy objective used to train models. We note that\n$$H(p_{\hat{Y}|X}, p_{Y|X}) = H(p_{\hat{Y}|X}) + D\big(p_{\hat{Y}|X} \,\|\, p_{Y|X}\big). \quad (4)$$\nIn other words, the model is jointly required to match the target distribution and minimise the entropy of each output. The result of this is that trained models naturally output very high confidence predictions as an artefact of their training process. The above claim should therefore be taken with a pinch of salt since it is likely that the improved results derive simply from the lower entropy targets and model outputs. Furthermore, we expect that significant improvement would be gained in this test by training MSDA models with a relative entropy objective rather than the cross entropy." }, { "heading": "B VAE MUTUAL INFORMATION", "text": "Recall from the paper that we wish to estimate the mutual information between the representation learned by a VAE from the original data set, $Z_X$, and the representation learned from some augmented data set, $Z_A$, written $I(Z_X; Z_A) = \mathbb{E}_{Z_X}\big[D(p(Z_A|Z_X) \,\|\, p_{Z_A})\big]$. VAEs comprise an encoder, $p(Z|X)$, and a decoder, $p(X|Z)$. We impose a standard Normal prior on $Z$, and train the model to maximise the Evidence Lower Bound (ELBO) objective\n$$\mathcal{L} = \mathbb{E}_X\Big[\mathbb{E}_{Z|X}\big[\log p(X|Z)\big] - D\big(p(Z|X) \,\|\, \mathcal{N}(0, I)\big)\Big]. \quad (5)$$
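As a point of reference, Equation 5 for a Gaussian encoder can be sketched as follows (a minimal PyTorch version; the `encoder` and `decoder` modules and the Bernoulli pixel likelihood are assumptions made for illustration, not a description of our exact architecture):

```python
import torch
import torch.nn.functional as F

def elbo(x, encoder, decoder):
    # q(z|x) = N(mu, diag(exp(log_var))) with a standard normal prior; the
    # encoder is assumed to return (mu, log_var), the decoder pixel logits.
    mu, log_var = encoder(x)
    z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterise
    # E_{z|x}[log p(x|z)] under a Bernoulli likelihood on pixels in [0, 1].
    log_px = -F.binary_cross_entropy_with_logits(
        decoder(z), x, reduction="none").flatten(start_dim=1).sum(dim=1)
    # Closed-form KL(q(z|x) || N(0, I)).
    kl = 0.5 * (mu ** 2 + log_var.exp() - log_var - 1.0).sum(dim=1)
    return (log_px - kl).mean()  # maximise this quantity during training
```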
Denoting the outputs of the decoder of the VAE trained on the augmentation as $\hat{X} = \mathrm{decode}(Z_X)$, and by the data processing inequality, we have $I(Z_A; \hat{X}) \le I(Z_A; Z_X)$, with equality when the decoder retains all of the information in $Z$. Now, we need only observe that we already have a model of $p(Z_A \,|\, X)$, the encoder trained on the augmented data. Estimating the marginal $p_{Z_A}$ presents a challenge as it is a Gaussian mixture. However, we can measure an alternative form of the mutual information that is equivalent up to an additive constant, and for which the divergence has a closed form solution, with
$$\mathbb{E}_{\hat{X}}\left[D\left(p(Z_A \,|\, \hat{X}) \,\|\, p_{Z_A}\right)\right] = \mathbb{E}_{\hat{X}}\left[D\left(p(Z_A \,|\, \hat{X}) \,\|\, \mathcal{N}(0, I)\right)\right] - D\left(p_{Z_A} \,\|\, \mathcal{N}(0, I)\right). \qquad (6)$$
The above holds for any choice of distribution that does not depend on $\hat{X}$. Conceptually, this states that we will always lose more information on average if we approximate $p(Z_A \,|\, \hat{X})$ with any constant distribution other than the marginal $p_{Z_A}$. Additionally note that we implicitly minimise $D\left(p_{Z_A} \,\|\, \mathcal{N}(0, I)\right)$ during training of the VAE (Hoffman and Johnson, 2016). In light of this fact, we can write $I(Z_A; \hat{X}) \approx \mathbb{E}_{\hat{X}}\left[D\left(p(Z_A \,|\, \hat{X}) \,\|\, \mathcal{N}(0, I)\right)\right]$.
We can now easily obtain a helpful upper bound of $I(Z_A; Z_X)$ such that it is bounded on both sides. Since $Z_A$ is just a function of $X$, again by the data processing inequality, we have $I(Z_A; X) \ge I(Z_A; Z_X)$. This is easy to compute since it is just the relative entropy term from the ELBO objective. To summarise, we can compute our measure by first training two VAEs, one on the original data and one on the augmented data. We then generate reconstructions of data points in the original data with one VAE and encode them in the other. We now compute the expected value of the relative entropy between the encoded distribution and an estimate of the marginal to obtain an estimate of a lower bound of the mutual information between the representations. We then recompute this using real data points instead of reconstructions to obtain an upper bound." }, { "heading": "C SUPPLEMENTARY ANALYSES", "text": "" }, { "heading": "D ADDITIONAL EXPERIMENTS", "text": "Point Cloud Classification We now demonstrate the extension of FMix to 3D through point cloud classification on ModelNet10 (Wu et al., 2015). We transform the point clouds to a voxel representation before applying a 3D FMix mask. Table 7 reports the average median accuracy from the last 5 epochs, due to large variability in the results. It shows that FMix continues to improve results within significance, even in higher dimensions.
Audio Classification The Google Commands data set was created to promote deep learning research on speech recognition problems. It comprises 65,000 one-second utterances of one of 30 words, with 10 of those words being the target classes and the rest considered unrelated or background noise. We perform MSDA on a Mel-frequency spectrogram of each utterance. The results for a PreAct ResNet-18 are given in Table 7. We evaluate FMix, MixUp, and CutMix for the standard α = 1 used for the majority of our experiments and the α = 0.2 recommended by Zhang et al. (2017) for MixUp. We see in both cases that FMix and CutMix improve performance over MixUp outside the margin of error, with the best result achieved by FMix with α = 1." }, { "heading": "E EXPERIMENTAL DETAILS", "text": "In this section we provide the experimental details for all experiments presented in the main paper.
Unless otherwise stated, the following parameters are chosen: α = 1, δ = 3, a weight decay of 1×10⁻⁴, and optimisation with SGD using a momentum of 0.9. For cross validation experiments, 3 or 5 folds of 10% of the training data are generated and used for a single run each. Test set experiments use the entire training set and give evaluations on the test sets provided. If no test set is provided then a constant validation set of 10% of the available data is used. Table 8 provides general training details that were present in all experiments.
All experiments were run on a single GTX1080ti or V100, with the exceptions of ImageNet experiments (4 × GTX1080ti) and DenseNet/PyramidNet experiments (2 × V100). ResNet18 and LSTM experiments ran within 2 hours in all instances, PointNet experiments ran within 10 hours, WideResNet/DenseNet experiments ran within 2.5 days, and auto-augment experiments ran within 10 days.
For all image experiments we use standard augmentations to normalise the image to [0, 1] and perform random crops and random horizontal flips. For the Google Commands experiment we used the transforms and augmentations implemented at https://github.com/tugstugi/pytorch-speech-commands for their solution to the TensorFlow speech recognition challenge." }, { "heading": "F THINGS WE TRIED THAT DIDN'T WORK", "text": "This section details a number of experiments and modifications we attempted which did not lead to significant results. Our aim here is to prevent future research effort being devoted to approaches that we have already explored. It may also be the case that better versions of these could be constructed which obtain better results.
F.1 SALIENCE PRIOR
It is clear that we should care about how the mixing coefficient relates to the relative amount of salient information from each data point in the outcome. This presents a challenge because getting λ of the salient information in the first data point does not imply that we have 1 − λ of the salient information in the second. We could consider making an assumption that the expected distribution of salient information in each data point is the same. In such a case, the above problem no longer exists. For images, a simple assumption would be that the salient information is roughly Gaussian about the centre. To apply a salience prior to our mask generation process, we need to change the binarisation algorithm. Specifically, we iterate over the values in descending order until the mass over the prior is equal to λ. We experimented with this approach and found no significant performance gain, and so did not pursue it any further. That said, there may still be some value to the above motivation, and a more complex, data-point-specific salience distribution could work.
F.2 MASK SOFTENING
Following the observation that combining interpolation and masking provides the best results, and particularly the experiments in Summers and Dinneen (2019), we considered a grey-scale version of FMix. Specifically, we explored a method which softened the edges in the mask. To achieve this, after sorting the low frequency image by pixel value, instead of choosing a threshold and setting one side to 1 and the other to 0, we choose an equal distance either side of the threshold and linearly ramp the mask between 1 and 0 over some number of pixels, as sketched below.
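A minimal NumPy sketch of this softening is given below; the helper and its arguments (`low_freq` for the grey-scale low-frequency image, `n_soft` for the ramp width) are illustrative names of ours rather than the released implementation:

```python
import numpy as np

def soften_mask(low_freq, lam, n_soft):
    """Binarise a low-frequency grey-scale image into a mask with mean ~lam,
    ramping linearly from 1 to 0 over n_soft pixels around the threshold."""
    flat = low_freq.ravel()
    order = np.argsort(flat)[::-1]            # pixel indices, largest value first
    k = int(lam * flat.size)                  # threshold index giving mean value lam
    mask = np.zeros(flat.size)
    mask[order[:k]] = 1.0
    # Replace the pixels either side of the threshold with a linear ramp;
    # a ramp symmetric about k keeps the mean mask value close to lam.
    lo, hi = max(k - n_soft // 2, 0), min(k + n_soft // 2, flat.size)
    mask[order[lo:hi]] = np.linspace(1.0, 0.0, hi - lo)
    return mask.reshape(low_freq.shape)
```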
The number of grey pixels is chosen to ensure that the mean mask value is retained and that the fraction of the image that is non-binary does not exceed some preset value.
We found that softening the masks resulted in no performance gains and, in fact, occasionally hindered training. We considered it again for the toxic comments experiments, since we assumed smooth transitions would be very important for text models. It did offer minor improvements over default FMix; however, we judged that the gain was not worth the added complexity and the dilution of the core idea of FMix that presenting it in the paper would entail. Furthermore, proposing it for the singular case of toxic comments would have been bad practice, since we only observed an improvement for one model, on one data set. That said, we feel mask softening would be interesting to explore further, certainly in the case of text models. We would need to experiment with softened FMix masks on multiple text data sets and observe improvement in most or all of them over base FMix in order to formally propose softening as an FMix modification.
F.3 TARGET DISTRIBUTION
A final alteration that we experimented with relates to the distribution of targets. The idea was that we could change the distribution of the target mixing coefficients to obtain better 'calibrated' model outputs. The way this is done is simple: we pass the sampled λ through its CDF and then through the inverse CDF of the target distribution. This allows us to, for example, encourage confident outputs by choosing a symmetric Beta distribution with α ≈ 0.1. The issue with this approach is twofold. First, changing the distribution of the outputs in this way has no bearing on the ordering, and so no effect on the classification accuracy. Second, any simple transform of this nature can be trivially learned by the model or applied in post. In other words, it is equivalent to training a model normally and then just transforming the outputs. As a result, it is difficult to argue that this approach does anything particularly clever. We trained models with different target distributions at several points and found that the performance was not significantly different." }, { "heading": "G CURRENT UNDERSTANDING OF MSDA", "text": "Attempts to explain the success of MSDAs were not only made when they were introduced, but also through subsequent empirical and theoretical studies. In this section we review these studies to paint a picture of the current theories, and points of contention, on how MSDA works. In addition to their experimentation with the targets, Liang et al. (2018) argue that linear interpolation of inputs limits the memorisation ability of the network. Gontijo-Lopes et al. (2020) propose two measures to explain the impact of augmentation on generalisation when jointly optimised: affinity and diversity. While the former captures the shift in the data distribution as perceived by the baseline model, the latter measures the training loss when learning with augmented data. A somewhat more mathematical view on MSDA was adopted by Guo et al. (2019), who argue that MixUp regularises the model by constraining it outside the data manifold. They point out that this could lead to reducing the space of possible hypotheses, but could also lead to generated examples contradicting original ones, degrading quality.
A more recent study, which also focuses on MixUp, Taylor-expands the objective and motivates MixUp's success by the co-action of four different regularisation factors (Carratino et al., 2020).
Following Zhang et al. (2017), He et al. (2019) take a statistical learning view of MSDA, basing their study on the observation that MSDA distorts the data distribution and thus does not perform VRM in the traditional sense. They subsequently propose separating features into 'minor' and 'major', where a feature is referred to as 'minor' if it is highly sample-specific. Augmentations that significantly affect the distribution are said to make the model predominantly learn from 'major' features. From an information theoretic perspective, ignoring these 'minor' features corresponds to increased compression of the input by the model. Although He et al. (2019) noted the importance of characterising the effect of data augmentation from an information perspective, they did not explore any measures that do so. Instead, He et al. (2019) analysed the variance in the learned representations. This is analogous to the entropy of the representation, since entropy can be estimated via the pairwise distances between samples, with higher distances corresponding to both greater entropy and variance (Kolchinsky and Tracey, 2017). In proposing Manifold MixUp, Verma et al. (2019) additionally suggest that MixUp works by increasing compression. The authors compute the singular values of the representations in early layers of trained networks, with smaller singular values again corresponding to lower entropy. The issue with these approaches is that the entropy of the representation is only an upper bound on the information that the representation has about the input.
An issue with these findings is that they relate purely to interpolative MSDAs. There is also disagreement in the conclusions of some of these studies. If interpolative MSDA works by preventing the model from learning about so-called 'minor' features, then that would suggest that the underlying data distribution has been distorted, breaking the core assumption of VRM. Furthermore, Yun et al. (2019) suggested that masking MSDA approaches work by addressing this distortion. If this is the case, then we should expect them to perform worse than interpolative MSDAs, since the bias towards compressed representations has been removed. Clearly, there is some contention about the underlying mechanisms driving generalisation in MSDAs. In particular, it is necessary to provide an explanation for masking MSDAs that is complementary to the current explanations of interpolative MSDAs, rather than contradictory to them." }, { "heading": "H HYPERPARAMETER CHOICE", "text": "Figure 6a gives the relationship between validation accuracy and the parameter α for three MSDA methods. Validation accuracy is the average over 5 folds with a validation set consisting of 10% of the data. This ablation was performed on the CIFAR-10 data set using the PreAct ResNet18 model from the previous experiments. In the cases of FMix and MixUp there exists an optimal value. In both cases, this point is close to α = 1, although for MixUp it is skewed slightly toward 0, as was found for their ImageNet experiments. The choice of decay power δ is certainly more significant. Figure 6b shows that low values of δ drastically reduce the final accuracy.
This is unsurprising since low δ corresponds to a speckled mask, with no large regions of either data point present in the augmentation. Larger values of δ correspond to smoother masks with large cohesive regions from each donor image. We note that for δ ≳ 3 there is little improvement to be gained, validating our decision to use δ = 3." }
]
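For reference, a compact sketch of the low-frequency mask sampling analysed in this appendix; the frequency handling is simplified relative to the released FMix implementation, so it should be read as illustrative rather than definitive:

```python
import numpy as np

def sample_fmix_mask(h, w, lam, delta=3.0):
    """Sample a binary mask from low-frequency Gaussian noise: attenuate the
    spectrum by freq**-delta, invert, and keep the top lam fraction of pixels."""
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.rfftfreq(w)[None, :]
    freq = np.sqrt(fy ** 2 + fx ** 2)
    freq[0, 0] = 1.0 / max(h, w)              # avoid division by zero at DC
    spectrum = (np.random.randn(h, w // 2 + 1)
                + 1j * np.random.randn(h, w // 2 + 1)) / freq ** delta
    grey = np.fft.irfft2(spectrum, s=(h, w))
    thresh = np.quantile(grey, 1.0 - lam)     # top lam fraction of pixels become 1
    return (grey >= thresh).astype(np.float32)
```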
2020
FMIX: ENHANCING MIXED SAMPLE DATA AUGMENTATION
SP:56f796be21ad1e4a563138ba70053071cc419e8c
[ "This work proposed a very interesting idea that the back-propagated errors have log-normal distributions. The authors could extend this intriguing observation into computation efficient algorithms; reduced-precision floating-point quantization or the pruning of the back-prop error, which are very interesting. The authors also provide details of derivations as well as the data analysis along with several small ideas (in Appendix) to strongly support their ideas." ]
While training can mostly be accelerated by reducing the time needed to propagate neural gradients (loss gradients with respect to the intermediate neural layer outputs) back throughout the model, most previous works focus on the quantization/pruning of weights and activations. These methods are often not applicable to neural gradients, which have very different statistical properties. Distinguished from weights and activations, we find that the distribution of neural gradients is approximately lognormal. Considering this, we suggest two closed-form analytical methods to reduce the computational and memory burdens of neural gradients. The first method optimizes the floating-point format and scale of the gradients. The second method accurately sets sparsity thresholds for gradient pruning. Each method achieves state-of-the-art results on ImageNet. To the best of our knowledge, this paper is the first to (1) quantize the gradients to 6-bit floating-point formats, or (2) achieve up to 85% gradient sparsity — in each case without accuracy degradation. Reference implementation accompanies the paper in the supplementary material.
[ { "affiliations": [], "name": "Brian Chmiel" }, { "affiliations": [], "name": "Liad Ben-Uri" }, { "affiliations": [], "name": "Moran Shkolnik" }, { "affiliations": [], "name": "Elad Hoffer" }, { "affiliations": [], "name": "Ron Banner" }, { "affiliations": [], "name": "Daniel Soudry" }, { "affiliations": [], "name": "◦ †Habana" } ]
[ { "authors": [ "Md Aamir Raihan", "Tor M. Aamodt" ], "title": "Sparse weight activation training", "venue": "arXiv preprint arXiv:2001.01969,", "year": 2020 }, { "authors": [ "Dan Alistarh", "Demjan Grubic", "Jungshian Li", "Ryota Tomioka", "M. Vojnovic" ], "title": "Qsgd: Communicationoptimal stochastic gradient descent, with applications to training neural networks. 2016", "venue": null, "year": 2016 }, { "authors": [ "R. Banner", "Yury Nahshan", "Daniel Soudry" ], "title": "Post training 4-bit quantization of convolutional networks for rapid-deployment", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Ron Banner", "Itay Hubara", "Elad Hoffer", "Daniel Soudry" ], "title": "Scalable methods for 8-bit training of neural networks", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Chaim Baskin", "Natan Liss", "Yoav Chai", "Evgenii Zheltonozhskii", "Eli Schwartz", "Raja Girayes", "Avi Mendelson", "Alexander M Bronstein" ], "title": "Nice: Noise injection and clamping estimation for neural network quantization", "venue": "arXiv preprint arXiv:1810.00162,", "year": 2018 }, { "authors": [ "Jeremy Bernstein", "Yu-Xiang Wang", "Kamyar Azizzadenesheli", "Anima Anandkumar" ], "title": "signsgd: compressed optimisation for non-convex problems", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Gerard Biau", "David M Mason" ], "title": "High-dimensional norms", "venue": "In Mathematical Statistics and Limit Theorems,", "year": 2015 }, { "authors": [ "Léopold Cambier", "Anahita Bhiwandiwalla", "Ting Gong", "Mehran Nekuii", "Oguz H. Elibol", "Hanlin Tang" ], "title": "Shifted and squeezed 8-bit floating point format for low-precision training of deep neural networks", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Jungwook Choi", "Pierce I-Jen Chuang", "Zhuo Wang", "Swagath Venkataramani", "Vijayalakshmi Srinivasan", "Kailash Gopalakrishnan" ], "title": "Bridging the Accuracy Gap for 2-bit Quantized Neural Networks (QNN)", "venue": "arXiv preprint arXiv:1807.06964,", "year": 2018 }, { "authors": [ "Jungwook Choi", "Zhuo Wang", "Swagath Venkataramani", "Pierce I-Jen Chuang", "Vijayalakshmi Srinivasan", "Kailash Gopalakrishnan" ], "title": "Pact: Parameterized clipping activation for quantized neural networks", "venue": "arXiv preprint arXiv:1805.06085,", "year": 2018 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Jun Fang", "Ali Shafiee", "Hamzah Abdel-Aziz", "David Thorsley", "Georgios Georgiadis", "Joseph Hassoun" ], "title": "Post-training piecewise linear quantization for deep neural networks", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Jonathan Frankle", "Michael Carbin" ], "title": "The lottery ticket hypothesis: Finding sparse, trainable neural networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Vipul Gupta", "Dhruv Choudhary", "P. Tang", "Xiaohan Wei", "X. Wang", "Yuzhen Huang", "A. Kejariwal", "K. Ramchandran", "M.W. Mahoney" ], "title": "Fast distributed training of deep neural networks: Dynamic communication thresholding for model and data", "venue": "parallelism. ArXiv,", "year": 2020 }, { "authors": [ "B. Hanin", "M. 
Nica" ], "title": "Products of many large random matrices and gradients in deep neural networks", "venue": "Communications in Mathematical Physics,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Greg Henry", "Ping Tak Peter Tang", "Alexander Heinecke" ], "title": "Leveraging the bfloat16 artificial intelligence datatype for higher-precision computations", "venue": "IEEE 26th Symposium on Computer Arithmetic (ARITH),", "year": 2019 }, { "authors": [ "C.A.R. Hoare" ], "title": "Algorithm 65: Find", "venue": "Commun. ACM,", "year": 1961 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Kilian Q. Weinberger" ], "title": "Densely connected convolutional networks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Sambhav R. Jain", "Albert Gural", "Michael Wu", "Chris H. Dick" ], "title": "Trained quantization thresholds for accurate and efficient fixed-point inference of deep neural networks", "venue": "In Machine Learning and Systems,", "year": 2020 }, { "authors": [ "Dhiraj Kalamkar", "Dheevatsa Mudigere", "Naveen Mellempudi", "Dipankar Das", "Kunal Banerjee", "Sasikanth Avancha", "Dharma Teja Vooturi", "Nataraj Jammalamadaka", "Jianyu Huang", "Hector Yuen" ], "title": "A study of bfloat16 for deep learning training", "venue": "arXiv preprint arXiv:1905.12322,", "year": 2019 }, { "authors": [ "Raghuraman Krishnamoorthi" ], "title": "Quantizing deep convolutional networks for efficient inference: A whitepaper, 2018", "venue": "URL http://arxiv.org/abs/1806.08342", "year": 2018 }, { "authors": [ "Christos Louizos", "Max Welling", "Diederik P. Kingma" ], "title": "Learning sparse neural networks through l0 regularization", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Paulius Micikevicius", "Sharan Narang", "Jonah Alben", "Gregory Frederick Diamos", "Erich Elsen", "David Garcı́a", "Boris Ginsburg", "Michael Houston", "Oleksii Kuchaiev", "Ganesh Venkatesh", "Hao Wu" ], "title": "Mixed precision training", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Yury Nahshan", "Brian Chmiel", "Chaim Baskin", "Evgenii Zheltonozhskii", "Ron Banner", "Alex M. Bronstein", "Avi Mendelson" ], "title": "Loss aware post-training quantization", "venue": "arXiv preprint arXiv:1911.07190,", "year": 2019 }, { "authors": [ "Haidong Rong", "Yangzihao Wang", "F. Zhou", "Junjie Zhai", "Haiyang Wu", "R. Lan", "F. Li", "H. Zhang", "Y. Yang", "Zhenyu Guo", "D. 
Wang" ], "title": "Distributed equivalent substitution training for large-scale recommender systems", "venue": "Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval,", "year": 2020 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "CoRR, abs/1409.1556,", "year": 2015 }, { "authors": [ "Xiao Sun", "Jungwook Choi", "Chia-Yu Chen", "Naigang Wang", "Swagath Venkataramani", "Vijayalakshmi Srinivasan", "Xiaodong Cui", "Wei Zhang", "Kailash Gopalakrishnan" ], "title": "Hybrid 8-bit floating point (hfp8) training and inference for deep neural networks", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Xu Sun", "Xuancheng Ren", "Shuming Ma", "Houfeng Wang" ], "title": "meprop: Sparsified back propagation for accelerated deep learning with reduced overfitting", "venue": null, "year": 2017 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Naigang Wang", "Jungwook Choi", "D. Brand", "Chia-Yu Chen", "K. Gopalakrishnan" ], "title": "Training deep neural networks with 8-bit floating point numbers", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Bernard Widrow", "Istvá Kollár" ], "title": "Quantization noise: roundoff error in digital computation, signal processing, control, and communications", "venue": null, "year": 2008 }, { "authors": [ "S. Wiedemann", "Temesgen Mehari", "Kevin Kepp", "W. Samek" ], "title": "Dithered backprop: A sparse and quantized backpropagation algorithm for more efficient deep neural network training", "venue": "IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW),", "year": 2020 }, { "authors": [ "Shuang Wu", "Guoqi Li", "Feng Chen", "Luping Shi" ], "title": "Training and inference with integers in deep neural networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Xucheng Ye", "P. Dai", "J. Luo", "X. Guo", "Y. Qi", "Jianlei Yang", "Yiran Chen" ], "title": "Accelerating cnn training by pruning activation gradients", "venue": "arXiv preprint arXiv:1908.00173,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Neural gradients are used in the training process of deep networks to backpropagate the error-gradient throughout the model, thus allowing to compute the required weight updates. As these neural gradients are needed for a substantial ratio of the underlying computations (about 23 ), compressing them can alleviate data-throughput requirements and accelerate the training process.\nCompression of neural gradients reduce the memory footprint for the intermediate calculation and the bandwidth of data transfer inside the HW accelerator. Moreover, in term of distributed training in model parallelism the neural gradients are one of the main bottlenecks that need to be transferred between devices(Rong et al., 2020; Gupta et al., 2020).\nMany previous works (Banner et al., 2019; Fang et al., 2020) compress tensors such as weights and activations by approximating their distributions using an analytically tractable density. These works often assume a bell-shaped distribution such as Gaussian or Laplace distributions, which have been reported to fail for neural gradients (Ye et al., 2019). One key observation in this paper is that neural gradient distributions are heavy-tailed, fundamentally different from the light-tailed distributions of weights and activations. Further statistical and distributional tests reveal gradient magnitudes follow a lognormal distribution.\nAdopting this lognormal observation, our paper suggests two main applications — quantization and pruning, used to reduce the computational and memory burden of neural gradients. To tackle these challenges, we first formalize the problems and find closed-form expressions that enable us to predict the optimal quantization and pruning policies. These measures are easy to use and depend only on the estimated lognormal parameters.\n∗Equal contribution.\nIn Figure 1 we summarize these applications and their derivation. The first application uses the lognormal prior to enabling low-precision floating-point (FP) quantization of the gradients. Here we optimize two tasks. The first task is to find a partition between mantissa and exponent bit-widths that minimizes quantization noise for a given n bit FP gradient representation. The second task is to scale these gradients so that they would be properly represented within a limited dynamic range (distance between the maximum and minimum that FP format can represent). We provide useful insights that make empirically-based heuristics such as loss scaling (Micikevicius et al., 2018) a more grounded approach with a theoretical basis. Optimizing both tasks we obtain state-of-the-art results for FP quantization of the neural gradients. The second application performs accurate and predictable stochastic pruning of gradients on the fly, which results in two state-of-the-art pruning schemes. The first translates the desired sparsity level into an accurate threshold, and the other enables combined use of different sparsity levels at different layers (heterogeneous sparsity)." }, { "heading": "2 RELATED WORK", "text": "Quantization and pruning of neural networks have seen a tremendous amount of works (e.g., Nahshan et al. (2019); Choi et al. (2018); Baskin et al. (2018); Louizos et al. (2018); Frankle & Carbin (2018); Cambier et al. (2020)) aiming to reduce both bandwidth and memory footprint, as well as computation time. 
Most of these methods focus on the quantization or pruning of the weights / activations in the forward path (Banner et al., 2019; Nahshan et al., 2019) or the weight gradients (Bernstein et al., 2018; Alistarh et al., 2016) in the backward path. So far, neural gradients have received less attention. Some of these methods (Banner et al., 2019; Ye et al., 2019; Fang et al., 2020) use a systematic and rigorous statistical approach to optimize various distortion measures. For example, Banner et al. (2019) used the normal distributional assumption (of weights and activations) to analytically minimize the mean-squared quantization error. Our work follows a similar line, rigorously optimizing similar performance measures for the quantization and pruning of gradient distributions, which differ from those of the weights and activations.
Gradient Quantization. While a lot of research has focused on the quantization of weights and activations for inference (Krishnamoorthi, 2018; Choi et al., 2018; Jain et al., 2020), there have also been major advances in quantization during training, many of which faced difficulty trying to represent the high dynamic range of the gradients (Banner et al., 2018; Wu et al., 2018). Cambier et al. (2020) suggest keeping, for each tensor, a shift and a scale in full precision so that the tensor fits within the FP8 dynamic range. The mantissa-versus-exponent allocation of the available bits has proven crucial in deep learning workloads, where, for example, BF16 (1-8-7: sign-exponent-mantissa) has shown greater success than the traditional FP16 (1-5-10) format due to its wider dynamic range (Henry et al., 2019; Kalamkar et al., 2019). Research on the required format and on the trade-offs between exponent and mantissa is ongoing, with growing interest in lower precision representations such as FP8. Some works have explored using different FP8 formats: Wang et al. (2018) used (1-5-2), while Sun et al. (2019) suggest using one type of FP8 format for the forward pass (1-4-3) and a different FP8 format (1-5-2) for the backward pass, after empirically assessing the different possible formats. Additionally, with the growing usage of FP16 mixed-precision training, researchers and practitioners faced the need to use loss scaling (Micikevicius et al., 2018), adjusting the tensor distribution to avoid over/under-flow. This procedure required, in some cases, an intensive parameter search and was not guaranteed to succeed in all cases.
Gradient pruning. Focusing on the computationally-intensive back-propagation, "meProp" (Sun et al., 2017) prunes the K smallest absolute-valued entries of the neural gradients on the fly, using the top-k algorithm. Works following it replaced pruning with quantization to induce sparsity (Wiedemann et al., 2020), or applied top-k pruning to the copies of weights and activations used in back-prop (Aamir Raihan & Aamodt, 2020). Ye et al. (2019), inspired by conventional unbiased estimators like stochastic rounding, suggested "stochastic pruning", reaching higher sparsity levels. Yet, the authors assumed the gradients are normally distributed, leading to an incorrect estimation of the threshold and a large difference between the required sparsity and the one obtained. As we shall see later, using the correct statistical distribution model is essential to determine the proper threshold.
}, { "heading": "3 NEURAL GRADIENTS DISTRIBUTION", "text": "Many prior works (Banner et al., 2019; Bernstein et al., 2018) take the assumption that tensor data e.g., weights (W ) and activations (A) is sampled from a Gaussian distribution. Recently, Ye et al. (2019); Wiedemann et al. (2020) used the same assumption for the distribution of neural gradients∇A. In the section, we discuss this assumption. We show that neural gradients are better approximated by lognormal distributions, i.e., the gradient logarithm values are normally distributed, as opposed to the gradients themselves.\nIn Fig. 2, we plot the histogram of neural gradient magnitudes at linear and log scales in one layer of ResNet18 - ImageNet dataset. At a linear scale, the distribution has a few gradients of huge magnitude and a great many gradients of small magnitudes (Fig 2a). Plotting the histogram on a logarithmic scale reveals a distribution close to a symmetric normal (Gaussian) distribution (Fig 2b). This is the hallmark of the lognormal distribution. Finally, when plotting the theoretical quantiles of the normal distribution against the quantiles of the gradient distribution (Q-Q plot), we see that the points follow a strongly nonlinear pattern in Fig 2c, suggesting that the data is not distributed as a standard normal distribution. Note that in the Q-Q plot for lognormal distribution (Fig 2d), almost all points lie on a straight line.\nWe further estimate the goodness of fit of the neural gradients to normal and lognormal distributions. To that end, we measure the static distance (largest vertical line) between the cumulative distribution function (CDF) of the empirically observed distribution and the CDF of the reference distribution (also known as Kolmogorov-Smirnov test (Smirnov, 1948)). For each model and dataset in Table 1, we calculate the average (across all layers) of the static distance to normal and lognormal distributions. The analysis is performed on the absolute value of the gradients, excluding the zero-valued entries. Note that lognormal distribution gets a better fit. Additional statistical distributions in Appendix A.1 ." }, { "heading": "4 APPLICATION I - OPTIMAL FLOATING-POINT QUANTIZATION", "text": "Floating-point (FP) representations can cover a very wide dynamic range with a relatively small number of digits. The mantissa-exponent tradeoff control the balance between the dynamic range of numbers that can be represented (exponent) to the precision of these numbers (mantissa). This dynamic range is especially important for the heavy-tailed distributions that characterize neural gradients. In this section, we study the optimal characteristics of the FP quantizer." }, { "heading": "4.1 PROBLEM FORMULATION", "text": "We can decompose any positive real value x ∈ R+ as follows:\nx = 2ln x = M∈[1,2)︷ ︸︸ ︷ 2ln x−bln xc ·2 E∈Z︷ ︸︸ ︷ blnxc, (1)\nwhere M ∈ [1, 2) is the mantissa and E ∈ Z the exponent. Given N bits, we allocate 1 bit for the sign and seek the optimal allocation of n1 bits to M and n2 bits to E, such that n1 + n2 = N − 1. 
Accordingly, we define the quantized $x_q$ as:
$$x_q = \begin{cases} 2^{E_{max}} & E \ge E_{max} \\ M_q \cdot 2^{E} & -E_{max} \le E \le E_{max} \\ 0 & E \le -E_{max} \end{cases} \qquad (2)$$
where $E_{max} = 2^{n_2 - 1}$ and $M_q$ is the quantized mantissa, with the range $[1, 2)$ divided into $2^{n_1}$ quantization levels with a spacing of $\Delta = \frac{1}{2^{n_1}}$.
Finally, we measure the relative error between the FP number $x_q$ and the real number $x$, which is simply the difference between the two numbers divided by the real number (Widrow & Kollár, 2008):
$$\eta(n_1, n_2) = \left| \frac{x_q - x}{x} \right| \qquad (3)$$" }, { "heading": "4.2 ANALYTICAL DERIVATION OF THE RELATIVE ERROR", "text": "We assume that $x \sim \mathrm{Lognormal}(\mu, \sigma^2)$. Note that $E = \lfloor \log_2 x \rfloor \approx \log_2 x \sim \mathcal{N}(\mu, \sigma^2)$. In Appendix A.4 we split the range into three parts according to $E$: (i) $-E_{max} \le E \le E_{max}$; (ii) $E \ge E_{max}$; (iii) $E \le -E_{max}$, and calculate the expected contribution of each term. A closed-form formula for the expected relative error can be obtained as follows:
$$\mathbb{E}\left[\eta(n_1, n_2)\right] = \frac{2\Phi\left(\frac{E_{max}}{\sigma}\right) - 1}{8 \ln(2) \cdot (2^{n_1} - 1)} + 2^{E_{max}-1} e^{\frac{\sigma^2 \ln^2(2)}{2}} \left(\operatorname{erf}\left(\frac{\sigma \ln 2}{\sqrt{2}} + \frac{E_{max}}{\sqrt{2}\,\sigma}\right) - 1\right) - \frac{1}{2}\operatorname{erf}\left(\frac{E_{max}}{\sqrt{2}\,\sigma}\right) + \frac{3}{2} - \Phi\left(\frac{E_{max}}{\sigma}\right) \qquad (4)$$
where $\Phi(x)$ is the CDF of $\mathcal{N}(0, 1)$. In Fig. 3a we show that the analytical results stated by Eq. (4) are in good agreement with simulations for FP8 with various numbers of exponent bits. Simulations were obtained by quantizing 10,000 values generated from a lognormal distribution with σ = 1, 3, 5." }, { "heading": "4.3 THE OPTIMAL MANTISSA-EXPONENT REPRESENTATION", "text": "The relative error in Eq. (4) depends on the scale parameter σ and on the numbers of bits of the mantissa $n_1$ and exponent $n_2$, respectively (the latter through $E_{max} = 2^{n_2-1}$). Given any $N$-bit FP format, we wish to find a mantissa-exponent partition that minimizes the expected relative error such that $n_1 + n_2 = N - 1$. Minimizing Eq. (4) yields this optimal partition. To do so, we set $n_1 = N - n_2 - 1$, equate the derivative to zero, and solve. The computational cost of such a solution is negligible (details are in Appendix A.4.4). This allocation depends on $N$ and σ. In Fig. 3b we show the optimal allocations for N = 5, 6, 7, 8 bits and σ ∈ [1, 8]. In Fig. 3c we show the FP format obtained by solving Eq. (3) with a normal distribution assumption for the neural gradients, which leads to a sub-optimal format, as shown in Table 3. The full solution can be found in Appendix A.5.
Table 2 summarizes the statistical analysis applied for various floating point formats. Empirical observations show that gradient distributions have σ in the range [3, 5.5] and [2.5, 4.5] for the ImageNet and Cifar100 datasets, respectively, which we use to determine the optimal mantissa-exponent partition for FP4–FP8. In Section 6, we use these partitions to train ImageNet and Cifar models at reduced precision (bit-width lower than 8), and show empirically that these partitions provide the best results." }, { "heading": "4.4 PER-LAYER GRADIENT SCALING", "text": "The use of loss scaling (Micikevicius et al., 2018) is key to the quantization of gradients using low precision FP formats. The idea is to shift the neural gradients' dynamic range to fit the floating-point range, thus avoiding possible underflows. Loss scaling is usually performed by multiplying the loss value by a large constant and then dividing the weight gradients by the same value after back-propagation and before any update has taken place. As the gradient distribution changes across training, dynamic loss scaling is sometimes needed. In this setting, gradients are monitored for any overflow or underflow that may occur; the quantizer itself is sketched below.
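A NumPy sketch simulating the quantizer of Eqs. (2)-(3) at full precision follows; the function name and rounding details are our choices for illustration:

```python
import numpy as np

def fp_quantize(x, n1, n2):
    """Simulate Eq. (2): n1 mantissa bits and n2 exponent bits (sign kept separately)."""
    e_max = 2.0 ** (n2 - 1)
    sign, mag = np.sign(x), np.abs(x)
    E = np.floor(np.log2(np.where(mag > 0, mag, 1.0)))
    M = mag / 2.0 ** E
    # Round the mantissa onto a uniform grid of spacing 2**-n1 over [1, 2).
    Mq = np.round((M - 1.0) * 2.0 ** n1) / 2.0 ** n1 + 1.0
    xq = Mq * 2.0 ** E
    xq = np.where(E >= e_max, 2.0 ** e_max, xq)   # saturate values that overflow
    xq = np.where(E <= -e_max, 0.0, xq)           # flush values that underflow to zero
    return sign * xq
```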
When overflow or underflow occurs, gradients are discarded for that step and the loss scale value is changed to combat the phenomenon (either increased or decreased heuristically).
Choosing either a fixed or dynamic global loss scale that can fit across all the layers is challenging, and may prove impossible for low precision FP formats that further reduce the dynamic range. In Fig. A.5, we show the standard deviation σ of the gradient at the log scale for different transformer layers. The high variability in σ between the layers makes the choice of one global loss scale unlikely to succeed. The variation in gradient statistics across layers may also explain the need for previous hybrid approaches that kept some layers at higher precision. For example, Sun et al. (2019) reported that when training ResNet18 models using FP8, they had to keep the first and last layers at FP16 precision. Fig. A.6 suggests a reason: the first and last layers exhibit a standard deviation that is very different from that of the other layers (i.e., they require a different gradient scaling). Cambier et al. (2020) showed the poor results achieved when using a global loss scale with the FP8 format in a transformer, and suggested an expensive full-precision computational operation that rescales and shifts the neural gradients into the FP8 dynamic range.
We therefore suggest using a per-layer gradient scale instead of a global loss scale. As detailed in Appendix A.6, our gradient scaling method keeps the largest gradients representable but sacrifices the smallest gradients. These tiny gradients can be pruned without significantly distorting the original tensor because: (1) the lognormal distribution suggests that gradients of tiny magnitude are relatively scarce; (2) such tiny gradients are typically less significant in training compared to the larger gradients. Pseudo-code appears in Algorithm 1." }, { "heading": "5 APPLICATION II - STOCHASTIC PRUNING", "text": "Inspired by conventional unbiased estimators for quantization such as "stochastic rounding", researchers have recently proposed "stochastic pruning" (Ye et al., 2019), a pruning method that introduces zero bias in expectation.
Given a threshold α, we sample a uniform variable ε ∼ U[0, 1] and prune x as follows:
$$T_{\alpha,\varepsilon}(x) = \begin{cases} x & |x| > \alpha \\ \operatorname{sign}(x) \cdot \alpha & \alpha \varepsilon \le |x| \le \alpha \\ 0 & |x| < \alpha \varepsilon \end{cases} \qquad (5)$$
The method is graphically illustrated in Figure 4. Note that all values in the range [−α, α] are mapped to only one of the three possible values (0, ±α). Their increased frequency of occurrence can be used to design a custom encoding method to compress their representation. In Appendix A.10, we propose an encoding method with a compression ratio equivalent to quantizing to 4 bits at 80% sparsity, or to only 2 bits at 90% sparsity." }, { "heading": "5.1 PROBLEM FORMULATION", "text": "Our goal is to find an analytical expression for a proper threshold α that induces the desired sparsity S using stochastic pruning. Specifically, let x be a random variable with a known distribution and S a given sparsity level (0 < S < 1). Using stochastic pruning, with ε ∼ U[0, 1], we obtain:
$$S = \mathbb{E}_{\varepsilon} \int_0^{\alpha \varepsilon} f(x)\, dx \qquad (6)$$" }
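A short NumPy sketch of Eq. (5), together with a Monte-Carlo estimate of the sparsity in Eq. (6); the names are ours:

```python
import numpy as np

def stochastic_prune(x, alpha):
    """Eq. (5): keep |x| > alpha; snap mid-range values to +/- alpha with
    probability |x| / alpha, otherwise zero them. Unbiased in expectation."""
    eps = np.random.uniform(0.0, 1.0, size=x.shape)
    out = np.where(np.abs(x) > alpha, x, 0.0)
    snap = (np.abs(x) >= alpha * eps) & (np.abs(x) <= alpha)
    return np.where(snap, np.sign(x) * alpha, out)

g = np.random.lognormal(0.0, 2.0, 100_000) * np.random.choice([-1, 1], 100_000)
print((stochastic_prune(g, alpha=1.0) == 0).mean())   # empirical sparsity S of Eq. (6)
```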
, { "heading": "5.2 ANALYTICAL DERIVATION OF SPARSITY", "text": "Since the neural gradients follow a lognormal distribution, Eq. (6) is simply the expectation of the CDF of a lognormal distribution evaluated at α·ε, that is:
$$S = \mathbb{E}_{\varepsilon}\left[\frac{1}{2} + \frac{1}{2}\operatorname{erf}\left(\frac{\ln(\alpha \varepsilon) - \mu}{\sqrt{2}\,\sigma}\right)\right] \overset{\tau = \alpha\varepsilon/e^{\mu}}{=} \int_0^{\alpha/e^{\mu}} \left[\frac{1}{2} + \frac{1}{2}\operatorname{erf}\left(\frac{\ln \tau}{\sqrt{2}\,\sigma}\right)\right] \frac{e^{\mu}}{\alpha}\, d\tau \qquad (7)$$
The complete solution of this integral can be found in Appendix A.3, resulting in:
$$S = \frac{1}{2} + \frac{e^{\mu}}{2\alpha}\left[ e^{\frac{\sigma^2}{2}} \operatorname{erf}\left(\frac{\sigma}{\sqrt{2}} - \frac{\ln(\alpha/e^{\mu})}{\sqrt{2}\,\sigma}\right) + \frac{\alpha}{e^{\mu}} \operatorname{erf}\left(\frac{\ln(\alpha/e^{\mu})}{\sqrt{2}\,\sigma}\right) - e^{\frac{\sigma^2}{2}} \right] \qquad (8)$$
Eq. (8) can easily be solved numerically to find α. As shown in Fig. A.12, the lognormal parameters µ and σ of the gradients' distribution at each layer are fairly stable throughout training, which allows us to sample σ and µ, and to recompute the threshold α, only infrequently. In practice, we perform this procedure only once per epoch while achieving stable sparsity levels throughout the training. Moreover, the computational complexity of solving Eq. (8) is negligible (empirically it converges in a few iterations); further details are found in Appendix A.7." }, { "heading": "5.3 HETEROGENEOUS SPARSITY ALLOCATION", "text": "We show in Fig. A.9a that the angle between the tensors before and after stochastic pruning (measured by the cosine similarity) can serve as an important proxy for the overall validation accuracy achieved. Interestingly, using the cosine similarity, we observed that stochastic pruning takes a different toll on different layers, i.e. pruning all layers to the same sparsity level damages some of them more than others. This phenomenon can be explained and assessed by analyzing the cosine similarity of a lognormal distribution, where the difference between the layers lies in the parameters of the distribution. We derive the cosine similarity as another analytical measure and propose an algorithm that better preserves the cosine similarity of some of the layers by decreasing their sparsity level, while increasing the sparsity level of other layers, maintaining the overall sparsity budget (the mean sparsity of all the layers). Further details can be found in Appendix A.9. This allows us to increase the sparsity level while preserving accuracy; results can be seen in Section 6." }, { "heading": "6 EXPERIMENTS", "text": "In this section, we evaluate the methods and predictions above, all stemming from the lognormal distribution of the neural gradients, for the two suggested applications: floating point format quantization and stochastic pruning. Experiments, details, and additional results appear in Appendix A.12.
Floating point format. In Table 3 we show the results of different allocations between exponent and mantissa for different FP formats on the Cifar100 and ImageNet datasets. We quantize the gradients of all convolutional layers, unlike previous methods (Sun et al., 2019) that keep part of them in FP32. All results were achieved using the suggested gradient scaling, where the mean is sampled once every epoch. For all FP formats, the results fit the analysis in Table 2. In contrast, we obtain a sub-optimal FP format if we instead solve Eq. (3) with a normal distribution assumption.
Per-layer gradient scaling. In Table 4 we compare the suggested gradient scaling with static and dynamic (Micikevicius et al., 2018) global loss scaling. We clipped the values to the maximum/minimum FP representation for our method and the 'static' method to avoid overflow/underflow, respectively. On the other hand, the dynamic method (Micikevicius et al., 2018) clips the values to adjust the scale in response to overflow/underflow in the weight updates.
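For concreteness, one plausible form of the per-layer scaling being compared here, assuming the fp_quantize sketch given earlier; the paper's actual procedure is Algorithm 1 in its appendix and may differ:

```python
import numpy as np

def scale_quantize_unscale(grad, n1, n2):
    """Per-layer scaling sketch: map the layer's largest gradient magnitude to the
    top of the representable FP range before quantizing, then undo the scale."""
    e_max = 2.0 ** (n2 - 1)
    top = 2.0 ** e_max                              # largest representable magnitude, per Eq. (2)
    scale = top / max(np.abs(grad).max(), 1e-30)    # keep the largest gradients representable
    return fp_quantize(grad * scale, n1, n2) / scale
```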
Our suggested layer-wise gradient scaling achieves better results, highlighting that one global loss scale for the entire network might be too restrictive in practice. In Fig. A.7, we show the variability of the suggested gradient scale across different layers. Lastly, to verify that gradient quantization is the main bottleneck, we also quantized the weights and activations to INT4 and found a minor degradation in accuracy (0.3%).
Stochastic pruning. In Section 6 we compare the proposed methods for homogeneous and heterogeneous stochastic pruning against SAW (Aamir Raihan & Aamodt, 2020), which uses "top-k" pruning, and ATP (Ye et al., 2019), which also uses stochastic pruning but assumes the gradients are normally distributed. Notice that stochastic pruning outperforms the non-stochastic top-k, and that the heterogeneous version surpasses the homogeneous one. The validation accuracy during training for different sparsity levels and different datasets can be found in Fig. A.16. In Section 6 we demonstrate our methods' ability to produce the required sparsity level, for both the homogeneous and heterogeneous versions. In contrast, the sparsity is not accurate for the baseline methods: (1) finding the threshold using top-k and then applying stochastic pruning, and (2) using ATP (Ye et al., 2019), which assumes a normal distribution. In Fig. A.15 we see how the sparsity inaccuracy occurs at all layers, and in Fig. A.1a we see how other distributions (not lognormal) cause an inaccuracy. This underscores the importance of using the correct distribution of the neural gradients in Eq. (6)." }, { "heading": "7 SUMMARY", "text": "We evaluated the distribution of neural gradients and showed they can be well-approximated as a lognormal distribution. We use this distribution to analytically derive accurate measures (e.g., sparsity and local distortion metrics), useful for the following two applications:
Quantization. We found the optimal bit allocation to the mantissa and exponent for a floating-point gradient representation, explaining prior results for FP8 and paving the way towards lower precision representations. We suggest using a per-layer gradient scale and find its optimal value, preventing under/over-flow in scenarios that challenged prior methods or required an intensive parameter search. Combining both methods, we trained using low precision neural gradients on ImageNet and achieved, for the first time, no noticeable validation accuracy degradation with FP7 and FP6.
[Displaced figure caption (Fig. A.24): The validation accuracy for our method is never less than the accuracy of the other methods for the same achieved sparsity. Notice the large deviation in the other methods between the achieved and required sparsity. This emphasizes the importance of using the correct distribution in order to both enjoy the improved performance of stochastic pruning over regular "top-k" and maintain the ability to fully control the achieved sparsity. Additional details and results are in Appendix A.12.]
Pruning. We can use stochastic pruning to prune the neural gradients during training precisely to a predetermined sparsity level, with minimal computational overhead. We show that this form of stochastic pruning is superior to deterministic pruning. Specifically, we have achieved up to 80% gradient sparsity without hurting validation accuracy (ResNet18 on ImageNet) using a homogeneous sparsity level for all layers.
We also show that the uniform sparsity method is sub-optimal with respect to an analytical error measure we derive (the cosine similarity), and suggest allocating different sparsity levels to the different layers to preserve it better. We suggest and test an algorithm for this allocation, allowing for more aggressive overall pruning, achieving 85% gradient sparsity while preserving baseline accuracy, and reaching nearly 90% with less than 0.3% accuracy degradation.
Why do we get a lognormal distribution? From the central limit theorem, normal distributions universally arise from the sum of many random variables, while lognormal distributions universally arise as the product of many random variables. This might suggest that, as we backpropagate the neural gradients through the network, these gradients have a few dominant paths, so most significant operations are products rather than summations (i.e., along these paths: effective depth ≫ effective width). Recent work (Hanin & Nica, 2018) made this explanation rigorous at initialization, in the limit where both the width and depth of the neural network jointly go to infinity. However, we suspect this explanation is only a part of a more nuanced picture. Understanding this is an interesting direction for future research." } ]
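As a companion to Section 5.2 above, the following sketch solves Eq. (8) numerically for the threshold α with a bracketing root-finder; the helper names and the generous search bounds are ours:

```python
import math
from scipy.optimize import brentq

def sparsity(alpha, mu, sigma):
    """Expected sparsity under stochastic pruning for lognormal gradients, per Eq. (8)."""
    a = alpha / math.exp(mu)                 # threshold expressed in units of e**mu
    return 0.5 + (1.0 / (2.0 * a)) * (
        math.exp(sigma ** 2 / 2) * math.erf(sigma / math.sqrt(2) - math.log(a) / (math.sqrt(2) * sigma))
        + a * math.erf(math.log(a) / (math.sqrt(2) * sigma))
        - math.exp(sigma ** 2 / 2))

def threshold_for(target_sparsity, mu, sigma):
    # Sparsity grows monotonically with alpha; bracket around e**mu, the median gradient.
    lo, hi = 1e-8 * math.exp(mu), 1e12 * math.exp(mu)
    return brentq(lambda a: sparsity(a, mu, sigma) - target_sparsity, lo, hi)
```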
null
IMPROVED QUANTIZED AND SPARSE TRAINING
SP:30332615c0031634abb0108b91caad1657a5e8be
[ "The paper studies the problem of classifier recalibration under differential privacy constraints. They propose a framework with a calibrator and several private data sources, and it works as follows. At each iteration, the calibrator queries each source, and the data source sends back the private answer, which will be used to optimize the calibration. They also provide a recalibration technique, accuracy temperature scaling, which is effective under the privacy constraint for the reason of low sensitivity. Rigorous experimental results are provided." ]
Classifiers deployed in high-stakes applications must output calibrated confidence scores, i.e. their predicted probabilities should reflect empirical frequencies. Typically this is achieved with recalibration algorithms that adjust probability estimates based on real-world data; however, existing algorithms are not applicable in realworld situations where the test data follows a different distribution from the training data, and privacy preservation is paramount (e.g. protecting patient records). We introduce a framework that provides abstractions for performing recalibration under differential privacy constraints. This framework allows us to adapt existing recalibration algorithms to satisfy differential privacy while remaining effective for domain-shift situations. Guided by our framework, we also design a novel recalibration algorithm, accuracy temperature scaling, that is tailored to the requirements of differential privacy. In an extensive empirical study, we find that our algorithm improves calibration on domain-shift benchmarks under the constraints of differential privacy. On the 15 highest severity perturbations of the ImageNet-C dataset, our method achieves a median ECE of 0.029, over 2x better than the next best recalibration method and almost 5x better than without recalibration.
[]
[ { "authors": [ "Philip E. Agre" ], "title": "Surveillance and capture: Two models of privacy", "venue": "Inf. Soc.,", "year": 1994 }, { "authors": [ "Yoshua Bengio" ], "title": "Deep learning of representations for unsupervised and transfer learning", "venue": "In Proceedings of ICML workshop on unsupervised and transfer learning,", "year": 2012 }, { "authors": [ "Henri Berestycki", "Jérôme Busca", "Igor Florent" ], "title": "Asymptotics and calibration of local volatility models", "venue": "Quantitative finance,", "year": 2002 }, { "authors": [ "Richard Berk" ], "title": "Criminal justice forecasts of risk: A machine learning approach", "venue": "Springer Science & Business Media,", "year": 2012 }, { "authors": [ "Richard Berk" ], "title": "Machine learning risk assessments in criminal justice settings", "venue": "In Springer International Publishing,", "year": 2019 }, { "authors": [ "Glenn W Brier" ], "title": "Verification of forecasts expressed in terms of probability", "venue": "Monthly weather review,", "year": 1950 }, { "authors": [ "Nicolo Cesa-Bianchi", "Gabor Lugosi" ], "title": "Prediction, learning, and games", "venue": "Cambridge university press,", "year": 2006 }, { "authors": [ "Weijie Chen", "Berkman Sahiner", "Frank W. Samuelson", "Aria Pezeshk", "Nicholas A. Petrick" ], "title": "Calibration of medical diagnostic classifier scores to the probability of disease", "venue": "Statistical methods in medical research,", "year": 2018 }, { "authors": [ "Moustapha Cissé", "Piotr Bojanowski", "Edouard Grave", "Yann Dauphin", "Nicolas Usunier" ], "title": "Parseval networks: Improving robustness to adversarial examples", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Jeremy M Cohen", "Elan Rosenfeld", "J Zico Kolter" ], "title": "Certified adversarial robustness via randomized smoothing", "venue": "arXiv preprint arXiv:1902.02918,", "year": 2019 }, { "authors": [ "Wenyuan Dai", "Qiang Yang", "Gui-Rong Xue", "Yong Yu" ], "title": "Boosting for transfer learning", "venue": "In Proceedings of the 24th international conference on Machine learning,", "year": 2007 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2009 }, { "authors": [ "Cynthia Dwork" ], "title": "Differential privacy: A survey of results", "venue": "In International conference on theory and applications of models of computation,", "year": 2008 }, { "authors": [ "Cynthia Dwork", "Aaron Roth" ], "title": "The algorithmic foundations of differential privacy", "venue": "Foundations and Trends in Theoretical Computer Science,", "year": 2014 }, { "authors": [ "Yaroslav Ganin", "Victor Lempitsky" ], "title": "Unsupervised domain adaptation by backpropagation", "venue": "arXiv preprint arXiv:1409.7495,", "year": 2014 }, { "authors": [ "Dorothy J. 
Glancy" ], "title": "Privacy in autonomous vehicles", "venue": "Santa Clara law review,", "year": 2012 }, { "authors": [ "Tilmann Gneiting", "Fadoua Balabdaoui", "Adrian E Raftery" ], "title": "Probabilistic forecasts, calibration and sharpness", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2007 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "arXiv preprint arXiv:1512.03385,", "year": 2015 }, { "authors": [ "Qingfeng He", "Annie Antón" ], "title": "A framework for modeling privacy requirements in role engineering", "venue": "Proceedings of the 9th International Workshop on Requirements Engineering: Foundation for Software Quality (REFSQ’03),", "year": 2003 }, { "authors": [ "Dan Hendrycks", "Thomas G. Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": null, "year": 2019 }, { "authors": [ "Alex Kendall", "Yarin Gal" ], "title": "What uncertainties do we need in bayesian deep learning for computer vision", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Javed Khan", "Jun S Wei", "Markus Ringner", "Lao H Saal", "Marc Ladanyi", "Frank Westermann", "Frank Berthold", "Manfred Schwab", "Cristina R Antonescu", "Carsten Peterson" ], "title": "Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks", "venue": "Nature medicine,", "year": 2001 }, { "authors": [ "Xiaowei Kortum", "Lorenz Grigull", "Urs Muecke", "Werner Lechner", "Frank Klawonn" ], "title": "Improving the decision support in diagnostic systems using classifier probability calibration", "venue": "In IDEAL,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Volodymyr Kuleshov", "Nathan Fenner", "Stefano Ermon" ], "title": "Accurate uncertainties for deep learning using calibrated regression", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Ananya Kumar", "Percy S Liang", "Tengyu Ma" ], "title": "Verified uncertainty calibration", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Balaji Lakshminarayanan", "Alexander Pritzel", "Charles Blundell" ], "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "SGDR: Stochastic gradient descent with warm restarts", "venue": "arXiv preprint arXiv:1608.03983,", "year": 2016 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": null, "year": 2017 }, { "authors": [ "Frank McSherry", "Kunal Talwar" ], "title": "Mechanism design via differential privacy", "venue": "Annual IEEE Symposium on Foundations of Computer Science", "year": 2007 }, { "authors": [ "Takeru Miyato", "Toshiki Kataoka", "Masanori 
Koyama", "Yuichi Yoshida" ], "title": "Spectral normalization for generative adversarial networks", "venue": null, "year": 2018 }, { "authors": [ "Mahdi Pakdaman Naeini", "Gregory Cooper", "Milos Hauskrecht" ], "title": "Obtaining well calibrated probabilities using bayesian binning", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Alexandru Niculescu-Mizil", "Rich Caruana" ], "title": "Predicting good probabilities with supervised learning", "venue": "In Proceedings of the 22nd international conference on Machine learning,", "year": 2005 }, { "authors": [ "Sinno Jialin Pan", "Qiang Yang" ], "title": "A survey on transfer learning", "venue": "IEEE Transactions on knowledge and data engineering,", "year": 2009 }, { "authors": [ "Vishal M. Patel", "Raghuraman Gopalan", "Ruonan Li", "Rama Chellappa" ], "title": "Visual domain adaptation: A survey of recent advances", "venue": "IEEE Signal Processing Magazine,", "year": 2015 }, { "authors": [ "John Platt" ], "title": "Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods", "venue": "Advances in large margin classifiers,", "year": 1999 }, { "authors": [ "Keywan Christian Rasekhschaffe", "Robert C. Jones" ], "title": "Machine learning for stock selection", "venue": "Financial Analysts Journal,", "year": 2019 }, { "authors": [ "Cynthia Rudin", "Berk Ustun" ], "title": "Optimized scoring systems: Toward trust in machine learning for healthcare and criminal justice", "venue": null, "year": 2018 }, { "authors": [ "Rui Shu", "Hung H Bui", "Hirokazu Narui", "Stefano Ermon" ], "title": "A dirt-t approach to unsupervised domain adaptation", "venue": "arXiv preprint arXiv:1802.08735,", "year": 2018 }, { "authors": [ "Alexander J Smola", "Peter J Bartlett", "Dale Schuurmans", "Bernhard Schölkopf", "Michael I Jordan" ], "title": "Advances in large margin classifiers", "venue": "MIT press,", "year": 2000 }, { "authors": [ "Jasper Snoek", "Yaniv Ovadia", "Emily Fertig", "Balaji Lakshminarayanan", "Sebastian Nowozin", "D Sculley", "Joshua Dillon", "Jie Ren", "Zachary Nado" ], "title": "Can you trust your model’s uncertainty? evaluating predictive uncertainty under dataset shift", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yi Sun", "Ding Liang", "Xiaogang Wang", "Xiaoou Tang" ], "title": "Deepid3: Face recognition with very deep neural networks", "venue": "arXiv preprint arXiv:1502.00873,", "year": 2015 }, { "authors": [ "Ilya Sutskever", "James Martens", "George Dahl", "Geoffrey Hinton" ], "title": "On the importance of initialization and momentum in deep learning", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Jonathan Wenger", "Hedvig Kjellström", "Rudolph Triebel" ], "title": "Non-parametric calibration for classification", "venue": "ArXiv, abs/1906.04933,", "year": 2019 }, { "authors": [ "Luona Yang", "Xiaodan Liang", "Tairui Wang", "Eric P. Xing" ], "title": "Real-to-virtual domain unification for end-to-end autonomous driving", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Bianca Zadrozny", "Charles Elkan" ], "title": "Obtaining calibrated probability estimates from decision trees and naive bayesian classifiers", "venue": "In Icml,", "year": 2001 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "arXiv preprint arXiv:1605.07146,", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Machine learning classifiers are currently deployed in high stakes applications where (1) the cost of failure is high, so prediction uncertainty must be accurately calibrated (2) the test distribution does not match the training distribution, and (3) data is subject to privacy constraints. All three of these challenges must be addressed in applications such as medical diagnosis (Khan et al., 2001; Chen et al., 2018; Kortum et al., 2018), financial decision making (Berestycki et al., 2002; Rasekhschaffe & Jones, 2019; He & Antón, 2003), security and surveillance systems (Sun et al., 2015; Patel et al., 2015; Agre, 1994), criminal justice (Berk, 2012; 2019; Rudin & Ustun, 2018), and mass market autonomous driving (Kendall & Gal, 2017; Yang et al., 2018; Glancy, 2012). While much prior work has addressed these challenges individually, they have not been considered simultaneously. The goal of this paper is to propose a framework that formalizes challenges (1)-(3) jointly, introduce benchmark problems, and design and compare new algorithms under the framework.\nA standard approach for addressing challenge (1) is uncertainty quantification, where the classifier outputs its confidence in every prediction to indicate how likely it is that the prediction is correct. These confidence scores must be meaningful and trustworthy. A widely used criterion for good confidence scores is calibration (Brier, 1950; Cesa-Bianchi & Lugosi, 2006; Guo et al., 2017) — i.e. among the data samples for which the classifier outputs confidence p ∈ (0, 1), exactly p fraction of the samples should be classified correctly.\nSeveral methods (Guo et al., 2017) learn calibrated classifiers when the training distribution matches the test distribution. However, this classical assumption is always violated in real world applications, and calibration performance can significantly degrade under even small domain shifts (Snoek et al., 2019). To address this challenge, several methods have been proposed to re-calibrate a classifier on data from the test distribution (Platt et al., 1999; Guo et al., 2017; Kuleshov et al., 2018; Snoek et al., 2019). These methods make small adjustments to the classifier to minimize calibration error on a validation dataset drawn from the test distribution, but they are typically only applicable when they have (unrestricted) access to data from this validation set.\nAdditionally, high stakes applications often require privacy. For example, it is difficult for hospitals to share patient data with machine learning providers due to legal privacy protections (Centers for\nMedicare & Medicaid Services, 1996). When the data is particularly sensitive, provable differential privacy becomes necessary. Differential privacy (Dwork et al., 2014) provides a mathematically rigorous definition of privacy along with algorithms that meet the requirements of this definition. For instance, the hospital may share only certain statistics of their data, where the shared statistics must have bounded mutual information with respect to individual patients. The machine learning provider can then use these shared statistics — possibly combining statistics from many different hospitals — to recalibrate the classifier and provide better confidence estimates.\nIn this paper, we present a framework to address all three challenges – calibration, domain shift, and differential privacy – and introduce a benchmark to standardize performance and compare algorithms. 
We show how to modify modern recalibration techniques (e.g. Zadrozny & Elkan, 2001; Guo et al., 2017) to satisfy differential privacy using this framework, and compare their empirical performance. This framework can be viewed as performing federated learning for recalibration, with the constraint that each party’s data must be kept differentially private.\nWe also present a novel recalibration technique, accuracy temperature scaling, that is particularly effective in this framework. This new technique requires private data sources to share only two statistics: the overall accuracy and the average confidence score of a classifier. We adjust the classifier until the average confidence equals the overall accuracy. Because only two numbers are revealed by each private data source, it is much easier to satisfy differential privacy. In our experiments, we find that without privacy requirements the new recalibration algorithm performs on par with algorithms that use the entire validation dataset, such as that of Guo et al. (2017); with privacy requirements, the new algorithm performs 2x better than the second best baseline.\nIn summary, the contributions of our paper are as follows. (1) We introduce the problem of “privacy preserving calibration under domain shift” and design a framework for adapting existing recalibration techniques to this setting. (2) We introduce accuracy temperature scaling, a novel recalibration method designed with privacy concerns in mind, that requires only the overall accuracy and average confidence of the model on the validation set. (3) Using our framework, we empirically evaluate our method on a large set of benchmarks against state-of-the-art techniques and show that it performs well across a wide range of situations under differential privacy." }, { "heading": "2 BACKGROUND AND RELATED WORK", "text": "" }, { "heading": "2.1 CALIBRATION", "text": "Description of Calibration Consider a classification task from an input domain (e.g. images) X to a finite set of labels Y = {1, · · · , m}. We assume that there is some joint distribution P∗ on X × Y. This could be the training distribution, or the distribution from which we draw test data. A classifier is a pair (φ, p̂) where φ : X → Y maps each input x ∈ X to a label y ∈ Y and p̂ : X → [0, 1] maps each input x to a confidence value c. We say that the classifier (φ, p̂) is perfectly calibrated (Brier, 1950; Gneiting et al., 2007) with respect to the distribution P∗ if for all c ∈ [0, 1]\nPr_{P∗(x,y)}[φ(x) = y | p̂(x) = c] = c. (1)\nNote that calibration is a property not only of the classifier (φ, p̂), but also of the distribution P∗. A classifier (φ, p̂) can be calibrated with respect to one distribution (e.g. the training distribution) but not another (e.g. the test distribution). To simplify notation we drop the dependency on P∗.\nTo numerically measure how well a classifier is calibrated, the commonly used metric is the Expected Calibration Error (ECE) (Naeini et al., 2015), defined by\nECE(φ, p̂) := ∫_{c∈[0,1]} Pr[p̂(x) = c] · |Pr[φ(x) = y | p̂(x) = c] − c|. (2)\nIn other words, the ECE measures the average deviation from Eq. 1. In practice, the ECE is approximated by binning: partitioning the predicted confidences into bins, and then taking a weighted average of the difference between the accuracy and the average confidence in each bin (see Appendix A.1 for details).
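To make the binned approximation concrete, the following minimal NumPy sketch (our illustration, not code released with the paper) computes the ECE from top-label confidences p̂(x) and 0/1 correctness indicators; `n_bins=15` matches the 15 equal-width bins used in our experiments.

```python
import numpy as np

def binned_ece(confidences, correct, n_bins=15):
    # Approximate Eq. 2 with equal-width confidence bins.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n, ece = len(confidences), 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # samples whose confidence falls in [lo, hi); the last bin also includes 1.0
        in_bin = (confidences >= lo) & ((confidences < hi) | (hi >= 1.0))
        if in_bin.any():
            acc = correct[in_bin].mean()        # bin accuracy
            conf = confidences[in_bin].mean()   # bin average confidence
            ece += (in_bin.sum() / n) * abs(acc - conf)
    return ece
```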
Recalibration Methods Several methods apply a post-training adjustment to a classifier (φ, p̂) to achieve calibration (Platt et al., 1999; Niculescu-Mizil & Caruana, 2005). The one most relevant to our paper is temperature scaling (Guo et al., 2017). On each input x ∈ X, a neural network typically first computes a logit score l_1(x), l_2(x), · · · , l_n(x) for each of the n labels, then computes a confidence score or probability estimate p̂(x) with a softmax function. Temperature scaling adds a temperature parameter T ∈ R+ to the softmax function:\np̂(x; T) = max_i e^{l_i(x)/T} / ∑_j e^{l_j(x)/T}. (3)\nA higher temperature reduces the confidence, and vice versa. T is trained to minimize the standard cross entropy objective on the validation dataset, which is equivalent to maximizing the log likelihood. Despite its simplicity, temperature scaling performs well empirically in classification calibration for deep neural networks.\nAlternative methods for classification calibration have also been proposed. Histogram binning (Zadrozny & Elkan, 2001) partitions confidence scores in [0, 1] into bins {[0, ε), [ε, 2ε), · · · , [1 − ε, 1]} and sorts each validation sample into a bin based on its confidence p̂(x). The algorithm then resets the confidence level of each bin to match the average classification accuracy of the data points in that bin. Isotonic regression methods (Kuleshov et al., 2018) learn an additional layer on top of the softmax output layer. This additional layer is trained on a validation dataset to fit the output confidence scores to the empirical probabilities in each bin. Other methods include Platt scaling (Platt et al., 1999) and Gaussian process calibration (Wenger et al., 2019)." }, { "heading": "2.2 ROBUSTNESS TO DOMAIN SHIFT", "text": "Preventing massive performance degradation of machine learning models under domain shift has been a long-standing problem. There are several approaches developed in the literature. Unsupervised domain adaptation (Ganin & Lempitsky, 2014; Shu et al., 2018) learns a joint representation between the source domain (original data) and the target domain (domain shifted data). Invariance based methods (Cissé et al., 2017; Miyato et al., 2018; Madry et al., 2017; Lakshminarayanan et al., 2017; Cohen et al., 2019) prevent the classifier output from changing significantly given small perturbations to the input. Transfer learning methods (Pan & Yang, 2009; Bengio, 2012; Dai et al., 2007) fine-tune the classifier on labeled data in the target domain. We classify our method in this category because we also fine-tune on the target domain, but with minimal data requirements (we only need the overall classifier accuracy)." }, { "heading": "2.3 DIFFERENTIAL PRIVACY", "text": "Differential privacy (Dwork et al., 2014) is a procedure for sharing information about a dataset with the public while withholding critical information about individuals in the dataset. Informally, it guarantees that an attacker can only learn a limited amount of new information about an individual. Differentially private approaches are critical in privacy sensitive applications. For example, a hospital may wish to gain medical insight or calibrate its prediction models by releasing diagnostic information to outside experts, but it cannot release information about any particular patient.\nOne common notion of differential privacy is ε-differential privacy (Dwork et al., 2014). Let us define a database D as a collection of data points in a universe X, and represent it by its histogram D ∈ N^{|X|}, where each entry D_x represents the number of elements in the database that take the value x ∈ X.
A randomized algorithm M is one that takes as input D ∈ N^{|X|} and (stochastically) outputs some value M(D) = b for b ∈ Range(M).\nDefinition 1. Let M be a randomized function M : N^{|X|} → Range(M). We say that M is ε-differentially private if for all S ⊆ Range(M) and for any two databases D, D′ ∈ N^{|X|} that differ by only one element, i.e. ‖D − D′‖_1 ≤ 1, we have\nPr[M(D) ∈ S] / Pr[M(D′) ∈ S] ≤ e^ε\nIntuitively, the output of M should not change much if a single data point is added or removed. An attacker that learns the output of M gains only limited information about any particular data point.\nGiven a deterministic real valued function f : N^{|X|} → R^z, we would like to design a function M that remains as close as possible to f but satisfies Definition 1. This can be achieved by the Laplace mechanism (McSherry & Talwar, 2007; Dwork, 2008). Let us define the L1 sensitivity of f:\n∆f = max_{D,D′∈N^{|X|}, ‖D−D′‖_1=1} ‖f(D) − f(D′)‖_1\nThen the Laplace mechanism adds Laplacian random noise as in (4):\nM_L(D; f, ε) = f(D) + (Y_1, . . . , Y_z) (4)\nwhere the Y_i are i.i.d. random variables drawn from the Laplace(loc = 0, scale = ∆f/ε) distribution. The function M_L satisfies ε-differential privacy, and we reproduce the proof in Appendix A.2.
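As a concrete illustration of Eq. 4, here is a minimal sketch (ours, not code from the paper) that releases a query answer through the Laplace mechanism; NumPy's `laplace` sampler supplies the noise.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    # Release f(D) = value with epsilon-differential privacy via Eq. 4.
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=np.shape(value))
    return np.asarray(value, dtype=float) + noise

# Example: a counting query ('how many predictions were correct?') has
# L1 sensitivity 1, since one added or removed record changes it by at most 1.
private_count = laplace_mechanism(42, sensitivity=1.0, epsilon=1.0)
```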
" }, { "heading": "3 RECALIBRATION UNDER DIFFERENTIAL PRIVACY", "text": "In this section we propose a framework for performing recalibration that allows independent parties to pool their data for improved calibration, while maintaining differential privacy. This setup can be framed as differentially private federated learning for recalibration. Multiple parties experience the same domain shift (e.g. because they live in the same changing world). Each party would benefit from access to additional data, but each party also wants to keep their own data private. Our framework allows all parties to react to domain shifts more quickly by pooling their data (so each individual party needs less labeled data from the new distribution), while maintaining the privacy of each party." }, { "heading": "3.1 EXAMPLE APPLICATIONS", "text": "We begin with example scenarios that illustrate the main desiderata and challenges of this problem.\nExample 1: Suppose you have a classifier for diagnosing a medical condition and deploy your classifier across many hospitals. The hospitals need calibrated confidences for a similar but more unusual condition (e.g. the original model may have been trained on an already existing virus strain but needs to be recalibrated for a novel strain of the virus). There are two options: 1. Each hospital uses only its own private data to calibrate the classifier; 2. Each hospital sends some (differentially private) information to you, and you aggregate the information and calibrate the classifier. Option 2 is preferable if each hospital has only a handful of patients for the particular condition.\nIn this case, the hospitals are the parties that wish to keep their data (patient info) private. The novel strain of the virus represents a domain shift. If the hospitals each have only a few data points, they want to aggregate their data in order to improve their classifier’s calibration while still respecting patient privacy.\nExample 2: Suppose that there is a third-party advertising company that runs ads for websites. This advertising company has worked with news websites before, but recently acquired new clients from a different category of websites. The individual websites have user information, but they cannot provide the third-party advertising company with this user information due to privacy constraints. The third-party advertising company wants calibrated models for whether a user will click on an ad.\nExample 3: Another category of scenarios in which our framework can be used is for individual privacy. An individual may have labeled data that he wishes to keep private; however, he would still like calibrated confidences from prediction models (e.g. financial software for individuals). With differential privacy, individuals can provide summary statistics with added noise to an aggregator. In this setup, differential privacy is guaranteed on the individual level. Aggregators can then improve their confidence estimation using noisy summary statistics from many individuals." }, { "heading": "3.2 GENERAL FRAMEWORK", "text": "We propose a standard framework to handle the general situation represented by the examples above. This two-party framework involves (1) a calibrator and (2) private data sources, and it allows us to adapt recalibration algorithms for differential privacy. A private data source may be e.g. a hospital (as in Example 1 above), a website (as in Example 2), or an individual (as in Example 3). A sketch of the protocol in code is given at the end of this subsection.\n1. [Calibrator:] Input an uncalibrated classifier (φ, p̂).\n2. [Private Data Sources:] Each data source i = 1, · · · , d inputs a private dataset D_i.\n3. At iteration k = 1, · · · , K: (a) [Calibrator:] The calibrator designs a function f_k : N^{|X|} → R^s, where s ∈ N. For each i = 1, · · · , d, the calibrator sends the function f_k to private data source i. (b) [Private Data Sources:] For each i = 1, · · · , d, the i-th private data source uses the Laplace mechanism in Eq. 4 to convert f_k to M_k so that it satisfies ε/K-differential privacy, and sends M_k(D_i) back to the calibrator.\n4. [Calibrator:] Output a new classifier (φ, p̂′) based on M_k(D_i), k = 1, · · · , K, i = 1, · · · , d.\nUnder this framework, differential privacy is automatically satisfied: if each M_k, k = 1, · · · , K, is ε/K-differentially private, then the combined function (M_1, · · · , M_K) is ε-differentially private (Dwork et al., 2014). The differential privacy guarantees for each private data source i are independent of the policy of the calibrator or the other private data sources; i.e. even if the calibrator and all other private data sources collude to steal information from the i-th data source, as long as the i-th private data source follows the protocol, its data will be protected by differential privacy.\nThis framework simplifies the problem into two design choices: select the query function f_k for k = 1, · · · , K, and select the mapping from the observations M_k(D_1), · · · , M_k(D_d) at k = 1, · · · , K to the calibrated confidence function p̂′. We will discuss the most reasonable choices for several existing recalibration algorithms. Note that in general, the calibration quality degrades as the privacy level increases (i.e. as ε decreases).
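The following sketch shows one round-trip of the protocol above (our own illustration; the class and function names `PrivateDataSource` and `calibrator_round` are hypothetical, not from the paper). Each source answers a query f_k through the Laplace mechanism with per-query budget ε/K.

```python
import numpy as np

class PrivateDataSource:
    # One party holding a private dataset; answers at most num_queries queries.
    def __init__(self, dataset, epsilon, num_queries):
        self.dataset = dataset
        self.per_query_scale = num_queries / epsilon  # budget epsilon/K per query
        self.rng = np.random.default_rng()

    def answer(self, f_k, sensitivity):
        # Step 3(b): apply the Laplace mechanism of Eq. 4 to the query f_k.
        true_value = np.asarray(f_k(self.dataset), dtype=float)
        noise = self.rng.laplace(0.0, sensitivity * self.per_query_scale, true_value.shape)
        return true_value + noise

def calibrator_round(sources, f_k, sensitivity):
    # Step 3(a) plus aggregation: send f_k to every source, average the noisy replies.
    return np.mean([s.answer(f_k, sensitivity) for s in sources], axis=0)
```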
" }, { "heading": "3.3 ADAPTING EXISTING ALGORITHMS", "text": "In this section, we explain how we adapt the algorithms introduced in Section 2 to our framework. Note that many existing recalibration algorithms involve parametric optimization, and in these cases multiple iterations K are needed to search the parameter space. However, using additional iterations hurts the calibration, since a larger K increases the added Laplace noise required for ε/K-differential privacy; i.e. for any fixed Laplace noise, more queries means less privacy. Thus, we propose the use of the golden section search algorithm as a better alternative to grid search for parametric optimization, since it is more efficient at finding the extremum of a unimodal function within a specified interval, and requires fewer queries. See Appendix C.1 for additional details about the golden section search.\nTemperature Scaling Temperature scaling finds the temperature T in Eq. 3 that maximizes the log likelihood. At each iteration k = 1, · · · , K, the function f_k queries D_i for the log likelihood at some temperature, and we average the log likelihood over all the private datasets. We observe in Proposition 1 that the log likelihood is a unimodal function of the temperature (see Appendix B for the proof). Therefore, the golden section search algorithm can find the maximum of the unimodal function with the fewest queries. We may refer to temperature scaling as NLL-T for brevity.\nProposition 1. For any distribution p∗ on X × Y where Y = {1, · · · , m}, and for any set of functions l_1, · · · , l_m : X → R, E_{x,y∼p∗}[log(e^{l_y(x)/T} / ∑_j e^{l_j(x)/T})] is a unimodal function of T.\nECE Minimization (ECE-T) Instead of finding a temperature that maximizes the log likelihood, we find that empirically it is often better to directly minimize the discretized ECE in Eq. 2. Adapting ECE minimization to our framework is similar to log likelihood maximization, except that we query for the quantities necessary to compute the ECE score instead of the log likelihood. In Appendix C.2.2, we show how to compute the ECE score with as few queried quantities as possible.\nHistogram Binning Histogram binning can be adapted to the above protocol with only one iteration (K = 1). The function f_1 queries D_i for the number of correct predictions in each bin and the total number of samples in each bin. We average the query results from the different datasets. To compute the new confidence for a bin, we divide the average number of correct predictions by the average total number of samples in that bin." }, { "heading": "4 ACCURACY TEMPERATURE SCALING", "text": "When we add Laplace noise according to Eq. 4, the added noise increases with the number of iterations K and the L1 sensitivity of the query functions f_k. In other words, when we adapt a calibration algorithm to our framework, we need to add more noise if the original algorithm gains a lot of information about the private datasets D_1, · · · , D_d. The relative amount of noise also increases as the amount of available data decreases, as is the case when binning is used. Larger noise will degrade calibration performance. To improve performance, we propose a new recalibration algorithm called accuracy temperature scaling that acquires much less information than previous algorithms.\nOur method is a form of temperature scaling that is based on a weaker notion than calibration. Let classification accuracy and average confidence be denoted as\nAcc(φ) = Pr[φ(x) = y], and Conf(p̂) = E[p̂(x)].\nAcc and Conf are expectations of [0, 1]-bounded random variables, so they can be accurately estimated even from a relatively small quantity of data. We say that a classifier is consistent if Acc(φ) = Conf(p̂). We tune the temperature parameter in Eq. 3 until the average confidence Conf is identical to the average accuracy Acc, i.e. until consistency is achieved. We will refer to our method as Acc-T for brevity.\nConsistency is a strictly weaker condition than calibration.
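Without privacy constraints, Acc-T reduces to one-dimensional root finding: the average confidence under Eq. 3 decreases monotonically in T, so the gap between Conf and Acc has a single sign change. A minimal sketch using plain bisection (ours, not the paper's released code; it assumes the root lies in the range [0.5, 3.0] used in our experiments):

```python
import numpy as np

def avg_confidence(logits, temperature):
    # Average max softmax probability at the given temperature (Eq. 3).
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1).mean()

def acc_t(logits, labels, t_lo=0.5, t_hi=3.0, iters=30):
    # Tune T until Conf matches Acc; the confidence gap is monotone in T,
    # so plain bisection suffices when privacy is not a concern.
    acc = (logits.argmax(axis=1) == labels).mean()
    for _ in range(iters):
        t_mid = 0.5 * (t_lo + t_hi)
        if avg_confidence(logits, t_mid) > acc:
            t_lo = t_mid  # still overconfident: raise the temperature
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)
```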
Surprisingly, even when there is a lot of data and no privacy requirements, optimizing for consistency achieves performance similar to directly optimizing for the ECE in our experiments, as shown in Appendix E.2." }, { "heading": "4.1 ACCURACY TEMPERATURE SCALING UNDER DIFFERENTIAL PRIVACY", "text": "Adapting Acc-T to our differential privacy framework is similar to doing so for temperature scaling in Section 3.3. As we show in Proposition 2 (see Appendix B for the proof), the Acc-T objective is also a unimodal function of T, so we can use golden section search to find the T that minimizes the objective function. Algorithm 1 provides the complete algorithm for Acc-T under differential privacy. On Line 2, we select initial temperature values. Line 3 specifies a query function that the hospitals use to pool their data while respecting differential privacy. Lines 4-12 implement differentially private golden section search over the recalibration temperature parameter. The algorithm outputs a temperature value that improves the classifier’s calibration on the new domain.\nProposition 2. For any distribution p∗ on X × Y where Y = {1, · · · , m}, and for any set of functions l_1, · · · , l_m : X → R, let p̂_T : x ↦ max_i e^{l_i(x)/T} / ∑_j e^{l_j(x)/T} and φ : x ↦ arg max_i l_i(x). Then |Pr_{x,y∼p∗}[φ(x) = y] − E_{p∗}[p̂_T(x)]| is a unimodal function of T.\nAlgorithm 1 Acc-T with differential privacy\n1: Input Private datasets D_1, · · · , D_d. Logit functions l_1, · · · , l_m : X → R. Initial temperature range [T^0_−, T^0_+]. Number of iterations K. Privacy parameter ε. Define φ and p̂_T as in Proposition 2.\n2: Set T^0_0 = T^0_+ − (T^0_+ − T^0_−) · 0.618, T^0_1 = T^0_− + (T^0_+ − T^0_−) · 0.618\n3: For T^0_0 set M_0 : D_i ↦ ∑_{x_i,y_i∈D_i} (I(φ(x_i) = y_i) − p̂_{T^0_0}(x_i)) + Lap((K+1)/ε) and sample v^0_0 = (1/d) ∑^d_{i=1} M_0(D_i). Similarly set M_1 for T^0_1 and sample v^0_1.\n4: for k = 0, · · · , K − 1 do\n5: if |v^k_0| ≥ |v^k_1| then\n6: Set T^{k+1}_+ = T^k_+, T^{k+1}_− = T^k_0, T^{k+1}_0 = T^k_1, T^{k+1}_1 = T^{k+1}_− + (T^{k+1}_+ − T^{k+1}_−) · 0.618\n7: Set v^{k+1}_0 = v^k_1. Sample v^{k+1}_1 for T^{k+1}_1 as in Line 3.\n8: else\n9: Set T^{k+1}_− = T^k_−, T^{k+1}_+ = T^k_1, T^{k+1}_1 = T^k_0, T^{k+1}_0 = T^{k+1}_+ − (T^{k+1}_+ − T^{k+1}_−) · 0.618\n10: Set v^{k+1}_1 = v^k_0. Sample v^{k+1}_0 for T^{k+1}_0 as in Line 3.\n11: end if\n12: end for\n13: Return (T^K_− + T^K_+)/2 as the optimal temperature." }, { "heading": "4.2 COMPARISON", "text": "We will briefly discuss how our method, Acc-T, compares to others such as histogram binning, temperature scaling, or ECE-T in terms of its theoretical bias (calibration error given infinite data), worst case variance (calibration error degradation when less data is available), and adaptability to differential privacy (based on the relative amount of noise that must be added to satisfy differential privacy).
}, { "heading": "5 EXPERIMENTS", "text": "In this section, we run an extensive series of large, controllable experiments on three datasets to compare our proposed method Acc-T against five different baseline methods, three of which are designed with privacy concerns in mind, using the general procedure in Section 3. These benchmarks include various domain shifts and privacy settings, and our proposed Acc-T method consistently outperforms the other baseline methods. We also extensively validate the relationship between calibration error and several relevant factors for domain shift and privacy. Additional experimental details are included in Apppendix E." }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "Methods We evaluate the differentially private versions of temperature scaling, ECE-T, histogram binning, and Acc-T over an extensive range of settings that considers calibration under various domain shifts and privacy concerns. We also include two baseline methods, (1) no calibration and (2) recalibration with only one private dataset from the target domain (so data from other sources is not used; in this case privacy constraints need not be taken into account but less data is available).\nDatasets To simulate various domain shifts, we use the ImageNet-C, CIFAR-100-C, and CIFAR-10-C datasets (Hendrycks & Dietterich, 2019), which are perturbed versions of the ImageNet (Deng et al., 2009), CIFAR-100 (Krizhevsky & Hinton, 2009), and CIFAR-10 (Krizhevsky & Hinton, 2009) test sets. Each -C dataset includes 15 perturbed versions of the original test set, with perturbations such as Gaussian noise, motion blur, jpeg compression, and fog. We divide each perturbed test set into a validation split containing different “private data sources” with the same number of samples, and a test split containing all of the remaining images. We then apply the recalibration algorithms over the validation split and evaluate the ECE on the test split. Note that only the unperturbed training sets were used to train the models.\nRelevant factors We evaluate the ECE for all of the methods while controlling the following three factors: (1) the number of private data sources, (2) the number of samples per data source, and (3) the privacy level . When we vary one factor, we keep the other two factors constant.\nAdditional details We useK = 5 iterations for all experiments, and report the average ECE achieved over 500 trials with randomly divided splits for each experiment. We report other experimental setup details including the type of network used in Appendix E.1." }, { "heading": "5.2 RESULTS AND ANALYSIS", "text": "In Fig. 1, we plot the ECE vs. (1a) the number of private data sources, (1b) the number of samples per data source, and (1c) the value, for the ImageNet “fog” perturbation. Fig. 2 shows a similar plot for the CIFAR-100 “jpeg compression” perturbation, and Fig. 3 shows a similar plot for the CIFAR-10 “motion blur” perturbation. Our proposed method, Acc-T, is shown in red, and clearly outperforms other methods under the constraints of differential privacy for these ranges of values. Full plots for all perturbations and datasets are included in Appendix E.3. Table 1 shows the overall median and mean ECE achieved by each recalibration method on ImageNet, CIFAR-100, and CIFAR-10. These averages are computed over all perturbations, numbers of private data sources, numbers of samples per source, and settings from the suite of experiments in E.3. 
Our method, Acc-T, far outperforms the other methods in the domain-shifted differential privacy setting.\nThe performance of all recalibration algorithms degrades when subjected to the constraints of differential privacy, but some are affected more than others for a given situation. Selecting a differentially private recalibration algorithm for a particular situation thus requires some consideration. To this end, we provide some analysis of these methods under the three relevant factors.\nNumber of Private Data Sources As the number of sources increases, Acc-T tends to do well, even when the number of samples per source is small. Because Acc-T does not involve binning and the sensitivity of its objective function is small, there is relatively less noise for this method than for others. Therefore, it can effectively combine data from multiple sources even under the constraints of differential privacy, and is the best method in general.\nNumber of Samples Per Source As the number of samples per source increases, Acc-T tends to do well given enough data sources. As the number of samples per source grows towards infinity, recalibration with only one source works very well, since we do not need to query other sources or apply privacy constraints. Histogram binning and ECE-T may also perform quite well with many bins when the number of samples is very large.\nPrivacy concern When ε is very low (i.e. the privacy requirements are very high), recalibrating with only one data source works well; this method remains unaffected by the strong privacy constraints, while all other methods worsen drastically due to the increased noise. For mid-range ε values, Acc-T works well. When ε is very high, ECE-T can work well, since privacy is not much of a concern." }, { "heading": "Expected Calibration Error (median / mean)", "text": "" }, { "heading": "6 CONCLUSION", "text": "Simultaneously addressing the challenges of calibration, domain shift, and privacy is extremely important in many environments. In this paper, we introduced a framework for recalibration on domain-shifted data under the constraints of differential privacy. Within this framework, we designed a novel algorithm to handle all three challenges. Our method demonstrated impressive performance across a wide range of settings on a large suite of benchmarks. In future work, we are interested in investigating recalibration under different types of privacy mechanisms." }, { "heading": "A ADDITIONAL BACKGROUND INFORMATION", "text": "" }, { "heading": "A.1 COMPUTATION OF ECE", "text": "To compute the ECE, discretization is necessary. We first divide [0, 1] into bins c = (c_1, · · · , c_k) such that 0 < c_1 < · · · < c_k = 1, and then we compute the average accuracy Acc and average confidence Conf in each bin (for convenience, denote c_0 = 0):\nAcc(f, c, i) = Pr[f(x) = y | p̂(x) ∈ [c_{i−1}, c_i)], Conf(f, c, i) = E[p̂(x) | p̂(x) ∈ [c_{i−1}, c_i)]\nThen the ECE defined in Eq.
2 can be approximated by a discretized version:\nECE(f, p̂) ≈ ECE(f, p̂; c) := ∑^k_{i=1} Pr[p̂(x) ∈ [c_{i−1}, c_i)] · |Acc(f, c, i) − Conf(f, c, i)|\nGiven empirical data D = {x_{1:n}, y_{1:n}} we can estimate ECE(f, p̂; c) as\nECE(f, p̂; c) ≈ ÊCE(f, p̂; c, D) := ∑^k_{i=1} (1/n) |∑_{j : p̂(x_j)∈[c_{i−1},c_i)} (I(f(x_j) = y_j) − p̂(x_j))|\nNote that there are two approximations: we first discretize the ECE, and then use finite data to approximate the discretized expression:\nECE(f, p̂) ≈ ECE(f, p̂; c) ≈ ÊCE(f, p̂; c, D)\nIn practice, if the first approximation is better (more bins are used), then the second approximation must be worse (there will be less data in each bin) (Kumar et al., 2019). In other words, with finite data, there is a tradeoff between calibration error and estimation error. Note that newer estimators, e.g. that of Kumar et al. (2019), can measure the ECE even more accurately, particularly when there are more bins." }, { "heading": "A.2 LAPLACE MECHANISM PROOF", "text": "Theorem 1. The Laplace mechanism (Dwork et al., 2014) preserves ε-differential privacy.\nProof. Let D ∈ N^{|X|} and D′ ∈ N^{|X|} be two databases that differ by up to one element, i.e. ‖D − D′‖_1 ≤ 1. Let f : N^{|X|} → R^z be a function, and let p_D and p_{D′} denote the probability density functions of M_L(D; f, ε) and M_L(D′; f, ε), respectively. Then we can take the ratio of p_D to p_{D′} at an arbitrary point x ∈ R^z:\np_D(x) / p_{D′}(x) = ∏^z_{i=1} [exp(−ε|f(D)_i − x_i| / ∆f) / exp(−ε|f(D′)_i − x_i| / ∆f)]\n= ∏^z_{i=1} exp(ε(|f(D′)_i − x_i| − |f(D)_i − x_i|) / ∆f)\n≤ ∏^z_{i=1} exp(ε|f(D)_i − f(D′)_i| / ∆f)\n= exp(ε · ‖f(D) − f(D′)‖_1 / ∆f) ≤ exp(ε)\nwhere the first inequality follows from the triangle inequality, and the second inequality follows from the definition of sensitivity (Dwork et al., 2014)." }, { "heading": "B PROOFS", "text": "Proof of Proposition 1.\n∂/∂T E_{x,y∼p∗}[log(e^{l_y(x)/T} / ∑_j e^{l_j(x)/T})] = E_{x,y∼p∗}[∂/∂T (l_y(x)/T) − ∂/∂T log ∑_j e^{l_j(x)/T}]\n= E_{x,y∼p∗}[−l_y(x)/T^2 − (−∑_j e^{l_j(x)/T} l_j(x)/T^2) / ∑_j e^{l_j(x)/T}]\n= (1/T^2) E_{x,y∼p∗}[−l_y(x) + ∑_j l_j(x) e^{l_j(x)/T} / ∑_j e^{l_j(x)/T}]\nLet us set the derivative equal to 0. Suppose there are multiple solutions T_1 > T_2; this implies that\nE_{x,y∼p∗}[∑_j l_j(x) e^{l_j(x)/T_1} / ∑_j e^{l_j(x)/T_1}] = E_{x,y∼p∗}[∑_j l_j(x) e^{l_j(x)/T_2} / ∑_j e^{l_j(x)/T_2}]. (5)\nE_{x,y∼p∗}[∑_j l_j(x) e^{l_j(x)/T} / ∑_j e^{l_j(x)/T}] is monotonically non-increasing in T. Therefore, if there are 0 or 1 solutions to Eq. 5, the original function must be unimodal. If there are at least 2 solutions T_1 < T_2, then E_{x,y∼p∗}[∑_j l_j(x) e^{l_j(x)/T} / ∑_j e^{l_j(x)/T}] must be a constant function for all T ∈ [T_1, T_2], which implies that E_{x,y∼p∗}[log(e^{l_y(x)/T} / ∑_j e^{l_j(x)/T})] is a constant function of T ∈ [T_1, T_2]. This further implies that E_{x,y∼p∗}[log(e^{l_y(x)/T} / ∑_j e^{l_j(x)/T})] is a constant function for all T ∈ R, which is also unimodal.\nProof of Proposition 2. Because p̂_T(x) is a monotonically decreasing function of T, E_{p∗}[p̂_T(x)] is also a monotonically decreasing function of T. This means that Pr_{x,y∼p∗}[φ(x) = y] − E_{p∗}[p̂_T(x)] is a monotonically increasing function of T. The absolute value of a monotonic function must be monotonic or unimodal." }, { "heading": "C ADDITIONAL DETAILS FOR SECTION 3", "text": "" }, { "heading": "C.1 GOLDEN SECTION SEARCH", "text": "The golden section search is an algorithm for finding the extremum of a unimodal function within a specified interval. It is an iterative method that reduces the search interval with each iteration. The algorithm is described below. Note that we describe the algorithm for a minimization problem, but it also works for maximization problems.\n1.
Specify the function to be minimized, g(·), and specify an interval over which to minimize g, [T_min, T_max].\n2. Select two interior points T_1 and T_2, with T_1 < T_2, such that T_1 = T_max − ((√5 − 1)/2)(T_max − T_min) and T_2 = T_min + ((√5 − 1)/2)(T_max − T_min). Evaluate g(T_1) and g(T_2).\n3. If g(T_1) > g(T_2), then determine a new T_min, T_1, T_2, T_max as follows:\nT_min = T_1, T_max = T_max, T_1 = T_2, T_2 = T_min + ((√5 − 1)/2)(T_max − T_min)\nIf g(T_1) < g(T_2), determine a new T_min, T_1, T_2, T_max as follows:\nT_min = T_min, T_max = T_2, T_2 = T_1, T_1 = T_max − ((√5 − 1)/2)(T_max − T_min)\nNote that in either case, only one new evaluation of g is performed.\n4. If the interval is sufficiently small, i.e. T_max − T_min < δ, then the minimum occurs at (T_min + T_max)/2. Otherwise, repeat Step 3.
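A direct transcription of Steps 1-4 in code (our illustration, not code released with the paper):

```python
import math

GOLDEN = (math.sqrt(5) - 1) / 2  # ~0.618

def golden_section_min(g, t_min, t_max, delta=1e-3):
    # Minimize a unimodal g on [t_min, t_max] per Steps 1-4 above.
    t1 = t_max - GOLDEN * (t_max - t_min)
    t2 = t_min + GOLDEN * (t_max - t_min)
    g1, g2 = g(t1), g(t2)
    while t_max - t_min >= delta:
        if g1 > g2:  # the minimum lies in [t1, t_max]
            t_min, t1, g1 = t1, t2, g2
            t2 = t_min + GOLDEN * (t_max - t_min)
            g2 = g(t2)  # only one new evaluation per iteration (Step 3)
        else:        # the minimum lies in [t_min, t2]
            t_max, t2, g2 = t2, t1, g1
            t1 = t_max - GOLDEN * (t_max - t_min)
            g1 = g(t1)
    return 0.5 * (t_min + t_max)
```

For example, a call such as `golden_section_min(lambda T: objective(T), 0.5, 3.0)` searches the temperature range used throughout our experiments.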
" }, { "heading": "C.2 ADAPTING EXISTING RECALIBRATION METHODS FOR DIFFERENTIAL PRIVACY", "text": "In this section, we go into more detail about how to adapt several existing recalibration algorithms for the differential privacy setting with our framework." }, { "heading": "C.2.1 TEMPERATURE SCALING", "text": "Temperature scaling optimizes over the temperature parameter T using the negative log likelihood loss, and thus requires multiple iterations to query the databases D_i at different temperature values using golden section search. In this case, the objective function g(·) is the negative log-likelihood (NLL) loss over all samples. In the standard NLL formulation, the overall loss is the average of the samples’ NLL losses, but summing these losses for each database rather than taking the average is equivalent except for a constant scale factor (the total number of samples in the database). Thus, the function f_k queries each D_i for its summed NLL loss. The sensitivity ∆f is technically infinite, since the range of the NLL function is infinite, but in practice we can choose some sufficiently large value (we chose ∆f = 10, since that was approximately the largest NLL value we saw among the images that we checked). We chose T_min = 0.5 and T_max = 3.0, since empirically the optimal temperature always seems to fall within this range, and used K = 5 iterations. To aggregate information from the different D_i, we simply average the M_k(D_1), · · · , M_k(D_d). The new classifier (φ, p̂′) outputs probabilities that are recalibrated with the (noisy) optimal temperature." }, { "heading": "C.2.2 TEMPERATURE SCALING BY ECE MINIMIZATION", "text": "The standard recalibration objective when applying temperature scaling is to maximize the log likelihood of a validation dataset. This objective is given in both recent papers (Guo et al., 2017) and established textbooks (Smola et al., 2000). An alternative, but surprisingly overlooked, objective is to minimize the discretized ECE directly. To adapt this method to differential privacy, we must again use multiple iterations to query the databases D_i at different temperature values using golden section search. Here we want to find the temperature that minimizes the discretized ECE:\nmin_T ∑_{bins} |Acc − Conf| · pr = min_T ∑_{bins} |n_correct/n_bin − (∑_i c_i)/n_bin| · n_bin/n_total (6)\nwhere pr is the proportion of samples in the bin, n_correct is the number of correct predictions in the bin, n_bin is the total number of samples in the bin, ∑_i c_i is the sum of the confidence scores of all samples in the bin, and n_total is the total number of samples across all bins.\nSimplifying Eq. 6 and ignoring n_total as a constant, our objective function g(·) becomes\ng(φ, T) = ∑_{bins} |n_correct − ∑_i c_i|\nThe function f_k queries each D_i for the quantity (n_correct − ∑_i c_i) in each bin. The sensitivity ∆f = 1, since this quantity can change by at most 1 with the addition or removal of one sample from a database. We use T_min = 0.5, T_max = 3.0, and K = 5 iterations. We use 15 bins (since we also evaluate the discretized ECE with 15 bins), so the M_k are vectors in R^15. To aggregate information from the different D_i, we average the M_k(D_1), · · · , M_k(D_d), take the absolute value of this average, and then sum this absolute value vector over all bins. In the absence of noise, this aggregation process will yield the correct overall g(·) exactly, using all samples from all sources. The new classifier (φ, p̂′) outputs probabilities that are recalibrated with the (noisy) optimal temperature. Unsurprisingly, ECE-T performs very well without the constraints of differential privacy, so this method may be a good choice when ε is high." }, { "heading": "C.2.3 HISTOGRAM BINNING", "text": "Histogram binning is a relatively simple, non-parametric recalibration method that can be adapted to differential privacy with a single iteration (i.e. K = 1). The function f_1 queries D_i for the number of correct predictions in each bin and the total number of samples in each bin. ∆f = 2 because if exactly one entry is added or removed from a database, the number of correct predictions can change by at most 1 for exactly one of the bins, and the total number of samples can change by at most 1 for exactly one of the bins. We use 15 bins in our experiments, so the M_k are vectors in R^30. To aggregate information from the different D_i, we average the M_k(D_1), · · · , M_k(D_d). The new confidence for each bin is the average number of correct predictions divided by the average total number of samples for that bin.
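A compact sketch of this one-shot scheme (ours; the function names are hypothetical). Each source releases its per-bin correct and total counts through a single Laplace query with ∆f = 2, and the calibrator averages the replies:

```python
import numpy as np

def histogram_binning_query(confidences, correct, epsilon, n_bins=15, rng=None):
    # One epsilon-DP query per source: per-bin correct and total counts (Delta f = 2).
    rng = rng or np.random.default_rng()
    bins = np.minimum((np.asarray(confidences) * n_bins).astype(int), n_bins - 1)
    counts = np.zeros(2 * n_bins)
    for b, ok in zip(bins, correct):
        counts[b] += ok          # correct predictions falling in bin b
        counts[n_bins + b] += 1  # total samples falling in bin b
    return counts + rng.laplace(0.0, 2.0 / epsilon, size=2 * n_bins)

def recalibrated_bin_confidences(noisy_counts_per_source, n_bins=15):
    # Calibrator side: average the sources' replies, then divide per bin.
    avg = np.mean(noisy_counts_per_source, axis=0)
    return avg[:n_bins] / np.clip(avg[n_bins:], 1e-6, None)  # guard noisy totals
```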
" }, { "heading": "D ADDITIONAL DETAILS FOR SECTION 4", "text": "" }, { "heading": "D.1 FACTORS THAT AFFECT CALIBRATION QUALITY AND PRIVACY", "text": "Table 2 shows several factors and hyperparameter choices that affect the calibration quality and the level of privacy for all recalibration methods. More data improves both calibration and privacy. More iterations improve calibration when privacy is not required (e.g. running more iterations of gradient descent), but hurt privacy (making multiple queries in a parametric optimization setting with the same amount of added noise increases ε). Using more bins for methods that involve binning improves calibration when enough data is available, but may hurt privacy. Higher sensitivity of the f_k function hurts privacy, and a higher ε represents less privacy. We discuss each of these in more detail below.\nData Differentially private recalibration algorithms require sufficient data in order to work well. We cannot trivially combine data from different private datasets because each dataset holder must honor its agreement with the individuals whose information is in that dataset. Our framework describes a method for pooling data from different private datasets while allowing each one to respect differential privacy for its users, which is necessary for improved calibration while preserving privacy.\nNumber of iterations For parametric optimization recalibration methods, multiple iterations are generally needed to search the parameter space. Using additional iterations improves the calibration without differential privacy (e.g. running more iterations of gradient descent), but hurts the calibration when differential privacy is required. With multiple iterations, a worst-case bound on the overall L1 sensitivity of f_k is K times the sensitivity of a single query ∆f_single, since a single database entry may change the response to each query by up to ∆f_single. Thus, the amount of noise added to the true query responses must follow a Laplace(0, K · ∆f_single/ε) distribution to maintain ε-differential privacy. Because using more iterations increases the amount of noise added, it is best to search through the parameter space while minimizing the number of iterations needed for the desired granularity. We use golden section search to do this. Each iteration of the golden section search narrows the range of possible values of the extremum, but increases the amount of noise added to the data; in general, we select K such that the granularity and the noise are balanced.\nBinning Several of the recalibration methods discussed use binning, where all of the confidence estimates are divided into mutually exclusive bins. Without differential privacy, using more bins generally improves calibration when a lot of data is available (i.e. above a “data threshold”), but hurts calibration below this data threshold. When not enough data is available, using more bins increases the estimation error since there are too few samples in each bin. In the differential privacy setting, using more bins may degrade the calibration. In this setting, one query may request a summary statistic from each bin. Because a single database entry can be in exactly one bin, the remaining bins are unaffected and the sensitivity does not increase with more bins. However, although the number of bins does not affect the absolute amount of noise, it can affect the relative amount of noise. When more bins are used, there are fewer elements in each bin on average. Thus, the summary statistics involved tend to be lower, and the noise is relatively higher.\nNote that when multiple equal-width bins are involved, as in temperature scaling by ECE minimization (see Section C.2.2), the optimization problem may not be strictly unimodal since samples can change bins as the temperature changes. Using bins with equal numbers of samples, rather than equal widths, ensures unimodality in temperature scaling but makes it difficult to combine information from different private data sources (since different sources will have different bin endpoints). Thus, we elected to use equal-width bins in our experiments. Although the function to be minimized is not necessarily unimodal, it is generally a close enough approximation that golden section search returns reasonably good results with few queries, and empirically performs better than grid search.\nSensitivity of f_k An f_k function with a large range has a detrimental effect on the amount of noise added. For instance, the range of the negative log-likelihood is technically infinite (although in practice we used some sufficiently large value). Thus, the sensitivity of a method with the negative log-likelihood in the objective function is quite high, and the amount of noise needed to preserve differential privacy is large.\nε value Calibration is worse when ε is smaller, i.e. when there is a higher privacy level with stronger differential privacy constraints.
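To put numbers on this trade-off, the short check below (our own arithmetic, not from the paper) evaluates the per-query Laplace scale K · ∆f_single/ε under the settings used in our experiments (K = 5, ε = 1):

```python
def laplace_scale(k_queries, sensitivity_single, epsilon):
    # Per-query Laplace scale when k_queries share one epsilon budget.
    return k_queries * sensitivity_single / epsilon

print(laplace_scale(5, 1.0, 1.0))   # an Acc-T-style counting query: scale 5.0
print(laplace_scale(5, 10.0, 1.0))  # NLL query with clipped Delta f = 10: scale 50.0
```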
}, { "heading": "E ADDITIONAL EXPERIMENTAL DETAILS AND RESULTS", "text": "" }, { "heading": "E.1 EXPERIMENTAL SETUP", "text": "We simulated the problem of recalibration with multiple private datasets on domain-shifted data using the ImageNet-C, CIFAR-100-C, and CIFAR-10-C datasets (Hendrycks & Dietterich, 2019), which are perturbed versions of the ImageNet (Deng et al., 2009), CIFAR-100 (Krizhevsky & Hinton, 2009), and CIFAR-10 (Krizhevsky & Hinton, 2009) test sets respectively. We randomly divided each perturbed test set into nsources validation sets of size nsamples and a test set comprising the remaining images, where nsources represents the number of private data sources and nsamples represents the number of samples per source. We computed each ECE value by binning with 15 equal-width bins.\nFor ImageNet, we varied the number of private data sources from 100 to 2000 in step sizes of 100, with 10 samples per data source and = 1. We varied the number of samples per data source from 5 to 100 in step sizes of 5, with 100 private data sources and = 1. We varied from 0.2 to 2.0 in step sizes of 0.2, with 50 samples per data source and 100 private data sources. For CIFAR-100 and CIFAR-10, we varied the number of private data sources from 10 to 250 in step sizes of 10, with 10 samples per data source and = 1. We varied the number of samples per data source from 5 to 50 in step sizes of 5, with 50 private data sources and = 1. We varied from 0.2 to 2.0 in step sizes of 0.2, with 30 samples per data source and 50 private data sources. We used K = 5 iterations for all experiments. We reported the average ECE achieved over 500 randomly divided trials for each experiment.\nAll models were trained on only the unperturbed training sets. For ImageNet, we trained a ResNet50 network (He et al., 2015) for 90 epochs with an SGD optimizer (Sutskever et al., 2013) with an initial learning rate of 0.1, and decayed the learning rate according to a cosine annealing schedule (Loshchilov & Hutter, 2016). For CIFAR-100 and CIFAR-10, we trained Wide ResNet-28-10 networks (Zagoruyko & Komodakis, 2016) for 200 epochs with an SGD optimizer with an initial learning rate of 0.1, and again decayed the learning rate with a cosine annealing schedule. For each dataset, we tested both the unperturbed accuracy and the perturbed accuracy on each of 15\nperturbation types in (Hendrycks & Dietterich, 2019) at multiple severity levels to ensure sharpness. These accuracy tables can be found in E.2." }, { "heading": "Classification Accuracy", "text": "" }, { "heading": "E.2 EXPERIMENTS WITHOUT DIFFERENTIAL PRIVACY CONSTRAINTS", "text": "Table 3 shows the classification accuracy achieved by our models on each of the 15 perturbations of the CIFAR-10-C, CIFAR-100-C, and ImageNet-C test sets, as well as on the unperturbed test set. Note that the models are trained only on unperturbed training data. The accuracies achieved are in line with reported state-of-the-art numbers.\nTables 4, 5, and 6 summarize our calibration results without differential privacy constraints for CIFAR-10, CIFAR-100, and ImageNet, respectively. Our Acc-T algorithm generally improves the model’s calibration compared to the standard temperature scaling method NLL-T. Despite its simplicity, Acc-T also performs on par with ECE-T, generally achieving similar ECEs, even when privacy is not required." 
}, { "heading": "E.3 EXPERIMENTS WITH DIFFERENTIAL PRIVACY CONSTRAINTS", "text": "The figures in this section show recalibration results for ImageNet, CIFAR-100, and CIFAR-10 under the highest severity perturbations. In the left panel of each figure, we vary the number of private data sources. In the middle panel, we vary the number of samples per data source. In the right panel, we vary the privacy level . Our method, Acc-T, generally does best in these settings.\nIMAGENET RESULTS\nImageNet, unperturbed\n250 500 750 1000 1250 1500 1750 2000 Private Data Sources\n0.04 0.06 0.08 0.10 0.12 0.14 0.16 0.18 EC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n20 40 60 80 100 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 100, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.03 0.05 0.08 0.10 0.12 0.15 0.18 0.20 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 50, Sources = 100\nImageNet, brightness perturbation\n250 500 750 1000 1250 1500 1750 2000 Private Data Sources\n0.05\n0.10\n0.15\n0.20\n0.25\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n20 40 60 80 100 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 100, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.05\n0.10\n0.15\n0.20\n0.25\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 50, Sources = 100\nImageNet, contrast perturbation\n250 500 750 1000 1250 1500 1750 2000 Private Data Sources\n0.02 0.03 0.04 0.05 0.06 0.07 0.08 EC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n20 40 60 80 100 Samples per Data Source\n0.02\n0.04\n0.06\n0.08\n0.10\n0.12\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 100, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 50, Sources = 100\nImageNet, defocus blur perturbation\n250 500 750 1000 1250 1500 1750 2000 Private Data Sources\n0.04\n0.06\n0.08\n0.10\n0.12\n0.14 0.16 EC E ECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n20 40 60 80 100 Samples per Data Source\n0.03 0.05 0.08 0.10 0.12 0.15 0.18 0.20 0.23 EC E ECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 100, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 50, Sources = 100\nImageNet, elastic transform perturbation\n250 500 750 1000 1250 1500 1750 2000 Private Data Sources\n0.05\n0.10\n0.15\n0.20\n0.25\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n20 40 60 80 100 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 100, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.05\n0.10\n0.15\n0.20\n0.25\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 50, Sources = 100\nImageNet, fog perturbation\n250 500 750 1000 1250 1500 1750 2000 Private Data Sources\n0.03 0.05 0.08 0.10 0.12 0.15 0.18 0.20 0.23 EC E\nECE vs. 
Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n20 40 60 80 100 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 100, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.03 0.05 0.08 0.10 0.12 0.15 0.18 0.20 0.23 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 50, Sources = 100\nImageNet, frost perturbation\n250 500 750 1000 1250 1500 1750 2000 Private Data Sources\n0.05 0.08 0.10 0.12 0.15 0.18 0.20 EC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n20 40 60 80 100 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 100, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.03 0.05 0.08 0.10 0.12 0.15 0.18 0.20 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 50, Sources = 100\nImageNet, Gaussian noise perturbation\n250 500 750 1000 1250 1500 1750 2000 Private Data Sources\n0.02 0.04 0.06 0.08 0.10 0.12 0.14 EC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n20 40 60 80 100 Samples per Data Source\n0.03\n0.05\n0.08\n0.10\n0.12\n0.15\n0.18\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 100, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.02 0.04 0.06 0.08 0.10 0.12 0.14 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 50, Sources = 100\nImageNet, glass blur perturbation\n250 500 750 1000 1250 1500 1750 2000 Private Data Sources\n0.02\n0.04\n0.06\n0.08\n0.10\n0.12\n0.14\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n20 40 60 80 100 Samples per Data Source\n0.03 0.05 0.08 0.10 0.12 0.15 0.18 0.20 EC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 100, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.02\n0.04\n0.06\n0.08\n0.10\n0.12\n0.14\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 50, Sources = 100\nImageNet, impulse noise perturbation\n250 500 750 1000 1250 1500 1750 2000 Private Data Sources\n0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 EC E ECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n20 40 60 80 100 Samples per Data Source\n0.03 0.05 0.08 0.10 0.12 0.15 0.18 0.20 EC E ECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 100, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 50, Sources = 100\nImageNet, jpeg compression perturbation\n250 500 750 1000 1250 1500 1750 2000 Private Data Sources\n0.05\n0.10\n0.15\n0.20\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n20 40 60 80 100 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 100, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.05\n0.10\n0.15\n0.20\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 50, Sources = 100\nImageNet, motion blur perturbation\n250 500 750 1000 1250 1500 1750 2000 Private Data Sources\n0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 0.18 EC E ECE vs. 
Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n20 40 60 80 100 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 100, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 50, Sources = 100\nImageNet, pixelate perturbation\n250 500 750 1000 1250 1500 1750 2000 Private Data Sources\n0.03 0.05 0.08 0.10 0.12 0.15 0.18 0.20 0.23 EC E ECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n20 40 60 80 100 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 100, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.03 0.05 0.08 0.10 0.12 0.15 0.18 0.20 0.23 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 50, Sources = 100\nImageNet, shot noise perturbation\n250 500 750 1000 1250 1500 1750 2000 Private Data Sources\n0.03\n0.05\n0.08\n0.10\n0.12\n0.15\n0.18\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n20 40 60 80 100 Samples per Data Source\n0.03 0.05 0.08 0.10 0.12 0.15 0.18 0.20 EC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 100, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.03 0.05 0.08 0.10 0.12 0.15 0.18 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 50, Sources = 100\nImageNet, snow perturbation\n250 500 750 1000 1250 1500 1750 2000 Private Data Sources\n0.05 0.08 0.10 0.12 0.15 0.18 0.20 EC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n20 40 60 80 100 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 100, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.03 0.05 0.08 0.10 0.12 0.15 0.18 0.20 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 50, Sources = 100\nImageNet, zoom blur perturbation\n250 500 750 1000 1250 1500 1750 2000 Private Data Sources\n0.03 0.05 0.08 0.10 0.12 0.15 0.18 0.20 0.23 EC E ECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n20 40 60 80 100 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 100, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.03 0.05 0.08 0.10 0.12 0.15 0.18 0.20 0.23 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 50, Sources = 100\nCIFAR-100 RESULTS\nCIFAR-100, unperturbed\n0 50 100 150 200 250 Private Data Sources\n0.06 0.08 0.10 0.12 0.14 0.16 0.18 0.20 EC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.04 0.06 0.08 0.10 0.12 0.14 0.16 0.18 EC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.04 0.06 0.08 0.10 0.12 0.14 0.16 0.18 0.20 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-100, brightness perturbation\n0 50 100 150 200 250 Private Data Sources\n0.08 0.10 0.12 0.14 0.16 0.18 0.20 0.22 0.24 EC E ECE vs. 
Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.05 0.08 0.10 0.12 0.15 0.18 0.20 0.23 EC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.05 0.08 0.10 0.12 0.15 0.18 0.20 0.23 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-100, contrast perturbation\n0 50 100 150 200 250 Private Data Sources\n0.10\n0.15\n0.20\n0.25\n0.30\n0.35\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\n0.35\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\n0.35\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-100, defocus blur perturbation\n0 50 100 150 200 250 Private Data Sources\n0.10\n0.15\n0.20\n0.25 0.30 EC\nE ECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-100, elastic transform perturbation\n0 50 100 150 200 250 Private Data Sources\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-100, fog perturbation\n0 50 100 150 200 250 Private Data Sources\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.05\n0.10\n0.15\n0.20\n0.25\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-100, frost perturbation\n0 50 100 150 200 250 Private Data Sources\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-100, Gaussian noise perturbation\n0 50 100 150 200 250 Private Data Sources\n0.10\n0.20\n0.30\n0.40\n0.50 0.60 EC\nE ECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.10\n0.20\n0.30\n0.40\n0.50\n0.60\nEC E\nECE vs. 
Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.10\n0.20\n0.30\n0.40\n0.50\n0.60\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-100, glass blur perturbation\n0 50 100 150 200 250 Private Data Sources\n0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 EC E ECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 EC E ECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.10 0.15 0.20 0.25 0.30 0.35 0.40 0.45 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-100, impulse noise perturbation\n0 50 100 150 200 250 Private Data Sources\n0.05 0.10 0.15 0.20 0.25 0.30 0.35 EC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.00 0.05 0.10 0.15 0.20 0.25 0.30 0.35 EC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.05 0.10 0.15 0.20 0.25 0.30 0.35 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-100, jpeg compression perturbation\n0 50 100 150 200 250 Private Data Sources\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-100, motion blur perturbation\n0 50 100 150 200 250 Private Data Sources\n0.10\n0.15\n0.20\n0.25 0.30 EC E ECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.05\n0.10\n0.15\n0.20\n0.25\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-100, pixelate perturbation\n0 50 100 150 200 250 Private Data Sources\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\n0.35\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\n0.35\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\n0.35\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-100, shot noise perturbation\n0 50 100 150 200 250 Private Data Sources\n0.10\n0.20\n0.30\n0.40\n0.50\n0.60\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.10\n0.20\n0.30\n0.40\n0.50\n0.60\nEC E\nECE vs. 
Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.10\n0.20\n0.30\n0.40\n0.50\n0.60\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-100, snow perturbation\n0 50 100 150 200 250 Private Data Sources\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.05\n0.10\n0.15\n0.20\n0.25\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-100, zoom blur perturbation\n0 50 100 150 200 250 Private Data Sources\n0.10\n0.15\n0.20\n0.25 0.30 EC\nE ECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.10\n0.15\n0.20\n0.25\n0.30\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-10 RESULTS\nCIFAR-10, unperturbed\n0 50 100 150 200 250 Private Data Sources\n0.03\n0.04\n0.05\n0.06\n0.07\n0.08\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.02\n0.03\n0.04\n0.05\n0.06\n0.07\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.02\n0.03\n0.04\n0.05\n0.06\n0.07\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-10, brightness perturbation\n0 50 100 150 200 250 Private Data Sources\n0.05\n0.06\n0.07\n0.08\n0.09\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.03\n0.04\n0.05\n0.06\n0.07\n0.08\n0.09\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.03\n0.04\n0.05\n0.06\n0.07\n0.08\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-10, contrast perturbation\n0 50 100 150 200 250 Private Data Sources\n0.15\n0.20\n0.25\n0.30\n0.35\n0.40\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.10 0.15 0.20 0.25 0.30 0.35 0.40 EC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.05 0.10 0.15 0.20 0.25 0.30 0.35 0.40 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-10, defocus blur perturbation\n0 50 100 150 200 250 Private Data Sources\n0.08 0.10 0.12 0.15 0.17 0.20 0.23 0.25 EC E ECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.05 0.08 0.10 0.12 0.15 0.18 0.20 0.23 0.25 EC E ECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.05 0.08 0.10 0.12 0.15 0.18 0.20 0.23 0.25 EC E\nECE vs. 
ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-10, elastic transform perturbation\n0 50 100 150 200 250 Private Data Sources\n0.06\n0.08\n0.10\n0.12\n0.14\n0.16\n0.18\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.04 0.06 0.08 0.10 0.12 0.14 0.16 0.18 EC E ECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.04 0.06 0.08 0.10 0.12 0.14 0.16 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-10, fog perturbation\n0 50 100 150 200 250 Private Data Sources\n0.06\n0.08\n0.10\n0.12\n0.14\n0.16\n0.18\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.06 0.08 0.10 0.12 0.14 0.16 0.18 EC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.04 0.06 0.08 0.10 0.12 0.14 0.16 0.18 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-10, frost perturbation\n0 50 100 150 200 250 Private Data Sources\n0.08 0.10 0.12 0.14 0.16 0.18 0.20 0.22 EC E ECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.06 0.08 0.10 0.12 0.14 0.16 0.18 0.20 0.22 EC E ECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.06 0.08 0.10 0.12 0.14 0.16 0.18 0.20 0.22 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-10, Gaussian noise perturbation\n0 50 100 150 200 250 Private Data Sources\n0.10\n0.20\n0.30\n0.40\n0.50 0.60 EC\nE ECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.10\n0.20\n0.30\n0.40\n0.50\n0.60\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.10\n0.20\n0.30\n0.40\n0.50\n0.60\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-10, glass blur perturbation\n0 50 100 150 200 250 Private Data Sources\n0.10\n0.15\n0.20\n0.25\n0.30\n0.35\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\n0.35\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\n0.35\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-10, impulse noise perturbation\n0 50 100 150 200 250 Private Data Sources\n0.15 0.20 0.25 0.30 0.35 0.40 0.45 0.50 EC E ECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.10\n0.20\n0.30\n0.40\n0.50\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.10\n0.20\n0.30\n0.40\n0.50\nEC E\nECE vs. 
ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-10, jpeg compression perturbation\n0 50 100 150 200 250 Private Data Sources\n0.06 0.08 0.10 0.12 0.14 0.16 0.18 0.20 EC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.04 0.06 0.08 0.10 0.12 0.14 0.16 0.18 0.20 EC E ECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.04 0.06 0.08 0.10 0.12 0.14 0.16 0.18 0.20 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-10, motion blur perturbation\n0 50 100 150 200 250 Private Data Sources\n0.06 0.08 0.10 0.12 0.14 0.16 0.18 0.20 0.22 EC E ECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.05 0.08 0.10 0.12 0.15 0.18 0.20 EC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.05 0.08 0.10 0.12 0.15 0.18 0.20 EC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-10, pixelate perturbation\n0 50 100 150 200 250 Private Data Sources\n0.10\n0.15\n0.20\n0.25\n0.30\n0.35\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\n0.35\n0.40\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.05\n0.10\n0.15\n0.20\n0.25\n0.30\n0.35\n0.40\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-10, shot noise perturbation\n0 50 100 150 200 250 Private Data Sources\n0.10\n0.20\n0.30\n0.40\n0.50\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.10\n0.20\n0.30\n0.40\n0.50\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.10\n0.20\n0.30\n0.40\n0.50\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-10, snow perturbation\n0 50 100 150 200 250 Private Data Sources\n0.06\n0.08\n0.10\n0.12\n0.14\n0.16\nEC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.04\n0.06\n0.08\n0.10\n0.12\n0.14\n0.16\nEC E\nECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.04\n0.06\n0.08\n0.10\n0.12\n0.14\nEC E\nECE vs. ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nCIFAR-10, zoom blur perturbation\n0 50 100 150 200 250 Private Data Sources\n0.06 0.08 0.10 0.12 0.14 0.16 0.18 0.20 EC E\nECE vs. Number of Data Sources no_recal one_source HB ACC-T ECE-T NLL-T\n(a) Samples = 10, = 1.0\n10 20 30 40 50 Samples per Data Source\n0.04 0.06 0.08 0.10 0.12 0.14 0.16 0.18 0.20 EC E ECE vs. Number of Samples no_recal one_source HB ACC-T ECE-T NLL-T\n(b) Sources = 50, = 1.0\n0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 ε\n0.04 0.06 0.08 0.10 0.12 0.14 0.16 0.18 0.20 EC E\nECE vs. 
ε no_recal one_source HB ACC-T ECE-T NLL-T\n(c) Samples = 30, Sources = 50\nWe note that using different clipping thresholds for NLL-T (where the clipped NLL loss is min(clipping threshold, NLL)) can affect its performance slightly. In practice, selecting the optimal clipping threshold would violate differential privacy, because doing so would require access to the labeled test data. However, even under the most favorable threshold, Acc-T significantly outperforms NLL-T. In Fig. 52, we show an example of NLL-T performance at different clipping thresholds for CIFAR-10 under the “snow” perturbation with a perturbation severity of 1. In this case, using the optimal clipping threshold would improve performance by 0.7% over using a clipping threshold of 10, and this improvement comes at a cost of privacy violations.\nFinally, Table 7 shows the overall median and mean ECE achieved by each recalibration method on CIFAR-100 with a perturbation severity of 1 (the lowest perturbation level). These averages are computed over all perturbations, numbers of private data sources, numbers of samples per source, and settings from the suite of experiments. Comparing these results to those shown in Table 1, which used a perturbation severity of 5 (the highest level), we see that the overall calibration improves for all methods when the degree of domain shift is lower, but our proposed algorithm, Acc-T, still outperforms other methods." }, { "heading": "Expected Calibration Error (median / mean)", "text": "" } ]
2020
null
SP:5e5fb9699cfc3ee5368b83d473e2e6289e372714
[ "The work introduces the Transformer-QL, a transformer-based model that aims to capture long distance dependencies in the input. The network processes the information defining multiple temporal scales, with finer scales for nearby elements, and coarser scales for distant information. It also includes the recurrent memory extension from Transformer-XL from Dai et al. The model is tested in a long range language modeling task." ]
Transformer networks have shown outstanding performance on many natural language processing tasks. However, the context length (the number of previous tokens on which the output states depend) of a Transformer network grows at best linearly with the memory and computational power used. This limitation prevents a transformer network from having a very long context in a resource-limited application. In this work, we propose a class of transformer networks, namely Transformer-QL (Quadratically Large), in which the context length can grow at best quadratically with the memory and computational power used. We have empirically evaluated a Transformer-QL model on three long-range language modeling datasets. The results show that Transformer-QL can provide significant improvements over other state-of-the-art networks.
[]
[ { "authors": [ "Joshua Ainslie", "Santiago Ontañón", "Chris Alberti", "Philip Pham", "Anirudh Ravula", "Sumit Sanghai" ], "title": "ETC: encoding long and structured data in transformers", "venue": null, "year": 2004 }, { "authors": [ "Iz Beltagy", "Matthew E. Peters", "Arman Cohan" ], "title": "Longformer: The long-document transformer", "venue": "CoRR, abs/2004.05150,", "year": 2020 }, { "authors": [ "Rewon Child", "Scott Gray", "Alec Radford", "Ilya Sutskever" ], "title": "Generating long sequences with sparse transformers", "venue": null, "year": 1904 }, { "authors": [ "Krzysztof Choromanski", "Valerii Likhosherstov", "David Dohan", "Xingyou Song", "Jared Davis", "Tamás Sarlós", "David Belanger", "Lucy Colwell", "Adrian Weller" ], "title": "Masked language modeling for proteins via linearly scalable long-context transformers", "venue": null, "year": 2006 }, { "authors": [ "Zihang Dai", "Zhilin Yang", "Yiming Yang", "Jaime G. Carbonell", "Quoc Viet Le", "Ruslan Salakhutdinov" ], "title": "Transformer-XL: Attentive language models beyond a fixed-length context", "venue": "In Proceedings of the 57th Conference of the Association for Computational Linguistics,", "year": 2019 }, { "authors": [ "Zihang Dai", "Guokun Lai", "Yiming Yang", "Quoc V. Le" ], "title": "Funnel-Transformer: Filtering out sequential redundancy for efficient language processing", "venue": null, "year": 2006 }, { "authors": [ "David Donahue", "Vladislav Lialin", "Anna Rumshisky" ], "title": "Injecting hierarchy with U-Net transformers", "venue": "CoRR, abs/1910.10488,", "year": 2019 }, { "authors": [ "Ankit Gupta", "Jonathan Berant" ], "title": "GMAT: global memory augmentation for transformers", "venue": "CoRR, abs/2006.03274,", "year": 2020 }, { "authors": [ "Joel Hestness", "Sharan Narang", "Newsha Ardalani", "Gregory F. Diamos", "Heewoo Jun", "Hassan Kianinejad", "Md. Mostofa Ali Patwary", "Yang Yang", "Yanqi Zhou" ], "title": "Deep learning scaling is predictable", "venue": "empirically. CoRR,", "year": 2017 }, { "authors": [ "Angelos Katharopoulos", "Apoorv Vyas", "Nikolaos Pappas", "François Fleuret" ], "title": "Transformers are RNNs: Fast autoregressive transformers with linear attention", "venue": null, "year": 2006 }, { "authors": [ "Nikita Kitaev", "Lukasz Kaiser", "Anselm Levskaya" ], "title": "Reformer: The efficient transformer", "venue": "In 8th International Conference on Learning Representations, ICLR 2020,April", "year": 2020 }, { "authors": [ "Shiyang Li", "Xiaoyong Jin", "Yao Xuan", "Xiyou Zhou", "Wenhu Chen", "Yu-Xiang Wang", "Xifeng Yan" ], "title": "Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer sentinel mixture models", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Huyen Nguyen" ], "title": "SimpleBooks: Long-term dependency book dataset with simplified English vocabulary for word-level language modeling", "venue": null, "year": 1911 }, { "authors": [ "Raghavendra Pappagari", "Piotr Zelasko", "Jesús Villalba", "Yishay Carmiel", "Najim Dehak" ], "title": "Hierarchical transformers for long document classification", "venue": "In IEEE Automatic Speech Recognition and Understanding Workshop, ASRU 2019,", "year": 2019 }, { "authors": [ "Jack W. Rae", "Anna Potapenko", "Siddhant M. 
Jayakumar", "Chloe Hillier", "Timothy P. Lillicrap" ], "title": "Compressive transformers for long-range sequence modelling", "venue": "In 8th International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Sandeep Subramanian", "Ronan Collobert", "Marc’Aurelio Ranzato", "Y-Lan Boureau" ], "title": "Multi-scale transformer language models", "venue": null, "year": 2005 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Shuohang Wang", "Luowei Zhou", "Zhe Gan", "Yen-Chun Chen", "Yuwei Fang", "Siqi Sun", "Yu Cheng", "Jingjing Liu" ], "title": "Cluster-former: Clustering-based sparse transformer for long-range dependency encoding", "venue": "CoRR, abs/2009.06097,", "year": 2020 }, { "authors": [ "Sinong Wang", "Belinda Z. Li", "Madian Khabsa", "Han Fang", "Hao Ma" ], "title": "Linformer: Self-attention with linear complexity", "venue": "CoRR, abs/2006.04768,", "year": 2020 }, { "authors": [ "Zihao Ye", "Qipeng Guo", "Quan Gan", "Xipeng Qiu", "Zheng Zhang" ], "title": "BP-Transformer: Modelling long-range context via binary partitioning", "venue": null, "year": 1911 }, { "authors": [ "Manzil Zaheer", "Guru Guruganesh", "Avinava Dubey", "Joshua Ainslie", "Chris Alberti", "Santiago Ontañón", "Philip Pham", "Anirudh Ravula", "Qifan Wang", "Li Yang", "Amr Ahmed" ], "title": "Big bird: Transformers for longer sequences", "venue": null, "year": 2007 }, { "authors": [ "Subramanian" ], "title": "We have used hyperparameter settings same as TransformerQL to train the Multi-scale Transformer. The result is shown in Table 5. From the table, we can see that Multi-scale Transformer has been widely bitten by Transformer-QL even when the Multi-scale Transformer", "venue": null, "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Since its introduction in Vaswani et al. (2017), Transformer networks have overtaken its predecessor Recurrent Neural Networks (RNN) in almost every natural language processing task. However, one limitation of Transformer network is its high requirement of memory and computational power. In a vanilla Transformer network, the memory and computational requirement grows quadratically with the sequence length, and thus with context length.\nIn an effort to overcome the above limitation, Transformer-XL (Dai et al., 2019) and Compressive Transformer (Rae et al., 2020) have been recently proposed. However, in both the network, the context length can grow at best linearly with the memory and computation usage. An alternative strategy have been explored in Li et al. (2019); Ye et al. (2019); Child et al. (2019); Zaheer et al. (2020); Beltagy et al. (2020); Wang et al. (2020b); Kitaev et al. (2020); Katharopoulos et al. (2020); Choromanski et al. (2020); Wang et al. (2020a). All these works have proposed to replace the vanilla self-attention network by a different one with linear or log-linear memory and computation complexity leading to novel transformer architectures with overall linear or log-linear cost. Although, they provide an improvement over the quadratic cost of the vanilla transformer network, the achieved cost is still, at best, linear. Besides, since those techniques are based on either sparsification or compression of the self attention mechanism, they struggle to accumulate long distance information (Gupta & Berant, 2020).\nSeveral works such as Burtsev & Sapunov (2020); Ainslie et al. (2020); Gupta & Berant (2020) have proposed to increase the context length by introducing a global attention which attains to every input token, thus capable of capturing long distance dependency. However, capturing long distance dependency using those approaches involves extreme compression of state space by the global attention mechanism. Moreover, even though, they perform well on several tasks, their performance on language modeling task have not been tested. Another line of work (Zhang et al., 2019; Pappagari et al., 2019) have suggested to use hierarchical arrangement of transformer network to capture document-wide dependency. However, applicability of those networks requires hierarchical structure in the data itself. Moreover, those techniques have been proposed for document compression rather than language modeling.\nIn this paper, we propose a class of transformer architectures, namely Transformer-QL (Quadratically Large), to alleviate the problem of capturing long distance dependency. Similar to multi-scale transformer networks (Donahue et al., 2019; Subramanian et al., 2020; Zhao et al., 2020;\nDai et al., 2020), Transformer-QL captures the contextual information in multiple temporal scales - finer scales to capture recent past information and coarser scales to capture distance past information. Additionally like Transformer-XL, it keeps the hidden states of a past segment in memory and use it to process future segments causing the context length to grow beyond the current segment. Overall, the context length in Transformer-QL can grow up to quadratically with the memory/computational usage. The contributions of the work are as follows:\n• We have proposed a novel class of transformer architectures, namely Transformer-QL, in which, the context length can be made to grow linearly with memory and computation cost. 
Further, by employing a linear cost self-attention layer as in Wang et al. (2020b); Katharopoulos et al. (2020), the context length of Transformer-QL can be made to grow quadratically in both memory and computational cost.\n• We have empirically evaluated a Transformer-QL model on three long-range language modeling datasets. The results show significant improvements in perplexity score over Transformer-XL and Compressive Transformer.\nThe organization of the paper is as follows. In Section 2, the proposed Transformer-QL architecture, along with its background, is introduced. Section 3 provides an empirical evaluation of Transformer-QL. The section also studies the sensitivity of Transformer-QL to several hyperparameters. Finally, in Section 4, the conclusion is drawn and future directions of the work are suggested." }, { "heading": "2 METHOD", "text": "" }, { "heading": "2.1 TERMINOLOGY AND NOTATIONS", "text": "In a transformer network, the input sequence is partitioned into smaller segments of fixed length. Each segment is processed independently of the other segments. We refer to the number of tokens in each segment as the segment length. In a transformer network with recurrent memory like Transformer-XL, the hidden states of the recent past segments are preserved in a fixed-length memory. We refer to the number of tokens in each layer of the memory unit as the memory length. For an output state (i.e. an output state of the last layer), we use the term context length to refer to the number of past tokens on which the output state depends. In a transformer network, different output states might have different context lengths. We refer to the minimum and maximum of the context lengths of all the output states in a network as the minimum context length and the maximum context length of the network, respectively. We refer to the sum of the segment length and the memory length as the window length.\nWe denote the segment length, memory length, window length and model dimension by $n_s$, $n_m$, $n_w$ and $d_m$ respectively. Thus, we have $n_w = n_s + n_m$. We also use the notations $s^l_t$ and $m^l_t$ to denote the output and memory of the $l$-th layer at time step $t$ for $l = 1, 2, \cdots, L$, where $L$ is the total number of layers. The output and memory of the embedding layer at time step $t$ are denoted by $s^0_t$ and $m^0_t$ respectively. The number of heads in the self-attention layers is denoted by $H$." }, { "heading": "2.2 BACKGROUND", "text": "Transformer A transformer network consists of a stack of multiple transformer layers. Each transformer layer contains a multi-head self-attention layer followed by a position-wise feed-forward layer. Though the memory and computational cost of the position-wise feed-forward layer is linear in the length of the input sequence, the multi-head self-attention layer has a quadratic cost. The transformer network tackles the quadratic memory and computational cost by dividing the input sequence into smaller segments and applying the transformer network on each segment independently. However, such a method limits the context lengths to the segment length. Dai et al. (2019) have named this problem the context fragmentation problem.\nTransformer-XL Dai et al. (2019) proposed Transformer-XL to solve the context fragmentation problem. In Transformer-XL, instead of discarding the hidden states after the computation of a segment, they are saved in memory (please refer to Figure 3).
During the computation of the following segments, the self-attention is applied over the hidden states of both the current segment and the memory, which increases the context length without a quadratic increase in the memory and computational cost. In fact, the memory/computational cost of the self-attention layer of Transformer-XL grows quadratically only with the segment length $n_s$ and linearly with the memory length $n_m$. On the other hand, the context lengths increase by a length of $n_m$ per layer. By keeping $n_s$ small and making $n_m$ large enough, the memory and computational cost of Transformer-XL can be made close to linear with respect to the context lengths. Rae et al. (2020) proposed to improve the memory/computational cost of Transformer-XL further by keeping part of the memory states in a compressed form. However, even with this improvement, the memory and computational cost can be at best linear in the context length." }, { "heading": "2.3 THE MODEL", "text": "Overview In this paper, we explore increasing the context length by compressing the hidden states hierarchically. The high-level view of our architecture is shown in Figure 1. As shown in the figure, the model processes the input sequence at several scales of temporal granularity. Each scale consists of several transformer layers with recurrent memory. The output of the last layer of one scale is compressed, causing the temporal granularity as well as the segment length to decrease. As the segment length decreases, we simultaneously increase the memory length to keep the total length (i.e. segment length + memory length) of the layer constant. Then the new segment and memory are fed as the input to the first layer of the next scale. The resulting architecture is similar to the multi-scale transformer architectures (Donahue et al., 2019; Subramanian et al., 2020; Zhao et al., 2020). Additionally, Transformer-QL keeps recurrent memory to store hidden states of previous segments. Therefore, in Transformer-QL, the layers belonging to a finer scale process contextual information in a fine-grained manner, but have a smaller context length. On the other hand, a layer belonging to a coarser scale processes information in a coarse-grained manner, but has a longer context length (please refer to Figure 5 for a detailed illustration of the context lengths of Transformer-QL layers). To get the final output of the network, we causally combine the (possibly over-sampled) outputs of the last layer of each scale and pass those through several transformer layers (following Subramanian et al. (2020); Zhao et al. (2020)) to learn a deep representation of the output.\nThe Compression Function For compression, we use one of average pooling and max pooling with pool size and stride both equal to $c$, where $c$ is the rate by which we compress the states while transitioning from one scale to the next. Let $s^l_t$ and $m^l_t$ be the output and memory states of the $l$-th layer with lengths $n^l_s$ and $n^l_m$ respectively. We apply the compression function on the concatenation of $m^l_t$ and $s^l_t$ to get the output $s^{l+1}_t$ of length $n^{l+1}_s = (n^l_s + n^l_m)/c$ (for simplicity, assume that $n^l_s + n^l_m$ is divisible by $c$). If $n^{l+1}_s > n^l_s$, we take the last $n^l_s$ elements of $s^{l+1}_t$ to form the output of the compression layer.
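To make the compression function concrete, the following is a minimal sketch, not the authors' released code; it assumes PyTorch and batch-first states of shape (batch, length, d_model):

import torch
import torch.nn.functional as F

def compress(mem, seg, c=2, mode='max'):
    # Pool the concatenated [memory; segment] states of layer l with pool
    # size and stride both equal to the compression rate c, producing a
    # candidate segment of length (n_s + n_m) / c for the next scale.
    x = torch.cat([mem, seg], dim=1)            # (batch, n_m + n_s, d_model)
    x = x.transpose(1, 2)                       # pool over the time axis
    pool = F.max_pool1d if mode == 'max' else F.avg_pool1d
    x = pool(x, kernel_size=c, stride=c).transpose(1, 2)
    n_s = seg.size(1)
    # If the pooled sequence is longer than the segment, keep only the
    # last n_s states, as described above.
    return x[:, -n_s:] if x.size(1) > n_s else x

With the max-pooling variant and c = 2, this matches the compression layer used in the experiments of Section 3.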
Finally, we keep a recurrent memory $m^{l+1}_t$ of length $n^l_s + n^l_m - n^{l+1}_s$, so that $n^l_s + n^l_m = n^{l+1}_s + n^{l+1}_m$ holds.\nThe Memory Updates In Transformer-XL with segment length $n_s$ and memory length $n_m$, the segment of length $n_s$ is shifted into the memory. In other words, the memory for the next time step is computed as $m^l_{t+1} = \mathrm{concat}(m^l_t, s^l_t)[-n_m:]$ for every layer $l$. However, in Transformer-QL, the granularity of layers belonging to different scales is different. More precisely, a segment of length $n^0_s$ belonging to scale 1 is compressed into a segment of length $n^0_s/c^{i-1}$ at a layer belonging to scale $i$. Thus, in Transformer-QL, we update the memory of a layer $l$ belonging to scale $i$ as $m^l_{t+1} = \mathrm{concat}(m^l_t, s^l_t[:n^i_h])[-n^i_m:]$, where $n^i_h = n^0_s/c^{i-1}$ and $n^i_m$ are the shift length and the memory length at scale $i$ for $i = 1, 2, \cdots$ respectively. The complete algorithm of Transformer-QL is shown in Figure 2.\nDroppath Regularization Since the output of the last layer of every scale is summed in the accumulation layer, the path through a higher scale forms a deeper network while the path through a lower scale forms a shallower network. Consequently, layers in the higher scales might remain under-fitted due to a lack of gradient flow through the deeper network, while the layers in the lower scales get over-fitted. To alleviate this problem, we have introduced droppath regularization. In the accumulation layer, let each output be computed as $s_o = \frac{1}{l}\sum_{i=1}^{l} s^i$, where $s^i$ represents the (possibly over-sampled) output of scale $i$ and $l$ is the total number of scales. In droppath regularization with droppath probability $p$, we drop the outputs of all the scales below $j$ from the accumulated output with probability $p/(l-1)$ for $j = 2, 3, \cdots, l$. More precisely, we generate a random number $u$ from the uniform probability distribution on $[0, 1]$ and compute the output as $s_o = \frac{1}{l-j+1}\sum_{i=j}^{l} s^i$ if $u \in \left[\frac{(j-2)p}{l-1}, \frac{(j-1)p}{l-1}\right]$. For $u \geq p$, no droppath is applied." }, { "heading": "2.4 THE COMPLEXITY", "text": "The memory/computational complexity of a Transformer-XL network (Dai et al., 2019) with segment length $n_s$, memory length $n_m$ and $L$ layers is $\Theta((\alpha(n_m, n_s) + n_s)L)$, where $\alpha(\cdot, \cdot)$ is the complexity of the self-attention layer. The context length $n_c$ of the network is $\Theta(n_m L)$. Since $\alpha(n_m, n_s) = \Omega(n_m + n_s)$ (Li et al., 2019; Ye et al., 2019; Child et al., 2019; Zaheer et al., 2020; Beltagy et al., 2020; Wang et al., 2020b; Kitaev et al., 2020; Katharopoulos et al., 2020; Choromanski et al., 2020), the memory and computational complexity of Transformer-XL in terms of the context length is $\Omega(n_c)$. Similarly, the memory and computational complexity of Compressive Transformer (Rae et al., 2020) in terms of the context length is $\Omega(n_c/c)$, where $c$ is the compression rate. Therefore, the memory/computational complexity of both the Transformer-XL network and the Compressive Transformer network in terms of the context length is at least linear. Consequently, increasing the context length in both networks requires an at least linear increase in both memory and computational requirements.\nOn the other hand, for a Transformer-QL network with $L$ Transformer-XL layers and $i$ compression layers, the context length $n_c$ becomes $\Theta(c^i(n_s + n_m)) = O(c^{\log_c n_s}(n_s + n_m)) = O(n_s(n_s + n_m))$, where $n_s = n^0_s$ and $n_m = n^0_m$ are the segment and memory lengths in scale 1 of the network. Note that, since at most $i = \log_c n_s$ compression layers can be used in Transformer-QL, we have $c^i = O(c^{\log_c n_s}) = O(n_s)$. If we set $n_m = O(n_s)$, we have $n_c = O(n_s^2)$.
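As a worked instance of this bound, with illustrative numbers not taken from the paper: let $n_s = n_m = 16$ and $c = 2$. Then $n_c = \Theta(c^i(n_s + n_m))$ gives $\Theta(32)$ for $i = 0$, $\Theta(64)$ for $i = 1$, and, at the maximal number of compression layers $i = \log_2 16 = 4$, $\Theta(2^4 \cdot 32) = \Theta(512) = \Theta(n_s^2)$, since $n_s^2 = 256$ and constant factors are absorbed. Meanwhile, every layer still attends over a window of only $n_s + n_m = 32$ states, so the per-layer self-attention cost does not grow with $i$.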
However, the time and memory complexity of a Transformer-QL network is $\Theta(\alpha(n_s, n_m)L + (n_s + n_m)i) = \Theta(\alpha(n_s, n_m)(L + i))$. Since $\alpha(n_s, n_m) = \Omega(n_s + n_m)$ and we set $n_m = O(n_s)$, the memory/computational complexity of Transformer-QL becomes $\Omega(n_s(L + i))$. Therefore, the memory/computational complexity of Transformer-QL in terms of the context length is $\Omega(\sqrt{n_c}(L + i)) = \Omega(\sqrt{n_c}(L + \log_c n_s))$. Thus, the complexity of Transformer-QL can be at best sub-linear. Moreover, if we set the compression rate $c$ to $n_s$, the memory and computational complexity can be at best $\Theta(\sqrt{n_c})$ or, in other words, the context length can be at best quadratic in the memory and computational cost. In Appendix B, we provide an algorithm to compute a tight estimate of the context length of a Transformer-QL network. In the appendix, we have also provided a detailed illustration of the dependency structure of the hidden states of a Transformer-QL network on the past tokens." }, { "heading": "3 EMPIRICAL EVALUATION", "text": "In this section, we empirically evaluate the efficacy of Transformer-QL on the long-range language modeling task. Towards that goal, we compare the results of Transformer-QL with those of Transformer-XL (Dai et al., 2019) and Compressive Transformer (Rae et al., 2020). Then we evaluate the sensitivity of Transformer-QL to several hyper-parameters." }, { "heading": "3.1 COMPARISON WITH STATE OF THE ART METHODS", "text": "State of the Art Methods We compare the Transformer-QL network with the following two networks:\nTransformer-XL (Dai et al., 2019) Transformer-XL is similar to the vanilla Transformer with two modifications. It uses recurrent memory to store and access the hidden states of past time steps. The recurrent memory enables increasing the minimum context length up to $n_m L$, where $n_m$ is the memory length and $L$ is the number of layers. It also uses relative positional embeddings of tokens instead of absolute positional embeddings.\nCompressive Transformer (Rae et al., 2020) Like Transformer-XL, Compressive Transformer also uses recurrent memory. However, Compressive Transformer keeps part of the recurrent memory in a compressed format, and thus has an increased context length over Transformer-XL.\nDatasets We compare Transformer-QL against the above two networks on three long-range language modeling datasets: SimpleBooks-2 (Nguyen, 2019), SimpleBooks-92 (Nguyen, 2019) and WikiText-103 (Merity et al., 2017). SimpleBooks-2 and SimpleBooks-92 are created from the Gutenberg book corpus (www.gutenberg.org), while WikiText-103 is created from Wikipedia articles. All three datasets preserve the paragraph and section structures of their sources, making them suitable for the long-range language modeling task. The statistics of the three datasets are shown in Table 4 of Appendix C.\nExperimental Details For the experiments of Transformer-XL and Compressive Transformer, we have used an 8-layer network. For the experiments of Transformer-QL, we have used a network with 3 layers in scale 1, 3 layers in scale 2 and 2 layers in the output block. Thus, the Transformer-QL has a total of eight layers, as in Transformer-XL and Compressive Transformer. We set the compression rate of Compressive Transformer to 2. In Transformer-QL, we have used a max-pooling layer with pool size 2 and stride 2 as the compression layer. Thus, both Transformer-QL and Compressive Transformer have a compression rate of 2.
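The following is a minimal sketch of the per-scale memory update and the droppath rule of Section 2.3 under this configuration; the function names are our own and the code is illustrative rather than the released implementation (PyTorch, batch-first tensors assumed):

import torch

def update_memory(mem, seg, shift_len, mem_len):
    # Per-scale memory update m_{t+1} = concat(m_t, s_t[:shift_len])[-mem_len:].
    # At scale i only the first shift_len = n_s^0 / c**(i-1) positions of the
    # segment correspond to genuinely new time steps, so only that prefix is
    # shifted into the fixed-length memory.
    new_mem = torch.cat([mem, seg[:, :shift_len]], dim=1)
    return new_mem[:, -mem_len:].detach()  # as in Transformer-XL, no gradient through memory

def droppath_mix(scale_outputs, p, training=True):
    # Accumulate the (possibly over-sampled) last-layer outputs of the scales.
    # With total probability p, pick j in {2, ..., l} (each with probability
    # p / (l - 1)), drop all scales below j, and average the remaining ones.
    l = len(scale_outputs)
    if training and l > 1:
        u = torch.rand(()).item()
        if u < p:
            j = int(u * (l - 1) / p) + 2   # u in [(j-2)p/(l-1), (j-1)p/(l-1))
            kept = scale_outputs[j - 1:]   # scales j, ..., l (1-indexed)
            return sum(kept) / len(kept)
    return sum(scale_outputs) / l

With c = 2 as above, shift_len at scale 2 is half of the scale-1 segment length, so the memory of a coarser scale advances more slowly in token time.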
For the experiments on SimpleBooks-92 and WikiText-103, we have set the model dimension to 1536 and used an initial learning rate of $1 \times 10^{-4}$. On the other hand, for the experiments on SimpleBooks-2, we have set the model dimension to 256 and the learning rate to $2.5 \times 10^{-4}$. All the models have been trained using the Adam optimizer. We set the droppath probability of Transformer-QL to 0.3. The details of other hyperparameters can be found in Appendix E.\nResults The results of the comparison are shown in Table 1. The results are grouped by the window length $n_w$ of the test model, as the lower bound of the memory and computation requirement directly depends on it. In all the datasets and settings, Transformer-XL performs worst among all three. The worst performance of Transformer-XL is not surprising, as it has the smallest average context length (shown in the fifth column) for a given $n_w$. However, Compressive Transformer has a slightly larger average context length than Transformer-QL. Yet, Transformer-QL has performed similarly to or significantly better than Compressive Transformer in all the settings, which indicates that Transformer-QL can exploit the contextual information more effectively than Compressive Transformer." }, { "heading": "3.2 EFFECT OF MODEL DIMENSION", "text": "In this section, we investigate the effect of the model dimension on the performance of Transformer-QL. Towards that goal, we have performed experiments on the WikiText-103 dataset with varying model dimension. For each experiment, we compared the test perplexity of Transformer-QL with that of Transformer-XL. The results are shown in Table 2. The improvement in test perplexity of Transformer-QL over Transformer-XL has been computed by subtracting the test perplexity of Transformer-QL from that of Transformer-XL. The relative improvement is computed by\nRelative improvement = (Improvement × 100) / (Test perplexity of Transformer-XL)\nAs shown in the table, Transformer-QL performs relatively worse for small model dimensions like 512, and the relative improvement increases as the model dimension increases. We speculate that the relatively worse performance of Transformer-QL for smaller model dimensions is caused by the difficulty in compressing hidden states while switching from one scale to the next. To alleviate the problem, Donahue et al. (2019) have proposed to increase the model dimension as the model transits from a lower scale to a higher one. On the other hand, Dai et al. (2020) have suggested a novel query-only pooling to solve the problem. We leave trying those approaches in Transformer-QL as future work." }, { "heading": "3.3 EFFECT OF CONTEXT LENGTH", "text": "In this section, we study the relative improvement in perplexity scores obtained by Transformer-QL over Transformer-XL for varying context length. The results are shown in Table 3. From the table, it can be noticed that the relative improvement obtained by Transformer-QL is larger when the context length of the Transformer-XL network is smaller in the first place. For example, for test $n_s = 08$ and $n_m = 08$, the relative improvement is as high as 8.80%. On the other hand, for test $n_s = 02$ and $n_m = 30$, the relative improvement is only 2.76%. This can be explained by the fact that for segment and memory lengths 02 and 30, the average context length of Transformer-XL is already large enough (241) to provide good results. By extending the average context length from 241 to 332, Transformer-QL provides only a small improvement, following the law of diminishing returns (Hestness et al., 2017)."
}, { "heading": "4 CONCLUSION AND FUTURE WORK", "text": "In the work, we have proposed a class of transformer networks namely Transformer-QL in which the context length can grow quadratically in memory and computational usage. Our empirical evaluation shows that Transformer-QL can perform significantly better than other long range language modeling networks like Transformer-XL and Multi-scale Transformer by exploiting longer context length. Further more, it can perform significantly better than Compressive Transformer by exploiting the contextual information more effectively.\nIn our empirical evaluation, we have evaluated a Transformer-QL network with only one compression layer. In future, we want to evaluate a network with more then one compression layers. Also, we have empirically found that the performance of Transformer-QL network can be worse than that of Transformer-XL when the model dimension is small. As our future work, we want explore different methods for removing this limitation." }, { "heading": "A TRANSFORMER-XL ALGORITHM", "text": "The algorithm for Transformer-XL is shown in Figure 3." }, { "heading": "B CONTEXT LENGTH OF TRANSFORMER-QL", "text": "A tight estimate of the minimum context length of a Transformer-QL network can be computed using algorithm of Figure 4. For simplicity, we have assumed that all the division operations result into integer output. We have also assumed that there is at least one layer in every scale. The maximum context length can be obtained by adding ns to the minimum context length.\nAdditionally, in Figure 5, we have shown the detailed computation of minimum/maximum context length with an example. In the figure, the notation Slt1:t2 used to denote a hidden states of l-th layer and the state depends on the t1-th to t2-th tokens of the input sequence. In the example of the figure, each output state depends on at least 44 previous tokens. In other words, minimum context length of the network is 44. On the other hand, in a Transformer-XL network of same segment length, memory length and number of layers, the minimum context length would have been 4× 4 = 16." }, { "heading": "C STATISTICS OF DATASETS", "text": "The statistics of the datasets are shown in Table 4." }, { "heading": "D COMPARISON WITH MULTI-SCALE TRANSFORMER", "text": "In this section, we empirically compare Transformer-QL with Multi-scale Transformer (Subramanian et al., 2020). Our implementation of Multi-scale Transformer is same as Transformer-QL without any recurrent memory. The resultant Multi-scale Transformer is similar to the button-up model of Subramanian et al. (2020). We have used hyperparameter settings same as TransformerQL to train the Multi-scale Transformer. The result is shown in Table 5. From the table, we can see that Multi-scale Transformer has been widely bitten by Transformer-QL even when the Multi-scale Transformer has been trained and tested with a larger window length." 
}, { "heading": "E HYPERPARAMETER SETTING", "text": "We used the following values for hyperparameter for the experiments on SimpleBooks-2 datasets:\nHyperparameter Transformer-XL Compressive Transformer-QL Multi-scale Transformer Transformer d model 256 256 256 256 d embed 256 256 256 256 div val 1 1 1 1 untie r False False False False proj same dim True True True True n head 4 4 4 4 d head 64 64 64 64 d inner 1024 1024 1024 1024 train batch size 128×8ns 128×8 ns 128×8 ns 128×8 ns train ns 08 04 08 08 train nm 08 06 08 - train ncm - 06 - - pre lnorm True True True True warmup steps 0 0 0 0 train steps 60, 000 60, 000 60, 000 60, 000 learning rate 0.00025 0.00025 0.00025 0.00025 min lr ratio 0.004 0.004 0.004 0.004 clip 0.25 0.25 0.25 0.25 dropout 0.1 0.1 0.1 0.1 dropatt 0.1 0.1 0.1 0.1 droppath - - 0.3 0.3 init std 0.02 0.02 0.02 0.02 proj init std 0.01 0.01 0.01 0.01 recons loss weight - 0.01 - -\nFor the SimpleBooks-92 datasets and model dimension 1536, the following values of hyperparameters are used:\nHyperparameter Transformer-XL Compressive Transformer-QL Multi-scale Transformer Transformer d model 1536 1536 1536 1536 d embed 1536 1536 1536 1536 div val 4 4 4 4 untie r False False False False proj same dim True True True True n head 16 16 16 16 d head 96 96 96 96 d inner 6144 6144 6144 6144 train batch size 512×8ns 512×8 ns 512×8 ns 512×8 ns train ns 08 04 08 08 train nm 08 06 08 08 train ncm - 06 - - pre lnorm True True True True warmup steps 0 0 0 0 train steps 250, 000 250, 000 250, 000 250, 000 learning rate 0.0001 0.0001 0.0001 0.0001 min lr ratio 0.004 0.004 0.004 0.004 clip 0.1 0.1 0.1 0.1 dropout 0.15 0.15 0.15 0.15 dropatt 0.15 0.15 0.15 0.15 droppath - 0.3 0.3 0.3 init std 0.02 0.02 0.02 0.02 proj init std 0.01 0.01 0.01 0.01 recons loss weight - 0.01 - -\nFor the WikiText-103 datasets and model dimension 1536, the following values of hyperparameters are used:\nHyperparameter Transformer-XL Compressive Transformer-QL Multi-scale Transformer Transformer d model 1536 1536 1536 1536 d embed 1536 1536 1536 1536 div val 4 4 4 4 untie r False False False False proj same dim True True True True n head 16 16 16 16 d head 96 96 96 96 d inner 6144 6144 6144 6144 train batch size 512×16ns 512×16 ns 512×16 ns 512×16 ns train ns 16 08 16 16 train nm 16 12 16 - train ncm - 12 - - pre lnorm True True True True warmup steps 0 0 0 0 train steps 350, 000 350, 000 350, 000 350, 000 learning rate 0.0001 0.0001 0.0001 0.0001 min lr ratio 0.004 0.004 0.004 0.004 clip 0.1 0.1 0.1 0.1 dropout 0.15 0.15 0.15 0.15 dropatt 0.15 0.15 0.15 0.15 droppath - - 0.3 0.3 init std 0.02 0.02 0.02 0.02 proj init std 0.01 0.01 0.01 0.01 compression rate - 2 2 2 recons loss weight - 0.01 - -\nFor training models of model dimensions 512 and 1024, we have used initial learning rate of 0.0005 and 0.00025 respectively keeping the rest of the hyper-parameters same." } ]
2020
TRANSFORMER-QL: A STEP TOWARDS MAKING TRANSFORMER NETWORK QUADRATICALLY LARGE
SP:1d1f110bbf38b9ed8faeefa13e59367e2945206c
[ "This paper considers a FPS game that can be decomposed into two sub-tasks, navigation and shooting. A hierarchical meta RL method is introduced and the updating rules for sub-policies and meta parameters are provided. Experiments focus on this specific environment and hence the hierarchical structure is also specified as a meta controller over two sub-policies defined for navigation and shooting explicitly." ]
Deep reinforcement learning algorithms aim to achieve human-level intelligence by solving practical decision-making problems, which are often composed of multiple sub-tasks. Complex and subtle relationships between sub-tasks make it hard for traditional methods to give a promising solution. We implement a first-person shooting environment with random spatial structures to illustrate a typical representative of this kind. A desirable agent should be capable of balancing between different sub-tasks: navigation to find enemies and shooting to kill them. To address the problem brought by the environment, we propose a Meta Soft Hierarchical reinforcement learning framework (MeSH), in which each low-level sub-policy focuses on a specific sub-task and the high-level policy automatically learns to utilize the low-level sub-policies through meta-gradients. The proposed framework is able to disentangle multiple sub-tasks and discover proper low-level policies under different situations. The effectiveness and efficiency of the framework are shown by a series of comparison experiments. Both the environment and the algorithm code will be open-sourced to encourage further research.
[]
[ { "authors": [ "Pierre-Luc Bacon", "Jean Harb", "Doina Precup" ], "title": "The option-critic architecture", "venue": "In Thirty-First AAAI Conference on Artificial Intelligence,", "year": 2017 }, { "authors": [ "Thomas G Dietterich" ], "title": "Hierarchical reinforcement learning with the maxq value function decomposition", "venue": "Journal of artificial intelligence research,", "year": 2000 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Volodymir Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning" ], "title": "Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "arXiv preprint arXiv:1802.01561,", "year": 2018 }, { "authors": [ "Kevin Frans", "Jonathan Ho", "Xi Chen", "Pieter Abbeel", "John Schulman" ], "title": "Meta learning shared hierarchies", "venue": "arXiv preprint arXiv:1710.09767,", "year": 2017 }, { "authors": [ "Christopher Grimm", "Satinder Singh" ], "title": "Learning independently-obtainable reward functions", "venue": "arXiv preprint arXiv:1901.08649,", "year": 2019 }, { "authors": [ "Peter Henderson", "Wei-Di Chang", "Pierre-Luc Bacon", "David Meger", "Joelle Pineau", "Doina Precup" ], "title": "Optiongan: Learning joint reward-policy options using generative adversarial inverse reinforcement learning", "venue": "arXiv preprint arXiv:1709.06683,", "year": 2017 }, { "authors": [ "Maximilian Igl", "Andrew Gambardella", "Jinke He", "Nantas Nardelli", "N Siddharth", "Wendelin Böhmer", "Shimon Whiteson" ], "title": "Multitask soft option learning", "venue": "In Conference on Uncertainty in Artificial Intelligence,", "year": 2020 }, { "authors": [ "Yiding Jiang", "Shixiang Shane Gu", "Kevin P Murphy", "Chelsea Finn" ], "title": "Language as an abstraction for hierarchical deep reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Steven Kapturowski", "Georg Ostrovski", "John Quan", "Remi Munos", "Will Dabney" ], "title": "Recurrent experience replay in distributed reinforcement learning", "venue": "In International conference on learning representations,", "year": 2018 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Zichuan Lin", "Li Zhao", "Derek Yang", "Tao Qin", "Tie-Yan Liu", "Guangwen Yang" ], "title": "Distributional reward decomposition for reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Jacob Menashe", "Peter Stone" ], "title": "Escape room: a configurable testbed for hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1812.09521,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David 
Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Junhyuk Oh", "Satinder Singh", "Honglak Lee", "Pushmeet Kohli" ], "title": "Zero-shot task generalization with multi-task deep reinforcement learning", "venue": "arXiv preprint arXiv:1706.05064,", "year": 2017 }, { "authors": [ "Zhen-Jia Pang", "Ruo-Ze Liu", "Zhou-Yu Meng", "Yi Zhang", "Yang Yu", "Tong Lu" ], "title": "On reinforcement learning for full-length game of starcraft", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Kate Rakelly", "Aurick Zhou", "Chelsea Finn", "Sergey Levine", "Deirdre Quillen" ], "title": "Efficient off-policy meta-reinforcement learning via probabilistic context variables", "venue": "In International conference on machine learning,", "year": 2019 }, { "authors": [ "Andrew M Saxe", "Adam Christopher Earle", "Benjamin S Rosman" ], "title": "Hierarchy through composition with multitask lmdps", "venue": null, "year": 2017 }, { "authors": [ "John Schulman", "Sergey Levine", "Pieter Abbeel", "Michael Jordan", "Philipp Moritz" ], "title": "Trust region policy optimization", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "David Silver", "Aja Huang", "Chris J Maddison", "Arthur Guez", "Laurent Sifre", "George Van Den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot" ], "title": "Mastering the game of go with deep neural networks and tree", "venue": "search. nature,", "year": 2016 }, { "authors": [ "Richard S Sutton", "Doina Precup", "Satinder Singh" ], "title": "Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning", "venue": "Artificial intelligence,", "year": 1999 }, { "authors": [ "Alexander Sasha Vezhnevets", "Simon Osindero", "Tom Schaul", "Nicolas Heess", "Max Jaderberg", "David Silver", "Koray Kavukcuoglu" ], "title": "Feudal networks for hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1703.01161,", "year": 2017 }, { "authors": [ "Tianhe Yu", "Saurabh Kumar", "Abhishek Gupta", "Sergey Levine", "Karol Hausman", "Chelsea Finn" ], "title": "Gradient surgery for multi-task learning", "venue": "arXiv preprint arXiv:2001.06782,", "year": 2020 }, { "authors": [ "Haosheng Zou", "Tongzheng Ren", "Dong Yan", "Hang Su", "Jun Zhu" ], "title": "Reward shaping via metalearning", "venue": "arXiv preprint arXiv:1901.09330,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "With great breakthrough of deep reinforcement learning (DRL) methods (Mnih et al., 2015; Silver et al., 2016; Mnih et al., 2016; Schulman et al., 2015; Lillicrap et al., 2015), it is an urgent need to use DRL methods to solve more complex decision-making problems. The practical problem in real world is often a subtle combination of multiple sub-tasks, which may happen simultaneously and hard to disentangle by time series. For instance, in StarCraft games (Pang et al., 2019), agents need to consider building units and organizing battles, sub-tasks may change rapidly over the whole game process; sweeping robots tradeoff between navigating and collecting garbage; shooting agents should move to appropriate positions and launch attacks, etc. The relationship between sub-tasks is complex and subtle. Sometimes they compete with each other and need to focus on one task to gain key advantages; at other times, they need to cooperate with each other to maintain the possibility of global exploration. It is often time consuming and ineffective to learn simply by collecting experience and rewarding multiple objectives for different sub-tasks.\nA reasonable idea is to utilize deep hierarchical reinforcement learning (DHRL) methods (Vezhnevets et al., 2017; Igl et al., 2020), where the whole system is divided into a high-level agent and several low-level agents. Low-level agents learn sub-policies, which select atomic actions for corresponding sub-tasks. The high-level agent is responsible for a meta task in the abstract logic or coarser time granularity, guiding low-level agents by giving a goal, or directly selecting among subpolicies. However, DHRL methods face some inherent problems: due to the complex interaction between multi-level agents, there is no theoretical guarantee of convergence, and it shows unstable experimental performance. Most DHRL methods require heavy manual design, and end-to-end system lacks reasonable semantic interpretation. In addition, these agents are often constrained by specific tasks and are easy to overfit. Even transferring between similar tasks, they perform poorly and need a lot of additional adjustments.\nWe introduce a first-person shooting (FPS) environment with random spatial structures. The game contains a 3D scene from human perspective. When the player defeats all enemies, the player wins the game. When the player drops to the ground or losses all health points, the player loses the\ngame. It is very risky for the player to drop to the ground, thus environment contains two key tasks: navigation and combat. The terrain and enemies in the game are randomly generated. This ensures: 1) the agent cannot learn useful information by memorizing coordinates; 2) the possibility of overfitting is restrained and the generalization ability of learned policy is enhanced. The state information is expressed in the way of raycast. This representation of environment information requires much less computing resources than the raw image representation. It can be trained and tested even with only CPU machines, which makes us pay more attention to the reinforcement learning algorithm itself rather than the computing ablity related to image processing.\nFor this environment, we propose a Meta Soft Hierarchical reinforcement learning framework (MeSH). 
The high-level policy is a differentiable meta-parameter generator, and the low-level policy contains several sub-policies, which share the same form and are differentiated automatically during training. The high-level policy selects and combines sub-policies through the meta parameters and interacts with the environment. We find that the meta generator can adaptively combine sub-policies as the task progresses, and it has strong semantic interpretability. Compared with a series of baselines, the agent achieves excellent performance in the FPS environment.\nThe main contributions of this work are as follows:\n• clarifying the complex relationships within multi-task composition.\n• a novel meta soft hierarchical reinforcement learning framework, MeSH, which uses a differentiable meta generator to adaptively select sub-policies and shows strong interpretability.\n• a series of comparison experiments to show the effectiveness of the framework.\n• an open-sourced environment and code to encourage further research on multi-task RL.1\nIn this paper, we discuss the related work in Section 2. We introduce the details of the implemented environment in Section 3. We present our proposed framework in Section 4. We provide the details of our experiments in Section 5. Finally, we conclude in Section 6." }, { "heading": "2 RELATED WORK", "text": "In decision-making problems with high-dimensional continuous state spaces, the agent often needs to complete tasks that contain multiple sub-tasks. In the taxi agent problem (Dietterich, 2000), the agent needs to complete sub-tasks such as pickup, navigate, and putdown. Menashe & Stone (2018) proposed the Escape Room Domain, a testbed for HRL in which the agent leaves the room from the starting point and needs to press four buttons of different colors to get out. In these environments, the agent needs to optimize several sub-tasks and minimize the mutual negative influence between them. However, sub-tasks in these environments are timing-dependent, so the methods above are helpless in a multi-task environment where multiple tasks must be fulfilled simultaneously.\nArchitectural solutions use hierarchical structures to decompose tasks into action primitives. Sutton et al. (1999) models temporal abstraction as options on top of extended actions, and Bacon et al. (2017) proposes an actor-critic option method based on it. Henderson et al. (2017) extend the options framework to learn joint reward-policy options. Besides, Jiang et al. (2019) construct a compositional structure with language as abstraction or instruction. Due to the specific structural design of these methods, the high-level agent is unable to execute multiple sub-policies simultaneously in any form.\nRecent HRL works learn intra-goals to instruct sub-policies. Vezhnevets et al. (2017) proposes a manager-worker model, where the manager abstracts goals and instructs the worker; this architecture uses directional goals rather than absolute goals. Oh et al. (2017) learns a meta controller to instruct the execution and updating of sub-tasks. Igl et al. (2020) presents a new soft hierarchy method based on options, which learns with a shared prior and hierarchical posterior policies. Yu et al. (2020) proposes a method that projects conflicting gradients onto the normal plane to avoid some task gradients dominating others. Compared with hard hierarchy methods, these methods use the state's natural features to update the upper-level policy, avoiding the timing constraints of handcrafted sub-tasks. 
Due to the lack of meaningful learning goals for the sub-policies, the low-level policies fail to focus on explainable sub-tasks.\n1https://github.com/MeSH-ICLR/MEtaSoftHierarchy.git\nHRL has recently been combined with meta-learning. Frans et al. (2017) propose Meta-Learning Shared Hierarchies (MLSH), which updates the master policy and sub-policies by meta-learning. Zou et al. (2019) uses meta-learning to learn reward shaping. Rakelly et al. (2019) extends the idea of disentangling task inference and control to help agents leverage knowledge between tasks. Although these methods reduce the limitations on policy structures, it is difficult to learn multiple tasks in parallel due to the fixed number of time steps the master policy assigns to every sub-policy.\nIt is also reasonable to learn sub-tasks based on a decomposition of rewards. Saxe et al. (2017) expresses behaviors as different weighted task blends through communication between layers, so that a parallel distributed representation of tasks can be maintained, but the structure can hardly abstract the relationship between different sub-tasks. Grimm & Singh (2019) and Lin et al. (2019) allocate one sub-reward function to each state, which is a non-overlapping decomposition of the state space. In contrast to these approaches, our method can represent complex and subtle relationships between multiple sub-tasks and perform them simultaneously with explainable sub-goals." }, { "heading": "3 ENVIRONMENT", "text": "To better understand multi-task decision-making problems, we first introduce a first-person shooting environment with random terrain, as shown in Figure 1. The game contains a 3D scene analogous to the human perspective, which makes the behavior of the trained agent similar to human intelligence. The real-time information is shown in Table 1. The condition for winning the game is to defeat all the enemies. When the player drops to the ground or loses all health points, the game is judged as failed. The game is very risky, as it is easy to drop; thus the environment contains two key tasks: navigation and combat. The terrain and enemies in the game are randomly generated. This ensures that: 1) the agent cannot learn useful information by memorizing coordinates; 2) the possibility of over-fitting is restrained and the generalization ability of the learned policy is enhanced. The state information of the agent is expressed by raycasts, as shown in Figure 2. This representation of environment information requires much less computing resources than a raw image representation. It can be trained and tested even with CPU-only machines, which lets us pay more attention to the reinforcement learning algorithm itself rather than the computing ability related to image processing.\nThe generation rules of the random terrain are as follows. The maximum generated height of the random terrain is set to 5. In the initial state, there is only one parcel, which is also the place where the player is born; we add this parcel to the parcel queue. While the parcel queue is not empty, the parcel at the head of the queue is taken out. When the maximum height has not been reached, we expand the parcel in each of the four directions to the next higher level with equal probability. If there is no parcel at the corresponding position of the higher level, a new parcel is generated, a ramp is established between the new parcel and the current one, a random number of enemies are created at random positions on the new parcel, and the new parcel is added to the parcel queue. If a parcel already exists at the corresponding position of the higher level, it is not generated again, and only the ramp between the two parcels is added. Parcels are added repeatedly until the parcel queue is empty, at which point terrain generation is completed. 
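As a concrete reading of the generation rules above, the following is a minimal Python sketch of the queue-based terrain procedure; the data structures, the 0.5 expansion probability, and the enemy-count range are our own illustrative assumptions rather than the authors' implementation.

```python
import random
from collections import deque

MAX_HEIGHT = 5
DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def generate_terrain():
    # The spawn parcel sits at height 0; parcels are keyed by grid position (x, y).
    parcels = {(0, 0): 0}
    ramps, enemies = [], []
    queue = deque([(0, 0)])
    while queue:
        pos = queue.popleft()
        height = parcels[pos]
        if height + 1 >= MAX_HEIGHT:       # maximum generated height reached
            continue
        for dx, dy in DIRECTIONS:
            if random.random() < 0.5:       # expand each direction with equal probability
                continue
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt not in parcels:          # new parcel: spawn enemies, enqueue it
                parcels[nxt] = height + 1
                enemies.append((nxt, random.randint(0, 3)))
                queue.append(nxt)
            ramps.append((pos, nxt))        # a ramp always connects the two parcels
    return parcels, ramps, enemies
```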
The FPS environment is typically a combination of two sub-tasks: navigation and combat. The relationship between these two sub-tasks is subtle and complex. Sometimes they compete with each other, but at other times, they cooperate for a common objective. For instance, in navigation missions, in order to explore more unseen terrain, sometimes we need to fight to clear enemies along the way; but at other times, we have to focus on navigation to pass through narrow terrain. Similarly, in combat missions, sometimes we need to move to get a better shooting position; but at other times, we need to focus on shooting to kill the enemy quickly. This environment is very representative of practical problems, since a large number of them can be divided into several parts that are both contradictory and unified. In addition, each sub-task in this environment is simple and clear, but their combination greatly increases the difficulty of solving the problem. This forces the RL algorithm to focus more on dealing with these complex relationships, rather than on specific techniques for solving a single problem." }, { "heading": "4 POLICY OPTIMIZATION", "text": "" }, { "heading": "4.1 FRAMEWORK", "text": "The proposed MeSH framework includes two policies: a high-level policy and a low-level policy, as shown in Figure 3. The high-level policy is a differentiable meta-parameter generator, and the low-level policy contains N sub-policies, which share the same form and correspond to the N sub-tasks respectively. The high-level policy automatically selects and combines the low-level sub-policies through the meta-parameter generator and interacts with the environment.\nIn the proposed framework, first, a shared encoder layer and an RNN layer are deployed to learn the environmental state representation s_t from the observation history. Based on s_t, a high-level meta-parameter network is established to generate meta-parameters \alpha = (\alpha_1, \alpha_2, \cdots, \alpha_N), and N low-level policy networks are established to generate N different sub-policies (\pi_1, \pi_2, \cdots, \pi_N), respectively. The policy \pi ultimately used to choose actions is the weighted sum of the N policies:\n\pi = \sum_{i=1}^{N} \alpha_i \cdot \pi_i. \quad (1)\nWhen the policy \pi chooses an action and interacts with the environment, the environment moves to the next state and returns the corresponding rewards (R_1, R_2, \cdots, R_N), where R_i is the reward for the corresponding sub-task. The final reward R received by the agent is the weighted sum of the N rewards:\nR = \sum_{i=1}^{N} \alpha_i \cdot R_i. \quad (2)\nThis setting has two advantages: 1) \alpha can automatically select the weight of the corresponding policy according to the state of the environment; 2) it derives the differentiation of the N policies in an implicit way.\nIn the training process, we use IMPALA (Espeholt et al., 2018) as the basic framework for large-scale distributed reinforcement learning. Since the processes of sample collection and parameter updating are decoupled, learning is off-policy, and the V-trace technique is used to reduce this difference. The loss of the framework is\n\mathrm{Loss} = c_1 \cdot \min(\mathrm{clip}(\rho) \cdot A, \rho \cdot A) + c_2 \cdot \mathrm{MSE}(v, v_s) + c_3 \cdot \mathrm{Entropy}, \quad (3)\nwhere A and v_s are the advantages and the target V-values estimated by V-trace.
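To illustrate the soft mixture in (1)-(2), here is a minimal PyTorch sketch of how a meta-parameter head can blend N sub-policy heads into one action distribution and blend the sub-task rewards; the module layout and layer sizes are our own assumptions, not the released MeSH code.

```python
import torch
import torch.nn as nn

class SoftHierarchyHead(nn.Module):
    """Blend N sub-policies with state-dependent meta-parameters alpha (Eq. (1))."""
    def __init__(self, state_dim: int, action_dim: int, n_subpolicies: int):
        super().__init__()
        self.meta = nn.Sequential(nn.Linear(state_dim, n_subpolicies), nn.Softmax(dim=-1))
        self.subpolicies = nn.ModuleList(
            nn.Linear(state_dim, action_dim) for _ in range(n_subpolicies)
        )

    def forward(self, s_t: torch.Tensor):
        alpha = self.meta(s_t)                                     # (B, N)
        probs = torch.stack(
            [torch.softmax(head(s_t), dim=-1) for head in self.subpolicies], dim=1
        )                                                          # (B, N, A)
        pi = (alpha.unsqueeze(-1) * probs).sum(dim=1)              # weighted sum over N
        return pi, alpha

def mixed_reward(alpha: torch.Tensor, sub_rewards: torch.Tensor) -> torch.Tensor:
    # Eq. (2): R = sum_i alpha_i * R_i, with sub_rewards of shape (B, N).
    return (alpha * sub_rewards).sum(dim=-1)
```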
Due to the complexity of composite tasks, it is usually difficult to obtain a positive reward from a naive policy. We utilize the idea of self-imitation learning (SIL) (Oh et al., 2018) to speed up the learning of positive behaviors. Specifically, the original SIL algorithm is not adopted as-is: only those samples whose return exceeds the current value estimate are saved in a special buffer, from which a mini-batch is extracted and learned together with the normal samples in every update step." }, { "heading": "4.2 META-GRADIENT", "text": "We divide the extracted buffer into two parts to calculate the losses to be optimized, denoted \mathcal{L}_{train} and \mathcal{L}_{val}. Both losses are determined not only by the meta-parameters \alpha but also by the parameters of the policy networks \omega. This implies a bilevel optimization problem with \alpha as the upper-level variable and \omega as the lower-level variable (Liu et al., 2018):\n\min_{\alpha} \mathcal{L}_{val}(\omega^*(\alpha), \alpha), \quad \text{s.t.} \quad \omega^*(\alpha) = \arg\min_{\omega} \mathcal{L}_{train}(\omega, \alpha). \quad (4)\nSince evaluating this gradient exactly requires solving the expensive inner optimization, we use the following approximation scheme:\n\nabla_{\alpha}\mathcal{L}_{val}(\omega^*(\alpha), \alpha) \approx \nabla_{\alpha}\mathcal{L}_{val}(\omega - \xi \nabla_{\omega}\mathcal{L}_{train}(\omega, \alpha), \alpha), \quad (5)\nwhere \omega denotes the current policy weights and \xi is the learning rate for one step of inner optimization. The idea is to approximate \omega^*(\alpha) by adapting \omega using only a single training step, without solving the inner optimization completely. Denoting \hat{\omega} = \omega - \xi\nabla_{\omega}\mathcal{L}_{train}(\omega, \alpha), we can further approximate (5) by\n\nabla_{\alpha}\mathcal{L}_{val}(\omega^*(\alpha), \alpha) \approx \nabla_{\alpha}\mathcal{L}_{val}(\hat{\omega}, \alpha) - \xi \nabla^2_{\alpha,\omega}\mathcal{L}_{train}(\omega, \alpha)\,\nabla_{\hat{\omega}}\mathcal{L}_{val}(\hat{\omega}, \alpha) \approx \nabla_{\alpha}\mathcal{L}_{val}(\hat{\omega}, \alpha) - \frac{\xi}{2\epsilon}\big(\nabla_{\alpha}\mathcal{L}_{train}(\omega^+, \alpha) - \nabla_{\alpha}\mathcal{L}_{train}(\omega^-, \alpha)\big), \quad (6)\nwhere \omega^{\pm} = \omega \pm \epsilon\,\nabla_{\hat{\omega}}\mathcal{L}_{val}(\hat{\omega}, \alpha) and \epsilon is a small finite-difference constant." }, { "heading": "4.3 TRAINING ALGORITHM", "text": "Our hierarchical framework is end-to-end, and the influence of the high level on the low level is realized by differentiable meta parameters. Therefore, in forward inference we regard the framework as a whole and no longer emphasize the concept of hierarchy; only in the backward update do we need meta-gradients to update the meta parameters \alpha, and only there do we distinguish the different levels of the framework.\nAlgorithm 1 Meta Soft Hierarchical reinforcement learning framework (MeSH).\n1: Initialize parameters \omega and \alpha.\n2: Initialize replay buffer D_N and SIL buffer D_S.\n3: Initialize t \leftarrow 0.\n4: while True do\n5: // Stage 1: transition generating stage.\n6: Sample A_t \sim \pi(A_t|S_t, \omega, \alpha).\n7: Generate S_{t+1}, R_{1t}, ..., R_{Nt} \sim p(S_{t+1}, R_{1t}, ..., R_{Nt}|S_t, A_t).\n8: Calculate R_t by (2).\n9: Store (S_t, A_t, R_t, S_{t+1}) in D_N.\n10: // Stage 2: parameter updating stage.\n11: Sample mini-batches of transitions from D_N and D_S.\n12: Update \omega by minimizing (3).\n13: Compute and accumulate the meta-gradient of \alpha by (6).\n14: Update the SIL buffer D_S.\n15: if t \equiv 0 (mod c) then\n16: Apply the meta-gradient of \alpha.\n17: end if\n18: t \leftarrow t + 1\n19: end while" },
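Before moving on to the evaluation, the one-step meta-gradient approximation in (5)-(6) can be made concrete with a short PyTorch sketch; it follows the finite-difference scheme of Liu et al. (2018), and all function and variable names here are our own illustrative choices.

```python
import torch

def meta_gradient(loss_train, loss_val, omega, alpha, xi, eps=1e-2):
    """Approximate d L_val / d alpha as in Eqs. (5)-(6).

    loss_train / loss_val: callables that compute a scalar loss functionally
    from the given parameter tensors; omega, alpha: lists of tensors that
    require gradients.
    """
    # One inner step: omega_hat = omega - xi * grad_omega L_train(omega, alpha).
    g_w = torch.autograd.grad(loss_train(omega, alpha), omega)
    omega_hat = [w - xi * g for w, g in zip(omega, g_w)]

    # First term: grad_alpha L_val(omega_hat, alpha), plus grads w.r.t. omega_hat.
    l_val = loss_val(omega_hat, alpha)
    g_alpha = torch.autograd.grad(l_val, alpha, retain_graph=True)
    g_what = torch.autograd.grad(l_val, omega_hat)

    # Finite difference of grad_alpha L_train at omega +/- eps * g_what.
    w_plus = [w + eps * g for w, g in zip(omega, g_what)]
    w_minus = [w - eps * g for w, g in zip(omega, g_what)]
    g_plus = torch.autograd.grad(loss_train(w_plus, alpha), alpha)
    g_minus = torch.autograd.grad(loss_train(w_minus, alpha), alpha)

    return [ga - xi / (2 * eps) * (gp - gm)
            for ga, gp, gm in zip(g_alpha, g_plus, g_minus)]
```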
{ "heading": "5 EVALUATION", "text": "In this section, we conduct a series of experiments in the proposed first-person shooting environment. There are two questions we mainly focus on: 1) how the proposed framework performs compared to representative baselines; and 2) whether the proposed framework can learn different meaningful sub-policies and combine them appropriately." }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "In the training process, we use IMPALA as the basic framework for large-scale distributed reinforcement learning. Four CPU-only machines are used as workers, responsible for interacting with the game environment and collecting transition sequences. A machine with a GPU serves as the learner, which receives the transition sequences transmitted by the workers and updates the parameters. Since the proposed first-person shooter environment is typically a combination of two sub-tasks (navigation and combat), we set the number of sub-policies to N = 2. The reward for navigation, R_1, is set to 0.01 per step on a level and 2.0 per meter on the ramps. The reward for combat, R_2, is set to 10.0 per enemy killed. We thus expect \pi_1 to learn the navigation sub-policy and \pi_2 the combat sub-policy. We set the discount factor \gamma = 0.997. We use the Adam optimizer to minimize the losses, and the initial learning rate is set to 10^{-3} with linear decay. The time interval for the meta-gradient update is set to c = 8. The batch sizes for the normal buffer and the SIL buffer are 512 and 64 respectively, with the sequence length set to 40. All experiments in this work use the same state encoder layer and RNN layer with LSTM units. To ensure the stability of training the LSTM in the dynamic environment, we utilize the previous hidden state as the initial state, as introduced in R2D2 (Kapturowski et al., 2018). In addition, the SIL buffer is used in all experiments to accelerate the learning of behaviors with sparse but large rewards." }, { "heading": "5.2 PERFORMANCE", "text": "" }, { "heading": "5.2.1 BASELINES", "text": "To verify the effectiveness of our proposed framework, two types of baselines are chosen for comparison. The first type consists of classical reinforcement learning methods (a '+' indicates that a SIL module is added to the original method), and the second type consists of variants of our framework used for ablation. For a fair comparison, we also tune the parameters of all baselines.\n• IMPALA+: A typical distributed reinforcement learning framework with high throughput, scalability, and data-efficiency.\n• FuNs+: A goal-based hierarchical reinforcement learning framework with abstract goals generated by the high-level agent to guide the behavior of the low-level agent.\n• HardHrl: A variant of our proposed framework with the meta-parameter \alpha constrained to a one-hot encoding, so that only one sub-policy is executed per step." }, { "heading": "5.2.2 PERFORMANCE COMPARISON", "text": "Figure 4 and Figure 5 show the average and maximum return of the collected buffer during training. Only the proposed framework, MeSH, achieves good performance, acquiring high return on both the navigation and combat sub-tasks. The other methods perform poorly. Among them, HardHrl performs slightly better than IMPALA+. From observing the rollouts, HardHrl can execute both the navigation and combat sub-tasks at a lower level, while IMPALA+ can only execute the navigation task with no enemy killed, which shows that a single policy cannot handle multiple sub-tasks at the same time. FuNs+ hardly learns any reasonable behavior, which indicates that abstract goals can hardly deal with the complex relationships between multiple sub-tasks simultaneously." }, { "heading": "5.3 DISCUSSION ON DEALING WITH COMPLEX RELATIONSHIP", "text": "Figure 6 shows a fragment of an episode in the test process. 
We can observe that when the agent executes the navigation sub-task, the value of \alpha_2 is small, which indicates that the current behavior is more influenced by the navigation policy \pi_1; when the agent executes the combat sub-task, the value of \alpha_2 increases rapidly and approaches 1.0, which indicates that the current behavior is almost entirely controlled by the combat policy \pi_2. Therefore, \alpha can combine different sub-policies appropriately to adapt to complex conditions.\nThe JS divergence shows the difference between the two sub-policies. We can observe that when \alpha_2 is significantly small or large (inclined toward one sub-policy), the JS divergence is larger, while when \alpha_2 is close to 0.5 (influenced by the two sub-policies equally), the JS divergence is small. Besides, we performed rollout tests with a single sub-policy: the agent executing only \pi_1 can move flexibly without shooting any enemy, while the agent executing only \pi_2 can kill enemies but falls easily. Therefore, the framework has learned different meaningful sub-policies without artificially specifying the objective of each sub-policy.\nIn addition, we also observed that the agent has learned a variety of combat policies. The agent tends to shoot from long distances when facing a single enemy. When facing multiple enemies at the same time, the agent is more inclined toward close combat: on the one hand, it can avoid being attacked intensively by moving; on the other hand, it can collect health packs while fighting to replenish its own consumption, so as to ensure continuous combat." }, { "heading": "6 CONCLUSION", "text": "To study practical problems composed of multiple tasks, we implement a first-person shooting environment with random terrain, which is a typical representative of such problems. To deal with the complex and subtle relationships between multiple sub-tasks, we propose a Meta Soft Hierarchical reinforcement learning agent, in which the high-level policy learns to combine the low-level sub-policies end-to-end through meta-gradients. Experiments show that the proposed framework outperforms state-of-the-art baselines, learns different meaningful sub-policies, and combines them appropriately. We provide the open-sourced environment and code to encourage further research." } ]
2020
null
SP:1f7be784b1ff0491f0d78b5eefe1a7706036feeb
[ "+ This paper generalizes the supernet search problem on a broader horizon. Specifically, some of the current NAS methods use supernet to co-training different neural architectures for further architecture search. This paper does not just consider supernet as a tool for NAS, but also consider supernet as a graphical model and extend supernet to several general tasks in the form of graph data. (+)" ]
Recently, a special kind of graph, i.e., the supernet, which allows two nodes to be connected by multi-choice edges, has exhibited its power in neural architecture search (NAS) by searching better architectures for computer vision (CV) and natural language processing (NLP) tasks. In this paper, we discover that the design of such discrete architectures also appears in many other important learning tasks, e.g., logical chain inference in knowledge graphs (KGs) and meta-path discovery in heterogeneous information networks (HINs). Thus, we are motivated to generalize the supernet search problem to a broader horizon. However, none of the existing works are effective, since the supernet's topology is highly task-dependent and diverse. To address this issue, we propose to tensorize the supernet, i.e., unify the subgraph search problems by a tensor formulation and encode the topology inside the supernet by a tensor network. We further propose an efficient algorithm that admits both stochastic and deterministic objectives to solve the search problem. Finally, we perform extensive experiments on diverse learning tasks, i.e., architecture design for CV, logic inference for KG, and meta-path discovery for HIN. Empirical results demonstrate that our method leads to better performance and architectures.
[]
[ { "authors": [ "Martín Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard", "Manjunath Kudlur", "Josh Levenberg", "Rajat Monga", "Sherry Moore", "Derek G. Murray", "Benoit Steiner", "Paul Tucker", "Vijay Vasudevan", "Pete Warden", "Martin Wicke", "Yuan Yu", "Xiaoqiang Zheng" ], "title": "TensorFlow: A system for large-scale machine learning", "venue": null, "year": 2016 }, { "authors": [ "Youhei Akimoto", "Shinichi Shirakawa", "Nozomu Yoshinari", "Kento Uchida", "Shota Saito", "Kouhei Nishida" ], "title": "Adaptive stochastic natural gradient method for one-shot neural architecture", "venue": null, "year": 2019 }, { "authors": [ "Cichocki andrzej", "Namgil. Lee", "Ivan. Oseledets", "Anh-Huy Phan", "Qibin Zhao", "Danilo Mandic" ], "title": "Tensor networks for dimensionality reduction and large-scale optimization: Part 1 low-rank tensor decompositions", "venue": "Foundations and Trends in Machine Learning,", "year": 2016 }, { "authors": [ "Cichocki andrzej", "Anh-Huy. Phan", "Qibin. Zhao", "Namgil. Lee", "Ivan. Oseledets", "Masashi. Sugiyama", "Danilo Mandic" ], "title": "Tensor networks for dimensionality reduction and large-scale optimization: Part 2 applications and future perspectives", "venue": "Foundations and Trends in Machine Learning,", "year": 2017 }, { "authors": [ "Bowen Baker", "Otkrist Gupta", "Nikhil Naik", "Ramesh Raskar" ], "title": "Designing neural network architectures using reinforcement learning", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Gabriel Bender", "Pieter-Jan Kindermans", "Barret Zoph", "Vijay Vasudevan", "Quoc Le" ], "title": "Understanding and simplifying one-shot architecture search", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Rasmus Bro" ], "title": "PARAFAC tutorial and applications", "venue": "Chemometrics and Intelligent Laboratory Systems,", "year": 1997 }, { "authors": [ "Perozzi Bryan", "Rami Al-Rfou", "Steven Skiena" ], "title": "Deepwalk: Online learning of social representations", "venue": "In KDD,", "year": 2014 }, { "authors": [ "Xin Chen", "Lingxi Xie", "Jun Wu", "Qi Tian" ], "title": "Progressive differentiable architecture search: Bridging the depth gap between search and evaluation", "venue": null, "year": 2019 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova" ], "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "venue": "In NAACL,", "year": 2018 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "Searching for a robust neural architecture in four gpu hours", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Xuanyi Dong", "Yi Yang" ], "title": "NAS-Bench-201: Extending the scope of reproducible neural architecture search", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Yuxiao Dong", "Nitesh V. 
Chawla", "Ananthram Swami" ], "title": "metapath2vec: Scalable representation learning for heterogeneous networks", "venue": "In KDD,", "year": 2017 }, { "authors": [ "Thomas Elsken", "Jan Hendrik Metzen", "Frank Hutter" ], "title": "Neural architecture search: A survey", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Stefan Falkner", "Aaron Klein", "Frank Hutter" ], "title": "BOHB: Robust and efficient hyperparameter optimization at scale", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Luca Franceschi", "Mathias Niepert", "Massimiliano Pontil", "Xiao He" ], "title": "Learning discrete structures for graph neural networks", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Ian Goodfellow", "Yoshua Bengio", "Aaron Courville" ], "title": "Deep Learning", "venue": null, "year": 2017 }, { "authors": [ "Zichao Guo", "Xiangyu Zhang", "Haoyuan Mu", "Wen Heng", "Zechun Liu", "Yichen Wei", "Jian Sun" ], "title": "Single path one-shot neural architecture search with uniform sampling", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Will Hamilton", "Payal Bajaj", "Marinka Zitnik", "Dan Jurafsky", "Jure Leskovec" ], "title": "Embedding logical queries on knowledge graphs", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Kilian Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In CVPR,", "year": 2017 }, { "authors": [ "Thomas N. Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Tamara G. Kolda", "Brett W. 
Bader" ], "title": "Tensor decompositions and applications", "venue": "SIAM Review,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In NIPS,", "year": 2012 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Chao Li", "Zhun Sun" ], "title": "Evolutionary topology search for tensor network decomposition", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Wei Li", "Shaogang Gong", "Xiatian Zhu" ], "title": "Neural graph embedding for neural architecture search", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "DARTS: Differentiable architecture search", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Yu Liu", "Quanming Yao", "Yong Li" ], "title": "Generalizing tensor decomposition for n-ary relational knowledge bases", "venue": "In WWW,", "year": 2020 }, { "authors": [ "Denis Lukovnikov", "Asja Fischer", "Jens Lehmann", "Sören Auer" ], "title": "Neural network-based question answering over knowledge graphs on word and character level", "venue": "In WWW,", "year": 2017 }, { "authors": [ "Ningning Ma", "Xiangyu Zhang", "Hai-Tao Zheng", "Jian Sun" ], "title": "ShuffleNet V2: Practical guidelines for efficient cnn architecture design", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "In NIPS,", "year": 2013 }, { "authors": [ "Alexander Novikov", "Dmitrii Podoprikhin", "Anton Osokin", "Dmitry P Vetrov" ], "title": "Tensorizing neural networks", "venue": "In NIPS,", "year": 2015 }, { "authors": [ "Ivan V. Oseledets" ], "title": "Tensor-train decomposition", "venue": "SIAM Journal on Scientific Computing,", "year": 2011 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": "In NIPS Workshop,", "year": 2017 }, { "authors": [ "Hieu Pham", "Melody Guan", "Barret Zoph", "Quoc Le", "Jeff Dean" ], "title": "Efficient neural architecture search via parameters sharing", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Meng Qu", "Jian Tang" ], "title": "Probabilistic logic neural networks for reasoning", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Esteban Real", "Alok Aggarwal", "Yanping Huang", "Quoc V Le" ], "title": "Regularized evolution for image classifier architecture search", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Ali Sadeghian", "Mohammadreza Armandpour", "Patrick Ding", "Daisy Zhe Wang" ], "title": "DRUM: End-to-end differentiable rule mining on knowledge graphs", "venue": "In NIPS,", "year": 2019 }, { "authors": [ "Chaun Shi", "Yitong Li", "Jiawei Zhang", "Yizhou Sun", "Philip S. 
Yu" ], "title": "A survey of heterogeneous information network analysis", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2017 }, { "authors": [ "Amit Singhal" ], "title": "Introducing the knowledge graph: things, not strings", "venue": "Official google blog,", "year": 2012 }, { "authors": [ "Steven Skiena" ], "title": "Implementing discrete mathematics: combinatorics and graph theory with mathematica", "venue": "The Mathematical Gazette,", "year": 1992 }, { "authors": [ "Yizhou Sun", "Jiawei Han" ], "title": "Mining heterogeneous information networks: principles and methodologies", "venue": "Synthesis Lectures on Data Mining and Knowledge Discovery,", "year": 2012 }, { "authors": [ "Yizhou Sun", "Jiawei Han", "Xifeng Yan", "Philip S. Yu", "Tianyi Wu" ], "title": "PathSim: Meta path-based top-k similarity search in heterogeneous information networks", "venue": "Proceedings of the VLDB Endowment,", "year": 2011 }, { "authors": [ "Zhiqing Sun", "Zhi-Hong Deng", "Jian-Yun Nie", "Jian Tang" ], "title": "Rotate: Knowledge graph embedding by relational rotation in complex space", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Richard S. Sutton", "Andrew G. Barto" ], "title": "Reinforcement Learning: An Introduction", "venue": null, "year": 2018 }, { "authors": [ "Komal K. Teru", "Etienne Denis", "William L. Hamilton" ], "title": "Inductive relation prediction by subgraph reasoning", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Ledyard Tucker" ], "title": "Some mathematical notes on three-mode factor analysis", "venue": null, "year": 1966 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Liò", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Guojia Wan", "Bo Du", "Shirui Pan", "Gholamreza Haffari" ], "title": "Reinforcement learning based meta-path discovery in large-scale heterogeneous information networks", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Quan Wang", "Zhendong Mao", "Bin Wang", "Li Guo" ], "title": "Knowledge graph embedding: A survey of approaches and applications", "venue": "IEEE Transactions on Knowledge and Data Engineering,", "year": 2017 }, { "authors": [ "Wenqi Wang", "Yifan Sun", "Brian Eriksson", "Wenlin Wang", "Vaneet Aggarwal" ], "title": "Wide compression: Tensor ring nets", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Xiao Wang", "Houye Ji", "Chuan Shi", "Bai Wang", "Yanfang Ye", "Peng Cui", "Philip S Yu" ], "title": "Heterogeneous graph attention", "venue": null, "year": 2019 }, { "authors": [ "Sirui Xie", "Hehui Zheng", "Chunxiao Liu", "Liang Lin" ], "title": "SNAS: stochastic neural architecture search", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Yuhui Xu", "Lingxi Xie", "Xiaopeng Zhang", "Xin Chen", "Guo-Jun Qi", "Qi Tian", "Hongkai Xiong" ], "title": "PC-DARTS: Partial channel connections for memory-efficient architecture search", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Carl Yang", "Mengxiong Liu", "Frank He", "Xikun Zhang", "Jian Peng", "Jiawei Han" ], "title": "Similarity modeling on heterogeneous networks via automatic path discovery", "venue": "ECML-PKDD,", "year": 2018 }, { "authors": [ "Fan Yang", "Zhilin Yang", "William W Cohen" ], "title": "Differentiable learning of logical rules for knowledge base reasoning", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Quanming Yao", "Ju Xu", "Wei-Wei Tu", "Zhanxing Zhu" ], "title": "Efficient neural architecture search via 
proximal iterations", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Jiaxuan You", "Jure Leskovec", "Kaiming He", "Saining Xie" ], "title": "Graph structure of neural networks", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Kaicheng Yu", "Christian Sciuto", "Martin Jaggi", "Claudiu Musat", "Mathieu Salzmann" ], "title": "Evaluating the search phase of neural architecture search", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Seongjun Yun", "Minbyul Jeong", "Raehyun Kim", "Jaewoo Kang", "Hyunwoo J Kim" ], "title": "Graph transformer networks", "venue": "In NIPS,", "year": 2019 }, { "authors": [ "Arber Zela", "Thomas Elsken", "Tonmoy Saikia", "Yassine Marrakchi", "Thomas Brox", "Frank Hutter" ], "title": "Understanding and robustifying differentiable architecture search", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Fuzheng Zhang", "Nicholas Jing Yuan", "Defu Lian", "Xing Xie", "Wei-Ying Ma" ], "title": "Collaborative knowledge base embedding for recommender systems", "venue": null, "year": 2016 }, { "authors": [ "Huan Zhao", "Quanming Yao", "Jianda Li", "Yangqiu Song", "Dik Lun Lee" ], "title": "Meta-graph based recommendation fusion over heterogeneous information", "venue": null, "year": 2017 }, { "authors": [ "Hongpeng Zhou", "Minghao Yang", "Jun Wang", "Wei Pan" ], "title": "BayesNAS: A Bayesian approach for neural architecture search", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Barret Zoph", "Quoc V. Le" ], "title": "Neural architecture search with reinforcement learning", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Barret Zoph", "Vijay Vasudevan", "Jonathon Shlens", "Quoc V. Le" ], "title": "Learning transferable architectures for scalable image recognition", "venue": "In ICCV,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep learning (Goodfellow et al., 2017) has been successfully applied in many applications, such as image classification for computer vision (CV) (LeCun et al., 1998; Krizhevsky et al., 2012; He et al., 2016; Huang et al., 2017) and language modeling for natural language processing (NLP) (Mikolov et al., 2013; Devlin et al., 2018). While the architecture design is of great importance to deep learning, manually designing proper architectures for a certain task is hard and requires lots of human efforts or sometimes even impossible (Zoph & Le, 2017; Baker et al., 2016).\nRecently, neural architecture search (NAS) techniques (Elsken et al., 2019) have been developed to alleviate this issue, which mainly focuses on CV and NLP tasks. Behind existing NAS methods, a multi-graph (Skiena., 1992) structure, i.e., supernet (Zoph et al., 2017; Pham et al., 2018; Liu et al., 2018), where nodes are connected by edges with multiple choices, has played a central role. In such context, the choices on each edge are different operations, and the subgraphs correspond to different neural architectures. The objective here is to find a suitable subgraph in this supernet, i.e. better neural architectures for a given task.\nHowever, the supernet does not only arise in CV/NLP field and we find it also emerge in many other deep learning areas (see Table 1). An example is logical chain inference on knowledge graphs (Yang et al., 2017; Sadeghian et al., 2019; Qu & Tang, 2019), where the construction logical rules can be modeled by a supernet. Another example is meta-path discovery in heterogeneous information networks (Yun et al., 2019; Wan et al., 2020), where the discovery of meta-paths can also be modeled by a supernet. Therefore, we propose to broaden the horizon of NAS, i.e., generalize it to many deep learning fields and solve the new NAS problem under a unified framework.\nSince subgraphs are discrete objects (choices on each edge are discrete), it has been a common approach (Liu et al., 2018; Sadeghian et al., 2019; Yun et al., 2019) to transform it into a continuous optimization problem. Previous methods often introduce continuous parameters separately for each edge. However, this formulation cannot generalize to different supernets as the topological structures of supernets are highly task-dependent and diverse. Therefore, it will fail to capture the supernet’s topology and hence be ineffective.\nIn this paper, we propose a novel method TRACE to introduce a continuous parameter for each subgraph (all these parameters will form a tensor). Then, we propose to construct a tensor network (TN) (andrzej. et al., 2016; 2017) based on the topological structure of supernet. For different tensor networks, we introduce an efficient algorithm for optimization on supernets. Extensive experiments are conducted on diverse deep learning tasks. Empirical results demonstrate that TRACE performs better than the state-of-the-art methods in each domain. As a summary, our contributions are as follows:\n• We broaden the horizon of existing supernet-based NAS methods. 
Specifically, we generalize the concept of subgraph search in a supernet from NAS to other deep learning tasks that have graph-like structures, and propose to solve them in a unified framework by tensorizing the supernet.\n• While existing supernet-based NAS methods ignore the topological structure of the supernet, we encode the supernet in a topology-aware manner based on the tensor network and propose an efficient algorithm to solve the search problem.\n• We conduct extensive experiments on various learning tasks, i.e., architecture design for CV, logical inference for KG, and meta-path discovery for HIN. Empirical results demonstrate that our method can find better architectures, which lead to state-of-the-art performance on various applications." }, { "heading": "2 RELATED WORKS", "text": "" }, { "heading": "2.1 SUPERNET IN NEURAL ARCHITECTURE SEARCH (NAS)", "text": "Numerous algorithms have been proposed to solve the NAS problem. The first NAS work, NASRL (Zoph & Le, 2017), models NAS as a multi-step decision-making problem and proposes to use reinforcement learning (RL) (Sutton & Barto, 2018) to solve it. However, this formulation does not consider the repetitively stacked nature of neural architectures and is very inefficient, as it has to train many different networks to convergence. To alleviate this issue, NASNet (Zoph et al., 2017) first models NAS as an optimization problem on a supernet. The supernet formulation enables searching for transferable architectures across different datasets and improves search efficiency. Later, based on the supernet formulation, ENAS (Pham et al., 2018) proposes a weight-sharing technique, which shares the weights of subgraphs in a supernet. This technique further improves search efficiency, and many different methods have been proposed under this framework (see Table 1), including DARTS (Liu et al., 2018), SNAS (Xie et al., 2018), and NASP (Yao et al., 2020). DARTS is the first to introduce the deterministic formulation to the NAS field, and SNAS uses a parameterized method similar to DARTS under the stochastic formulation. NASP improves upon DARTS by using a proximal operator (Parikh & Boyd, 2014) and activates only one subgraph in each iteration to avoid co-adaptation between subgraphs." }, { "heading": "2.2 TENSOR METHODS IN MACHINE LEARNING", "text": "A tensor (Kolda & Bader, 2009) is a multi-dimensional array that extends vectors and matrices. Tensor methods have found wide applications in machine learning, including network compression (Novikov et al., 2015; Wang et al., 2018) and knowledge graph completion (Liu et al., 2020). Recently, Cichocki et al. (2016; 2017) proposed a unified framework called the tensor network (TN), which uses an undirected graph to represent tensor decomposition methods. By using different graphs, TN covers many tensor decomposition methods as special cases, e.g., CP (Bro, 1997), Tucker (Tucker, 1966), tensor train (Oseledets, 2011), and tensor ring decomposition (Zhao et al., 2016). However, it is not an easy task to construct a tensor network for a given problem, as the topological structure of a tensor network is hard to design (Li & Sun, 2020)." }, { "heading": "3 PROPOSED METHOD", "text": "Here, we describe our method for the supernet search problem. Sections 3.1-3.2 introduce how we tensorize the supernet. Section 3.3 proposes an optimization algorithm for the search problem that can be utilized for both stochastic and deterministic objectives. 
Finally, Section 3.4 presents how supernets appear beyond existing NAS works and can be generalized to these new tasks.\nNotations. In this paper, we use \mathcal{S} to denote a supernet and \mathcal{P} to denote a subgraph in the supernet. For a supernet \mathcal{S} with T edges, we let all edges be indexed by e_1, \ldots, e_T, and define C_t to be the number of choices on edge e_t with t \in \{1, \ldots, T\}. With the subscripts i_1, \ldots, i_T denoted by i_- for short, we use \mathcal{S}_{i_-} to denote the subgraph with choices i_1 \in \{1, \ldots, C_1\}, \ldots, i_T \in \{1, \ldots, C_T\}. We use \mathrm{softmax}(o_i) = \exp(o_i) / \sum_{j=1}^{n} \exp(o_j) to denote the softmax operation over a vector o \in \mathbb{R}^n." }, { "heading": "3.1 A TENSOR FORMULATION FOR SUPERNET", "text": "While existing works (Liu et al., 2018; Zoph et al., 2017; Pham et al., 2018) introduce parameters separately for each edge e_t, since edges may correlate with each other, a more general and natural approach is to introduce a continuous parameter directly for each subgraph \mathcal{P} \in \mathcal{S}. Noting that a subgraph \mathcal{P} can be distinguished by its choices on each edge, we propose to encode all possible choices into a tensor \mathcal{T} \in \mathbb{R}^{C_1 \times \cdots \times C_T} and take these choices as indices, i.e., i_-, into the tensor \mathcal{T}. As a consequence, the architecture of the subgraph \mathcal{P} is indexed as \mathcal{S}_{i_-}, and \mathcal{T}_{i_-} \in [0, 1] represents how \"good\" \mathcal{P} can be.\nLet f(w, \mathcal{P}) stand for a learning model with parameters w and subgraph \mathcal{P} \in \mathcal{T}, let \mathcal{L} be the loss of f on the training dataset \mathcal{D}_{tr}, and let \mathcal{J} measure the performance of f on the validation set \mathcal{D}_{val}. Under the above tensor formulation, the search objective is:\n\max_{\mathcal{P} \in \mathcal{T}} \mathcal{J}(f(w^*(\mathcal{P}), \mathcal{P}); \mathcal{D}_{val}), \quad \text{s.t.} \quad w^*(\mathcal{P}) = \arg\min_{w} \mathcal{L}(f(w, \mathcal{P}); \mathcal{D}_{tr}) \;\text{and}\; \sum_{i_-} \mathcal{T}_{i_-} = 1. \quad (1)\nAs in existing supernet search works, the subgraph \mathcal{P} is searched in the upper level, while the network weights w are trained in the lower level; \mathcal{D}_{tr} is the training set and \mathcal{D}_{val} is the validation set. However, we have the extra constraint \sum_{i_-} \mathcal{T}_{i_-} = 1 here, which ensures that the probabilities of all subgraphs sum to one.\nNext, we show how \mathcal{P} and \mathcal{T} can be parameterized with topological information from the supernet in Section 3.2. Then, a gradient-based algorithm that can effectively handle the constrained bi-level optimization problem (1) is proposed in Section 3.3." }, { "heading": "3.2 ENCODING SUPERNET TOPOLOGY BY TENSOR NETWORK (TN)", "text": "Existing methods consider each edge separately and can be seen as a rank-1 factorization of the full tensor \mathcal{T} (see Table 1) under (1), i.e., \mathcal{T}_{i_-} = \theta^1_{i_1} \cdots \theta^T_{i_T}, where \theta^t_{i_t} \in \mathbb{R} is the continuous parameter for choice i_t on edge e_t. However, this formulation ignores the topological structure of different supernets, as it uses the same decomposition method for all supernets. Motivated by this limitation, we propose to introduce a tensor network (TN) to better encode the topological structure of the supernet. Our encoding process is described in Algorithm 1, where \mathcal{N}(\mathcal{S}) denotes the set of nodes in the supernet and \mathcal{N}'(\mathcal{S}) \subseteq \mathcal{N}(\mathcal{S}) denotes the set of nodes that are connected to more than one edge. Specifically, we introduce a third-order tensor \alpha^t for each edge; this is based on previous methods (e.g., DARTS and SNAS), but uses tensors instead of vectors to allow more flexibility. We also introduce hyper-parameters R_{N_1(t)} and R_{N_2(t)}, which correspond to the ranks of the tensor network. 
Then, we use index summation to reflect the topological structure (common nodes) between different edges, i.e.,\n\mathcal{T}_{i_-} = \sum^{R_n}_{r_n, n \in \mathcal{N}'(\mathcal{S})} \prod^{T}_{t=1} \alpha^{t}_{r_{N_1(t)}, i_t, r_{N_2(t)}}. \quad (2)\nWe also give two examples in Figure 2 to illustrate our tensorizing process for two specific supernets.¹\n¹ Figure 2(b) and (d) also follow the diagram notation of tensor networks; the reader may refer to (Cichocki et al., 2017) for more details.\n² All proofs are in Appendix D.\nAlgorithm 1 Supernet encoding process (a step-by-step and graphical illustration is in Appendix G).\nInput: supernet \mathcal{S};\n1: Introduce \alpha^t \in \mathbb{R}^{R_{N_1(t)} \times C_t \times R_{N_2(t)}} for each edge e_t, which connects nodes N_1(t) and N_2(t);\n2: Remove isolated nodes and obtain \mathcal{N}'(\mathcal{S});\n3: Compute \mathcal{T}_{i_-} by (2);\n4: return the encoded supernet \mathcal{T}_{i_-};\nThe reason for using \mathcal{N}'(\mathcal{S}) instead of \mathcal{N}(\mathcal{S}) is Proposition 1,² which shows that using \mathcal{N}'(\mathcal{S}) does not restrict the expressive power but allows the use of fewer parameters.\nProposition 1. Any tensor that can be expressed by \mathcal{T}_{i_-} = \sum^{R_n}_{r_n, n \in \mathcal{N}(\mathcal{S})} \prod^{T}_{t=1} \alpha^{t}_{r_{N_1(t)}, i_t, r_{N_2(t)}} can also be expressed by \mathcal{T}_{i_-} = \sum^{R_n}_{r_n, n \in \mathcal{N}'(\mathcal{S})} \prod^{T}_{t=1} \alpha^{t}_{r_{N_1(t)}, i_t, r_{N_2(t)}}." },
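To contrast the rank-1 parameterization with the tensor-network encoding of (2), the following minimal PyTorch sketch builds both for a chain-structured supernet such as the one in Figure 2(a); the edge count, choice count, and ranks are our own illustrative assumptions.

```python
import torch

T_edges, C, R = 3, 4, 2  # a chain supernet: 3 edges, 4 choices per edge, TN rank 2

# Rank-1 baseline (one vector per edge): T[i1,i2,i3] = theta1[i1]*theta2[i2]*theta3[i3].
thetas = [torch.rand(C) for _ in range(T_edges)]
rank1 = torch.einsum('i,j,k->ijk', *thetas)

# Tensor-network encoding of Eq. (2): cores share rank-R indices over internal nodes;
# the two boundary nodes touch only one edge each, so they carry no shared index.
cores = [torch.rand(C, R), torch.rand(R, C, R), torch.rand(R, C)]
tn = torch.einsum('ia,ajb,bk->ijk', *cores)

print(rank1.shape, tn.shape)  # torch.Size([4, 4, 4]) torch.Size([4, 4, 4])
```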
" }, { "heading": "3.4 SUBGRAPH SEARCH BEYOND EXISTING NAS", "text": "Beyond NAS, many important problems in machine learning have a graph-like structure. Examples include meta-path discovery (Yang et al., 2018; Yun et al., 2019; Wan et al., 2020), logical chain inference (Yang et al., 2017; Sadeghian et al., 2019), and structure learning between data points (Franceschi et al., 2019). Inspired by recent work that exploits graph-like structures in NAS (Li et al., 2020; You et al., 2020), we propose to model them as subgraph search problems on supernets as well.\nMeta-path discovery. Heterogeneous information networks (HINs) (Sun & Han, 2012; Shi et al., 2017) are networks whose nodes and edges have multiple types. HINs have been widely used in many real-world network mining scenarios, e.g., node classification (Wang et al., 2019) and recommendation (Zhao et al., 2017). For a heterogeneous network, a meta-path (Sun et al., 2011) is a path defined on it with multiple edge types. Intuitively, different meta-paths capture different semantic information from a heterogeneous network, and it is important to find suitable meta-paths for different applications on HINs. However, designing a meta-path for a HIN is not a trivial task and requires much human effort and domain knowledge (Zhao et al., 2017; Yang et al., 2018). Thus, we propose to discover informative meta-paths automatically instead of designing them manually.\nTo solve the meta-path discovery problem under the supernet framework, we first construct a supernet S (see Figure 2(a)), and a subgraph P ∈ S is then a meta-path on the HIN. While GTN (Yun et al., 2019) introduces weights separately for each edge, our model f(w, P) uses a tensor T to model P as a whole. The performance metrics L(·) and M(·) depend on the downstream task; in our node classification experiments, we use the cross-entropy loss for L(·) and the macro F1 score for M(·).\nLogical chain inference. A knowledge graph (KG) (Singhal, 2012; Wang et al., 2017) is a multi-relational graph composed of entities (nodes) and relations (different types of edges). KGs have found wide application in many different areas, including question answering (Lukovnikov et al., 2017) and recommendation (Zhang et al., 2016). An important way to understand the semantics of a KG is logical chain inference, which aims to find the underlying logic rules in the KG. Specifically, a logical chain is a path on the knowledge graph of the form $x \xrightarrow{B_1} z_1 \xrightarrow{B_2} \cdots \xrightarrow{B_T} y$, where $x, y, z_1, \ldots$ are entities and $B_1, B_2, \ldots, B_T$ are relations in the knowledge graph. Logical chain inference then approximates a target relation in the KG by a logical chain. Obviously, the choice of logical chain has a critical influence, as incorrect logic rules lead to wrong facts.\nHowever, directly solving the inference problem has exponential complexity, since we would have to enumerate all relations (Hamilton et al., 2018). Thus, we propose to model it as a supernet search problem to reduce the complexity.\nSince a logical chain has a chain structure, we construct a supernet as in Figure 2(a). Denote the target relation by $B_r$, the adjacency matrix of relation $B_r$ by $A_{B_r}$, and the one-hot vector corresponding to entity $x$ by $v_x$. Our learning model f(w, P) now has no model parameter w, and the original bi-level problem reduces to a single-level one with the performance measure $\mathcal{M}(f(w, \mathcal{P}), \mathcal{D}) = \sum_{B_r(x,y)=1 \text{ in } \mathcal{D}} v_x^\top \big(\prod_{i=1}^{T} A_{B_i}\big) v_y$, which counts the pairs $(x, y)$ that have relation $B_r$ and are predicted by the logical chain $x \xrightarrow{B_1} z_1 \xrightarrow{B_2} \cdots \xrightarrow{B_T} y$ in the KG D.
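As an illustration of this performance measure, the sketch below scores a candidate logical chain against a target relation with random adjacency matrices; the number of entities, the relation names, and the sparsity level are hypothetical placeholders.

import torch

E = 50                                                   # number of entities (hypothetical)
A = {r: (torch.rand(E, E) < 0.05).float() for r in ['B1', 'B2', 'Br']}

def chain_score(chain, target, A):
    # M(f(w, P), D): sum over positive pairs (x, y) of v_x^T (prod_i A_{B_i}) v_y,
    # i.e., the number of chain paths that land on pairs related by the target.
    M = A[chain[0]]
    for rel in chain[1:]:
        M = M @ A[rel]          # matrix product composes relations along the chain
    return (A[target] * M).sum()

print(chain_score(['B1', 'B2'], 'Br', A))

Because the score of a chain is a product of adjacency matrices, it plugs directly into the tensorized supernet: each edge's choice indexes a relation, and the contraction of Eq. (2) aggregates over candidate chains.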
}, { "heading": "4.1 BENCHMARK PERFORMANCE COMPARISON", "text": "Here, we compare our proposed method with the state-of-the-art methods on three different applications that can be seen as subgraph search problems, i.e., neural architecture design for image classification, logical chain inference from KG, and meta-path discovery in HIN." }, { "heading": "4.1.1 DESIGNING CONVOLUTIONAL NEURAL NETWORK (CNN) ARCHITECTURES", "text": "We first apply TRACE to the architecture design problem on CNN for image classification, which is currently the most famous application for supernet-based methods. We consider the following two different settings for our NAS experiments: (i). Stand-alone (Zoph & Le, 2017; Zoph et al., 2017): train each architecture to converge to obtain separate w∗(P); (ii). Weight-sharing (Liu et al., 2018; Xie et al., 2018; Yao et al., 2020): share the same parameter w across different architectures during searching. And for both stand-alone and weight-sharing experiments, we repeat our method for five times and report the mean±std of test accuracy of searched architectures. Stand-alone setting. To enable comparison under stand-alone setting, we use the NAS-Bench-201 dataset (Dong & Yang, 2020) where the authors exhaustively trained all subgraphs in a supernet and obtain a complete record of each subgraph’s accuracy on three different datasets: CIFAR-10, CIFAR100 and ImageNet-16-120 (details in Appendix E.1). We use the stochastic formulation and compare our method with (i) Random Search (Yu et al., 2020); (ii) REINFORCE (policy gradient) (Zoph & Le, 2017); (iii) BOHB (Falkner et al., 2018) and (iv) REA (regularized evolution) (Real et al., 2018). Results are in Table 2, and we can see that our method achieves better results than all existing stand-alone NAS methods and even finds the optimal architecture on CIFAR-10 and CIFAR-100 dataset.\nWeight-sharing setting. We use the deterministic formulation, construct supernet follows (Liu et al., 2018), and evaluate all methods on CIFAR-10 dataset (details are in Appendix E.1). These are the\nmost popular setups for weight-sharing NAS. Results are in Table 3 and 4, we can see that TRACE achieves comparable performances with existing weight-sharing NAS methods." }, { "heading": "4.1.2 LOGIC CHAIN INFERENCE FROM KNOWLEDGE GRAPH (KG)", "text": "For logical chain inference, we use the deterministic formulation and compare our method with the following methods: Neural LP (Yang et al., 2017), DRUM (Sadeghian et al., 2019), and GraIL (Teru et al., 2020). Neural LP and DRUM are restricted to logical chain inference, while GraIL considers more complex graph structures. We also compare our method with random generated rules to better demonstrate the effectiveness of the proposed method. We do not compare our method with embedding-based methods, e.g. RotatE (Sun et al., 2019) as those methods all need embeddings for entities and cannot generalize found rules to unseen entities.\nFollowing the setting of DRUM, we conduct experiments on three KG datasets: Family, UMLS and Kinship (details are in Appendix E.2), and report the best mean reciprocal rank (MRR), Hits at 1, 3, 10 across 5 different runs. Results are in Table 5, which demonstrates that our proposed method achieves better results than all existing methods. Besides, case studies in Section 4.2 further demonstrate that TRACE can find more accurate rules than others." 
}, { "heading": "4.1.3 META-PATH DISCOVERY IN HETEROGENEOUS INFORMATION NETWORK (HIN)", "text": "Finally, we apply TRACE to meta-path discovery problem on HINs. Following existing works (Wang et al., 2019; Yun et al., 2019), we use the deterministic formulation and conduct experiments on three benchmark datasets: DBLP, ACM and IMDB (details are in Appendix E.3) and compare our methods with the 1) baselines in GTN (Yun et al., 2019), i.e., DeepWalk (Bryan et al., 2014),\nmetapath2vec (Dong et al., 2017), GCN (Kipf & Welling, 2016), GAT (Veličković et al., 2018), HAN (Wang et al., 2019) and 2) random generated meta-paths. Results on different datasets are in Table 6, which demonstrate that TRACE performs better than other methods on different HINs." }, { "heading": "4.2 CASE STUDY", "text": "To further investigate the performance of TRACE, we list the top rules found by TRACE and other methods in Table 7. This result demonstrates that our method can find more accurate logic rules than other baselines, which contributes to the superior performance of our method. We also give the architectures and meta-paths found by TRACE in Appendix F." }, { "heading": "4.3 ABLATION STUDY", "text": "" }, { "heading": "4.3.1 IMPACT OF ENCODING APPROACH", "text": "We compare TRACE with the following encoding methods on supernet: (i) DARTS, which introduces continuous parameters for each edge separately; (ii) RNN, which uses a RNN to compute the weights for each edge; (iii) CP decomposition, which generalizes DARTS to higher rank; (iv) TRACE(Full), which does not adopt Proposition 1 in Algorithm 1. Results on NAS-Bench-201 using CIFAR-100 are shown in Figure 3(a), and we can see that DARTS performs worse than other methods (CP and TRACE) due to insufficient expressive power. And TRACE achieves better results than CP by being topological aware. It also shows that our simplified encoding scheme does not harm the final performance, as verified in Proposition 1." }, { "heading": "4.3.2 IMPACT OF RANK OF TENSOR NETWORK", "text": "We also investigate the impact of Rn’s, which are ranks in tensor network (TN) on supernets. For simplicity, we restrict Rn to be equal for all nodes n ∈ N (S) and compare the performance of different ranks with previous state-of-the-art REA (see Table 2) in Figure 3(b). Results demonstrate that while the rank can influence the final performance, it is easy to set rank properly for TRACE to beat other methods. We also adopt Rn = 2 for all other experiments." }, { "heading": "4.3.3 OPTIMIZATION ALGORITHMS", "text": "Finally, we compare TRACE with proximal algorithm (Parikh & Boyd, 2014), which is a popular and general algorithm for constrained optimization. Specifically, proximal algorithm is used to solve (1) with the constraint ∑ i− Ti− = 1 without Proposition 2. We solve the proximal step iteratively and numerically since there is no closed-form solutions. The comparison is in Figure 3(c), and we can see that TRACE beats proximal algorithm by a large margin, which demonstrates that the re-parameterized by Proposition 2 is useful for optimization." }, { "heading": "5 CONCLUSION", "text": "In this paper, we generalize supernet from neural architecture search (NAS) to other machine learning tasks. To expressively model the supernet, we introduce a tensor formulation to the supernet and represent it by tensor network (tensorizing supernets). We further propose an efficient gradient-based algorithm to solve the new supernet search problem. 
Empirical results across various applications demonstrate that our approach has superior performance on these machine learning tasks." }, { "heading": "A COMPLETE ALGORITHMS OF OPTIMIZATION ON TENSORIZED SUPERNET", "text": "Here, we give a more detailed description of our algorithm under the deterministic and the stochastic formulation in Algorithms 3 and 4, respectively. In our experiments, we use the stochastic formulation for NAS under the stand-alone setting, and the deterministic formulation for the others.
Algorithm 3 TRACE (deterministic formulation with weight-sharing)
Input: Training set Dtr, validation set Dval
1: Tensorize the supernet T with Algorithm 1;
2: Re-parameterize T to T̄ using (3);
3: while not converged do
4:   Update model parameters w(β) by gradient descent on $\nabla_w \mathcal{L}(f(w, \beta), \mathcal{D}_{tr})$;
5:   Update supernet parameters β by gradient ascent on $\nabla_\beta \mathcal{J}$;
6: end while
7: Obtain $\mathcal{P}^* = \mathcal{S}_{i_-}$ from the final T̄ by setting $i_- = \arg\max_{i_-} \bar{\mathcal{T}}_{i_-}$;
8: Obtain $w^*(\mathcal{P}^*) = \arg\min_w \mathcal{L}(f(w, \mathcal{P}^*), \mathcal{D}_{tr})$ by retraining $f(w, \mathcal{P}^*)$;
9: return w∗, P∗;
Algorithm 4 TRACE (stochastic formulation)
Input: Training set Dtr, validation set Dval
1: Tensorize the supernet T with Algorithm 1;
2: Re-parameterize T to T̄ using (3);
3: while not converged do
4:   Sample a subgraph P from the probability distribution given by T̄;
5:   Solve $w^*(\mathcal{P}) = \arg\min_w \mathcal{L}(f(w, \mathcal{P}), \mathcal{D}_{tr})$;
6:   Update supernet parameters β by gradient ascent on $\nabla_\beta \mathcal{J}$;
7:   Save the best $w^*(\mathcal{P}), \mathcal{P}$ so far;
8: end while
9: return the best $w^*(\mathcal{P}), \mathcal{P}$ from Step 7;" }, { "heading": "B COMPARISON OF PARAMETERS FOR DIFFERENT METHODS", "text": "Here, we compare the number of parameters and the computational cost (FLOPs) of different methods for logic rule inference. We use n for the length of logical chains, d for the dimension of embeddings, r for the rank of the tensor network, and e for the total number of relations. Results are in Table 8. Since n and r are often significantly smaller than e and d (typical values are n = 3, r = 2, d = 128 and e ≈ 20-30), TRACE has a number of parameters and a computational cost comparable to existing methods.
For the comparison on HINs, we use n for the length of meta-paths, d for the dimension of embeddings, and r for the rank of the tensor network. Results are in Table 9. Note that r is often small relative to d (typical values are r = 2 and d = 128), so TRACE has a similar number of parameters and computational cost to existing methods." }, { "heading": "C REVIEW ON EXISTING METHODS THAT CAN BE COMBINED WITH TRACE", "text": "" }, { "heading": "D PROOFS", "text": "D.1 PROPOSITION 1
Proof. Denote the set of nodes in the supernet whose degree is greater than one (i.e., connected by more than one edge) by N′(S). Thus, N(S) \ N′(S) denotes the degree-1 nodes in the supernet. Suppose a node N1(t) is connected only by edge t, while N2(t) is not. [3] Then we can rewrite $\mathcal{T}_{i_-} = \sum\nolimits_{r_n,\, n \in \mathcal{N}(\mathcal{S})}^{R_n} \prod\nolimits_{t=1}^{T} \alpha^t_{r_{N_1(t)}, i_t, r_{N_2(t)}}$ as follows:
$$\begin{aligned} \mathcal{T}_{i_-} &= \sum_{r_n,\, n \in \mathcal{N}(\mathcal{S})} \prod_{t=1}^{T} \alpha^t_{r_{N_1(t)}, i_t, r_{N_2(t)}} \\ &= \sum_{r_n,\, n \in \mathcal{N}'(\mathcal{S})} \; \sum_{r_n,\, n \in \mathcal{N}(\mathcal{S}) \setminus \mathcal{N}'(\mathcal{S})} \prod_{t=1}^{T} \alpha^t_{r_{N_1(t)}, i_t, r_{N_2(t)}} \\ &= \sum_{r_n,\, n \in \mathcal{N}'(\mathcal{S})} \prod_{t=1}^{T} \sum_{r_n,\, n \in \mathcal{N}(\mathcal{S}) \setminus \mathcal{N}'(\mathcal{S})} \alpha^t_{r_{N_1(t)}, i_t, r_{N_2(t)}} \\ &= \sum_{r_n,\, n \in \mathcal{N}'(\mathcal{S})} \cdots \Big( \sum_{r_{N_1(t)}} \alpha^t_{r_{N_1(t)}, i_t, r_{N_2(t)}} \Big) \cdots \\ &= \sum_{r_n,\, n \in \mathcal{N}'(\mathcal{S})} \cdots \tilde{\alpha}^t_{i_t,\, r_{N_2(t)}} \cdots \end{aligned}$$
where a similar process is carried out for all edges. Thus, only nodes whose degree is greater than 1 are actually needed for the index contraction.
For $n \in \mathcal{N}(\mathcal{S}) \setminus \mathcal{N}'(\mathcal{S})$, we can then simply set $R_n = 1$ without loss of expressive power.
[3] If N1(t) is connected only by edge t (its degree is 1), N2(t) cannot also be connected only by edge t, unless the supernet consists of the single edge t connecting the two nodes N1(t) and N2(t), which is a trivial case.
D.2 PROPOSITION 2
Proof. Following (3), we have:
$$\begin{aligned} \sum_{i_-} \bar{\mathcal{T}}_{i_-} &= \frac{1}{\prod_{n \in \mathcal{N}'(\mathcal{S})} R_n} \sum_{i_-} \sum_{r_n,\, n \in \mathcal{N}'(\mathcal{S})}^{R_n} \prod_{t=1}^{T} \frac{\exp\big(\beta^t_{r_{N_1(t)}, i_t, r_{N_2(t)}}\big)}{\sum_j \exp\big(\beta^t_{r_{N_1(t)}, j, r_{N_2(t)}}\big)} \\ &= \frac{1}{\prod_{n \in \mathcal{N}'(\mathcal{S})} R_n} \sum_{r_n,\, n \in \mathcal{N}'(\mathcal{S})}^{R_n} \prod_{t=1}^{T} \bigg( \sum_{i_t} \frac{\exp\big(\beta^t_{r_{N_1(t)}, i_t, r_{N_2(t)}}\big)}{\sum_j \exp\big(\beta^t_{r_{N_1(t)}, j, r_{N_2(t)}}\big)} \bigg) \\ &= \frac{1}{\prod_{n \in \mathcal{N}'(\mathcal{S})} R_n} \sum_{r_n,\, n \in \mathcal{N}'(\mathcal{S})}^{R_n} 1 \\ &= \frac{1}{\prod_{n \in \mathcal{N}'(\mathcal{S})} R_n} \prod_{n \in \mathcal{N}'(\mathcal{S})} R_n = 1. \end{aligned}$$" }, { "heading": "E EXPERIMENT DETAILS", "text": "E.1 NEURAL ARCHITECTURE SEARCH (NAS)
Stand-alone setting. The supernet used in NAS-Bench-201 has 3 nodes, and each pair of nodes is connected by a directed edge, which gives 6 edges in total. For each edge, there are 5 different operations ("choices"): zero, skip connect, 1×1 convolution, 3×3 convolution and 3×3 average pooling. The details of the datasets used in NAS-Bench-201 are in Table 11.
Weight-sharing setting. Our construction of the supernet follows (Liu et al., 2018). The supernet has 7 nodes, where the first two nodes are the outputs of the previous two cells, respectively, and the last node performs depthwise concatenation of the outputs of the remaining four nodes. Thus, the supernet has 8 edges with multiple choices (operations), and for each edge we consider 8 different operations: zero, skip connect, 3×3 and 5×5 separable convolution, 3×3 and 5×5 dilated separable convolution, 3×3 max pooling and 3×3 average pooling. We evaluate all weight-sharing methods on the CIFAR-10 dataset, and the dataset division is the same as in Table 11.
E.2 LOGIC CHAIN INFERENCE
In our experiments, we follow the setting in DRUM (Sadeghian et al., 2019) and set the maximum rule length T to 3 for all datasets. We set the rank L in DRUM to 4 based on the best validation performance. The details of the KG datasets used in the experiments are in Table 12.
E.3 META-PATH DISCOVERY
The details of the HIN datasets used in our experiments are in Table 13." }, { "heading": "F MORE EXPERIMENT RESULTS", "text": "F.1 CORRELATION ANALYSIS
Indeed, correlation is a good criterion for assessing the rationality of one-shot architecture search methods (Bender et al., 2018; Liu et al., 2018; Yu et al., 2020; Guo et al., 2020). However, it is only a sufficient, not a necessary, condition. Specifically, the goal of the tensor T here is to capture good subgraphs in the whole supernet; we therefore expect the probabilities of the Pi to concentrate on a few top subgraphs, as shown in Figure 4.
F.2 CASE STUDIES
F.3 RUNNING PLOTS
F.4 ABLATION STUDIES
G ILLUSTRATION OF TENSORIZATION PROCESS FOR SUPERNETS" } ]
2020
null
SP:f346cf947c90327e475698f0d0018064c2497b64
[ "The key idea of this paper is to apply MixUp-style regularization to self-supervised contrastive learning techniques (SimCLR, MoCo-v2, BYOL). This is combined with another form of MixUp that involves only the images (not the labels), but the precise nature of this component is unclear. For large networks trained on small datasets, the proposed method improves downstream classification performance by reducing overfitting (Table 1, Figure 2). The results are mixed for large-scale datasets (Table 2, Figure 3). The proposed method is also investigated as a potential domain-agnostic augmentation. Though much better than not using any augmentations at all, the results are not typically better than using standard augmentations (Table 3)." ]
Contrastive representation learning has shown to be effective to learn representations from unlabeled data. However, much progress has been made in vision domains relying on data augmentations carefully designed using domain knowledge. In this work, we propose i-Mix, a simple yet effective domain-agnostic regularization strategy for improving contrastive representation learning. We cast contrastive learning as training a non-parametric classifier by assigning a unique virtual class to each data in a batch. Then, data instances are mixed in both the input and virtual label spaces, providing more augmented data during training. In experiments, we demonstrate that i-Mix consistently improves the quality of learned representations across domains, including image, speech, and tabular data. Furthermore, we confirm its regularization effect via extensive ablation studies across model and dataset sizes. The code is available at https://github.com/kibok90/imix.
[ { "affiliations": [], "name": "Kibok Lee" }, { "affiliations": [], "name": "Yian Zhu" }, { "affiliations": [], "name": "Kihyuk Sohn" }, { "affiliations": [], "name": "Chun-Liang Li" }, { "affiliations": [], "name": "Jinwoo Shin" }, { "affiliations": [], "name": "Honglak Lee" } ]
[ { "authors": [ "Dario Amodei", "Sundaram Ananthanarayanan", "Rishita Anubhai", "Jingliang Bai", "Eric Battenberg", "Carl Case", "Jared Casper", "Bryan Catanzaro", "Qiang Cheng", "Guoliang Chen" ], "title": "Deep speech 2: End-to-end speech recognition in english and mandarin", "venue": null, "year": 2016 }, { "authors": [ "Sanjeev Arora", "Hrishikesh Khandeparkar", "Mikhail Khodak", "Orestis Plevrakis", "Nikunj" ], "title": "Saunshi. A theoretical analysis of contrastive unsupervised representation learning", "venue": null, "year": 2019 }, { "authors": [ "Yuki Markus Asano", "Christian Rupprecht", "Andrea Vedaldi" ], "title": "Self-labelling via simultaneous clustering and representation learning", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Philip Bachman", "R Devon Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Christopher Beckham", "Sina Honari", "Vikas Verma", "Alex M Lamb", "Farnoosh Ghadiri", "R Devon Hjelm", "Yoshua Bengio", "Chris Pal" ], "title": "On adversarial mixup resynthesis", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Yoshua Bengio", "Pascal Lamblin", "Dan Popovici", "Hugo Larochelle" ], "title": "Greedy layer-wise training of deep networks", "venue": "In NIPS,", "year": 2007 }, { "authors": [ "Yoshua Bengio", "Aaron Courville", "Pascal Vincent" ], "title": "Representation learning: A review and new perspectives", "venue": null, "year": 2013 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Armand Joulin", "Matthijs Douze" ], "title": "Deep clustering for unsupervised learning of visual features", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Julien Mairal", "Armand Joulin" ], "title": "Unsupervised pre-training of image features on non-curated data", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Olivier Chapelle", "Jason Weston", "Léon Bottou", "Vladimir Vapnik" ], "title": "Vicinal risk minimization", "venue": "In NIPS,", "year": 2001 }, { "authors": [ "Ting Chen", "Xiaohua Zhai", "Marvin Ritter", "Mario Lucic", "Neil Houlsby" ], "title": "Self-supervised gans via auxiliary rotation loss", "venue": null, "year": 2019 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Xinlei Chen", "Haoqi Fan", "Ross Girshick", "Kaiming He" ], "title": "Improved baselines with momentum contrastive learning", "venue": "arXiv preprint arXiv:2003.04297,", "year": 2020 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Dandelion Mane", "Vijay Vasudevan", "Quoc V Le" ], "title": "Autoaugment: Learning augmentation policies from data", "venue": "In CVPR,", "year": 2019 }, { "authors": [ "Ekin D Cubuk", "Barret Zoph", "Jonathon Shlens", "Quoc V Le" ], "title": "Randaugment: Practical data augmentation with no separate search", "venue": "arXiv preprint arXiv:1909.13719,", "year": 2019 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In CVPR,", "year": 2009 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Terrance DeVries", "Graham W Taylor" ], "title": "Dataset augmentation in feature space", "venue": "arXiv preprint arXiv:1702.05538,", "year": 2017 }, { "authors": [ "Terrance DeVries", "Graham W Taylor" ], "title": "Improved regularization of convolutional neural networks with cutout", "venue": "arXiv preprint arXiv:1708.04552,", "year": 2017 }, { "authors": [ "Carl Doersch", "Andrew Zisserman" ], "title": "Multi-task self-supervised visual learning", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Carl Doersch", "Abhinav Gupta", "Alexei A Efros" ], "title": "Unsupervised visual representation learning by context prediction", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Alexey Dosovitskiy", "Jost Tobias Springenberg", "Martin Riedmiller", "Thomas Brox" ], "title": "Discriminative unsupervised feature learning with convolutional neural networks", "venue": "In NeurIPS,", "year": 2014 }, { "authors": [ "Alexey Dosovitskiy", "Philipp Fischer", "Jost Tobias Springenberg", "Martin Riedmiller", "Thomas Brox" ], "title": "Discriminative unsupervised feature learning with exemplar convolutional neural networks", "venue": null, "year": 2015 }, { "authors": [ "Mark Everingham", "Luc Van Gool", "Christopher KI Williams", "John Winn", "Andrew Zisserman" ], "title": "The pascal visual object classes (voc) challenge", "venue": null, "year": 2010 }, { "authors": [ "Spyros Gidaris", "Praveer Singh", "Nikos Komodakis" ], "title": "Unsupervised representation learning by predicting image rotations", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch SGD: training imagenet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Jean-Bastien Grill", "Florian Strub", "Florent Altché", "Corentin Tallec", "Pierre H Richemond", "Elena Buchatskaya", "Carl Doersch", "Bernardo Avila Pires", "Zhaohan Daniel Guo", "Mohammad Gheshlaghi Azar" ], "title": "Bootstrap your own latent: A new approach to self-supervised learning", "venue": "arXiv preprint arXiv:2006.07733,", "year": 2020 }, { "authors": [ "Hongyu Guo" ], "title": "Nonlinear mixup: Out-of-manifold data augmentation for text classification", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Hongyu Guo", "Yongyi Mao", "Richong Zhang" ], "title": "Augmenting data with mixup for sentence classification: An empirical study", "venue": "arXiv preprint arXiv:1905.08941,", "year": 2019 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "In AISTATS,", "year": 2010 }, { "authors": [ "Ethan Harris", "Antonia Marcu", "Matthew Painter", "Mahesan Niranjan", "Adam PrügelBennett Jonathon Hare" ], "title": "Fmix: Enhancing mixed sample data augmentation", "venue": "arXiv preprint arXiv:2002.12047,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": null, "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In 
CVPR,", "year": 2020 }, { "authors": [ "Olivier J Hénaff", "Ali Razavi", "Carl Doersch", "SM Eslami", "Aaron van den Oord" ], "title": "Data-efficient image recognition with contrastive predictive coding", "venue": null, "year": 1905 }, { "authors": [ "Dan Hendrycks", "Mantas Mazeika", "Saurav Kadavath", "Dawn Song" ], "title": "Using self-supervised learning can improve model robustness and uncertainty", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Norman Mu", "Ekin D Cubuk", "Barret Zoph", "Justin Gilmer", "Balaji Lakshminarayanan" ], "title": "Augmix: A simple data processing method to improve robustness and uncertainty", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Yannis Kalantidis", "Mert Bulent Sariyildiz", "Noe Pion", "Philippe Weinzaepfel", "Diane Larlus" ], "title": "Hard negative mixing for contrastive learning", "venue": "In NeurIPS,", "year": 2020 }, { "authors": [ "Prannay Khosla", "Piotr Teterwak", "Chen Wang", "Aaron Sarna", "Yonglong Tian", "Phillip Isola", "Aaron Maschinot", "Ce Liu", "Dilip Krishnan" ], "title": "Supervised contrastive learning", "venue": "arXiv preprint arXiv:2004.11362,", "year": 2020 }, { "authors": [ "Dahun Kim", "Donghyeon Cho", "Donggeun Yoo", "In So Kweon" ], "title": "Learning image representations by completing damaged jigsaw puzzles", "venue": "In WACV,", "year": 2018 }, { "authors": [ "Sungnyun Kim", "Gihun Lee", "Sangmin Bae", "Se-Young Yun" ], "title": "Mixco: Mix-up contrastive learning for visual representation", "venue": "arXiv preprint arXiv:2010.06300,", "year": 2020 }, { "authors": [ "Bruno Korbar", "Du Tran", "Lorenzo Torresani" ], "title": "Cooperative learning of audio and video models from self-supervised synchronization", "venue": "In NeurIPS,", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, University of Toronto,", "year": 2009 }, { "authors": [ "Kimin Lee", "Honglak Lee", "Kibok Lee", "Jinwoo Shin" ], "title": "Training confidence-calibrated classifiers for detecting out-of-distribution samples", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Michelle A Lee", "Yuke Zhu", "Krishnan Srinivasan", "Parth Shah", "Silvio Savarese", "Li Fei-Fei", "Animesh Garg", "Jeannette Bohg" ], "title": "Making sense of vision and touch: Self-supervised learning of multimodal representations for contact-rich", "venue": null, "year": 2019 }, { "authors": [ "Yibo Lin", "Yuki Watanabe", "Taiki Kimura", "Tetsuaki Matsunawa", "Shigeki Nojima", "Meng Li", "David Z Pan" ], "title": "Data efficient lithography modeling with residual neural networks and transfer learning", "venue": "In Proceedings of the 2018 International Symposium on Physical Design,", "year": 2018 }, { "authors": [ "Ilya 
Loshchilov", "Frank Hutter" ], "title": "Sgdr: Stochastic gradient descent with warm restarts", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Thomas Lucas", "Corentin Tallec", "Jakob Verbeek", "Yann Ollivier" ], "title": "Mixed batches and symmetric discriminators for gan training", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Tomas Mikolov", "Ilya Sutskever", "Kai Chen", "Greg S Corrado", "Jeff Dean" ], "title": "Distributed representations of words and phrases and their compositionality", "venue": "NeurIPS,", "year": 2013 }, { "authors": [ "Ishan Misra", "Laurens van der Maaten" ], "title": "Self-supervised learning of pretext-invariant representations", "venue": "In CVPR,", "year": 2020 }, { "authors": [ "Takeru Miyato", "Shin-ichi Maeda", "Masanori Koyama", "Shin Ishii" ], "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "venue": null, "year": 1979 }, { "authors": [ "Yair Movshovitz-Attias", "Alexander Toshev", "Thomas K Leung", "Sergey Ioffe", "Saurabh Singh" ], "title": "No fuss distance metric learning using proxies", "venue": null, "year": 2017 }, { "authors": [ "Mehdi Noroozi", "Paolo Favaro" ], "title": "Unsupervised learning of visual representations by solving jigsaw puzzles", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Mehdi Noroozi", "Hamed Pirsiavash", "Paolo Favaro" ], "title": "Representation learning by learning to count", "venue": "In ICCV,", "year": 2017 }, { "authors": [ "Mehdi Noroozi", "Ananth Vinjimoor", "Paolo Favaro", "Hamed Pirsiavash" ], "title": "Boosting self-supervised learning via knowledge transfer", "venue": null, "year": 2018 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Andrew Owens", "Alexei A Efros" ], "title": "Audio-visual scene analysis with self-supervised multisensory features", "venue": "In ECCV,", "year": 2018 }, { "authors": [ "Tianyu Pang", "Kun Xu", "Jun Zhu" ], "title": "Mixup inference: Better exploiting mixup to defend adversarial attacks", "venue": "arXiv preprint arXiv:1909.11515,", "year": 2019 }, { "authors": [ "Daniel S Park", "William Chan", "Yu Zhang", "Chung-Cheng Chiu", "Barret Zoph", "Ekin D Cubuk", "Quoc V Le" ], "title": "Specaugment: A simple data augmentation method for automatic speech recognition", "venue": null, "year": 1904 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Jeff Donahue", "Trevor Darrell", "Alexei A Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": null, "year": 2016 }, { "authors": [ "Mirco Ravanelli", "Jianyuan Zhong", "Santiago Pascual", "Pawel Swietojanski", "Joao Monteiro", "Jan Trmal", "Yoshua Bengio" ], "title": "Multi-task self-supervised learning for robust speech recognition", "venue": "In ICASSP,", "year": 2020 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "venue": "NeurIPS,", "year": 2015 }, { "authors": [ "Zhiqiang Shen", "Zechun Liu", "Zhuang Liu", "Marios Savvides", "Trevor Darrell" ], "title": "Rethinking image mixture for unsupervised visual representation learning", "venue": "arXiv preprint arXiv:2003.05438,", "year": 2020 }, { "authors": [ "Woojoo Sim", "Kibok Lee", "Dingdong Yang", "Jaeseung Jeong", "Ji-Suk Hong", "Sooryong Lee", "Honglak Lee" ], "title": "Automatic 
correction of lithography hotspots with a deep generative model", "venue": "In Optical Microlithography XXXII,", "year": 2019 }, { "authors": [ "Kihyuk Sohn" ], "title": "Improved deep metric learning with multi-class n-pair loss objective", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In CVPR,", "year": 2015 }, { "authors": [ "Christian Szegedy", "Vincent Vanhoucke", "Sergey Ioffe", "Jon Shlens", "Zbigniew Wojna" ], "title": "Rethinking the inception architecture for computer vision", "venue": null, "year": 2016 }, { "authors": [ "Sunil Thulasidasan", "Gopinath Chennupati", "Jeff A Bilmes", "Tanmoy Bhattacharya", "Sarah Michalak" ], "title": "On mixup training: Improved calibration and predictive uncertainty for deep neural networks", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Yonglong Tian", "Chen Sun", "Ben Poole", "Dilip Krishnan", "Cordelia Schmid", "Phillip Isola" ], "title": "What makes for good views for contrastive learning", "venue": "arXiv preprint arXiv:2005.10243,", "year": 2020 }, { "authors": [ "Leonid Nisonovich Vaserstein" ], "title": "Markov processes over denumerable products of spaces, describing large systems of automata", "venue": "Problemy Peredachi Informatsii,", "year": 1969 }, { "authors": [ "Vikas Verma", "Alex Lamb", "Christopher Beckham", "Amir Najafi", "Ioannis Mitliagkas", "Aaron Courville", "David Lopez-Paz", "Yoshua Bengio" ], "title": "Manifold mixup: Better representations by interpolating hidden states", "venue": null, "year": 2019 }, { "authors": [ "Vikas Verma", "Minh-Thang Luong", "Kenji Kawaguchi", "Hieu Pham", "Quoc V Le" ], "title": "Towards domainagnostic contrastive learning", "venue": "arXiv preprint arXiv:2011.04419,", "year": 2020 }, { "authors": [ "Pete Warden" ], "title": "Speech commands: A dataset for limited-vocabulary speech recognition", "venue": "arXiv preprint arXiv:1804.03209,", "year": 2018 }, { "authors": [ "Jason W Wei", "Kai Zou" ], "title": "Eda: Easy data augmentation techniques for boosting performance on text classification", "venue": null, "year": 1901 }, { "authors": [ "Yue Wu", "Yinpeng Chen", "Lijuan Wang", "Yuancheng Ye", "Zicheng Liu", "Yandong Guo", "Zhengyou Zhang", "Yun Fu" ], "title": "Incremental classifier learning with generative adversarial networks", "venue": "arXiv preprint arXiv:1802.00853,", "year": 2018 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella X Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via non-parametric instance discrimination", "venue": "In CVPR,", "year": 2018 }, { "authors": [ "Mang Ye", "Xu Zhang", "Pong C Yuen", "Shih-Fu Chang" ], "title": "Unsupervised embedding learning via invariant and spreading instance feature", "venue": null, "year": 2019 }, { "authors": [ "Sangdoo Yun", "Dongyoon Han", "Seong Joon Oh", "Sanghyuk Chun", "Junsuk Choe", "Youngjoon Yoo" ], "title": "Cutmix: Regularization strategy to train strong classifiers with localizable features", "venue": null, "year": 2019 }, { "authors": [ "Xiaohua Zhai", "Avital Oliver", "Alexander Kolesnikov", "Lucas Beyer" ], "title": "S4l: Self-supervised semisupervised learning", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Hongyi 
Zhang", "Moustapha Cisse", "Yann N Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Richard Zhang", "Phillip Isola", "Alexei A Efros" ], "title": "Colorful image colorization", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Xiang Zhang", "Junbo Zhao", "Yann LeCun" ], "title": "Character-level convolutional networks for text classification", "venue": "In NeurIPS,", "year": 2015 }, { "authors": [ "Zhun Zhong", "Liang Zheng", "Guoliang Kang", "Shaozi Li", "Yi Yang" ], "title": "Random erasing data augmentation", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Hong-Yu Zhou", "Shuang Yu", "Cheng Bian", "Yifan Hu", "Kai Ma", "Yefeng Zheng" ], "title": "Comparing to learn: Surpassing imagenet pretraining on radiographs by comparing image representations", "venue": "In MICCAI,", "year": 2020 }, { "authors": [ "Chen" ], "title": "Note that the learning rate is scaled by the batch size (Goyal et al., 2017): ScaledLearningRate = LearningRate× BatchSize/256", "venue": "Image. The experiments on CIFAR-10 and 100 (Krizhevsky & Hinton, 2009) and ImageNet (Deng et al.,", "year": 2009 }, { "authors": [ "He" ], "title": "2016)14 followed by the two-layer multilayer perceptron (MLP) projection head (output dimensions are 2048 and 128, respectively) is trained on the unlabeled pretext dataset with a batch size of 256 (i.e., 512 augmented data) with the stochastic gradient descent (SGD) optimizer with a momentum of 0.9 over up to 4000 epochs. BYOL has an additional prediction head (output dimensions are the same with the projection head), which follows the projection head", "venue": null, "year": 2048 }, { "authors": [ "Chen" ], "title": "2020a): We apply a set of data augmentations randomly", "venue": "For data augmentation,", "year": 2020 }, { "authors": [ "Goyal" ], "title": "2017), we also decrease the learning rate by half", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Representation learning (Bengio et al., 2013) is a fundamental task in machine learning since the success of machine learning relies on the quality of representation. Self-supervised representation learning (SSL) has been successfully applied in several domains, including image recognition (He et al., 2020; Chen et al., 2020a), natural language processing (Mikolov et al., 2013; Devlin et al., 2018), robotics (Sermanet et al., 2018; Lee et al., 2019), speech recognition (Ravanelli et al., 2020), and video understanding (Korbar et al., 2018; Owens & Efros, 2018). Since no label is available in the unsupervised setting, pretext tasks are proposed to provide self-supervision: for example, context prediction (Doersch et al., 2015), inpainting (Pathak et al., 2016), and contrastive learning (Wu et al., 2018b; Hjelm et al., 2019; He et al., 2020; Chen et al., 2020a). SSL has also been used as an auxiliary task to improve the performance on the main task, such as generative model learning (Chen et al., 2019), semi-supervised learning (Zhai et al., 2019), and improving robustness and uncertainty (Hendrycks et al., 2019). Recently, contrastive representation learning has gained increasing attention by showing state-ofthe-art performance in SSL for large-scale image recognition (He et al., 2020; Chen et al., 2020a), which outperforms its supervised pre-training counterpart (He et al., 2016) on downstream tasks. However, while the concept of contrastive learning is applicable to any domains, the quality of learned representations rely on the domain-specific inductive bias: as anchors and positive samples are obtained from the same data instance, data augmentation introduces semantically meaningful variance for better generalization. To achieve a strong, yet semantically meaningful data augmentation, domain knowledge is required, e.g., color jittering in 2D images or structural information in video understanding. Hence, contrastive representation learning in different domains requires an effort to develop effective data augmentations. Furthermore, while recent works have focused on largescale settings where millions of unlabeled data is available, it would not be practical in real-world applications. For example, in lithography, acquiring data is very expensive in terms of both time and cost due to the complexity of manufacturing process (Lin et al., 2018; Sim et al., 2019). Meanwhile, MixUp (Zhang et al., 2018) has shown to be a successful data augmentation for supervised learning in various domains and tasks, including image classification (Zhang et al., 2018), generative model learning (Lucas et al., 2018), and natural language processing (Guo et al., 2019; Guo, 2020).\nIn this paper, we explore the following natural, yet important question: is the idea of MixUp useful for unsupervised, self-supervised, or contrastive representation learning across different domains? To this end, we propose instance Mix (i-Mix), a domain-agnostic regularization strategy for contrastive representation learning. The key idea of i-Mix is to introduce virtual labels in a batch and mix data instances and their corresponding virtual labels in the input and label spaces, respectively. 
We first introduce the general formulation of i-Mix, and then we show the applicability of i-Mix to the state-of-the-art contrastive representation learning methods SimCLR (Chen et al., 2020a) and MoCo (He et al., 2020), and to BYOL (Grill et al., 2020), a self-supervised learning method without negative pairs. Through the experiments, we demonstrate the efficacy of i-Mix in a variety of settings. First, we show the effectiveness of i-Mix by evaluating the discriminative performance of learned representations in multiple domains. Specifically, we adapt i-Mix to the contrastive representation learning methods, advancing state-of-the-art performance across different domains, including image (Krizhevsky & Hinton, 2009; Deng et al., 2009), speech (Warden, 2018), and tabular (Asuncion & Newman, 2007) datasets. Then, we study i-Mix in various conditions, including 1) when the model and training dataset are small or large, 2) when domain knowledge is limited, and 3) transfer learning.\nContribution. In summary, our contribution is three-fold:\n• We propose i-Mix, a method for regularizing contrastive representation learning, motivated by MixUp (Zhang et al., 2018). We show how to apply i-Mix to state-of-the-art contrastive representation learning methods (Chen et al., 2020a; He et al., 2020; Grill et al., 2020).\n• We show that i-Mix consistently improves contrastive representation learning in both vision and non-vision domains. In particular, the discriminative performance of representations learned with i-Mix is on par with fully supervised learning on CIFAR-10/100 (Krizhevsky & Hinton, 2009) and Speech Commands (Warden, 2018).\n• We verify the regularization effect of i-Mix in a variety of settings. We empirically observe that i-Mix significantly improves contrastive representation learning when 1) the training dataset is small, or 2) the domain knowledge for data augmentations is limited." }, { "heading": "2 RELATED WORK", "text": "Self-supervised representation learning (SSL) aims at learning representations from unlabeled data by solving a pretext task derived from self-supervision. Early works on SSL proposed pretext tasks based on data reconstruction by autoencoding (Bengio et al., 2007), such as context prediction (Doersch et al., 2015) and inpainting (Pathak et al., 2016). Decoder-free SSL has made huge progress in recent years. Exemplar CNN (Dosovitskiy et al., 2014) learns by classifying individual instances with data augmentations. SSL of visual representations, including colorization (Zhang et al., 2016), solving jigsaw puzzles (Noroozi & Favaro, 2016), counting the number of objects (Noroozi et al., 2017), rotation prediction (Gidaris et al., 2018), next pixel prediction (Oord et al., 2018; Hénaff et al., 2019), and combinations thereof (Doersch & Zisserman, 2017; Kim et al., 2018; Noroozi et al., 2018), often leverages image-specific properties to design pretext tasks. Meanwhile, although deep clustering (Caron et al., 2018; 2019; Asano et al., 2020) is often distinguished from SSL, it also leverages unsupervised clustering assignments as self-supervision for representation learning.\nContrastive representation learning has gained much attention for SSL (He et al., 2020; Chen et al., 2020a). As opposed to the early works on exemplar CNN (Dosovitskiy et al., 2014; 2015), contrastive learning maximizes the similarities of positive pairs while minimizing the similarities of negative pairs, instead of training an instance classifier.
As the choice of negative pairs is crucial for the quality of learned representations, recent works have carefully designed them. Memory-based approaches (Wu et al., 2018b; Hjelm et al., 2019; Bachman et al., 2019; Misra & van der Maaten, 2020; Tian et al., 2020a) maintain a memory bank of embedding vectors of instances to keep negative samples, where the memory is updated with embedding vectors extracted from previous batches. In addition, MoCo (He et al., 2020) showed that differentiating the model for anchors and positive/negative samples is effective, where the model for positive/negative samples is updated by the exponential moving average of the model for anchors. On the other hand, recent works (Ye et al., 2019; Misra & van der Maaten, 2020; Chen et al., 2020a; Tian et al., 2020a) showed that learning invariance to different views is important in contrastive representation learning. The views can be generated through data augmentations carefully designed using domain knowledge (Chen et al., 2020a), splitting\ninput channels (Tian et al., 2020a), or borrowing the idea of other pretext tasks, such as creating jigsaw puzzles or rotating inputs (Misra & van der Maaten, 2020). In particular, SimCLR (Chen et al., 2020a) showed that a simple memory-free approach with a large batch size and strong data augmentations has a comparable performance to memory-based approaches. InfoMin (Tian et al., 2020b) further studied a way to generate good views for contrastive representation learning and achieved state-of-the-art performance by combining prior works. Different from other contrastive representation learning methods, BYOL (Grill et al., 2020) does not require negative pairs, where the proposed pretext task aims at predicting latent representations of one view from another. While prior works have focused on SSL on large-scale visual recognition tasks, our work focuses on contrastive representation learning in both small- and large-scale settings in different domains.\nData augmentation is a technique to increase the diversity of data, especially when training data are not enough for generalization. Since the augmented data must be understood as the original data, data augmentations are carefully designed using the domain knowledge about images (DeVries & Taylor, 2017b; Cubuk et al., 2019a;b; Zhong et al., 2020), speech (Amodei et al., 2016; Park et al., 2019), or natural languages (Zhang et al., 2015; Wei & Zou, 2019). Some works have studied data augmentation with less domain knowledge: DeVries & Taylor (2017a) proposed a domain-agnostic augmentation strategy by first encoding the dataset and then applying augmentations in the feature space. MixUp (Zhang et al., 2018) is an effective data augmentation strategy in supervised learning, which performs vicinal risk minimization instead of empirical risk minimization, by linearly interpolating input data and their labels on the data and label spaces, respectively. On the other hand, MixUp has also shown its effectiveness in other tasks and non-vision domains, including generative adversarial networks (Lucas et al., 2018), improved robustness and uncertainty (Hendrycks et al., 2020), and sentence classification in natural language processing (Guo, 2020; Guo et al., 2019). Other variations have also been investigated by interpolating in the feature space (Verma et al., 2019) or leveraging domain knowledge (Yun et al., 2019). 
MixUp would not be directly applicable to some domains, such as point clouds, but its adaptations can be effective (Harris et al., 2020). i-Mix is a kind of data augmentation for better generalization in contrastive representation learning, resulting in better performance on downstream tasks.\nConcurrent works have leveraged the idea of MixUp for contrastive representation learning. As discussed in Section 3.3, mixing only the input data can improve contrastive representation learning (Shen et al., 2020; Verma et al., 2020; Zhou et al., 2020), which can be considered as injecting data-driven noise. Kalantidis et al. (2020) mixed hard negative samples in the embedding space. Kim et al. (2020) reported observations similar to ours, but focused on small image datasets." }, { "heading": "3 APPROACH", "text": "In this section, we review MixUp (Zhang et al., 2018) in supervised learning and present i-Mix in contrastive learning (He et al., 2020; Chen et al., 2020a; Grill et al., 2020). Throughout this section, let $\mathcal{X}$ be a data space, $\mathbb{R}^D$ a D-dimensional embedding space, and a model $f : \mathcal{X} \to \mathbb{R}^D$ a mapping between them. For conciseness, $f_i = f(x_i)$ and $\tilde{f}_i = f(\tilde{x}_i)$ for $x_i, \tilde{x}_i \in \mathcal{X}$, and model parameters are omitted in loss functions." }, { "heading": "3.1 MIXUP IN SUPERVISED LEARNING", "text": "Suppose a one-hot label $y_i \in \{0, 1\}^C$ is assigned to each data point $x_i$, where C is the number of classes. Let a linear classifier predicting the labels consist of weight vectors $\{w_1, \ldots, w_C\}$, where $w_c \in \mathbb{R}^D$. [1] Then, the cross-entropy loss for supervised learning is defined as:
$$\ell_{\text{Sup}}(x_i, y_i) = -\sum_{c=1}^{C} y_{i,c} \log \frac{\exp(w_c^\top f_i)}{\sum_{k=1}^{C} \exp(w_k^\top f_i)}. \quad (1)$$
[1] We omit bias terms for presentation clarity.
While the cross-entropy loss is widely used for supervised training of deep neural networks, such training faces several challenges, such as preventing overfitting or networks becoming overconfident. Several regularization techniques have been proposed to alleviate these issues, including label smoothing (Szegedy et al., 2016), adversarial training (Miyato et al., 2018), and confidence calibration (Lee et al., 2018).
MixUp (Zhang et al., 2018) is an effective regularization with negligible computational overhead. It linearly interpolates two data instances in both the input and label spaces and trains a model by minimizing the cross-entropy loss defined on the interpolated data and labels. Specifically, for two labeled data points $(x_i, y_i), (x_j, y_j)$, the MixUp loss is defined as follows:
$$\ell_{\text{Sup}}^{\text{MixUp}}\big((x_i, y_i), (x_j, y_j); \lambda\big) = \ell_{\text{Sup}}\big(\lambda x_i + (1-\lambda) x_j,\ \lambda y_i + (1-\lambda) y_j\big), \quad (2)$$
where $\lambda \sim \text{Beta}(\alpha, \alpha)$ is a mixing coefficient sampled from the beta distribution. MixUp is a vicinal risk minimization method (Chapelle et al., 2001) that augments data and labels in a data-driven manner. Besides improving generalization on the supervised task, it also improves adversarial robustness (Pang et al., 2019) and confidence calibration (Thulasidasan et al., 2019).
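A minimal sketch of Eqs. (1) and (2): the cross-entropy loss is written so that it accepts soft labels, which is exactly what the MixUp loss feeds it. The dimensions and the linear classifier W are hypothetical placeholders, and for brevity the sketch mixes the embeddings f directly, whereas MixUp proper mixes the raw inputs x before the encoder.

import torch
import torch.nn.functional as F

D, C = 128, 10                              # embedding and label dimensions (hypothetical)
W = torch.randn(C, D, requires_grad=True)   # rows are the classifier weights w_1, ..., w_C

def sup_loss(f, y):
    # Eq. (1): cross-entropy between softmax(W f) and the (possibly soft) label y.
    return -(y * F.log_softmax(f @ W.t(), dim=-1)).sum(dim=-1).mean()

def mixup_sup_loss(f_i, y_i, f_j, y_j, alpha=1.0):
    # Eq. (2): mix data and labels with the same lam ~ Beta(alpha, alpha).
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return sup_loss(lam * f_i + (1 - lam) * f_j, lam * y_i + (1 - lam) * y_j)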
3.2 i-MIX IN CONTRASTIVE LEARNING
We introduce instance mix (i-Mix), a data-driven augmentation strategy for contrastive representation learning that improves the generalization of the learned representations. Intuitively, instead of mixing class labels, i-Mix interpolates virtual labels, which indicate the identity of each instance in a batch. Let $\mathcal{B} = \{(x_i, \tilde{x}_i)\}_{i=1}^{N}$ be a batch of data pairs, where N is the batch size and $x_i, \tilde{x}_i \in \mathcal{X}$ are two views of the same data, usually generated by different augmentations. For each anchor $x_i$, we call $\tilde{x}_i$ and $\tilde{x}_{j \neq i}$ positive and negative samples, respectively. [2] Then, the model f learns to maximize the similarities of positive pairs (instances from the same data) and minimize the similarities of negative pairs (instances from different data) in the embedding space. The output of f is L2-normalized, which has been shown to be effective (Wu et al., 2018a; He et al., 2020; Chen et al., 2020a). Let $v_i \in \{0, 1\}^N$ be the virtual label of $x_i$ and $\tilde{x}_i$ in a batch B, where $v_{i,i} = 1$ and $v_{i,j \neq i} = 0$. For a general sample-wise contrastive loss with virtual labels $\ell(x_i, v_i)$, the i-Mix loss is defined as follows:
$$\ell^{\text{i-Mix}}\big((x_i, v_i), (x_j, v_j); \mathcal{B}, \lambda\big) = \ell\big(\text{Mix}(x_i, x_j; \lambda),\ \lambda v_i + (1-\lambda) v_j; \mathcal{B}\big), \quad (3)$$
where $\lambda \sim \text{Beta}(\alpha, \alpha)$ is a mixing coefficient and Mix is a mixing operator, which can be adapted to the target domain: for example, $\text{MixUp}(x_i, x_j; \lambda) = \lambda x_i + (1-\lambda) x_j$ (Zhang et al., 2018) when data values are continuous, and $\text{CutMix}(x_i, x_j; \lambda) = M_\lambda \odot x_i + (1 - M_\lambda) \odot x_j$ (Yun et al., 2019) when data values are spatially correlated with their neighbors, where $M_\lambda$ is a binary mask filtering out a region whose relative area is $(1-\lambda)$ and $\odot$ is element-wise multiplication. Note that some mixing operators might not work well in some domains: for example, CutMix would not be valid when data values have no correlation with their spatial neighbors. However, the MixUp operator generally works well across domains including image, speech, and tabular data; we use it for the i-Mix formulations and experiments unless otherwise specified. In the following, we show how to apply i-Mix to contrastive representation learning methods.
SimCLR (Chen et al., 2020a) is a simple contrastive representation learning method without a memory bank, where each anchor has one positive sample and (2N−2) negative samples. Let $x_{N+i} = \tilde{x}_i$ for conciseness. Then, the (2N−1)-way discrimination loss is written as follows:
$$\ell_{\text{SimCLR}}(x_i; \mathcal{B}) = -\log \frac{\exp\big(s(f_i, f_{(N+i) \bmod 2N}) / \tau\big)}{\sum_{k=1, k \neq i}^{2N} \exp\big(s(f_i, f_k) / \tau\big)}, \quad (4)$$
where $\tau$ is a temperature scaling parameter and $s(f, \tilde{f}) = (f^\top \tilde{f}) / (\|f\| \|\tilde{f}\|)$ is the inner product of two L2-normalized vectors. In this formulation, i-Mix is not directly applicable, because virtual labels are defined differently for each anchor. [3] To resolve this issue, we simplify the formulation of SimCLR by excluding anchors from the negative samples. Then, with virtual labels, the N-way discrimination loss is written as follows:
$$\ell_{\text{N-pair}}(x_i, v_i; \mathcal{B}) = -\sum_{n=1}^{N} v_{i,n} \log \frac{\exp\big(s(f_i, \tilde{f}_n) / \tau\big)}{\sum_{k=1}^{N} \exp\big(s(f_i, \tilde{f}_k) / \tau\big)}, \quad (5)$$
which we call the N-pair contrastive loss, as the formulation is similar to the N-pair loss in the context of metric learning (Sohn, 2016). [4]
[2] Some literature (He et al., 2020; Chen et al., 2020b) refers to them as the query and the positive/negative keys.
[3] We present the application of i-Mix to the original SimCLR formulation in Appendix A.
[4] InfoNCE (Oord et al., 2018) is a similar loss inspired by the idea of noise-contrastive estimation (Gutmann & Hyvärinen, 2010).
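For illustration, a minimal sketch of the N-pair contrastive loss in Eq. (5), written to accept soft virtual labels; the temperature value is a hypothetical placeholder, and this is our sketch rather than the paper's released code.

import torch
import torch.nn.functional as F

def n_pair_loss(f, f_tilde, v, tau=0.2):
    # Eq. (5): N-way discrimination with (possibly mixed) virtual labels v.
    f = F.normalize(f, dim=1)               # L2-normalize both views
    f_tilde = F.normalize(f_tilde, dim=1)
    logits = f @ f_tilde.t() / tau          # pairwise similarities s(f_i, f~_k) / tau
    return -(v * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# v = torch.eye(N) recovers the plain N-pair loss; feeding mixed inputs together
# with lam * v_i + (1 - lam) * v_j yields the i-Mix loss defined next in Eq. (6).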
For two data instances $(x_i, v_i), (x_j, v_j)$ and a batch of data pairs $\mathcal{B} = \{(x_i, \tilde{x}_i)\}_{i=1}^{N}$, the i-Mix loss is defined as follows:
$$\ell_{\text{N-pair}}^{\text{i-Mix}}\big((x_i, v_i), (x_j, v_j); \mathcal{B}, \lambda\big) = \ell_{\text{N-pair}}\big(\lambda x_i + (1-\lambda) x_j,\ \lambda v_i + (1-\lambda) v_j; \mathcal{B}\big). \quad (6)$$
Algorithm 1 provides the pseudocode of i-Mix on N-pair contrastive learning for one iteration. [5]
[5] For losses linear with respect to the labels (e.g., the cross-entropy loss), these are equivalent to $\lambda \ell(\lambda x_i + (1-\lambda) x_j, v_i) + (1-\lambda) \ell(\lambda x_i + (1-\lambda) x_j, v_j)$; i.e., optimizing toward the mixed label is equivalent to jointly optimizing toward the original labels. The proof for the losses in contrastive learning methods is provided in Appendix B.
Algorithm 1 Loss computation for i-Mix on N-pair contrastive learning in PyTorch-like style.
a, b = aug(x), aug(x)                # two different (augmented) views of input x
lam = Beta(alpha, alpha).sample()    # mixing coefficient
randidx = randperm(len(x))           # random permutation defining mixing partners
a = lam * a + (1-lam) * a[randidx]   # mix the first view in input space
logits = matmul(normalize(model(a)), normalize(model(b)).T) / t   # t: temperature
loss = lam * CrossEntropyLoss(logits, arange(len(x))) + \
       (1-lam) * CrossEntropyLoss(logits, randidx)   # mixed virtual-label targets
Pair relations in contrastive loss. To use a contrastive loss for representation learning, one needs to properly define the pair relation $\{(x_i, \tilde{x}_i)\}_{i=1}^{N}$. For contrastive representation learning, where semantic class labels are not provided, the pair relation is defined such that 1) a positive pair, $x_i$ and $\tilde{x}_i$, are different views of the same data, and 2) a negative pair, $x_i$ and $\tilde{x}_{j \neq i}$, are different data instances. For supervised representation learning, $x_i$ and $\tilde{x}_i$ are two data instances from the same class, while $x_i$ and $\tilde{x}_{j \neq i}$ are from different classes. Note that two augmented versions of the same data also belong to the same class, so they can also be considered a positive pair. i-Mix is not limited to self-supervised contrastive representation learning; it can also be used as a regularization method for supervised contrastive representation learning (Khosla et al., 2020) or deep metric learning (Sohn, 2016; Movshovitz-Attias et al., 2017).
MoCo (He et al., 2020). In contrastive representation learning, the number of negative samples affects the quality of the learned representations (Arora et al., 2019). Because SimCLR mines negative samples in the current batch, a large batch size is crucial, which often requires substantial computational resources (Chen et al., 2020a). For efficient training, recent works maintain a memory bank $\mathcal{M} = \{\mu_k\}_{k=1}^{K}$, a queue of previously extracted embedding vectors, where K is the size of the memory bank (Wu et al., 2018b; He et al., 2020; Tian et al., 2020a;b). In addition, MoCo introduces an exponential moving average (EMA) model to extract the positive and negative embedding vectors, whose parameters are updated as $\theta_{f^{\text{EMA}}} \leftarrow m\, \theta_{f^{\text{EMA}}} + (1-m)\, \theta_f$, where $m \in [0, 1)$ is a momentum coefficient and $\theta$ denotes model parameters. The loss is written as follows:
$$\ell_{\text{MoCo}}(x_i; \mathcal{B}, \mathcal{M}) = -\log \frac{\exp\big(s(f_i, \tilde{f}_i^{\text{EMA}}) / \tau\big)}{\exp\big(s(f_i, \tilde{f}_i^{\text{EMA}}) / \tau\big) + \sum_{k=1}^{K} \exp\big(s(f_i, \mu_k) / \tau\big)}. \quad (7)$$
The memory bank M is then updated with $\{\tilde{f}_i^{\text{EMA}}\}$ in first-in first-out order. In this (K+1)-way discrimination loss, data pairs are independent of each other, so i-Mix is not directly applicable, because virtual labels are defined differently for each anchor. To overcome this issue, we include the positive samples of the other anchors as negative samples, similar to the N-pair contrastive loss in Eq. (5). Let $\tilde{v}_i \in \{0, 1\}^{N+K}$ be a virtual label indicating the positive sample of each anchor, where $\tilde{v}_{i,i} = 1$ and $\tilde{v}_{i,j \neq i} = 0$. Then, the (N+K)-way discrimination loss is written as follows:
$$\ell_{\text{MoCo}}(x_i, \tilde{v}_i; \mathcal{B}, \mathcal{M}) = -\sum_{n=1}^{N} \tilde{v}_{i,n} \log \frac{\exp\big(s(f_i, \tilde{f}_n^{\text{EMA}}) / \tau\big)}{\sum_{k=1}^{N} \exp\big(s(f_i, \tilde{f}_k^{\text{EMA}}) / \tau\big) + \sum_{k=1}^{K} \exp\big(s(f_i, \mu_k) / \tau\big)}. \quad (8)$$
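For illustration, a minimal runnable sketch of the (N+K)-way loss in Eq. (8); with mixed inputs and mixed ṽ it becomes the i-Mix loss of Eq. (9) below. Here v_tilde has shape (N, N+K) with ṽ_{i,i} = 1, the sizes and temperature are hypothetical, and this is our sketch rather than the authors' implementation.

import torch
import torch.nn.functional as F

def moco_loss(f, f_ema, memory, v_tilde, tau=0.2):
    # Eq. (8): anchors f are compared against the EMA embeddings of the batch
    # plus K memory-bank entries; v_tilde may be soft (mixed), giving Eq. (9).
    f = F.normalize(f, dim=1)
    keys = F.normalize(torch.cat([f_ema, memory], dim=0), dim=1).detach()  # (N+K, D), no grad
    logits = f @ keys.t() / tau                                            # (N, N+K)
    return -(v_tilde * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

N, K, D = 8, 32, 128  # hypothetical sizes
loss = moco_loss(torch.randn(N, D), torch.randn(N, D),
                 torch.randn(K, D), torch.eye(N, N + K))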
(8)\nAs virtual labels are bounded in the same set in this formulation, i-Mix is directly applicable: for two data instances (xi, ṽi), (xj , ṽj), a batch of data pairs B= {(xi, x̃i)}Ni=1, and the memory bankM, the i-Mix loss is defined as follows:\n`i-MixMoCo ( (xi, ṽi), (xj , ṽj);B,M, λ ) = `MoCo(λxi + (1− λ)xj , λṽi + (1− λ)ṽj ;B,M). (9)\n5For losses linear with respect to labels (e.g., the cross-entropy loss), they are equivalent to λ`(λxi + (1− λ)xj , vi)+(1−λ)`(λxi+(1−λ)xj , vj), i.e., optimization to the mixed label is equivalent to joint optimization to original labels. The proof for losses in contrastive learning methods is provided in Appendix B.\nBYOL (Grill et al., 2020). Different from other contrastive representation learning methods, BYOL is a self-supervised representation learning method without contrasting negative pairs. For two views of the same data xi, x̃i ∈X , the model f learns to predict a view embedded with the EMA model f̃EMAi from its embedding fi. Specifically, an additional prediction layer g is introduced, such that the difference between g(fi) and f̃EMAi is learned to be minimized. The BYOL loss is written as follows:\n`BYOL(xi, x̃i) = ∥∥∥g(fi)/‖g(fi)‖ − f̃i/‖f̃i‖∥∥∥2 = 2− 2 · s(g(fi), f̃i). (10)\nThis formulation can be represented in the form of the general contrastive loss in Eq. (3), as the second view x̃i can be accessed from the batch B with its virtual label vi. To derive i-Mix in BYOL, let F̃ = [f̃1/‖f̃1‖, . . ., f̃N/‖f̃N‖]∈RD×N be the collection of L2-normalized embedding vectors of the second views, such that f̃i/‖f̃i‖= F̃ vi. Then, the BYOL loss is written as follows:\n`BYOL(xi, vi;B) = ∥∥∥g(fi)/‖g(fi)‖ − F̃ vi∥∥∥2 = 2− 2 · s(g(fi), F̃ vi). (11)\nFor two data instances (xi, vi), (xj , vj) and a batch of data pairs B= {(xi, x̃i)}Ni=1, the i-Mix loss is defined as follows:\n`i-MixBYOL ( (xi, vi), (xj , vj);B, λ ) = `BYOL(λxi + (1− λ)xj , λvi + (1− λ)vj ;B). (12)" }, { "heading": "3.3 INPUTMIX", "text": "The contribution of data augmentations to the quality of learned representations is crucial in contrastive representation learning. For the case when the domain knowledge about efficient data augmentations is limited, we propose to apply InputMix together with i-Mix, which mixes input data but not their labels. This method can be viewed as introducing structured noises driven by auxiliary data to the principal data with the largest mixing coefficient λ, and the label of the principal data is assigned to the mixed data (Shen et al., 2020; Verma et al., 2020; Zhou et al., 2020). We applied InputMix and i-Mix together on image datasets in Table 3." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we demonstrate the effectiveness of i-Mix. In all experiments, we conduct contrastive representation learning on a pretext dataset and evaluate the quality of representations via supervised classification on a downstream dataset. We report the accuracy averaged over up to five runs. In the first stage, a convolutional neural network (CNN) or multilayer perceptron (MLP) followed by the two-layer MLP projection head is trained on an unlabeled dataset. Then, we replace the projection head with a linear classifier and train only the linear classifier on a labeled dataset for downstream task. Except for transfer learning, datasets for the pretext and downstream tasks are the same. 
For i-Mix, we sample a mixing coefficient λ∼Beta(α, α) for each data, where α= 1 unless otherwise stated.6 Additional details for the experimental settings and more experiments can be found in Appendix C." }, { "heading": "4.1 EXPERIMENTAL SETUP", "text": "Baselines and datasets. We consider 1) N-pair contrastive learning as a memory-free contrastive learning method,7 2) MoCo v2 (He et al., 2020; Chen et al., 2020b) 8 as a memory-based contrastive learning method, and 3) BYOL (Grill et al., 2020), which is a self-supervised learning method without negative pairs. We apply i-Mix to these methods and compare their performances. To show the effectiveness of i-Mix across domains, we evaluate the methods on datasets from multiple domains, including image, speech, and tabular datasets. CIFAR-10/100 (Krizhevsky & Hinton, 2009) consist of 50k training and 10k test images, and ImageNet (Deng et al., 2009) has 1.3M training and 50k validation images, where we use them for evaluation. For ImageNet, we also use a subset of randomly chosen 100 classes out of 1k classes to experiment at a different scale. We apply a set of data augmentations randomly in sequence including\n6Beta(α, α) is the uniform distribution when α=1, bell-shaped when α> 1, and bimodal when α< 1. 7We use the N-pair formulation in Eq. (5) instead of SimCLR as it is simpler and more efficient to integrate\ni-Mix. As shown in Appendix C.2, the N-pair formulation results in no worse performance than SimCLR. 8MoCo v2 improves the performance of MoCo by cosine learning schedule and more data augmentations.\nrandom resized cropping, horizontal flipping, color jittering, gray scaling, and Gaussian blurring for ImageNet, which has shown to be effective (Chen et al., 2020a;b). We use ResNet-50 (He et al., 2016) as a backbone network. Models are trained with a batch size of 256 (i.e., 512 including augmented data) for up to 4000 epochs on CIFAR-10 and 100, and with a batch size of 512 for 800 epochs on ImageNet. For ImageNet experiments, we use the CutMix (Yun et al., 2019) version of i-Mix. The Speech Commands dataset (Warden, 2018) contains 51k training, 7k validation, and 7k test data in 12 classes. We apply a set of data augmentations randomly in sequence: changing amplitude, speed, and pitch in time domain, stretching, time shifting, and adding background noise in frequency domain. Augmented data are then transformed to a 32×32 mel spectogram. We use the same architecture with image experiments. Models are trained with a batch size of 256 for 500 epochs. For tabular dataset experiments, we consider Forest Cover Type (CovType) and Higgs Boson (Higgs) from UCI repository (Asuncion & Newman, 2007). CovType contains 15k training and 566k test data in 7 classes, and Higgs contains 10.5M training and 0.5M test data for binary classification. For Higgs, we use a subset of 100k and 1M training data to experiment at a different scale. Since the domain knowledge for data augmentations on tabular data is limited, only a masking noise with the probability 0.2 is considered as a data augmentation. We use a 5-layer MLP with batch normalization (Ioffe & Szegedy, 2015) as a backbone network. Models are trained with a batch size of 512 for 500 epochs. We use α= 2 for CovType and Higgs100k, as it is slightly better than α= 1." }, { "heading": "4.2 MAIN RESULTS", "text": "Table 1 shows the wide applicability of i-Mix to state-of-the-art contrastive representation learning methods in multiple domains. 
i-Mix results in consistent improvements on the classification accuracy, e.g., up to 6.5% when i-Mix is applied to MoCo v2 on CIFAR-100. Interestingly, we observe that linear classifiers on top of representations learned with i-Mix without fine-tuning the pre-trained part often yield a classification accuracy on par with simple end-to-end supervised learning from random initialization, e.g., i-Mix vs. end-to-end supervised learning performance is 96.3% vs. 95.5% on CIFAR-10, 78.6% vs. 78.9% on CIFAR-100, and 98.2% vs. 98.0% on Speech Commands.9\n4.3 REGULARIZATION EFFECT OF i-MIX\nA better regularization method often benefits from longer training of deeper models, which is more critical when training on a small dataset. To investigate the regularization effect of i-Mix, we first\n9Supervised learning with improved methods, e.g., MixUp, outperforms i-Mix. However, linear evaluation on top of self-supervised representation learning is a proxy to measure the quality of representations learned without labels, such that it is not supposed to be compared with the performance of supervised learning.\n*InputMix is applied when no other data augmentations are used.\nmake a comparison between MoCo v2 and i-Mix by training with different model sizes and number of training epochs on the pretext task. We train ResNet-18, 50, 101, and 152 models with varying number of training epochs from 200 to 2000. Figure 1 shows the performance of MoCo v2 (solid box) and i-Mix (dashed box). The improvement by applying i-Mix to MoCo v2 is consistent over the different architecture size and the number of training epochs. Deeper models benefit from i-Mix, achieving 96.7% on CIFAR-10 and 79.1% on CIFAR-100 when the backbone network is ResNet-152. On the other hand, models trained without i-Mix start to show decrease in performance, possibly due to overfitting to the pretext task when trained longer. The trend clearly shows that i-Mix results in better representations via improved regularization. Next, we study the effect of i-Mix with varying dataset sizes for the pretext tasks. Table 2 shows the effect of i-Mix on large-scale datasets10 from image and tabular domains. We observe that i-Mix is particularly effective when the amount of training data is reduced, e.g., ImageNet-100 consists of images from 100 classes, thus has only 10% of training data compared to ImageNet-1k. However, the performance gain is reduced when the amount of training data is large. we further study representations learned with different pretext dataset sizes from 1% to 100% of the ImageNet training data in Figure 2. Here, different from ImageNet-100, we reduce the amount of data for each class, but maintain the number of classes the same. We observe that the performance gain by i-Mix is more significant when the size of the pretext dataset is small. Our study suggests that i-Mix is effective for regularizing self-supervised representation learning when training from a limited amount of data. We believe that this is aligned with findings in Zhang et al. (2018) for MixUp in supervised learning. Finally, when a large-scale unlabeled dataset is available, we expect i-Mix would still be useful in obtaining better representations when trained longer with deeper and larger models." 
}, { "heading": "4.4 CONTRASTIVE LEARNING WITHOUT DOMAIN-SPECIFIC DATA AUGMENTATION", "text": "Data augmentations play a key role in contrastive representation learning, and therefore it raises a question when applying them to domains with a limited or no knowledge of such augmentations. In this section, we study the effectiveness of i-Mix as a domain-agnostic strategy for contrastive representation learning, which can be adapted to different domains. Table 3 shows the performance of MoCo v2 and i-Mix with and without data augmentations. We observe significant performance gains with i-Mix when other data augmentations are not applied. For example, compared to the accuracy of 93.5% on CIFAR-10 when other data augmentations are applied, contrastive learning achieves 47.7% when trained without any data augmentations. This suggests that data augmentation is an essential part for the success of contrastive representation learning (Chen et al., 2020a). However, i-Mix is able to learn meaningful representations without other data augmentations and achieves the accuracy of 83.4% on CIFAR-10.\n10Here, “scale” corresponds to the amount of data rather than image resolution.\nIn Table 3, InputMix is applied together with i-Mix to further improve the performance on image datasets. For each principal data, we mix two auxiliary data, with mixing coefficients (0.5λ1 + 0.5, 0.5λ2, 0.5λ3), where λ1, λ2, λ3∼Dirichlet(1, 1, 1).11 In the above example, while i-Mix is better than baselines, adding InputMix further improves the performance of i-Mix, i.e., from 75.1% to 83.4% on CIFAR-10, and from 50.7% to 54.0% on CIFAR-100. This confirms that InputMix can further improve the performance when domain-specific data augmentations are not available, as discussed in Section 3.3. Moreover, we verify its effectiveness on other domains beyond the image domain. For example, the performance improves from 76.9% to 92.8% on the Speech Commands dataset when we assume no other data augmentations are available. We also observe consistent improvements in accuracy for tabular datasets, even when the training dataset size is large. Although the domain knowledge for data augmentations is important to achieve state-of-the-art results, our demonstration shows the potential of i-Mix to be used for a wide range of application domains where domain knowledge is particularly limited.\n4.5 TRANSFERABILITY OF i-MIX\nIn this section, we show the improved transferability of the representations learned with i-Mix. The results are provided in Table 4. First, we train linear classifiers with downstream datasets different from the pretext dataset used to train backbone networks and evaluate their performance, e.g., CIFAR-10 as pretext and CIFAR-100 as downstream datasets or vice versa. We observe consistent performance gains when learned representations from one dataset are evaluated on classification tasks of another dataset. Next, we transfer representations trained on ImageNet to the PASCAL VOC object detection task (Everingham et al., 2010). We follow the settings in prior works (He et al., 2020; Chen et al., 2020b): the parameters of the pre-trained ResNet-50 are transferred to a Faster R-CNN detector with the ResNet50-C4 backbone (Ren et al., 2015), and fine-tuned end-to-end on the VOC 07+12 trainval dataset and evaluated on the VOC 07 test dataset. We report the average precision (AP) averaged over IoU thresholds between 50% to 95% at a step of 5%, and AP50 and AP75, which are AP values when IoU threshold is 50% and 75%, respectively. 
Similar to Table 2, we observe small but consistent performance gains in all metrics. Those results confirm that i-Mix improves the quality of learned representations, such that performances on downstream tasks are improved." }, { "heading": "5 CONCLUSION", "text": "We propose i-Mix, a domain-agnostic regularization strategy applicable to a class of self-supervised learning. The key idea of i-Mix is to introduce a virtual label to each data instance, and mix both inputs and the corresponding virtual labels. We show that i-Mix is applicable to state-of-the-art self-supervised representation learning methods including SimCLR, MoCo, and BYOL, which consistently improves the performance in a variety of settings and domains. Our experimental results indicate that i-Mix is particularly effective when the training dataset size is small or data augmentation is not available, each of which are prevalent in practice.\n11This guarantees that the mixing coefficient for the principal data is larger than 0.5 to prevent from training with noisy labels. Note that Beckham et al. (2019) also sampled mixing coefficients from the Dirichlet distribution for mixing more than two data." }, { "heading": "B PROOF OF THE LINEARITY OF LOSSES WITH RESPECT TO VIRTUAL LABELS", "text": "Cross-entropy loss. The loss used in contrastive representation learning works, which is often referred to as InfoNCE (Oord et al., 2018), can be represented in the form of the cross-entropy loss as we showed for N-pair contrastive learning, SimCLR (Chen et al., 2020a), and MoCo (He et al., 2020). Here we provide an example in the case of N-pair contrastive learning. Let fλij = f(λxi + (1−λ)xj) for conciseness.\n`i-MixN-pair ( (xi, vi), (xj , vj);B, λ ) = `N-pair(λxi + (1− λ)xj , λvi + (1− λ)vj ;B)\n= − N∑ n=1 (λvi,n + (1− λ)vj,n) log exp\n( s(fλij , f̃n)/τ )∑N k=1 exp ( s(fλij , f̃k)/τ\n) = −λ\nN∑ n=1 vi,n log exp\n( s(fλij , f̃n)/τ )∑N k=1 exp ( s(fλij , f̃k)/τ ) − (1− λ) N∑ n=1 vj,n log exp ( s(fλij , f̃n)/τ )∑N k=1 exp ( s(fλij , f̃k)/τ\n) = λ`N-pair(λxi + (1− λ)xj , vi;B) + (1− λ)`N-pair(λxi + (1− λ)xj , vj ;B). (B.1)\nL2 loss between L2-normalized feature vectors. The BYOL (Grill et al., 2020) loss is in this type. Let F̃ = [f̃1/‖f̃1‖, . . ., f̃N/‖f̃N‖]∈RD×N such that f̃i/‖f̃i‖= F̃ vi, and ḡ= g(f(λxi + (1−λ)xj))/‖g(f(λxi + (1−λ)xj))‖ for conciseness.\n`i-MixBYOL ( (xi, vi), (xj , vj);B, λ ) = `BYOL(λxi + (1− λ)xj , λvi + (1− λ)vj)\n= ∥∥∥ḡ − F̃ (λvi + (1− λ)vj)∥∥∥2 = ∥∥∥ḡ − (λF̃vi + (1− λ)F̃ vj)∥∥∥2\n= 1− 2 · ḡ> ( λF̃vi + (1− λ)F̃ vj ) + ∥∥∥λF̃vi + (1− λ)F̃ vj∥∥∥2\n= 2− 2 · ḡ> ( λF̃vi + (1− λ)F̃ vj ) + const\n= λ‖ḡ − F̃ vi‖2 + (1− λ)‖ḡ − F̃ vj‖2 + const = λ`BYOL(λxi + (1− λ)xj , vi;B) + (1− λ)`BYOL(λxi + (1− λ)xj , vj ;B) + const. (B.2)\nBecause F̃ is not backpropagated, it can be considered as a constant." }, { "heading": "C MORE ON EXPERIMENTS", "text": "We describe details of the experimental settings and more experimental results. For additional experiments below, we adapted the code for supervised contrastive learning (Khosla et al., 2020).13\nC.1 SETUP\nIn this section, we describe details of the experimental settings. Note that the learning rate is scaled by the batch size (Goyal et al., 2017): ScaledLearningRate = LearningRate× BatchSize/256. Image. The experiments on CIFAR-10 and 100 (Krizhevsky & Hinton, 2009) and ImageNet (Deng et al., 2009) are conducted in two stages: following Chen et al. 
(2020a), the convolutional neural network (CNN) part of ResNet-50 (He et al., 2016)14 followed by the two-layer multilayer perceptron (MLP) projection head (output dimensions are 2048 and 128, respectively) is trained on the unlabeled pretext dataset with a batch size of 256 (i.e., 512 augmented data) with the stochastic gradient descent (SGD) optimizer with a momentum of 0.9 over up to 4000 epochs. BYOL has an additional prediction head (output dimensions are the same with the projection head), which follows the projection head, only for the model updated by gradient. 10 epochs of warmup with a linear schedule to an initial learning rate of 0.125, followed by the cosine learning rate schedule (Loshchilov & Hutter, 2017) is used. We use the weight decay of 0.0001 for the first stage. For ImageNet, we use the same hyperparameters except that the batch size is 512 and the initial learning rate is 0.03.\n13https://github.com/HobbitLong/SupContrast 14For small resolution data from CIFAR and Speech Commands, we replaced the kernal, stride, and padding size from (7,2,3) to (3,1,1) in the first convolutional layer, and removed the first max pooling layer, following Chen et al. (2020a).\nThen, the head of the CNN is replaced with a linear classifier, and only the linear classifier is trained with the labeled downstream dataset. For the second stage, we use a batch size of 256 with the SGD optimizer with a momentum of 0.9 and an initial learning rate chosen among {1, 3, 5, 10, 30, 50, 70} over 100 epochs, where the learning rate is decayed by 0.2 after 80, 90, 95 epochs. No weight decay is used at the second stage. The quality of representation is evaluated by the top-1 accuracy on the downstream task. We sample a single mixing coefficient λ∼Beta(1, 1) for each training batch. The temperature is set to τ = 0.2. Note that the optimal distribution of λ and the optimal value of τ varies over different architectures, methods, and datasets, but the choices above result in a reasonably good performance. The memory bank size of MoCo is 65536 for ImageNet and 4096 for other datasets, and the momentum for the exponential moving average (EMA) update is 0.999 for MoCo and BYOL. We do not symmetrize the BYOL loss, as it does not significantly improve the performance while increasing computational complexity. For data augmentation, we follow Chen et al. (2020a): We apply a set of data augmentations randomly in sequence including resized cropping (Szegedy et al., 2015), horizontal flipping with a probability of 0.5, color jittering,15 and gray scaling with a probability of 0.2. A Gaussian blurring with σ ∈ [0.1, 2] and kernel size of 10% of the image height/width is applied for ImageNet. For evaluation on downstream tasks, we apply padded cropping with the pad size of 4 and horizontal flipping for CIFAR-10 and 100, and resized cropping and horizontal flipping for ImageNet.\nSpeech. In the experiments on Speech Commands (Warden, 2018), the network is the same with the image domain experiments, except that the number of input channels is one instead of three. The temperature is set to τ = 0.5 for the standard setting and τ = 0.2 for the no augmentation setting. 10% of silence data (all zero) are added when training. At the first stage, the model is trained with the SGD optimizer with a momentum of 0.9 and an initial learning rate of 0.125 over 500 epochs, where the learning rate decays by 0.1 after 300 and 400 epochs and the weight decay is 0.0001. The other settings are the same with the experiments on CIFAR. 
For data augmentation,16 we apply a set of data augmentations randomly in sequence including changing amplitude, speed, and pitch in time domain, stretching, time shifting, and adding background noise in frequency domain. Each data augmentation is applied with a probability of 0.5. Augmented data are then transformed to the mel spectogram in the size of 32× 32. Tabular. In the experiments on CovType and Higgs (Asuncion & Newman, 2007), we take a fivelayer MLP with batch normalization as a backbone network. The output dimensions of layers are (2048-2048-4096-4096-8192), where all layers have batch normalization followed by ReLU except for the last layer. The last layer activation is maxout (Goodfellow et al., 2013) with 4 sets, such that the output dimension is 2048. On top of this five-layer MLP, we attach two-layer MLP (2048-128) as a projection head. We sample a single mixing coefficient λ∼Beta(α, α) for each training batch, where α= 2 for CovType and Higgs100k, and α= 1 for Higgs1M. The temperature is set to τ = 0.1. The other settings are the same with the experiments on CIFAR, except that the batch size is 512 and the number of training epochs is 500. At the second stage, the MLP head is replaced with a linear classifier. For Higgs, the classifier is computed by linear regression from the feature matrix obtained without data augmentation to the label matrix using the pseudoinverse. Since the prior knowledge on tabular data is very limited, only the masking noise with a probability of 0.2 is considered as a data augmentation.\nC.2 VARIATIONS OF i-MIX\nWe compare the MixUp (Zhang et al., 2018) and CutMix (Yun et al., 2019) variation of i-Mix on N-pair contrastive learning and SimCLR. To distinguish them, we call them i-MixUp and i-CutMix, respectively. To be fair with the memory usage in the pretext task stage, we reduce the batch size of i-MixUp and i-CutMix by half (256 to 128) for SimCLR. Following the learning rate adjustment strategy in Goyal et al. (2017), we also decrease the learning rate by half (0.125 to 0.0625) when the batch size is reduced. We note that i-MixUp and i-CutMix on SimCLR take approximately 2.5 times more training time to achieve the same number of training epochs. The results are provided in Table C.1. We first verify that the N-pair formulation results in no worse performance than that of SimCLR. This justifies to conduct experiments using the N-pair formulation instead of that of\n15Specifically, brightness, contrast, and saturation are scaled by a factor uniformly sampled from [0.6, 1.4] at random, and hue is rotated in the HSV space by a factor uniformly sampled from [−0.1, 0.1] at random.\n16https://github.com/tugstugi/pytorch-speech-commands\nSimCLR, which is simpler and more efficient, especially when applying i-Mix, while not losing the performance. When pretext and downstream tasks share the training dataset, i-CutMix often outperforms i-MixUp, though the margin is small. However, i-CutMix shows a worse performance in transfer learning. Table C.2 compares the performance of SimCLR, N-pair contrastive learning, and i-Mix on N-pair contrastive learning when the pretext task is self-supervised and supervised contrastive learning. We confirm that the N-pair formulation results in no worse performance than that of SimCLR in supervised contrastive learning as well. 
i-Mix improves the performance of supervised contrastive learning from 95.7% to 97.0% on CIFAR-10, similarly to improvement achieved by MixUp for supervised learning where it improves the performance of supervised classifier learning from 95.5% to 96.6%. On the other hand, when the pretext dataset is CIFAR-100, the performance of supervised contrastive learning is not better than that of supervised learning: MixUp improves the performance of supervised classifier learning from 78.9% to 82.2%, and i-Mix improves the performance of supervised contrastive learning from 74.6% to 78.4%. While supervised i-Mix improves the classification accuracy on CIFAR-10 when trained on CIFAR10, the representation does not transfer well to CIFAR-100, possibly due to overfitting to 10 class classification. When pretext dataset is CIFAR-100, supervised contrastive learning shows a better performance than self-supervised contrastive learning regardless of the distribution shift, as it learns sufficiently general representation for linear classifier to work well on CIFAR-10 as well.\nC.3 QUALITATIVE EMBEDDING ANALYSIS\nFigure C.1 visualizes embedding spaces learned by N-pair contrastive learning and i-Mix on CIFAR10 and 100. When the downstream dataset is the same with the pretext task, both contrastive learning and i-Mix cluster classes well, as shown in Figure C.1(a) and C.1(b). However, when the downstream task is transferred to CIFAR-100, i-Mix in Figure C.1(d) clusters classes better than contrastive\nlearning in Figure C.1(c). Specifically, clusters of “apple,” “chair,” and “dolphin,” can be found in Figure C.1(d) while they spread out in Figure C.1(c). Also, “rose” and “squirrel” are more separated in Figure C.1(d) than C.1(c). This shows that the representation learned with i-Mix is more generalizable than vanilla contrastive learning.\nC.4 QUANTITATIVE EMBEDDING ANALYSIS\nTo estimate the quality of representation by the similarity between training and test data distribution, we measure the Fréchet embedding distance (FED): similarly to the Fréchet inception distance (FID) introduced in Heusel et al. (2017), FED is the Fréchet distance (Fréchet, 1957; Vaserstein, 1969) between the set of training and test embedding vectors under the Gaussian distribution assumption. For conciseness, let f̄i = f(xi)/‖f(xi)‖ be an `2 normalized embedding vector; we normalize embedding vectors as we do when we measure the cosine similarity. Then, with the estimated mean m = 1N ∑N i=1 f̄i and the estimated covariance S = 1 N ∑N i=1(f̄i −m)(f̄i −m)>, the FED can be\ndefined as d2 ( (mtr, Str), (mte, Ste) ) = ‖mtr −mte‖2 + Tr ( Str + Ste − 2(StrSte) 12 ) . (C.1)\nAs shown in Table C.3, i-Mix improves FED over contrastive learning, regardless of the distribution shift. Note that the distance is large when the training dataset of the downstream task is the same with that of the pretext task. This is because the model is overfit to the training dataset, such that the distance from the test dataset, which is unseen during training, has to be large. On the other hand, Table C.3 shows that i-Mix reduces the gap between the training and test accuracy. This implies that i-Mix is an effective regularization method for pretext tasks, such that the learned representation is more generalizable on downstream tasks." } ]
2,021
i-MIX: A DOMAIN-AGNOSTIC STRATEGY FOR CONTRASTIVE REPRESENTATION LEARNING
SP:1355359d2a6ca8940e4c3fa3f858779f49156d49
[ "The paper presents an approach to find canonical directions in a streaming fashion, i.e. without direct calculation of covariance matrices (which becomes hard when the number of examples is large). This solution to that task is not obvious, because the objective function of CCA, together with whitening constraints, does not allow simple additive decomposition." ]
We present an efficient stochastic algorithm (RSG+) for canonical correlation analysis (CCA) derived via a differential geometric perspective of the underlying optimization task. We show that exploiting the Riemannian structure of the problem reveals natural strategies for modified forms of manifold stochastic gradient descent schemes that have been variously used in the literature for numerical optimization on manifolds. Our developments complement existing methods for this problem which either require O(d) time complexity per iteration with O( 1 √ t ) convergence rate (where d is the dimensionality) or only extract the top 1 component with O( 1t ) convergence rate. In contrast, our algorithm achieves O(dk) runtime complexity per iteration for extracting top k canonical components with O( 1t ) convergence rate. We present our theoretical analysis as well as experiments describing the empirical behavior of our algorithm, including a potential application of this idea for training fair models where the label of protected attribute is missing or otherwise unavailable.
[]
[ { "authors": [ "P-A Absil", "Robert Mahony", "Rodolphe Sepulchre" ], "title": "Riemannian geometry of grassmann manifolds with a view on algorithmic computation", "venue": "Acta Applicandae Mathematica,", "year": 2004 }, { "authors": [ "Pierre-Antoine Absil", "Robert E. Mahony", "Rodolphe Sepulchre" ], "title": "Optimization algorithms on matrix manifolds", "venue": null, "year": 2007 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li" ], "title": "Doubly accelerated methods for faster cca and generalized eigendecomposition", "venue": "In ICML,", "year": 2016 }, { "authors": [ "Galen Andrew", "Raman Arora", "Jeff Bilmes", "Karen Livescu" ], "title": "Deep canonical correlation analysis", "venue": "In International conference on machine learning,", "year": 2013 }, { "authors": [ "Raman Arora", "Teodor Vanislavov Marinov", "Poorya Mianjy", "Nati Srebro" ], "title": "Stochastic approximation for canonical correlation analysis", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Gary Bécigneul", "Octavian-Eugen Ganea" ], "title": "Riemannian adaptive optimization methods", "venue": "arXiv preprint arXiv:1810.00760,", "year": 2018 }, { "authors": [ "Kush Bhatia", "Aldo Pacchiano", "Nicolas Flammarion", "Peter L Bartlett", "Michael I Jordan" ], "title": "Gen-oja: Simple & efficient algorithm for streaming generalized eigenvector computation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Silvere Bonnabel" ], "title": "Stochastic gradient descent on riemannian manifolds", "venue": "IEEE Transactions on Automatic Control,", "year": 2013 }, { "authors": [ "William M Boothby" ], "title": "An introduction to differentiable manifolds and Riemannian geometry", "venue": "Academic press,", "year": 1986 }, { "authors": [ "Nicolas Boumal", "Bamdev Mishra", "P-A Absil", "Rodolphe Sepulchre" ], "title": "Manopt, a matlab toolbox for optimization on manifolds", "venue": "The Journal of Machine Learning Research,", "year": 2014 }, { "authors": [ "Rudrasis Chakraborty", "Liu Yang", "Soren Hauberg", "Baba Vemuri" ], "title": "Intrinsic grassmann averages for online linear, robust and nonlinear subspace learning", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2020 }, { "authors": [ "Kamalika Chaudhuri", "Sham M Kakade", "Karen Livescu", "Karthik Sridharan" ], "title": "Multi-view clustering via canonical correlation analysis", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Heather D. Couture", "Roland Kwitt", "J.S. Marron", "Melissa A. Troester", "Charles M. Perou", "Marc Niethammer" ], "title": "Deep multi-view learning via task-optimal", "venue": "cca. ArXiv,", "year": 2019 }, { "authors": [ "Michele Donini", "Luca Oneto", "Shai Ben-David", "John S Shawe-Taylor", "Massimiliano Pontil" ], "title": "Empirical risk minimization under fairness constraints", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Alan Edelman", "Tomás A. Arias", "Steven Thomas Smith" ], "title": "The geometry of algorithms with orthogonality constraints", "venue": "SIAM J. Matrix Anal. 
Appl.,", "year": 1998 }, { "authors": [ "Chao Gao", "Dan Garber", "Nathan Srebro", "Jialei Wang", "Weiran Wang" ], "title": "Stochastic canonical correlation analysis", "venue": "Journal of Machine Learning Research,", "year": 2019 }, { "authors": [ "Rong Ge", "Chi Jin", "Praneeth Netrapalli", "Aaron Sidford" ], "title": "Efficient algorithms for large-scale generalized eigenvector computation and canonical correlation analysis", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Gene H Golub", "Christian Reinsch" ], "title": "Singular value decomposition and least squares solutions", "venue": "In Linear Algebra,", "year": 1971 }, { "authors": [ "Gene H. Golub", "Hongyuan Zha" ], "title": "The canonical correlations of matrix pairs and their numerical computation", "venue": null, "year": 1992 }, { "authors": [ "Moritz Hardt", "Eric Price", "Nati Srebro" ], "title": "Equality of opportunity in supervised learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Sigurdur Helgason" ], "title": "Differential geometry and symmetric spaces, volume 341", "venue": "American Mathematical Soc.,", "year": 2001 }, { "authors": [ "Mariya Ishteva", "Pierre-Antoine Absil", "Sabine Van Huffel", "Lieven De Lathauwer" ], "title": "Best low multilinear rank approximation of higher-order tensors, based on the riemannian trust-region scheme", "venue": "SIAM J. Matrix Anal. Appl.,", "year": 2011 }, { "authors": [ "Tetsuya Kaneko", "Simone Fiori", "Toshihisa Tanaka" ], "title": "Empirical arithmetic averaging over the compact stiefel manifold", "venue": "IEEE Transactions on Signal Processing,", "year": 2012 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Yann LeCun", "Corinna Cortes", "CJ Burges" ], "title": "Mnist handwritten digit database", "venue": "ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist,", "year": 2010 }, { "authors": [ "Jun Li", "Li Fuxin", "Sinisa Todorovic" ], "title": "Efficient riemannian optimization on the stiefel manifold via the cayley transform", "venue": "arXiv preprint arXiv:2002.01113,", "year": 2020 }, { "authors": [ "Vishnu Suresh Lokhande", "Aditya Kumar Akash", "Sathya N Ravi", "Vikas Singh" ], "title": "Fairalm: Augmented lagrangian method for training fair models with little regret", "venue": "arXiv preprint arXiv:2004.01355,", "year": 2020 }, { "authors": [ "Yong Luo", "Dacheng Tao", "Kotagiri Ramamohanarao", "Chao Xu", "Yonggang Wen" ], "title": "Tensor canonical correlation analysis for multi-view dimension reduction", "venue": "IEEE transactions on Knowledge and Data Engineering,", "year": 2015 }, { "authors": [ "Zhuang Ma", "Yichao Lu", "Dean P. Foster" ], "title": "Finding linear structure in large datasets with scalable canonical correlation analysis", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Ari Morcos", "Maithra Raghu", "Samy Bengio" ], "title": "Insights on representational similarity in neural networks with canonical correlation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Arkadi Nemirovski", "Anatoli B. 
Juditsky", "Guanghui Lan", "Alexander Shapiro" ], "title": "Robust stochastic approximation approach to stochastic programming", "venue": "SIAM J. Optimization,", "year": 2009 }, { "authors": [ "Erkki Oja" ], "title": "Simplified neuron model as a principal component analyzer", "venue": "Journal of Mathematical Biology,", "year": 1982 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "W Nicholson Price", "I Glenn Cohen" ], "title": "Privacy in the age of medical big data", "venue": "Nature medicine,", "year": 2019 }, { "authors": [ "Markus Reiß", "Martin Wahl" ], "title": "Nonasymptotic upper bounds for the reconstruction error of pca", "venue": "Annals of Statistics,", "year": 2020 }, { "authors": [ "Danilo Jimenez Rezende", "George Papamakarios", "Sébastien Racanière", "Michael S Albergo", "Gurtej Kanwar", "Phiala E Shanahan", "Kyle Cranmer" ], "title": "Normalizing flows on tori and spheres", "venue": null, "year": 2002 }, { "authors": [ "Jan Rupnik", "John Shawe-Taylor" ], "title": "Multi-view canonical correlation analysis", "venue": "In Conference on Data Mining and Data Warehouses (SiKDD", "year": 2010 }, { "authors": [ "Cees GM Snoek", "Marcel Worring", "Jan C Van Gemert", "Jan-Mark Geusebroek", "Arnold WM Smeulders" ], "title": "The challenge problem for automated detection of 101 semantic concepts in multimedia", "venue": "In Proceedings of the 14th ACM international conference on Multimedia,", "year": 2006 }, { "authors": [ "Raghav Subbarao", "Peter Meer" ], "title": "Nonlinear mean shift over riemannian manifolds", "venue": "International journal of computer vision,", "year": 2009 }, { "authors": [ "Mingkui Tan", "Ivor Wai-Hung Tsang", "Li Wang", "Bart Vandereycken", "Sinno Jialin Pan" ], "title": "Riemannian pursuit for big matrix recovery", "venue": "In ICML,", "year": 2014 }, { "authors": [ "Roman Vershynin" ], "title": "Four lectures on probabilistic methods for data science", "venue": "The Mathematics of Data, IAS/Park City Mathematics Series,", "year": 2017 }, { "authors": [ "Weiran Wang", "Raman Arora", "Karen Livescu", "Jeff Bilmes" ], "title": "On deep multi-view representation learning", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Weiran Wang", "Raman Arora", "Karen Livescu", "Nathan Srebro" ], "title": "Stochastic optimization for deep cca via nonlinear orthogonal iterations", "venue": "In 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton),", "year": 2015 }, { "authors": [ "Weiran Wang", "Jialei Wang", "Dan Garber", "Nati Srebro" ], "title": "Efficient globally convergent stochastic optimization for canonical correlation analysis", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Sirui Yao", "Bert Huang" ], "title": "Beyond parity: Fairness objectives for collaborative filtering", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Florian Yger", "Maxime Berar", "Gilles Gasso", "Alain Rakotomamonjy" ], "title": "Adaptive canonical correlation analysis based on matrix manifolds", "venue": "In ICML,", "year": 2012 }, { "authors": [ "Brian Hu Zhang", "Blake Lemoine", "Margaret 
Mitchell" ], "title": "Mitigating unwanted biases with adversarial learning", "venue": "In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society,", "year": 2018 }, { "authors": [ "Hardt" ], "title": "A classifier h is said to satisfy EO if the prediction is independent of the protected attribute s (in our experiment s is a binary variable where", "venue": "Equality of Opportunity", "year": 2016 } ]
[ { "heading": null, "text": "t ) convergence\nrate (where d is the dimensionality) or only extract the top 1 component with O( 1t ) convergence rate. In contrast, our algorithm achieves O(d2k) runtime complexity per iteration for extracting top k canonical components with O( 1t ) convergence rate. We present our theoretical analysis as well as experiments describing the empirical behavior of our algorithm, including a potential application of this idea for training fair models where the label of protected attribute is missing or otherwise unavailable." }, { "heading": "1 INTRODUCTION", "text": "Canonical correlation analysis (CCA) is a popular method for evaluating correlations between two sets of variables. It is commonly used in unsupervised multi-view learning, where the multiple views of the data may correspond to image, text, audio and so on Rupnik & Shawe-Taylor (2010); Chaudhuri et al. (2009); Luo et al. (2015). Classical CCA formulations have also been extended to leverage advances in representation learning, for example, Andrew et al. (2013) showed how the CCA can be interfaced with deep neural networks enabling modern use cases. Many results over the last few years have used CCA or its variants for problems including measuring representational similarity in deep neural networks Morcos et al. (2018), speech recognition Couture et al. (2019), etc.\nThe goal in CCA is to find linear combinations within two random variables X and Y which have maximum correlation with each other. Formally, the CCA problem is defined in the following way. Given a pair of random variables, a dx-variate random variable X and a dy-variate random variable Y, with unknown joint probability distribution, find the projection matrices U ∈ Rdx×k and V ∈ Rdy×k, with k ≤ min{dx, dy}, such that the correlation is maximized:\nmaximize trace ( UTEX,Y [ XTY ] V ) s.t. UTEX [ XTX ] U = Ik, V TEY [ Y TY ] V = Ik (1)\nHere, X,Y are samples of X and Y respectively. The objective function in (1) is the expected crosscorrelation in the projected space and the constraints specify that different canonical components should be decorrelated. Let CX = EX[XTX] and CY = EY[Y TY ] be the covariance matrices, and CXY = E(X,Y)[XTY ] denote cross-covariance. Let us define the whitened covariance T := C −1/2 X CXY C −1/2 Y and Φk (and Ψk) contains the top-k left (and right) singular vectors of T . It is known Golub & Zha (1992) that the optimum of (1) is achieved at U∗ = C−1/2X Φk, V ∗ = C −1/2 Y Ψk.\nIn practice, we may be given two views of N samples as X ∈ RN×dx and Y ∈ RN×dy . A natural approach to solving CCA is based on on the following sequence of steps. We first compute the empirical covariance and cross-covariance matrices, namely, C̃X = 1/NXTX , C̃Y = 1/NY TY and C̃XY = 1/NX\nTY . We then calculate the empirical whitened cross-covariance matrix T̃ , finally, compute U∗, V ∗ by applying a k-truncated SVD to T̃ .\nRuntime and memory considerations. The above procedure is simple but is only feasible when the data matrices are small. But in most modern applications, not only are the datasets large but also the dimension d (let d = max{dx, dy}) of each sample can be quite high, especially if representations are being learned using deep neural network models. As a result, the computational footprint of the foregoing algorithm can be quite high. This has motivated the study of stochastic optimization routines for solving CCA. 
Observe that in contrast to the typical settings where stochastic optimization schemes are most effective, the CCA objective does not decompose over samples in the dataset. Many efficient strategies have been proposed in the literature: for example, Ge et al. (2016); Wang et al. (2016) present Empirical Risk Minimization (ERM) models which optimize the empirical objective. More recently, Gao et al. (2019); Bhatia et al. (2018); Arora et al. (2017) describe proposals that optimize the population objective. To summarize the approaches succinctly, if we are satisfied with identifying the top 1 component of CCA, effective schemes are available by utilizing either extensions of the Oja’s rule Oja (1982) to generalized eigenvalue problem Bhatia et al. (2018) or the alternating SVRG algorithm Gao et al. (2019)). Otherwise, a stochastic approach must make use of an explicit whitening operation which involves a cost of d3 for each iteration Arora et al. (2017).\nObservation. Most approaches either directly optimize (1) or instead a reparametrized or regularized form Ge et al. (2016); Allen-Zhu & Li (2016); Arora et al. (2017). Often, the search space for U and V corresponds to the entire Rd×k (ignoring the constraints for the moment). But if the formulation could be cast in a form which involved approximately writing U and V as a product of several matrices with nicer properties, we may obtain specialized routines which are tailored to exploit those properties. Such a reformulation is not difficult to derive – where the matrices used to express U and V can be identified as objects that live in well studied geometric spaces. Then, utilizing the geometry of the space and borrowing relevant tools from differential geometry leads to an efficient approximate algorithm for top-k CCA which optimizes the population objective in a streaming fashion.\nContributions. (a) First, we re-parameterize the top-k CCA problem as an optimization problem on specific matrix manifolds, and show that it is equivalent to the original formulation in equation 1. (b) Informed by the geometry of the manifold, we derive stochastic gradient descent algorithms for solving the re-parameterized problem with O(d2k) cost per iteration and provide convergence rate guarantees. (c) This analysis provides a direct mechanism to obtain an upper bound on the number of iterations needed to guarantee an error w.r.t. the population objective for the CCA problem. (d) The algorithm works in a streaming manner so it easily scales to large datasets and we do not need to assume access to the full dataset at the outset. (e) We present empirical evidence for both the standard CCA model and the DeepCCA setting Andrew et al. (2013), describing advantages and limitations." }, { "heading": "2 STOCHASTIC CCA: REFORMULATION, ALGORITHM AND ANALYSIS", "text": "The formulation of Stochastic CCA and the subsequent optimization scheme will seek to utilize the geometry of the feasible set for computational gains. Specifically, we will use the following manifolds (please see Absil et al. (2007) for more details):\n(a) Stiefel: St(p, n). The manifold consists of n× p, with p < n, column orthonormal matrices, i.e., St(p, n) = { X ∈ Rn×p|XTX = Ip } . (b) Grassmanian: Gr(p, n). The manifold consists of p-dimensional subspaces in Rn, with p < n. (c) Rotations: SO(n). 
the manifold/group consists of n × n special orthogonal matrices, i.e.,\nSO(n) = { X ∈ Rn×n|XTX = XXT = In, det(X) = 1 } .\nWe summarize certain geometric properties/operations for these manifolds in the Appendix but have been leveraged in recent works for other problems also Li et al. (2020); Rezende et al. (2020).\nLet us recall the objective function for CCA as given in (1). We denote X ∈ RN×dx as the matrix consisting of the samples {xi} drawn from a zero mean random variable X ∼ X and Y ∈ RN×dy denotes the matrix consisting of samples {yi} drawn from a zero mean random variable Y ∼ Y . For notational and formulation simplicity, we assume that dx = dy = d in the remainder of the paper although the results hold for general dx and dy .\nLet CX , CY be the covariance matrices of X, Y. Also, let CXY be the cross-correlation matrix between X and Y. Then, we can write the CCA objective as\nmax U,V\nF = trace ( UTCXY V ) subject to UTCXU = Ik V TCY V = Ik (2)\nHere, U ∈ Rd×k (V ∈ Rd×k) is the matrix consisting of {uj} ({vj}) , where ({uj} , {vj}) are the canonical directions. The constraints in equation 2 are called whitening constraints.\nLet us define matrices Ũ , Ṽ ∈ Rd×k which lie on the Stiefel manifold, St(k, d). Also, let Su, Sv ∈ Rk×k denote upper triangular matrices and Qu, Qv ∈ SO(k). We can rewrite the above equation and the constraint as follows.\nA Reformulation for CCA\nmax Ũ,Ṽ ,Su,Sv,Qu,Qv\nU=ŨQuSu; V=Ṽ QvSv\nF̃ = trace ( UTCXY V ) (3a)\nsubject to UTCXU = Ik; V TCY V = Ik (3b)\nŨ , Ṽ ∈ St(k, d); Qu, Qv ∈ SO(k); Su, Sv is upper triangular\nHere, we will maximize (3a) with respect to Ũ , Ṽ , Su, Sv , Qu, and Qv satisfying equation 3b.\nMain adjustment from (2) to (3): In (2), while U and V should decorrelateCX andCY respectively, the optimization/search is unrestricted and treats them as arbitrary matrices. In contrast, equation 3 additionally decomposes U and V as U = ŨQuSu and V = Ṽ QvSv with the components as structured matrices. Hence, the optimization is regularized.\nThe above adjustment raises two questions: (i) does there exist a non-empty feasible solution set for (3)? (ii) if a solution to (3) can be found (which we will describe later), how “good” is this solution for the CCA objective problem, i.e., for (2)?\nExistence of a feasible solution: We need to evaluate if the constraints in (3b) can be satisfied at all. Observe that by using Ũ to be the top-k principal directions of X , Su to be the 1/ √\ntop-k eigen values of CX and Qu to be any orthogonal matrix, we can easily satisfy the “whitening constraint” and hence ŨQuSu is a feasible solution of U in (3) and similarly for V . From this starting point, we can optimize the objective while ensuring that we maintain feasibility.\nIs the solution for equation 3 a good approximation for equation 2?: We can show that under some assumptions, the estimator for canonical correlation, i.e., solution of equation 3, is consistent, i.e., solves equation 2. We will state this formally shortly.\nBefore characterizing the properties of a solution for equation 3, we first provide some additional intuition behind equation 3 and describe how it helps computationally.\nIntuition behind the decomposition U = ŨQuSu: A key observation is the following. Recall that by performing principal component analysis (PCA), the resultant projection matrix will exactly satisfy the decorrelation condition needed for the “whitening constraint” in equation 2 (projection matrix consists of the eigen-vectors of XTX). 
A natural question to ask is: Can we utilize streaming PCA algorithm to help us obtain an efficient streaming CCA algorithm? Let us assume that our estimate for canonical correlation directions, i.e., solutions of equation 3, lies in the principal subspace calculated above. If so, we can use the decomposition U = ŨAu (analogously for V ), where Ũ contains the principal directions, i.e., ∈ St(k, d) and Au is a full rank k × k matrix containing the coefficients of the span. But maintaining the full rank constraint during optimization is hard, so we further decompose Au into Au = QuSu with Qu ∈ SO(k); Su is upper triangular. Additionally, we ensure the diagonal of Su to be non-zero to maintain full-rank of Su. During optimization, we can maintain the non-zero diagonal entries by optimizing the log of the diagonal entries instead.\nWhy equation 3 helps? First, we note that CCA seeks to maximize the total correlation under the constraint that different components are decorrelated. The difficult part in the optimization is to ensure decorrelation, which leads to a higher complexity in existing streaming CCA algorithms. On the contrary, in equation 3, we separate equation 2 into finding the PCs and finding the linear coefficients for the span of principal directions. Then, by utilizing an efficient streaming PCA algorithm, a lower complexity can be achieved. We will defer describing the specific details of the optimization itself until the next sub-section. First, we will show formally why substituting equation 2 with equation 3a–equation 3b is sensible under some assumptions." }, { "heading": "2.1 HOW TO USE THE REFORMULATION IN EQUATION 3?", "text": "We first start by stating some mild assumptions needed for the analysis.\nAssumptions: (a) The random variables X ∼ N (0,Σx) and Y ∼ N (0,Σy) with Σx cId and Σy cId for some c > 0. (b) The samples X and Y drawn from X and Y respectively have zero mean. (c) For a given k ≤ d, Σx and Σy have non-zero top-k eigen values.\nA high-level solution to optimize F̃ in equation 3: Recall the following scheme which we briefly summarized earlier.\n(a) Initialize Ũ , Ṽ ∈ St(k, d) as the top-k eigen vectors of CX = (1/N)XTX and CY = (1/N)Y TY respectively; Initialize Qu and Qv to be random SO(k) matrices; (b) Set Su and Sv to be diagonal matrices with the diagonal entries to be the square root of the inverse of the top-k eigen values (to satisfy upper-triangular property); Observe that with this initialization, the constraints in equation 3b are satisfied. With a feasible solution for U and V in hand, we may optimize equation 3a while satisfying equation 3b. The specific details of how this is done is not critical at this point as long as we assume that a suitable numerical optimization scheme exists and can be implemented.\nWith the component matrices, we can construct the solution as U = ŨQuSu and V = Ṽ QvSv .\nWhy the solution makes sense? We now show how the presented solution, assuming access to an effective numerical procedure, approximates the CCA problem presented in equation 2. We formally state the result in the following theorem with a sketch of proof (appendix includes the full proof) by first stating the following proposition and a definition.\nDefinition 1. A random variable X is called sub-Gaussian if the norm given by ‖X‖? := inf {d ≥ 0|EX [exp (trace(XTX)/d2)] ≤ 2} is finite. Let U ∈ Rd×k, then XU is sub-Gaussian Vershynin (2017).\nProposition 1 (Reiß et al. (2020)). Let X be a random variable which follows a sub-Gaussian distribution. 
Let X̂ be the approximation of X ∈ RN×d (samples drawn from X ) with the top-k principal vectors. Let C̃X be the covariance of X̂ . Also, assume that λi is the ith eigen value of CX for i = 1, · · · , d− 1 and λi ≥ λi+1 for all i. Then, the PCA reconstruction error, denoted by Ek = ‖X − X̂‖ (in the Frobenius norm sense) can be upper bounded as follows\nEk ≤ min (√ 2k‖∆‖2, 2‖∆‖22\nλk − λk+1\n) , where ∆ = CX − C̃X .\nThe aforementioned proposition suggests that the error between the data matrix X and the reconstructed data matrix X̂ using the top-k principal vectors is bounded.\nRecall from (2) and (3) that the value of the CCA objective is denoted by F and F̃ . The following theorem states that we can bound the error, E = ‖F − F̃‖ (proof is in the Appendix). The proof includes upper-boundingE by the reconstruction error of the data projected on the principal directions using Prop. 1.\nTheorem 1. Using the hypothesis and assumptions above, the approximation error E = ‖F − F̃‖ is bounded and goes to zero while the whitening constraints in equation 3b are satisfied.\nNow, the only unresolved issue is an optimization scheme for equation 3a that keeps the constraints in equation 3b satisfied by leveraging the geometry of the feasible set." }, { "heading": "2.2 HOW TO NUMERICALLY OPTIMIZE (3A) SATISFYING CONSTRAINTS IN (3B)?", "text": "Overview. We now describe how to maximize the formulation in equation 3a–equation 3b with respect to Ũ , Ṽ , Qu, Qv, Su and Sv. We will first compute top-k principal vectors to get Ũ and Ṽ . Then, we will use a gradient update rule to solve for Qu, Qv, Su and Sv to improve the objective. Since all these matrices are “structured”, care must be taken to ensure that the matrices remain\non their respective manifolds – which is where the geometry of the manifolds will offer desirable properties. We re-purpose a Riemannian stochastic gradient descent (RSGD) to do this, so call our algorithm RSG+. Of course, more sophisticated Riemannian optimization techniques can be substituted in. For instance, different Riemannian optimization methods are available in Absil et al. (2007) and optimization schemes for many manifolds are offered in PyManOpt Boumal et al. (2014).\nThe algorithm block is presented in Algorithm 1 (a direct implementable block for the algorithm including the expression for gradients is presented in the Appendix A.3). Let F̃pri = trace ( UTCXU) ) + trace ( V TCY V ) ) be the contribution from the principal directions which we\nused to ensure the “whitening constraint”. Let F̃can = trace ( UTCXY V ) be the contribution from the canonical correlation directions. The algorithm consists of four main blocks denoted by different colors, namely (a) the Red block deals with gradient calculation of the objective function where we calculate the top-k principal vectors (denoted by F̃pri) with respect to Ũ , Ṽ ; (b) the Green block describes calculation of the gradient corresponding to the canonical directions (denoted by F̃can) with respect to Ũ , Ṽ , Su, Sv, Qu and Qv; (c) the Gray block combines the gradient computation from both F̃pri and F̃can with respect to unknowns Ũ , Ṽ , Su, Sv , Qu and Qv; and finally (d) the Blue block performs a batch update of the canonical directions F̃can using Riemannian gradient updates.\nGradient calculations. The gradient update for Ũ , Ṽ is divided into two parts (a) The (Red block) gradient updates the “principal” directions (denoted by ∇Ũ F̃pri and ∇Ṽ F̃pri), which is specifically designed to satisfy the whitening constraint. 
Since this requires updating the principal subspaces, the gradient descent needs to proceed on the manifold of subspaces, i.e., on the Grassmannian. (b) The (Green block) gradient from the objective function in equation 3 is denoted by ∇Ũ F̃can and ∇Ṽ F̃can. To ensure that the Riemannian gradient update for Ũ and Ṽ stays on the manifold St(k, d), the gradients ∇Ũ F̃can and ∇Ṽ F̃can must lie in the tangent space of St(k, d). To achieve this, we first calculate the Euclidean gradient and then project it onto the tangent space of St(k, d).

The gradient updates for Qu, Qv, Su, Sv are given in the Green block, denoted by ∇Qu F̃can, ∇Qv F̃can, ∇Su F̃can and ∇Sv F̃can. Note that unlike the previous step, this gradient only has components from the canonical correlation computation. As before, this step requires first computing the Euclidean gradient and then projecting it onto the tangent spaces of the underlying Riemannian manifolds involved, i.e., SO(k) and the space of upper triangular matrices.

Finally, we get the gradient to update the canonical directions by combining the gradients, as shown in the Gray block. With these gradients we can perform a batch update, as shown in the Blue block. Using the convergence results presented next in Propositions 2–3, this scheme can be shown (under some assumptions) to approximately optimize the CCA objective in equation 2.

We can now move to the convergence properties of the algorithm. We present two results stating the asymptotic convergence of the top-k principal vectors and the canonical directions in the algorithm.

Proposition 2 (Chakraborty et al. (2020)). (Asymptotically) If the samples X are drawn from a Gaussian distribution, then the gradient update rule presented in Step 5 of Algorithm 1 returns an orthonormal basis – the top-k principal vectors of the covariance matrix CX.

Proposition 3 (Bonnabel (2013)). Consider a connected Riemannian manifold M with injectivity radius bounded from below by I > 0. Assume that the sequence of step sizes (γl) satisfies the conditions (a) ∑ γl² < ∞ and (b) ∑ γl = ∞. Suppose {Al} lie in a compact set K ⊂ M. We also suppose that ∃D > 0 such that gAl(∇Al F̃, ∇Al F̃) ≤ D. Then ∇Al F̃ → 0 as l → ∞.

Notice that in our problem, the manifold M can be Gr(p, n), St(p, n) or SO(p). Hence all the assumptions in Proposition 3 are satisfied as long as we guarantee that the step sizes satisfy the aforementioned conditions. One example of step sizes satisfying these properties is γl = 1/(l + 1)." }, { "heading": "2.3 CONVERGENCE RATE AND COMPLEXITY OF THE RSG+ ALGORITHM", "text": "In this section, we describe the convergence rate and complexity of the algorithm proposed in Algorithm 1. Observe that the key component of Algorithm 1 is a Riemannian gradient update. Let At be the generic entity to be updated in the algorithm using the Riemannian gradient update At+1 = ExpAt(−γt ∇At F̃), where γt is the step size at time step t. Also assume {At} ⊂ M for a Riemannian manifold M. The following proposition states that, under certain assumptions, the Riemannian gradient update has a convergence rate of O(1/t).

Proposition 4 (Nemirovski et al. (2009); Bécigneul & Ganea (2018)). Let {At} lie inside a geodesic ball of radius less than the minimum of the injectivity radius and the strong convexity radius of M. Assume M to be a geodesically complete Riemannian manifold with sectional curvature lower bounded by κ ≤ 0.
Moreover, assume that the sum of the step sizes {γt} diverges while the sum of the squared step sizes converges. Then, the Riemannian gradient descent update given by At+1 = ExpAt(−γt ∇At F̃), with a bounded gradient ‖∇At F̃‖ ≤ C < ∞ for some C ≥ 0, converges at the rate O(1/t).

All the Riemannian manifolds we use, i.e., Gr(k, d), St(k, d) and SO(k), are geodesically complete, and these manifolds have non-negative sectional curvatures, i.e., lower bounded by κ = 0. Now, as long as the Riemannian updates lie inside a geodesic ball of radius less than the minimum of the injectivity and convexity radii, the convergence rate for RGD applies in our setting.

Running time. To evaluate the time complexity, we must look at the main compute-heavy steps needed. The basic modules are the Exp and Exp−1 maps for the St(k, d), Gr(k, d) and SO(k) manifolds (see Table 4 in the appendix). Observe that the complexity of these modules is driven by the SVD needed for the Exp map on the St and Gr manifolds. Our algorithm involves structured matrices of size d × k and k × k, so any matrix operation should not exceed a cost of O(max(d²k, k³)), since in general d ≫ k. Specifically, the most expensive calculation is the SVD of matrices of size d × k, which is O(d²k); see Golub & Reinsch (1971). All other calculations are dominated by this term." }, { "heading": "3 EXPERIMENTS", "text": "We first evaluate RSG+ for extracting the top-k canonical components on three benchmark datasets and show that it performs favorably compared with Arora et al. (2017). Then, we show that RSG+ also fits into feature learning in DeepCCA (Andrew et al., 2013), and can scale to large feature dimensions where the non-stochastic method fails. Finally, we show that RSG+ can be used to improve the fairness of deep neural networks without needing labels of protected attributes during training.

Algorithm 1: Riemannian SGD based algorithm (RSG+) to compute canonical directions. Input: X ∈ R^{N×dx}, Y ∈ R^{N×dy}, k > 0. Output: U ∈ R^{dx×k}, V ∈ R^{dy×k}.
1: Initialize Ũ, Ṽ, Qu, Qv, Su, Sv; partition X, Y into batches of size B, denoting the jth batch by Xj and Yj.
2: for j ∈ {1, · · · , ⌊N/B⌋} do
3: Gradient for top-k principal vectors (∇Ũ F̃pri, ∇Ṽ F̃pri): 1. partition Xj (Yj) into L = ⌊B/k⌋ blocks of size dx × k (dy × k); 2. let the lth block be denoted by Zxl (Zyl); 3. orthogonalize each block and denote the orthogonalized block by Ẑxl (Ẑyl); 4. let the subspace spanned by each Ẑxl (and Ẑyl) be Ẑxl ∈ Gr(k, dx) (and Ẑyl ∈ Gr(k, dy)); 5. update ∇Ũ F̃pri and ∇Ṽ F̃pri based on Ẑxl and Ẑyl respectively.
4: Gradient from equation 3 (∇Ũ F̃can, ∇Ṽ F̃can, ∇Qu F̃can, ∇Qv F̃can, ∇Su F̃can, ∇Sv F̃can): calculate the Riemannian gradients of Ũ, Ṽ, Qu, Qv, Su and Sv from equation 3, i.e., the CCA objective.
5: Gradient to update canonical directions (∇A F̃, A ∈ {Ũ, Ṽ, Qu, Qv, Su, Sv}): the final gradients of Ũ and Ṽ combine the gradients from the principal-vector objective and the CCA objective; the gradients for Qu, Qv, Su, Sv come from the CCA objective only.
6: Batch update of canonical directions: A = ExpA(−γj ∇A F̃), where A is a generic entity, A ∈ {Ũ, Ṽ, Qu, Qv, Su, Sv}.
7: end
8: U = ŨQuSu and V = Ṽ QvSv." }, { "heading": "3.1 CCA ON FIXED DATASETS", "text": "Datasets and baseline. We conduct experiments on three benchmark datasets (MNIST (LeCun et al., 2010), Mediamill (Snoek et al., 2006) and CIFAR-10 (Krizhevsky, 2009)) to evaluate the performance of RSG+ in extracting the top-k canonical components.
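To make the batch update in Algorithm 1 concrete before turning to the baselines, here is a minimal numerical sketch of the reconstruction U = ŨQuSu (line 8) together with one Riemannian gradient step on SO(k). The shapes, the step size, and the use of SciPy's matrix exponential are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed shapes; not the paper's code) of the SO(k) update
# and the final reconstruction U = Ũ Qu Su from Algorithm 1.
import numpy as np
from scipy.linalg import expm

def so_k_step(Q, euclid_grad, lr):
    """One Riemannian gradient step on SO(k): project the Euclidean gradient
    onto the tangent space (skew-symmetric part, i.e., so(k)) and retract
    with the exponential map Exp_Q(W) = Q exp(Q^T W)."""
    A = Q.T @ euclid_grad
    W = 0.5 * (A - A.T)          # tangent-space (Lie algebra) projection
    return Q @ expm(-lr * W)     # stays on SO(k): exp of a skew matrix is orthogonal

rng = np.random.default_rng(0)
d, k = 10, 3
U_tilde, _ = np.linalg.qr(rng.normal(size=(d, k)))   # a point on St(k, d)
Q = so_k_step(np.eye(k), rng.normal(size=(k, k)), lr=0.01)
S = np.triu(rng.normal(size=(k, k))) + np.eye(k)     # upper triangular, nonzero diagonal
U = U_tilde @ Q @ S                                  # reconstruction, d x k
print(U.shape, np.allclose(Q.T @ Q, np.eye(k)))      # (10, 3) True
```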
To the best of our knowledge, Arora et al. (2017) is the only previous work which stochastically optimizes the population objective in a streaming fashion and can extract top-k components, so we compare our RSG+ with the matrix stochastic gradient (MSG) method proposed in Arora et al. (2017). (There are two methods proposed in Arora et al. (2017), and we choose MSG because it performs better in the experiments of Arora et al. (2017).) The details of the three datasets and how we process them are as follows:

MNIST (LeCun et al., 2010): MNIST contains grey-scale images of size 28 × 28. We use its full training set containing 60K images. Every image is split into left/right halves, which are used as the two views. Mediamill (Snoek et al., 2006): Mediamill contains around 25.8K paired features of videos and corresponding commentary, of dimensions 120 and 101 respectively. CIFAR-10 (Krizhevsky, 2009): CIFAR-10 contains 60K 32 × 32 color images. Like MNIST, we split the images into left/right halves and use them as the two views.

Evaluation metric. We choose to use the Proportion of Correlations Captured (PCC), which is widely used (Ma et al., 2015; Ge et al., 2016), partly due to its efficiency, especially for relatively large datasets. Let Û ∈ R^{dx×k}, V̂ ∈ R^{dy×k} denote the estimated subspaces returned by RSG+, and U∗ ∈ R^{dx×k}, V∗ ∈ R^{dy×k} denote the true canonical subspaces (all for top-k). The PCC is defined as PCC = TCC(XÛ, Y V̂ ) / TCC(XU∗, Y V∗), where TCC is the sum of canonical correlations between two matrices.

Performance. See A.4 for the implementation details. The performance in terms of PCC as a function of the number of seen samples (arriving in a streaming fashion) is shown in Fig. 1, and the runtime is reported in A.5. Our RSG+ captures more correlation than MSG (Arora et al., 2017) while being 5−10 times faster. One case where our RSG+ underperforms Arora et al. (2017) is when the top-k eigenvalues are dominated by the top-l eigenvalues with l < k (Fig. 1b): on the Mediamill dataset, the top-4 eigenvalues of the covariance matrix in view 1 are 8.61, 2.99, 1.15, 0.37. The first eigenvalue is dominantly large compared with the rest, and our RSG+ performs better for k = 1 and worse than Arora et al. (2017) for k = 2, 4. We also plot the runtime of RSG+ under different data dimensions (setting dx = dy = d) and different numbers of total samples drawn from a joint Gaussian distribution in A.5.

We implemented the method from Yger et al. (2012) and conducted experiments on the three datasets above. The results are shown in Table 1. We tune the step size in [0.0001, 0.1] and set β = 0.99 as used in their paper. On MNIST and Mediamill, the method performs comparably with ours except in the k = 4 case on MNIST, where it does not converge well. Since this algorithm also has d³ complexity, its runtime is 100× more than ours on MNIST and 20× more on Mediamill. On CIFAR-10, we failed to find a suitable step size for convergence." }, { "heading": "3.2 CCA FOR DEEP FEATURE LEARNING", "text": "Background and motivation. A deep neural network (DNN) extension of CCA was proposed by Andrew et al. (2013) and has become popular in multi-view representation learning tasks. The idea is to learn a deep neural network as the mapping from the original data space to a latent space where the canonical correlations are maximized. We refer the reader to Andrew et al. (2013) for details of the task.
Since deep neural networks are usually trained using SGD on mini-batches, this requires getting an estimate of the CCA objective at every iteration in a streaming fashion; thus our RSG+ is a natural choice here. We conduct experiments on a noisy version of the MNIST dataset to evaluate RSG+.

Dataset. We follow Wang et al. (2015a) to construct a noisy version of MNIST: view 1 is a randomly sampled image which is first rescaled to [0, 1] and then rotated by a random angle from [−π/4, π/4]. View 2 is randomly sampled from the same class as view 1. Then we add independent uniform noise from [0, 1] to each pixel. Finally, the image is truncated to [0, 1] to form view 2.

Implementation details. We use a simple 2-layer MLP with ReLU nonlinearity, where the hidden dimension in the middle is 512 and the output feature dimension is d ∈ {100, 500, 1000}. After the network is trained on the CCA objective, we use a linear Support Vector Machine (SVM) to measure classification accuracy on the output latent features. Andrew et al. (2013) uses the closed-form CCA objective on the current batch directly, which costs O(d³) memory and time at every iteration.

Performance. Table 2 shows that we get similar performance when d = 100 and can scale to large latent dimensions (d = 1000), while the batch method (Andrew et al., 2013) encounters numerical difficulty on our GPU resources and PyTorch (Paszke et al., 2019) platform in performing the eigendecomposition of the d × d matrix when d = 500, and becomes impractical if d is larger than 1000.

3.3 CCA FOR FAIRNESS

Background and motivation. Fairness is becoming an important issue to consider in the design of learning algorithms. A common strategy to make an algorithm fair is to remove the influence of one or more protected attributes when training the models; see Lokhande et al. (2020). Most methods assume that the labels of protected attributes are known during training, but this may not always be possible. CCA enables considering a slightly different setting, where we may not have per-sample protected attributes, which may be sensitive or hard to obtain for third parties (Price & Cohen, 2019). On the other hand, we assume that a model trained to predict the protected attribute labels has been trained and is provided. For example, if the protected attribute is gender, we only assume that a well-trained classifier which predicts gender from the samples is available, rather than the sample-wise gender values themselves. We next demonstrate that the fairness of the model, using standard measures, can be improved via constraints on correlation values from CCA.

Dataset. CelebA (Wang et al., 2015b) consists of 200K celebrity face images from the internet. There are up to 40 labels, each of which is binary-valued. Here, we follow Lokhande et al. (2020) to focus on the attractiveness attribute (which we want to train a classifier to predict), and gender is treated as "protected" since it may lead to an unfair classifier according to Lokhande et al. (2020).

Method. Our strategy is inspired by Morcos et al. (2018), which showed that canonical correlations can reveal the similarity between neural networks: when two networks (with the same architecture) are trained using different labels/schemes, for example, canonical correlations can indicate how similar their features are. Our observation is the following.
Consider a classifier trained on gender (the protected attribute) and another classifier trained on attractiveness. If the features extracted by the latter model share a high similarity with those of the model trained to predict gender, then the latter model is likely influenced by gender-related features in the image, which will lead to an unfairly biased trained model. We show that by imposing a loss on the canonical correlation between the network being trained (for which we lack per-sample protected attribute information) and a well-trained classifier pre-trained on the protected attribute, we can get a fairer model. This may enable training fairer models in settings where this would otherwise be difficult.

Implementation details. To simulate the case where we only have a pretrained network on protected attributes, we train a ResNet-18 (He et al., 2016) on the gender attribute, and when we train the classifier to predict attractiveness, we add a loss using the canonical correlations between these two networks on intermediate layers: Ltotal = Lcross-entropy + LCCA, where the first term is the standard cross entropy term and the second term is the canonical correlation. See A.7 for more details on training/evaluation.

Results. We choose two commonly used error metrics for fairness: difference in Equality of Opportunity (Hardt et al., 2016) (DEO), and difference in Demographic Parity (Yao & Huang, 2017) (DDP). See appendix A.6 for a more detailed explanation of the two metrics. We conduct experiments by applying the canonical correlation loss on three different layers of ResNet-18. In Table 3, we can see that applying the canonical correlation loss generally improves the DEO and DDP metrics (lower is better) over the standard model (trained using cross entropy loss only). Specifically, applying the loss on early layers like conv0 and conv1 gets better performance than applying it at a relatively late layer like conv2. Another promising aspect of our approach is that it can easily handle the case where the protected attribute is a continuous variable (as long as a well-trained regression network for the protected attribute is given), while other methods like Lokhande et al. (2020); Zhang et al. (2018) need to first discretize the variable and then enforce constraints, which can be much more involved." }, { "heading": "4 RELATED WORK", "text": "Stochastic CCA: There has been much interest in designing scalable and provable algorithms for CCA. Ma et al. (2015) proposed the first stochastic algorithm for CCA, while only local convergence is proven for the non-stochastic version. Wang et al. (2016) designed an algorithm which uses alternating SVRG combined with shift-and-invert pre-conditioning, with global convergence. These stochastic methods, together with Ge et al. (2016) and Allen-Zhu & Li (2016), which reduce the CCA problem to a generalized eigenvalue problem and solve it with an efficient power method, all belong to the class of methods that solve the empirical CCA problem. This can be seen as an ERM approximation of the original population objective, which requires numerically optimizing the empirical CCA objective on a fixed dataset. These methods usually assume access to the full dataset from the beginning, which is not well suited to many practical applications where data tends to arrive in a streaming fashion. Recently, there has been increasing interest in the population CCA problem (Arora et al., 2017; Gao et al., 2019).
The main difficulty in the population setting is that we have limited knowledge about the objective unless we know the distributions of X and Y. Arora et al. (2017) handles this problem by deriving an estimate of the gradient of the population objective whose error can be properly bounded, so that applying proximal gradient descent to a convex-relaxed objective provably converges. Gao et al. (2019) provides a tightened analysis of the time complexity of the algorithm in Wang et al. (2016), and provides a sample complexity under certain distributional assumptions. The problem we are trying to solve in this work is the same as that in Arora et al. (2017); Gao et al. (2019): to optimize the population objective of CCA in a streaming fashion. Riemannian Optimization: Riemannian optimization is a generalization of standard Euclidean optimization methods to smooth manifolds, which takes the following form: given f : M → R, solve min_{x∈M} f(x), where M is a Riemannian manifold. One advantage is that it provides a nice way to express many constrained optimization problems as unconstrained problems. Applications include matrix and tensor factorization (Ishteva et al., 2011; Tan et al., 2014), PCA (Edelman et al., 1998), CCA (Yger et al., 2012), and so on. Yger et al. (2012) rewrites the CCA formulation as Riemannian optimization on the Stiefel manifold. In our work, we further exploit the Riemannian optimization framework, decomposing the linear space spanned by the canonical vectors into a product of several matrices which lie on several different Riemannian manifolds." }, { "heading": "5 CONCLUSIONS", "text": "In this work, we presented a stochastic approach (RSG+) for the CCA model based on the observation that the solution of CCA can be decomposed into a product of matrices which lie on certain structured spaces. This affords specialized numerical schemes and makes the optimization more efficient. The optimization is based on Riemannian stochastic gradient descent, and we provide a proof of its O(1/t) convergence rate with O(d²k) time complexity per iteration. In experimental evaluations, we find that our RSG+ behaves favorably relative to the baseline stochastic CCA method in capturing the correlation in the datasets. We also show the use of RSG+ in the DeepCCA setting, demonstrating feasibility when scaling to large dimensions, as well as in an interesting use case of training fair models." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 A BRIEF REVIEW OF RELEVANT DIFFERENTIAL GEOMETRY CONCEPTS", "text": "To make the paper self-contained, we briefly review certain differential geometry concepts. We only include a condensed description – needed for our algorithm and analysis – and refer the interested reader to Boothby (1986) for a comprehensive and rigorous treatment of the topic.

Riemannian Manifold: A Riemannian manifold M (of dimension m) is defined as a (smooth) topological space which is locally diffeomorphic to the Euclidean space R^m. Additionally, M is equipped with a Riemannian metric g, which can be defined as gX : TX M × TX M → R, where TX M is the tangent space of M at X; see Fig. 2. If X ∈ M, the Riemannian exponential map at X, denoted by ExpX : TX M → M, is defined as γ(1), where γ : [0, 1] → M is the geodesic determined by the initial conditions γ(0) = X and dγ/dt |_{t=0} = U. In general ExpX is not invertible, but the inverse Exp−1_X : U ⊂ M → TX M is defined only if U = Br(X), where r is called the injectivity radius (Boothby, 1986) of M.
This concept will be useful to define the mechanics of gradient descent on the manifold. In our reformulation, we will shortly make use of the following manifolds, specifically when decomposing U and V into a product of several matrices: (a) St(p, n): the manifold consisting of n × p column-orthonormal matrices; (b) Gr(p, n): the manifold consisting of p-dimensional subspaces in R^n; (c) SO(n): the manifold/group consisting of n × n special orthogonal matrices, i.e., the space of orthogonal matrices with determinant 1.

Differential Geometry of SO(n): SO(n) is a compact Riemannian manifold; hence, by the Hopf-Rinow theorem, it is also a geodesically complete manifold (Helgason, 2001). Its geometry is well understood, and we recall a few relevant concepts here, referring the reader to Helgason (2001) for details. SO(n) has a Lie group structure, and the corresponding Lie algebra so(n) is defined as so(n) = {W ∈ R^{n×n} | W^T = −W}. In other words, so(n) (the set of left-invariant vector fields with the associated Lie bracket) is the set of n × n anti-symmetric matrices. The Lie bracket operator [·, ·] on so(n) is defined as the commutator, i.e., for U, V ∈ so(n), [U, V] = UV − VU. Now, we can define a Riemannian metric on SO(n) as follows: ⟨U, V⟩X = trace(U^T V), where U, V ∈ TX(SO(n)), X ∈ SO(n). It can be shown that this is a bi-invariant Riemannian metric. Under this bi-invariant metric, we now define the Riemannian exponential and inverse exponential maps as follows. Let X, Y ∈ SO(n) and U ∈ TX(SO(n)). Then Exp−1_X(Y) = X log(X^T Y) and ExpX(U) = X exp(X^T U), where exp and log are the matrix exponential and logarithm respectively.

Differential Geometry of the Stiefel manifold: The set of all full column rank n × p real matrices forms a Stiefel manifold St(p, n), where n ≥ p. A compact Stiefel manifold is the set of all column-orthonormal real matrices. When p < n, St(p, n) can be identified with SO(n)/SO(n − p). Note that when we consider the quotient space SO(n)/SO(n − p), we assume that SO(n − p) ≅ F(SO(n − p)) is a subgroup of SO(n), where F : SO(n − p) → SO(n), defined by the block-diagonal embedding X ↦ blockdiag(Ip, X), is an isomorphism from SO(n − p) to F(SO(n − p)).

Differential Geometry of the Grassmannian Gr(p, n): The Grassmann manifold (or the Grassmannian) is defined as the set of all p-dimensional linear subspaces in R^n and is denoted by Gr(p, n), where p ∈ Z+, n ∈ Z+, n ≥ p. The Grassmannian is a symmetric space and can be identified with the quotient space SO(n)/S(O(p) × O(n − p)), where S(O(p) × O(n − p)) is the set of all n × n matrices whose top-left p × p and bottom-right (n − p) × (n − p) submatrices are orthogonal, all other entries are 0, and the overall determinant is 1. A point X ∈ Gr(p, n) can be specified by a basis X. We say that X = Col(X) if X is a basis of X, where Col(·) is the column span operator. It is easy to see that the general linear group GL(p) acts isometrically, freely and properly on St(p, n). Moreover, Gr(p, n) can be identified with the quotient space St(p, n)/GL(p). Hence, the projection map Π : St(p, n) → Gr(p, n) is a Riemannian submersion, where Π(X) := Col(X). Moreover, the triplet (St(p, n), Π, Gr(p, n)) is a fiber bundle.

At every point X ∈ St(p, n), we can define the vertical space VX ⊂ TX St(p, n) to be Ker(Π∗X). Further, given gSt, we define the horizontal space HX to be the gSt-orthogonal complement of VX.
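As a quick numerical companion to the SO(n) maps stated above, the sketch below evaluates Exp_X(U) = X exp(X^T U) and Exp−1_X(Y) = X log(X^T Y) with SciPy's matrix exponential and logarithm; it is illustrative only and not part of the paper's code.

```python
# Sketch of the SO(n) exponential and inverse exponential maps above.
import numpy as np
from scipy.linalg import expm, logm

def exp_map(X, U):
    return X @ expm(X.T @ U)

def inv_exp_map(X, Y):
    return X @ logm(X.T @ Y)

theta = 0.3
X = np.eye(2)                                   # identity element of SO(2)
Y = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]]) # a rotation in SO(2)
U = inv_exp_map(X, Y)                           # tangent vector at X pointing to Y
print(np.allclose(exp_map(X, U), Y))            # True: Exp inverts Exp^{-1}
```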
Now, from the theory of principal bundles, for every vector field Ũ on Gr(p, n), we define the horizontal lift of Ũ to be the unique vector field U on St(p, n) for which UX ∈ HX and Π∗X UX = ŨΠ(X), for all X ∈ St(p, n). As Π is a Riemannian submersion, the isomorphism Π∗X |HX : HX → TΠ(X) Gr(p, n) is an isometry from (HX, gStX) to (TΠ(X) Gr(p, n), gGrΠ(X)). So gGrΠ(X) is defined as:

$$g^{Gr}_{\Pi(X)}(\tilde{U}_{\Pi(X)}, \tilde{V}_{\Pi(X)}) = g^{St}_{X}(U_X, V_X) = \mathrm{trace}\big((X^T X)^{-1} U_X^T V_X\big) \qquad (4)$$

where Ũ, Ṽ ∈ TΠ(X) Gr(p, n) and Π∗X UX = ŨΠ(X), Π∗X VX = ṼΠ(X), with UX ∈ HX and VX ∈ HX.

We covered the exponential map and the Riemannian metric above, and their explicit formulations for the manifolds listed above are provided for easy reference in Table 4." }, { "heading": "A.2 PROOF OF THEOREM 1", "text": "We first restate the assumptions from Section 2.1:

Assumptions: (a) The random variables X ∼ N (0, Σx) and Y ∼ N (0, Σy) with Σx ⪯ cId and Σy ⪯ cId for some c > 0. (b) The samples X and Y drawn from X and Y respectively have zero mean. (c) For a given k ≤ d, Σx and Σy have non-zero top-k eigenvalues.

Let F be the trace value solution for Eq. (2), and F̃ be the trace value solution for Eqs. (3a)–(3b). We next restate Theorem 1 and give its proof:

Theorem. Under the assumptions and notation above, the approximation error E = ‖F − F̃‖ is bounded and goes to zero when the whitening constraints in equation 3b are satisfied.

Proof. Let Qu, Su, Qv, Sv be the solutions of Eqs. (3a) and (3b), let Ũ, Ṽ be matrices consisting of the top-k eigenvectors of (1/N)X^T X and (1/N)Y^T Y respectively, and let U, V be solutions of equation 2. Let X̃u = XŨQuSu and Ỹv = Y Ṽ QvSv. Also let Xu = XU and Yv = Y V. Observe that the means of Xu, Yv, X̃u and Ỹv are zero. Moreover, the sample covariances of Xu and Yv are given by U^T CX U and V^T CY V respectively. Thus, by the constraint in equation 2, X_u^T X_u = Ik and Y_v^T Y_v = Ik. Let these covariance matrices be denoted by C(Xu) and C(Yv) respectively. Analogously, the sample covariances of X̃u and Ỹv are given by S_u^T Q_u^T Ũ^T CX Ũ Qu Su and S_v^T Q_v^T Ṽ^T CY Ṽ Qv Sv respectively. Let these covariance matrices be denoted by C(X̃u) and C(Ỹv) respectively.

Using Def. 1, we know Xu, Yv, X̃u and Ỹv follow sub-Gaussian distributions. Let F = trace(U^T CXY V), which can be rewritten as F = trace(X_u^T Y_v). Moreover, let F̃ = trace(S_u^T Q_u^T Ũ^T CXY Ṽ Qv Sv), which similarly can be rewritten as F̃ = trace(X̃_u^T Ỹ_v).

Consider the approximation error between the objective functions, E = |F − F̃|. We can rewrite E = |trace(X_u^T Y_v) − trace(X̃_u^T Ỹ_v)|. Due to von Neumann's trace inequality and the Cauchy–Schwarz inequality, we have

$$E = |\mathrm{trace}(\tilde{X}_u^T \tilde{Y}_v - X_u^T Y_v)| \le |\mathrm{trace}\big((\tilde{X}_u - X_u)^T (\tilde{Y}_v - Y_v)\big)| \le \sum_i \sigma_i(\tilde{X}_u - X_u)\,\sigma_i(\tilde{Y}_v - Y_v) \le \|\tilde{X}_u - X_u\|_F\,\|\tilde{Y}_v - Y_v\|_F \qquad (A.1)$$

where the second step uses von Neumann's trace inequality and the last step uses the Cauchy–Schwarz inequality; σi(A) denotes the ith singular value of matrix A and ‖•‖F denotes the Frobenius norm. Now, using Proposition 1, we get

$$\|\tilde{X}_u - X_u\|_F \le \min\left(\sqrt{2k}\,\|\Delta_x\|_2,\ \frac{2\|\Delta_x\|_2^2}{\lambda^x_k - \lambda^x_{k+1}}\right), \qquad \|\tilde{Y}_v - Y_v\|_F \le \min\left(\sqrt{2k}\,\|\Delta_y\|_2,\ \frac{2\|\Delta_y\|_2^2}{\lambda^y_k - \lambda^y_{k+1}}\right) \qquad (A.2)$$

where ∆x = C(Xu) − C(X̃u) and ∆y = C(Yv) − C(Ỹv). Here the λ^x's and λ^y's are the eigenvalues of C(Xu) and C(Yv) respectively. Now, note that C(Xu) = Ik and C(Yv) = Ik, as Xu and Yv are solutions of Eq. (2). Furthermore, assume λ^x_k − λ^x_{k+1} ≥ Λ and λ^y_k − λ^y_{k+1} ≥ Λ for some Λ > 0. Then, we can rewrite
Eq. (A.1) as

$$E \le \min\left(\sqrt{2k}\,\|I_k - C(\tilde{X}_u)\|_2,\ \frac{2\|I_k - C(\tilde{X}_u)\|_2^2}{\Lambda}\right)\min\left(\sqrt{2k}\,\|I_k - C(\tilde{Y}_v)\|_2,\ \frac{2\|I_k - C(\tilde{Y}_v)\|_2^2}{\Lambda}\right)$$

As C(X̃u) → Ik or C(Ỹv) → Ik, E → 0. Observe that the limiting conditions for C(X̃u) and C(Ỹv) can be satisfied by the "whitening" constraint. In other words, since C(Xu) = Ik and C(Yv) = Ik, as C(X̃u) and C(Ỹv) converge to C(Xu) and C(Yv), the approximation error goes to zero." }, { "heading": "A.3 RSG+ ALGORITHM", "text": "Here we show our algorithm with more details about the gradients at every step in Alg. 2.

Algorithm 2: Riemannian SGD based algorithm (RSG+) to compute canonical directions. Input: X ∈ R^{N×dx}, Y ∈ R^{N×dy}, k > 0. Output: U ∈ R^{dx×k}, V ∈ R^{dy×k}.

A.4 IMPLEMENTATION DETAILS OF CCA ON FIXED DATASETS

Implementation details. On all three benchmark datasets, we passed over the data only once for both our RSG+ and MSG (Arora et al., 2017), and we use the code from Arora et al. (2017) to produce the MSG results. We conducted experiments for different dimensions of the target space: k = 1, 2, 4. The choice of k is motivated by the fact that the spectrum of the datasets decays quickly. Since our RSG+ processes data in small blocks, we let data arrive in mini-batches (the mini-batch size was set to 100)." }, { "heading": "A.5 RUNTIME OF RSG+ AND BASELINE METHODS", "text": "The runtime comparison of RSG+ and MSG is reported in Table 5. Our algorithm is 5–10 times faster.

We also plot the runtime of our algorithm under different data dimensions (setting dx = dy = d) and different numbers of total samples drawn from a joint Gaussian distribution in Fig. 3." }, { "heading": "A.6 ERROR METRICS FOR FAIRNESS", "text": "Equality of Opportunity (EO) (Hardt et al., 2016): a classifier h is said to satisfy EO if the prediction is independent of the protected attribute s (in our experiment, s is a binary variable where s = 1 stands for Male and s = 0 stands for Female) for classification label y ∈ {0, 1}. We use the difference of the false negative rates (conditioned on y = 1) across the two groups identified by the protected attribute s as the error metric, and we denote it DEO.

Demographic Parity (DP) (Yao & Huang, 2017): a classifier h satisfies DP if the likelihood of making a misclassification among the positive predictions of the classifier is independent of the protected attribute s. We denote the difference in demographic parity between the two groups identified by the protected attribute as DDP.

A.7 IMPLEMENTATION DETAILS OF FAIRNESS EXPERIMENTS

Implementation details. The network is trained for 20 epochs with learning rate 0.01 and batch size 256. We follow Donini et al. (2018) and use NVP (novel validation procedure) to evaluate our results: first we search for hyperparameters that achieve the highest classification score, and then we report the performance of the model which attains the minimum fairness error metrics while its accuracy remains within 90% of the highest accuracy. When we apply our RSG+ on certain layers, we first use a randomized projection to project the features into 1k dimensions, and then extract the top-10 canonical components for training. Similar to our previous experiments on DeepCCA, the batch method does not scale to 1k dimensions." }, { "heading": "A.8 RESNET-18 ARCHITECTURE AND POSITION OF CONV-0,1,2 IN TABLE 3", "text": "The ResNet-18 contains a first convolutional layer followed by normalization, nonlinear activation, and max pooling. Then it has four residual blocks, followed by average pooling and a fully connected layer.
We denote the position after the first convolutional layer as conv0, the position after the first residual block as conv1, and the position after the second residual block as conv2. We choose early layers since late layers close to the final fully connected layer produce features that are more directly relevant to the classification variable (attractiveness in this case)." } ]
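To make the Section 2.1 initialization above concrete, here is a minimal numerical check, with assumed shapes and, for simplicity, Qu = Ik (an illustrative assumption; the text initializes Qu randomly), that U = ŨQuSu satisfies the whitening constraint U^T CX U = Ik when Ũ holds the top-k eigenvectors and Su the inverse square roots of the top-k eigenvalues.

```python
# Sketch: the Section 2.1 initialization satisfies U^T C_X U = I_k (assumed shapes).
import numpy as np

rng = np.random.default_rng(0)
N, d, k = 2000, 8, 3
X = rng.normal(size=(N, d))
X -= X.mean(axis=0)                         # zero-mean samples (assumption (b))
C = X.T @ X / N                             # sample covariance C_X
vals, vecs = np.linalg.eigh(C)              # eigenvalues in ascending order
U_tilde = vecs[:, ::-1][:, :k]              # top-k principal directions
S = np.diag(1.0 / np.sqrt(vals[::-1][:k]))  # Su = diag(lambda_i^{-1/2})
U = U_tilde @ np.eye(k) @ S                 # Qu taken as identity here for simplicity
print(np.allclose(U.T @ C @ U, np.eye(k)))  # True: whitening constraint holds
```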
2020
null
SP:7ee8acfa502077ecac20e98eb665697bf351407c
[ "This paper proposes a way to do batch-mode, model-agnostic active learning. In this task, the agent has to query a batch of data points from a set of unlabeled examples for which it will get labels. The paper puts an additional requirement that the algorithm is model-agnostic. The key idea here is to sample a batch of points that provides the most "information" about the remaining unlabeled examples. The authors argue that this will result in higher performance on the unlabeled examples. The proposed approach, called ICAL (Information Condensing Active Learning), uses the Hilbert-Schmidt Independence Criterion (HSIC) to measure dependence between a chosen batch and the unlabeled examples. The goal is to pick a batch with the maximum value of HSIC, which should intuitively give us a batch that is representative of the unlabeled set. HSIC can be easily estimated, unlike other dependence measures such as mutual information. Given a batch size $|B|$, a dataset of unlabeled examples $D_U$ and $m$ samples to estimate HSIC, the ICAL algorithm computes a batch for label acquisition in $O(|D_U|m^2|B|)$ steps, where a greedy strategy is used to search over batches. Results are presented on MNIST, variants of MNIST and CIFAR, and show improvements over five previous active learning approaches and a random acquisition baseline. " ]
We introduce Information Condensing Active Learning (ICAL), a batch-mode, model-agnostic Active Learning (AL) method targeted at Deep Bayesian Active Learning that focuses on acquiring labels for points which have as much information as possible about the still unacquired points. ICAL uses the Hilbert-Schmidt Independence Criterion (HSIC) to measure the strength of the dependency between a candidate batch of points and the unlabeled set. We develop key optimizations that allow us to scale our method to large unlabeled sets. We show significant improvements in terms of model accuracy and negative log likelihood (NLL) on several image datasets compared to state-of-the-art batch-mode AL methods for deep learning.
[]
[ { "authors": [ "Jordan T Ash", "Chicheng Zhang", "Akshay Krishnamurthy", "John Langford", "Alekh Agarwal" ], "title": "Deep batch active learning by diverse, uncertain gradient lower bounds", "venue": null, "year": 1906 }, { "authors": [ "Alán Aspuru-Guzik", "Kristin Persson" ], "title": "Materials acceleration platform: Accelerating advanced energy materials discovery by integrating high-throughput methods and artificial intelligence", "venue": null, "year": 2018 }, { "authors": [ "Mikołaj Bińkowski", "Dougal J Sutherland", "Michael Arbel", "Arthur Gretton" ], "title": "Demystifying mmd gans", "venue": "arXiv preprint arXiv:1801.01401,", "year": 2018 }, { "authors": [ "Trevor Campbell", "Tamara Broderick" ], "title": "Bayesian coreset construction via greedy iterative geodesic ascent", "venue": "arXiv preprint arXiv:1802.01737,", "year": 2018 }, { "authors": [ "Travers Ching", "Daniel S Himmelstein", "Brett K Beaulieu-Jones", "Alexandr A Kalinin", "Brian T Do", "Gregory P Way", "Enrico Ferrero", "Paul-Michael Agapow", "Michael Zietz", "Michael M Hoffman" ], "title": "Opportunities and obstacles for deep learning in biology and medicine", "venue": "Journal of The Royal Society Interface,", "year": 2018 }, { "authors": [ "Gregory Cohen", "Saeed Afshar", "Jonathan Tapson", "André van Schaik" ], "title": "Emnist: an extension of mnist to handwritten letters", "venue": "arXiv preprint arXiv:1702.05373,", "year": 2017 }, { "authors": [ "Sebastien Da Veiga" ], "title": "Global sensitivity analysis with dependence measures", "venue": "Journal of Statistical Computation and Simulation,", "year": 2015 }, { "authors": [ "Melanie Ducoffe", "Frederic Precioso" ], "title": "Adversarial active learning for deep networks: a margin based approach", "venue": "arXiv preprint arXiv:1802.09841,", "year": 2018 }, { "authors": [ "Yarin Gal", "Zoubin Ghahramani" ], "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "venue": "In international conference on machine learning,", "year": 2016 }, { "authors": [ "Yarin Gal", "Riashat Islam", "Zoubin Ghahramani" ], "title": "Deep bayesian active learning with image data", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Yonatan Geifman", "Ran El-Yaniv" ], "title": "Deep active learning over the long tail", "venue": "arXiv preprint arXiv:1711.00941,", "year": 2017 }, { "authors": [ "Daniel Gissin", "Shai Shalev-Shwartz" ], "title": "Discriminative active learning", "venue": "arXiv preprint arXiv:1907.06347,", "year": 2019 }, { "authors": [ "Ian J Goodfellow", "Jonathon Shlens", "Christian Szegedy" ], "title": "Explaining and harnessing adversarial examples", "venue": "arXiv preprint arXiv:1412.6572,", "year": 2014 }, { "authors": [ "Arthur Gretton", "Karsten M Borgwardt", "Malte J Rasch", "Bernhard Schölkopf", "Alexander Smola" ], "title": "A kernel two-sample test", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Dan Guest", "Kyle Cranmer", "Daniel Whiteson" ], "title": "Deep learning and its application to lhc physics", "venue": "Annual Review of Nuclear and Particle Science,", "year": 2018 }, { "authors": [ "Yuhong Guo", "Russell Greiner" ], "title": "Optimistic active-learning using mutual information", "venue": "In IJCAI,", "year": 2007 }, { "authors": [ "Yuhong Guo", "Dale Schuurmans" ], "title": "Discriminative batch mode active learning", "venue": "In Advances in neural information processing systems,", 
"year": 2008 }, { "authors": [ "Steven CH Hoi", "Rong Jin", "Jianke Zhu", "Michael R Lyu" ], "title": "Batch mode active learning and its application to medical image classification", "venue": "In Proceedings of the 23rd international conference on Machine learning,", "year": 2006 }, { "authors": [ "Neil Houlsby", "Ferenc Huszár", "Zoubin Ghahramani", "Máté Lengyel" ], "title": "Bayesian active learning for classification and preference learning", "venue": "arXiv preprint arXiv:1112.5745,", "year": 2011 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Andreas Kirsch", "Joost van Amersfoort", "Yarin Gal" ], "title": "Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Andreas Krause", "Ajit Singh", "Carlos Guestrin" ], "title": "Near-optimal sensor placements in gaussian processes: Theory, efficient algorithms and empirical studies", "venue": "Journal of Machine Learning Research,", "year": 2008 }, { "authors": [ "Alex Krizhevsky", "Geoffrey Hinton" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report, Citeseer,", "year": 2009 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Stephen Mussmann", "Percy Liang" ], "title": "On the relationship between data efficiency and error for uncertainty sampling", "venue": "arXiv preprint arXiv:1806.06123,", "year": 2018 }, { "authors": [ "Hanchuan Peng", "Fuhui Long", "Chris Ding" ], "title": "Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy", "venue": "IEEE Transactions on pattern analysis and machine intelligence,", "year": 2005 }, { "authors": [ "Niklas Pfister", "Peter Bühlmann", "Bernhard Schölkopf", "Jonas Peters" ], "title": "Kernel-based tests for joint independence", "venue": "Journal of the Royal Statistical Society: Series B (Statistical Methodology),", "year": 2018 }, { "authors": [ "Robert Pinsler", "Jonathan Gordon", "Eric Nalisnick", "José Miguel Hernández-Lobato" ], "title": "Bayesian batch active learning as sparse subset approximation", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aaditya Ramdas", "Sashank Jakkam Reddi", "Barnabás Póczos", "Aarti Singh", "Larry Wasserman" ], "title": "On the decreasing power of kernel and distance based nonparametric hypothesis tests in high dimensions", "venue": "In Twenty-Ninth AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Benjamin Sanchez-Lengeling", "Alán Aspuru-Guzik" ], "title": "Inverse molecular design using machine learning: Generative models for matter", "venue": "engineering. 
Science,", "year": 2018 }, { "authors": [ "Dino Sejdinovic", "Arthur Gretton", "Wicher Bergsma" ], "title": "A kernel test for three-variable interactions", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Dino Sejdinovic", "Bharath Sriperumbudur", "Arthur Gretton", "Kenji Fukumizu" ], "title": "Equivalence of distance-based and rkhs-based statistics in hypothesis testing", "venue": "The Annals of Statistics,", "year": 2013 }, { "authors": [ "Ozan Sener", "Silvio Savarese" ], "title": "Active learning for convolutional neural networks: A core-set approach", "venue": "arXiv preprint arXiv:1708.00489,", "year": 2017 }, { "authors": [ "Burr Settles" ], "title": "Active learning literature survey", "venue": "Technical report, University of Wisconsin-Madison Department of Computer Sciences,", "year": 2009 }, { "authors": [ "Jiaming Song", "Stefano Ermon" ], "title": "Understanding the limitations of variational mutual information estimators", "venue": "arXiv preprint arXiv:1910.06222,", "year": 2019 }, { "authors": [ "Le Song", "Alex Smola", "Arthur Gretton", "Justin Bedo", "Karsten Borgwardt" ], "title": "Feature selection via dependence maximization", "venue": "Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "Gábor J Székely", "Maria L Rizzo", "Nail K Bakirov" ], "title": "Measuring and testing dependence by correlation of distances", "venue": "The annals of statistics,", "year": 2007 }, { "authors": [ "Zheng Wang", "Jieping Ye" ], "title": "Querying discriminative and representative samples for batch mode", "venue": null, "year": 2021 } ]
[ { "heading": "1 Introduction", "text": "Machine learning models are widely used for a vast array of real-world problems. They have been applied successfully in a variety of areas including biology (Ching et al., 2018), chemistry (Sanchez-Lengeling and Aspuru-Guzik, 2018), physics (Guest et al., 2018), and materials engineering (Aspuru-Guzik and Persson, 2018). Key to the success of modern machine learning methods is access to high-quality data for training the model. However, such data can be expensive to collect for many problems. Active learning (Settles, 2009) is a popular methodology for intelligently selecting the fewest new data points to be labeled while not sacrificing model accuracy. The usual active learning setting is pool-based active learning, where one has access to a large unlabeled dataset DU and uses active learning to iteratively select new points from DU to label. Our goal in this paper is to develop an active learning acquisition function that selects points to maximize the eventual test accuracy, which is also one of the most popular criteria used to evaluate an active learning acquisition function.

In active learning, an acquisition function is used to select which new points to label. A large number of acquisition functions have been developed over the years, mostly for classification (Settles, 2009). Acquisition functions use model predictions or point locations (in input feature or learned representation space) to decide which points would be most helpful to label to improve model accuracy. We then query for the labels of those points and add them to the training set. While the past focus for acquisition functions has been the acquisition of one point at a time, each round of label acquisition and retraining of the ML model, particularly in the case of deep neural networks, can be expensive. Furthermore, in several applications like biology, it can be much faster to acquire a fixed number of points in parallel rather than sequentially. There have been several papers, particularly in the past few years, that try to avoid this issue by acquiring points in batch. As our goal is to apply AL in the context of modern ML models and data, we focus in this paper on batch-mode AL.

Acquisition functions can be broadly thought of as belonging to two categories. The ones from the first category directly focus on minimizing the error rate post-acquisition. A natural choice of such an acquisition function might be to acquire labels for points with the highest uncertainty or points closest to the decision boundary (uncertainty sampling can be directly linked to minimizing the error rate in the context of active learning (Mussmann and Liang, 2018)). In the other category, the goal is to get as close as possible to the true underlying model. Here, acquisition functions select points which give the most knowledge regarding a model's parameters, where knowledge is defined as the statistical dependency between the parameters of the model and the predictions for the selected points. Mutual information (MI) is the usual choice for the dependency, though other choices are possible. For well-specified model spaces, e.g. in physics, such a strategy can identify the correct model. In machine learning, however, models are usually mis-specified, and thus the metric of evaluation even for model-identification acquisition functions is how successful they are at reducing test error.
Given this reality, we follow the viewpoint of trying to minimize the error rate of the model post-acquisition.

Our strategy is to select points that we expect would provide substantial information about the labels of the rest of the unlabeled set, thus reducing model uncertainty. We propose acquiring a batch of points B such that the model's predictions on B have as high a statistical dependency as possible with the model's predictions on the entire unlabeled set DU. Thus we want a batch B that condenses the most information about the model's predictions on DU. We call our method Information Condensing Active Learning (ICAL).

A key desideratum for our acquisition function is to be model agnostic. This is partly because the model distribution can be very heterogeneous. For example, ensembles, which are often used as a model distribution, can consist of just decision trees in a random forest or of different architectures for a neural network. This means we cannot assume any closed form for the model's predictive distribution, and have to resort to Monte Carlo sampling of the predictions from the model to estimate the dependency between the model's predictions on the query batch and on the unlabeled set. MI, however, is known to be hard to approximate using just samples (Song and Ermon, 2019). Thus, to scale the method to larger batch sizes, we use the Hilbert-Schmidt Independence Criterion (HSIC), one of the most powerful extant statistical dependency measures for high-dimensional settings. Another advantage of HSIC is that it is differentiable, which, as we will discuss later, can allow applications of the acquisition function to areas where MI would be difficult to apply.

To summarize, we introduce Information Condensing Active Learning (ICAL), which maximizes the amount of information being gained with respect to the model's predictions on the unlabeled set of points. ICAL is a batch-mode acquisition function that is model agnostic and can be applied to both classification and regression tasks. We then develop an algorithm that can scale ICAL to large batch sizes when using HSIC as the dependency measure between random variables. As our method only needs samples from the posterior predictive distribution, which can be obtained for both regression and classification tasks, it is applicable to both." }, { "heading": "2 Related work", "text": "A review of work on acquisition functions for active learning prior to the recent focus on deep learning is given by Settles (2009). The BALD (Bayesian Active Learning by Disagreement) (Houlsby et al., 2011) acquisition function chooses a query point which has the highest mutual information about the model parameters. This turns out to be the point on which the individual models sampled from the model distribution are confident in their predictions but for which the overall predictive distribution has high entropy. In other words, this is the point on which the models are individually confident but disagree the most.

In Guo and Schuurmans (2008), which builds on Guo and Greiner (2007), they formulate the problem as an integer program where they select a batch such that the post-acquisition model is highly confident on the training set and has low uncertainty on the unlabeled set. While the latter aspect is related to what we do, they need to retrain their model for every candidate batch they search over in the course of trying to find the optimal batch.
As the total number of possible batches is exponential in the size of the unlabeled set, this can get too computationally expensive for neural networks, limiting the applicability of this approach. Thus, as far as we know, Guo and Schuurmans (2008) has only been applied to logistic regression. BMDR (Wang and Ye, 2015) queries points that are as close to the classifier decision boundary as possible while still being representative of the overall sample distribution. The representativeness is measured using the maximum mean discrepancy (MMD) (Gretton et al., 2012) of the input features between the query batch and the set of all points, with a lower MMD indicating a more representative query batch. However, this approach is limited to classification problems, as it is based on a decision boundary. BMAL (Hoi et al., 2006) selects a batch such that the Fisher information matrices for the total unlabeled set and the selected batch are as close as possible. The Fisher information matrix is, however, quadratic in the number of parameters and thus infeasible to compute for modern deep neural networks. FASS (Filtered Active Subset Selection) (Wei et al., 2015) picks the most uncertain points and then selects a subset of those points that is as similar as possible to the whole candidate batch, which favors points that can represent the diversity of the initial set of most uncertain points.

Recently, active learning methods have been extended to the deep learning setting. Gal et al. (2017) adapts BALD (Houlsby et al., 2011) to the deep learning setting by using Monte Carlo Dropout (Gal and Ghahramani, 2016) to do inference for their Bayesian Neural Network. They extend BALD to the batch setting for neural networks with BatchBALD (Kirsch et al., 2019). In Pinsler et al. (2019), they adapt the Bayesian Coreset (Campbell and Broderick, 2018) approach for active learning, though their approach requires a batch size that changes for every acquisition. As the neural network decision boundary is intractable, DeepFool (Ducoffe and Precioso, 2018) uses the concept of adversarial examples (Goodfellow et al., 2014) to find points close to the decision boundary. However, this approach is again limited to classification tasks. FF-Comp (Geifman and El-Yaniv, 2017), DAL (Gissin and Shalev-Shwartz, 2019), Sener and Savarese (2017), and BADGE (Ash et al., 2019) operate on the learned representation, as that is the only way these methods incorporate feedback from the training labels into the active learning acquisition function; they are thus not model-agnostic, as they do not extend to model distributions where it is difficult to have a notion of a common representation – as in random forests or ensembles, etc., where the learned representation is a distribution and not a single point. This is also the case with the model distribution – MC-dropout – we use in this paper.

There is also extensive prior work on exploiting Gaussian Processes (GPs) for Active Learning (Houlsby et al., 2011; Krause et al., 2008). However, GPs are hard to scale, especially for modern image datasets." }, { "heading": "3 Background", "text": "Statistical background. The entropy of a distribution is defined as $H(Y) = -\sum_{x \in \mathcal{X}} p(x) \log p(x)$, where $p(x)$ is the probability of $x$. Mutual information (MI) between two random variables is defined as $I[X;Y] = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x,y) \log \frac{p(x,y)}{p(x)p(y)}$, where $p(x,y)$ is the joint probability of $x, y$. Note that $I[X;Y] = H(Y) - H(Y|X) = H(X) - H(X|Y)$. By the posterior predictive distribution $y_x$ we mean $\int_\theta p(y|x,\theta)\,p(\theta|D)\,d\theta$, where $y$ is the prediction, $x$ the input point, $\theta$ the model parameters,
By posterior predictive distribution yx we mean ş\nθ ppy|x, θqppθ|Dqdθ where y is the prediction, x the input point, θ the model parameters,\nand D the training data. M is the distribution of models (parametrized by θ) we wish to choose from via active learning. As mentioned before, we use MC-dropout for our model distribution by sampling random dropout masks and use the same set of dropout masks across points to generate joint predictions.\nHilbert-Schmidt Independence Criterion (HSIC) Suppose we have two (possibly multivariate) distributions X ,Y and we want to measure the dependence between them. A well known way to measure it is using distance covariance which intuitively, measures the covariance between the distances of pairs of samples from the joint distribution PXY and the product of marginal distributions P pXq, P pY q (Székely et al., 2007). HSIC can simply be thought of as distance covariance except in a kernel space (Sejdinovic et al., 2013b). A (sample) kernel matrix kX is a matrix whose ijth element is kpxi, xjq where k is the kernel function and xi, xj are the i, jth samples from X . Further details are in the Appendix.\nAcquisition Function Let the batch to acquire be denoted by B with B “ |B|. Given a model distribution M, training data Dtrain, unlabeled data DU , input space X , set of labels Y and an acquisition function αpx,Mq, we decide which batch of points to query next via:\nB˚ “ arg max B αpB,Mq" }, { "heading": "4 Motivation", "text": "As mentioned previously, our goal is to acquire points that will give us as much information as possible about the still-unlabeled points, thereby increasing the confidence of the model’s predictions.\nAs we will demonstrate shortly, there are situations where modern active learning methods do not select the points that optimally decrease the uncertainty of prediction on the unlabeled data. More formally, the examples below show that the choice of x P U that optimizes oft-used acquisition functions may not be optimal for decreasing the entropy of predictions ( ř\nx1PU,x1‰xHpyx1q) over the remaining points post-acquisition. If we wish to optimize test-set accuracy, this can be problematic: for well-calibrated models, we should expect worse average entropy (uncertainty) to roughly correspond to an increase in the number of errors. This is similar to cross entropy loss being a good proxy for 0-1 loss. Below we illustrate our points with two examples and from results on EMNIST.\nExample 1 Suppose we have an image dataset which is highly imbalanced with 90% cars, 9% planes, and 1% ships. Then a small increase in accuracy for the car category would lead to a much larger reduction in the overall error rate versus a large increase in accuracy for the ships category. However, given the dominance of the cars category in the loss, the uncertainty of prediction on the ships category is likely to be much higher. Thus the max-entropy criterion is more likely to choose points from the pool set that turn out to be ships.\nExample 2 Similar to the previous example, here we demonstrate that picking the point with the most amount of information with respect to the model parameters is not optimal for decreasing the prediction uncertainty on the still unlabeled data. 
The main idea behind this example is the following: if some points form a non-trivial fraction of the dataset and there is a lot of correlation between their predictive distributions, then, while none of these points may give much information about which underlying model is the best one, acquiring the label for one of them will greatly reduce the predictive uncertainty for the labels of the other points, given the correlation between their predictive distributions. As these points are a non-trivial fraction of the dataset, reducing the predictive uncertainty on them will have a big impact on the error rate. The example in the Appendix formalizes this intuition.

These observations motivate our formulation of the Information Condensing Active Learning (ICAL) acquisition function, which selects the set of points whose acquisition would maximize the information gained about the predictive distribution on the unlabeled set. As the posterior prediction entropy should be minimized by maximizing the mutual information (MI) between predictions for unlabeled points and predictions for the selected points, ideally ICAL would use MI or related criteria to select points.

EMNIST results. In Figure 1, we show the average posterior entropy of the model's predictions for our method compared to BatchBALD, BayesCoreset, and Random acquisition. As can be seen from the figure, ICAL reduces the average posterior entropy much more effectively than the other methods. Details of this experiment are in Section 6.2." }, { "heading": "5 Information Condensing Active Learning (ICAL)", "text": "In this section we present our acquisition function. As before, let $D_{train}$ be the training points, $D_U$ the unlabeled points, $y_x$ the random variable denoting the prediction for $x$ by the model trained on $D_{train}$, and $d$ the dependency measure being used. Then

$$\alpha_{ICAL}(\{x_1, \dots, x_B\}, d) = \frac{1}{|D_U|} \sum_{x' \in D_U} d\big(y_{x'}, \{y_{x_1}, \dots, y_{x_B}\}\big)$$

that is, we try to find the batch that has the highest average dependency with respect to the unlabeled points' marginal predictive distributions." }, { "heading": "Scaling αICAL estimation", "text": "As we mentioned in the introduction, we can use MI as the dependency measure $d$, but it is tricky to estimate MI using just samples from the distribution, particularly for high-dimensional or continuous variables. Furthermore, MI estimators are usually not differentiable. Thus if we wanted to apply ICAL to domains where the pool set is continuous and infinite (for example, if we wanted to query gene expression perturbations for a cell), we would run into obstacles. This motivates our choice of HSIC as the dependency measure. In addition to being differentiable, HSIC has better empirical sample complexity for measuring dependency than estimators for MI. Indeed, popular MI estimators have been found to have variance with respect to the ground truth MI that increases exponentially with the MI value (Song and Ermon, 2019). HSIC has also been successfully used in the related context of feature selection via dependency maximization in the past (Da Veiga, 2015; Song et al., 2012). Furthermore, HSIC is the Maximum Mean Discrepancy (MMD) between the joint distribution and the product of the marginals. MMD is known to be $\le \frac{1}{2}$ KL-divergence (Ramdas et al., 2015), and thus HSIC $\le \frac{1}{2}$ MI.
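For concreteness, here is a minimal sketch of the standard biased plug-in HSIC estimator, $\widehat{\mathrm{HSIC}}(K, L) = \mathrm{trace}(KHLH)/(m-1)^2$ with $H$ the centering matrix; the RBF kernel and variable names are illustrative assumptions rather than the paper's exact choices.

```python
# Sketch of the biased plug-in HSIC estimator from two kernel matrices.
import numpy as np

def rbf_kernel(X, lengthscale=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * lengthscale ** 2))

def hsic(K, L):
    m = K.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m   # centering matrix
    return np.trace(K @ H @ L @ H) / (m - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
y_dep = x + 0.1 * rng.normal(size=(200, 1))   # strongly dependent on x
y_ind = rng.normal(size=(200, 1))             # independent of x
K = rbf_kernel(x)
print(hsic(K, rbf_kernel(y_dep)) > hsic(K, rbf_kernel(y_ind)))  # True
```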
Thus we use HSIC as the dependency measure for the rest of the paper.
Naively implementing $\alpha_{ICAL}(\mathcal{B}, HSIC)$ would require $O(|D_U| m^2 B \cdot C)$ steps per candidate batch being evaluated, where $C$ is the number of classes and $m$ is the number of samples taken from $p(y_{1:B})$ ($O(m^2 B)$ to estimate HSIC, which we need to do $|D_U|$ times). However, recall that HSIC is a function solely of the kernel matrices $k^x$ corresponding to the random variables (Appendix) – in this case $y_x, x \in D_U$. Now one can define the matrix $k^* = \frac{1}{|D_U|} \sum_{x \in D_U} k^x$. We can then prove the following propositions (proofs are in the Appendix).
Proposition 1 $k^*$ is a valid kernel matrix.
Proposition 2 $\sum_{x' \in D_U} \widehat{HSIC}(k^{x'}, k^{x \in \mathcal{B}}) = \widehat{HSIC}(\sum_{x \in D_U} k^x, k^{x \in \mathcal{B}})$
where $k^{x \in \mathcal{B}} = k^{x_1}, \ldots, k^{x_B}$ with $x_i \in \mathcal{B}$, and $\widehat{HSIC}$ denotes the sample estimator of HSIC. Using this reformulation, we only have to compute $k^* = \frac{1}{|D_U|} \sum_{x \in D_U} k^x$ once per acquisition round. This lowers the computation cost to $O(|D_U| m^2 \cdot C + m^2 B \cdot C)$. Estimating HSIC would still require $m$ to increase very rapidly with $B$ (proportionally to the dimension of the joint distribution). To get around this, but still maintain batch diversity, we try two strategies.
For regular ICAL, we average the kernel matrices of the points in the candidate batch. We then subsample $r$ points from $D_U$ every time a point is added to the batch and only compare the dependency with those. This effectively introduces noise into the HSIC estimation. We find in practice that this is sufficient to acquire a diverse batch, as evidenced by Figure 3. This seems to be the case even for very large batches, and has the added benefit of further lowering the computational cost for evaluating a candidate batch to $O(r m^2 \cdot C + 2 m^2 \cdot C)$. We use $r = 200$ for all our experiments. We develop a second strategy, which we call ICAL-pointwise, that computes the marginal increase in dependence resulting from adding a point to the batch. If a point is highly correlated with elements of the current batch, the marginal increase will be negligible, making the point much less likely to be selected. The two variants perform very similarly, with ICAL-pointwise having a slight advantage in the early acquisitions. ICAL-pointwise, however, requires much less time for equivalent performance, which we discuss briefly in Section 5.2 and more fully in the Appendix. For ease of presentation, we use ICAL in the Results section and defer the full description and evaluation of ICAL-pointwise to the Appendix.
As there is an exponential number of candidate batches, an exhaustive search for the optimal batch is infeasible. For ICAL we use a greedy forward selection strategy to build the batch (a sketch of this greedy construction appears below) and find that it performs well empirically. As the $\arg\max$ over all of $D_U$ has to be computed every time a new point is selected for the batch, and we have to perform this operation for each point added to the batch, this gives a computation cost of $O((2rm^2 + |D_U| m^2 B + m^2 B) \cdot C) = O(|D_U| m^2 B \cdot C)$. It is possible that global nonlinear optimization of the batch ICAL criterion would work even better than greedy optimization already does with respect to state-of-the-art methods. Efficient techniques for doing this optimization are not obvious and beyond the scope of this work. Even if we used gradient-based techniques to construct the batch, gradient-based optimization for nonlinear problems usually leads only to local and not global optima.
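A compact sketch of this greedy construction, with the averaged batch kernel and the $r$-point subsampled reference kernel, is shown below; the function and variable names are illustrative assumptions, not the authors' code, and it reuses the hsic() helper sketched earlier.
```python
import numpy as np

def greedy_ical_batch(pool_kernels, batch_size, r=200, seed=0):
    """pool_kernels: dict {pool index -> (m, m) Gram matrix of that point's
    predictive samples}. Returns the indices of the acquired batch."""
    rng = np.random.default_rng(seed)
    pool = list(pool_kernels)
    batch, k_batch = [], None
    for _ in range(batch_size):
        # Reference kernel: average over r randomly subsampled pool points
        # (valid by Proposition 2, which lets us sum kernels instead of HSICs).
        ref = rng.choice(pool, size=min(r, len(pool)), replace=False)
        k_ref = sum(pool_kernels[i] for i in ref) / len(ref)
        best, best_score = None, -np.inf
        for i in pool:
            if i in batch:
                continue
            # Candidate batch kernel: running average including candidate i.
            k_cand = pool_kernels[i] if k_batch is None else \
                (k_batch * len(batch) + pool_kernels[i]) / (len(batch) + 1)
            score = hsic(k_ref, k_cand)   # hsic() as sketched above
            if score > best_score:
                best, best_score = i, score
        batch.append(best)
        k_batch = pool_kernels[best] if k_batch is None else \
            (k_batch * (len(batch) - 1) + pool_kernels[best]) / len(batch)
    return batch
```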
We note, however, that greedy forward selection is a popular technique that has been successfully used in a large variety of contexts (Da Veiga, 2015; Blanchet et al., 2008). Optimizations to scale ICAL even further, as well as the full algorithm, are detailed in the Appendix." }, { "heading": "6 Results", "text": "We demonstrate the effectiveness of ICAL using standard image datasets including MNIST (LeCun et al., 1998), Repeated MNIST (Kirsch et al., 2019), Extended MNIST (EMNIST) (Cohen et al., 2017), Fashion-MNIST, and CIFAR-10 (Krizhevsky et al., 2009). We compare ICAL with three state-of-the-art methods for batched active learning acquisition – BatchBALD, FASS, and BayesCoreset. We also compare against BALD and Max Entropy (MaxEnt), which are not explicitly designed for batched selection, as well as against a Random acquisition baseline. Details of the acquisition functions are in the Appendix. ICAL consistently outperforms BatchBALD, FASS, and BayesCoreset on accuracy and negative log-likelihood (NLL).
Throughout our experiments, for each dataset we hold out a fixed test set for evaluating model performance after training and a fixed validation set for training purposes. We retrain the model from scratch after each acquisition to avoid correlation between subsequently trained models, and we use early stopping after 3 (6 for ResNet18) consecutive epochs of validation accuracy drop. Following Gal et al. (2017), we use neural networks with MC dropout (Gal and Ghahramani, 2016) as a variational approximation for Bayesian neural networks; a minimal sketch of such a model appears below. We use a mixture of rational quadratic kernels for HSIC, which has been used successfully with kernel-based statistical dependency measures in the past, with mixture length scales of {0.2, 0.5, 1, 2, 5} as in Bińkowski et al. (2018). All models are optimized with the Adam optimizer (Kingma and Ba, 2014) using a learning rate of 0.001 and betas (0.9, 0.999). The small-batch-size experiments are repeated 6 times with different seeds and a different initial training set for each run, with a balanced label distribution across all classes. The same set of seeds is used for the different methods on the same task. 8 different seeds are used for the large-batch-size experiments on the CIFAR datasets." }, { "heading": "6.1 MNIST and Repeated MNIST", "text": "We first examine ICAL's performance on MNIST, a standard image dataset of handwritten digits. We further test the scenario where duplicated data points exist (Repeated MNIST), as proposed in Kirsch et al. (2019). Each data point in MNIST is replicated three times in Repeated MNIST, and isotropic Gaussian noise with std 0.1 is added after normalizing the image. We use a CNN consisting of two convolutional layers with 32 and 64 5x5 convolution filters, each followed by MC dropout, max-pooling and ReLU. One fully connected layer with 128 hidden units and MC dropout is used after the convolutional layers, and the output soft-max layer has dimension 10. All dropout uses probability 0.5, and the architecture achieves over 99% accuracy on full MNIST. We use a validation set of size 1024 for MNIST and 3072 for Repeated MNIST, and a balanced test set of size 10,000 for both datasets. All models are trained for up to 30 epochs for MNIST and up to 40 epochs for Repeated MNIST.
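The following PyTorch sketch shows one way to realize the MC-dropout CNN just described and to draw predictive samples with dropout kept active at test time; it is our own minimal reconstruction from the text, not the authors' code.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MCDropoutCNN(nn.Module):
    def __init__(self, n_classes=10, p=0.5):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, 5)
        self.conv2 = nn.Conv2d(32, 64, 5)
        self.fc1 = nn.Linear(64 * 4 * 4, 128)   # 28x28 input -> 4x4 feature maps
        self.fc2 = nn.Linear(128, n_classes)
        self.drop = nn.Dropout(p)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.drop(self.conv1(x)), 2))
        x = F.relu(F.max_pool2d(self.drop(self.conv2(x)), 2))
        x = self.drop(F.relu(self.fc1(x.flatten(1))))
        return F.softmax(self.fc2(x), dim=-1)

@torch.no_grad()
def mc_samples(model, x, n_samples=50):
    # Keep dropout active to sample from the approximate posterior predictive.
    # Note: the paper shares dropout masks across points; plain nn.Dropout draws
    # fresh masks, so this is a simplification of that detail.
    model.train()
    return torch.stack([model(x) for _ in range(n_samples)])  # (n_samples, N, C)
```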
We sample an initial training set of size 20 (2 per class) and conduct 30 acquisitions of batch size 10 on both datasets, and we use 50 MC-dropout samples to estimate the posterior.
The test accuracy and negative log-likelihood (NLL) are shown in Figure 2. ICAL significantly improves the NLL and outperforms all other baselines on accuracy, with higher margins in the earlier acquisition rounds. The performance is consistent across all runs (the variance is smaller than for the other baselines), and is robust even in the Repeated-MNIST setup, where all the other greedy methods show worse performance. We check how often replicas of a single sample were included in an acquired batch; as shown in Appendix Figure 8, our method (as well as BatchBALD, BayesCoreset and Random) acquired no redundant samples, whereas FASS and Max Entropy acquired up to 3 copies of some samples." }, { "heading": "6.2 EMNIST", "text": "We then extend the task to a more sophisticated dataset, Extended MNIST (EMNIST), which consists of 47 classes of 28x28 images of both digits and letters. We used the balanced EMNIST, where each class has 2400 training examples. We use a validation set of size 16384 and a test set of size 18800 (400 per class), and train for up to 40 epochs. We use a CNN consisting of three convolutional layers with 32, 64, and 128 3x3 convolution filters, each followed by MC dropout, 2x2 max-pooling and ReLU. A fully connected layer with 512 hidden units and MC dropout is used after the convolutional layers. We use an initial training set of 47 (1 per class) and make 60 acquisitions of batch size 5. 50 MC-dropout samples are used to estimate the posterior.
The results are in Figure 4. We do substantially better in terms of both accuracy and NLL compared to all other methods. A clue as to why our method outperforms on EMNIST can be found in Figure 3. ICAL is able to acquire more diverse and balanced batches, while all other methods have over- or under-represented classes (note that BatchBALD, Random and MaxEnt each entirely miss examples from one of the classes). This indicates that our method remains much more robust as the number of classes increases, whereas the alternatives degenerate." }, { "heading": "6.3 Fashion-MNIST", "text": "We also examine ICAL's performance on Fashion-MNIST, which consists of 10 classes of 28x28 images of Zalando articles. We use a validation set of size 3072 and a test set of size 10000 (1000 per class), and train for up to 40 epochs. The network architecture is the same as the one used in the MNIST task. We use an initial training set of 20 (2 per class) and make 30 acquisitions of batch size 10. 100 MC-dropout samples are used to estimate the posterior. As shown in Figure 4, we again do significantly better in terms of both accuracy and NLL compared to all other methods. Note that almost all methods other than ICAL were inferior to the Random baseline, showing the robustness of our method." }, { "heading": "6.4 CIFAR", "text": "Finally we test our method on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky et al., 2009) in a large-batch-size setting. CIFAR-10 consists of 10 classes with 6000 images per class, whereas CIFAR-100 has 100 classes with 600 images per class. We use a validation set of size 1024, and a balanced test set of size 10,000 for both datasets. For CIFAR-10, we start with an initial training set of 10000 examples (1000 per class), while for CIFAR-100 we start with 20000 examples (200 per class).
We do 10 acquisitions on CIFAR-10 and 7 acquisitions on CIFAR-100, with a batch size of 3000. We use a ResNet18 with 2 additional fully connected layers with MC dropout, and train for up to 60 epochs with learning rate 0.1 (allowing early stopping). We run with 8 different seeds. The results are in Figure 5. Note that we are unable to compare against BatchBALD for either CIFAR dataset, as it runs out of memory.
For CIFAR-10, ICAL dominates all other methods for all acquisitions except two – when the acquired dataset size is 19000 and when it is 28000. ICAL also has the highest area under the curve (AUC) for accuracy compared to all other methods, with p-value ≤ 0.007 except for BALD and Max Entropy, for which we have better AUC with p-values 0.24 and 0.15 respectively. ICAL also achieves the highest accuracy at the end of all 10 acquisitions. With CIFAR-100, on all acquisitions ICAL outperforms a majority of the methods. Furthermore, ICAL again finishes with the highest accuracy by a significant margin at the end of the acquisition rounds, and it again has the highest AUC compared to all other methods. Detailed comparison results are in Appendix Table 2." }, { "heading": "7 Conclusion", "text": "We develop a novel batch-mode active learning acquisition function, ICAL, that is model agnostic and applicable to both classification and regression tasks (as it relies only on samples from the posterior predictive distribution). We develop key optimizations that enable us to scale our method to large acquisition batch and unlabeled set sizes. We show that we robustly outperform state-of-the-art methods for batch-mode active learning on a variety of image classification tasks in a deep neural network setting." }, { "heading": "Appendix", "text": "" }, { "heading": "Motivating example 2", "text": "Suppose we have a model distribution with 10 possible models $\omega_1, \ldots, \omega_{10}$, each with equal prior probability of being the true model ($p(\omega_i) = 0.1$ for all $i$). Let the datapoints be $x_1, \ldots, x_L$, with their labels taking 4 possible values. We define $p^k_{ij} = p(y_i = j | x_i, \omega_k)$ as the probability of the $j$-th class for the $i$-th datapoint given by the $k$-th model. Let
$p^k_{1j} = 1$ for $j = k$, $1 \le k \le 3$; $p^k_{14} = 1$ for $4 \le k \le 10$; $p^k_{i1} = 1$ for $1 \le k \le 9$ and $p^{10}_{i2} = 1$, for $2 \le i \le L$.
Given that we have no other information about the models, we update the posterior probabilities for the models as follows – if a model $\omega_k$ outputs label $l$ for a point $x$ but, after acquisition, the label for $x$ is not $l$, then we know it is not the correct model and its posterior probability is 0 (so it is eliminated). Otherwise we have no way of distinguishing between the remaining models, so they all have equal posterior probability. Then for $x_1$ the mutual information is
$I[y_1, \omega | x_1, D_{train}] = H[y_1|x_1] - E_{p(\omega|D_{train})}[H[y_1|x_1, \omega]] = 0.94$
For $x_2, \ldots, x_L$, $I[y_{2:L}, \omega | x_{2:L}, D_{train}] = 0.325$. However, selecting $x_1$ would decrease the expected posterior entropy $H[y_{2:L} | x_{2:L}, x_1, y_1, D_{train}]$ from 0.325 to only 0.287. Acquiring any of $x_{2:L}$ instead of $x_1$, however, would decrease that entropy to 0, which causes a much larger decrease in the expected posterior entropy averaged over $x_{1:L}$ if $L$ is large enough. The detailed calculations are in the later subsection; the short script below also checks the entropy values numerically.
While $x_{2:L}$ may not contribute much to the entropy of the joint predictive distribution, or to the MI with respect to the model parameters, compared to $x_1$, collectively they are weighted $L - 1$ times more than $x_1$ when looking at the accuracy.
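As a quick sanity check on the numbers above, the following short script (our own addition) computes the entropies in nats with numpy:
```python
import numpy as np

def H(p):                                   # entropy in nats; 0 * log 0 := 0
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Predictive distribution for x1 under the uniform prior over the 10 models:
print(H([0.1, 0.1, 0.1, 0.7]))              # ~0.940 = I[y1, w | x1] (models are deterministic)
# Predictive distribution for any of x_{2:L} (9 models say class 1, one says class 2):
print(H([0.9, 0.1]))                         # ~0.325
# After acquiring x1: with prob. 0.7 the label is 4 and 7 models survive,
# giving (6/7, 1/7) on x_{2:L}; otherwise a single model survives (entropy 0).
print(0.7 * H([6/7, 1/7]))                   # ~0.287
```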
We should thus expect a well-calibrated model to have higher uncertainty, and thus make many more errors on $x_{2:L}$, if $x_1$ is acquired than if any of $x_{2:L}$ is acquired. For instance, in the above example, as $L$ increases, the expected error rate would approach $\approx 0.7 \times (\frac{1}{7} \times \frac{6}{7}) \times 2 = 0.17$ if $x_1$ is acquired (the factor 0.7 appears because 0.3 of the time the value of $x_1$ also fixes what the true model is, reducing the error rate on all $x$ to 0), as the errors for $x_{2:L}$ are correlated, whereas the rate would approach 0 were any of $x_{2:L}$ to be acquired." }, { "heading": "Derivation for Example 2", "text": "For $x_1$, the mutual information between the predicted label $y_1$ and the model parameters is:
$I[y_1, \omega | x_1, D_{train}] = H[y_1|x_1] - E_{p(\omega|D_{train})}[H[y_1|x_1, \omega]]$
$= H[\sum_{k=1}^{10} p(y_1|x_1, \omega_k) p(\omega_k)] - \sum_{k=1}^{10} p(\omega_k) H[p(y_1|x_1, \omega_k)]$
$= -(3 \times (\frac{1}{10} \times \log\frac{1}{10}) + \frac{7}{10} \times \log\frac{7}{10}) - 10 \times \frac{1}{10} \times (-(1 \times \log 1 + 0 \times \log 0)) = 0.940$
For $x_{2:L}$,
$I[y_{2:L}, \omega | x_{2:L}, D_{train}] = -(\frac{9}{10} \times \log\frac{9}{10} + \frac{1}{10} \times \log\frac{1}{10}) - 10 \times \frac{1}{10} \times (-(1 \times \log 1 + 0 \times \log 0)) = 0.325$
After acquiring $x_1$, assuming the true label for $x_1$ is 1, we update the posterior over the model parameters such that $p'(\omega_1)|_{y_1=1} = 1$ and $p'(\omega_k)|_{y_1=1} = 0$ for $1 < k \le 10$. Then the expected averaged posterior entropy for $x_{2:L}$ is:
$\frac{1}{L-1} \sum_{i=2}^{L} H[y_i|x_i]\big|_{y_1=1} = \frac{1}{L-1} \sum_{i=2}^{L} H[\sum_{k=1}^{10} p(y_i|x_i, \omega_k) p'(\omega_k)|_{y_1=1}] = \frac{1}{L-1} \times (L-1) \times (-(1 \times \log 1 + 0 \times \log 0)) = 0$
Similarly, we can compute the cases where the true label for $x_1$ is 2, 3 or 4:
$\frac{1}{L-1} \sum_{i=2}^{L} H[y_i|x_i]\big|_{y_1=2} = 0, \quad \frac{1}{L-1} \sum_{i=2}^{L} H[y_i|x_i]\big|_{y_1=3} = 0$
$\frac{1}{L-1} \sum_{i=2}^{L} H[y_i|x_i]\big|_{y_1=4} = \frac{1}{L-1} \times (L-1) \times (-(\frac{6}{7} \log\frac{6}{7} + \frac{1}{7} \log\frac{1}{7})) = 0.41$
The expectation of the averaged posterior entropy with respect to the predicted label $y_1$ (since we do not know the true label) is:
$H[y_{2:L}, \omega | x_{2:L}, x_1, y_1, D_{train}] = E_{y_1 \sim p(y_1|D_{train})}\left[\frac{1}{L-1} \sum_{i=2}^{L} H[y_i|x_i]\big|_{y_1}\right] = \frac{1}{10} \times 0 + \frac{1}{10} \times 0 + \frac{1}{10} \times 0 + \frac{7}{10} \times 0.41 = 0.287$
Baseline acquisition function details
Max entropy selects the points that maximize the predictive entropy:
$\alpha(x, \mathcal{M}) = H(y|x, D_{train}) = -\sum_c p(y = c|x, D_{train}) \log p(y = c|x, D_{train})$
BatchBALD BatchBALD (Kirsch et al., 2019) tries to find a batch of points that has the highest mutual information with respect to the model parameters. BALD is the non-batched version of BatchBALD. Formally,
$\alpha_{BatchBALD}(\{x_1, \ldots, x_B\}, p(\omega)) = H(y_1, \ldots, y_B) - E_{p(\omega)}[H(y_1, \ldots, y_B | \omega)]$
Filtered active submodular selection (FASS) FASS (Wei et al., 2015) samples the $\beta \times B$ most uncertain points $\mathcal{B}'$ and then subselects $B$ points that are as representative of $\mathcal{B}'$ as possible. For the measure of uncertainty, FASS uses the entropy $H(y|x, D_{train})$. To measure the representativeness of $\mathcal{B}$ with respect to $\mathcal{B}'$, FASS tries to choose $\mathcal{B}$ to maximize the following function:
$f(\mathcal{B}) = \sum_{y \in Y} \sum_{i \in V^y} \max_{s \in \mathcal{B} \cap V^y} w(i, s)$
Here $V^y \subseteq \mathcal{B}'$ is the set of points in $\mathcal{B}'$ with predicted label $y$, and $w(i, s) = d - \|x_i - x_s\|_2^2$ is the similarity function between the points indexed by $i, s$, where $x_i, x_s \in X$ and $d$ is the maximum distance between two points. The idea here is that if a point already exists in $\mathcal{B}$ that is close to some point $x' \in \mathcal{B}'$, then $f(\mathcal{B})$ will favor adding points to the batch that are close to points other than $x'$, thus increasing batch diversity. Note that FASS is equivalent to Max Entropy if $\beta = 1$.
Bayesian Coresets In Pinsler et al. (2019), the authors build a batch such that the log posterior after acquiring that batch best approximates the complete data log posterior (i.e., the log posterior after acquiring the entire pool set).
Their approach closely follows the general Bayesian Coreset (Campbell and Broderick, 2018) approach, which constructs a weighted subset of data that approximates the full dataset. Crucially, Pinsler et al. (2019) assume that the posterior predictive distribution $Y_p$ of a point $p$ is independent of the corresponding distribution $Y_{p'}$ of another point $p'$ – an assumption we do not make. We show in the next section why avoiding such an assumption lets us more effectively minimize the error with respect to the test distribution, versus just optimizing for maximizing the information gain for the model posterior. As Pinsler et al. (2019) require a variable batch size whereas all other methods (including ours) use a fixed batch size, for fairness of comparison, if the batch for this approach is smaller than the batch size being used, we fill the rest of the batch with random points. In practice, we only observe this being necessary for CIFAR.
Random The points are selected uniformly at random from the unlabeled pool. Thus $\alpha(x, \mathcal{M})$ is the uniform distribution." }, { "heading": "Further statistical background", "text": "A divergence $\Lambda$ between two distributions is a measure of the discrepancy or difference between the two distributions $P, Q$. A key property of a divergence is that it is 0 if and only if $P, Q$ are the same distribution. In this paper we use the KL-divergence and the MMD, which are respectively defined as
$D_{KL}(P||Q) = -\sum_{x \in X} P(x) \log\frac{Q(x)}{P(x)}$
$MMD_k^2(P, Q) = E[k(X, X') + k(Y, Y') - 2k(X, Y)]$, with $X, X' \sim P$ and $Y, Y' \sim Q$,
where $k$ is a kernel in a Reproducing Kernel Hilbert Space (RKHS) $\mathcal{H}$ and $\mu_k$ is the mean embedding of a distribution into $\mathcal{H}$ under the kernel $k$. We can then use the notion of divergence to define the dependency $d$ between a set of random variables $X_{1:n}$ as follows:
$d(X_{1:n}) = \Lambda(P_{1:n}, \otimes_i P_i)$
where $P_{1:n}$ is the joint distribution of $X_{1:n}$, $P_i$ the marginal of $X_i$, and $\otimes_i P_i$ the product of marginals. For $D_{KL}$ the dependency is exactly the MI as defined above. For MMD the dependency is the Hilbert-Schmidt Independence Criterion (HSIC).
Hilbert-Schmidt Independence Criterion (HSIC)
Formally, if $X, Y$ are drawn from the joint distribution $P_{XY}$, then their HSIC is defined as
$HSIC(P_{XY}, k, l) = E_{x,x',y,y'}[k(x, x') l(y, y')] + E_{x,x'}[k(x, x')] E_{y,y'}[l(y, y')] - 2 E_{x,y}[E_{x'}[k(x, x')] E_{y'}[l(y, y')]]$
where $(x, y)$ and $(x', y')$ are independent pairs drawn from $P_{XY}$. Note that $HSIC(P_{XY}) = 0$ if and only if $P_{XY} = P_X P_Y$, that is, if $X, Y$ are independent, for characteristic kernels $k$ and $l$. For the case where we are measuring the joint dependence between $d$ variables, we can use the dHSIC statistic (Sejdinovic et al., 2013a; Pfister et al., 2018). The computational complexity of HSIC is bounded by the time taken to compute the kernel matrices, which is $O(m^2 d)$, where $m$ is the number of samples and $d$ the number of random variables. We use $\widehat{HSIC}$ to denote the empirical estimator of the HSIC statistic." }, { "heading": "Proof of Proposition 1", "text": "$k^*$ is positive semidefinite (psd) and symmetric, as the sum of psd symmetric matrices is also psd and symmetric." }, { "heading": "Proof of Proposition 2", "text": "We show here that
$\widehat{dHSIC}(k^1, k^3, \ldots, k^d) + \widehat{dHSIC}(k^2, k^3, \ldots, k^d) = \widehat{dHSIC}(k^1 + k^2, k^3, \ldots, k^d)$
but the extension to arbitrary sums is straightforward. Here $\widehat{dHSIC}$ is the estimator for dHSIC, the $d$-variable version of HSIC.
It is defined as
$dHSIC = \frac{1}{n^2} \sum_{a=1}^{n} \sum_{b=1}^{n} \prod_{j=1}^{d} k^j(X^j_a, X^j_b) + \frac{1}{n^{2d}} \prod_{j=1}^{d} \sum_{a=1}^{n} \sum_{b=1}^{n} k^j(X^j_a, X^j_b) - \frac{2}{n^{d+1}} \sum_{a=1}^{n} \prod_{j=1}^{d} \sum_{b=1}^{n} k^j(X^j_a, X^j_b)$
where $k^j$ is the kernel of the $j$-th random variable and $X^j_a$ is the $a$-th observation of the $j$-th random variable. The estimator $\widehat{dHSIC}$ is defined analogously on the observed samples $x^j_a$ (Sejdinovic et al., 2013a):
$\widehat{dHSIC} = \frac{1}{n^2} \sum_{a=1}^{n} \sum_{b=1}^{n} \prod_{j=1}^{d} k^j(x^j_a, x^j_b) + \frac{1}{n^{2d}} \prod_{j=1}^{d} \sum_{a=1}^{n} \sum_{b=1}^{n} k^j(x^j_a, x^j_b) - \frac{2}{n^{d+1}} \sum_{a=1}^{n} \prod_{j=1}^{d} \sum_{b=1}^{n} k^j(x^j_a, x^j_b)$
As dHSIC reduces to HSIC when $d = 2$, the proof for HSIC also follows. Writing $K^j_{ab} = k^j(x^j_a, x^j_b)$ for brevity and using the definition of $\widehat{dHSIC}$ above,
$\widehat{dHSIC}(k^1, k^3, \ldots, k^d) + \widehat{dHSIC}(k^2, k^3, \ldots, k^d)$
$= \frac{1}{n^2} \sum_{a,b} K^1_{ab} \prod_{j=3}^{d} K^j_{ab} + \frac{1}{n^{2d}} (\sum_{a,b} K^1_{ab}) \prod_{j=3}^{d} \sum_{a,b} K^j_{ab} - \frac{2}{n^{d+1}} \sum_{a} (\sum_{b} K^1_{ab}) \prod_{j=3}^{d} \sum_{b} K^j_{ab}$
$+ \frac{1}{n^2} \sum_{a,b} K^2_{ab} \prod_{j=3}^{d} K^j_{ab} + \frac{1}{n^{2d}} (\sum_{a,b} K^2_{ab}) \prod_{j=3}^{d} \sum_{a,b} K^j_{ab} - \frac{2}{n^{d+1}} \sum_{a} (\sum_{b} K^2_{ab}) \prod_{j=3}^{d} \sum_{b} K^j_{ab}$
Grouping the corresponding terms pairwise and using linearity of summation in $K^1 + K^2$,
$= \frac{1}{n^2} \sum_{a,b} (K^1_{ab} + K^2_{ab}) \prod_{j=3}^{d} K^j_{ab} + \frac{1}{n^{2d}} \left(\sum_{a,b} (K^1_{ab} + K^2_{ab})\right) \prod_{j=3}^{d} \sum_{a,b} K^j_{ab} - \frac{2}{n^{d+1}} \sum_{a} \left(\sum_{b} (K^1_{ab} + K^2_{ab})\right) \prod_{j=3}^{d} \sum_{b} K^j_{ab}$
$= \widehat{dHSIC}(k^1 + k^2, k^3, \ldots, k^d)$" }, { "heading": "7.1 Further scaling to large batch sizes", "text": "To scale to large batch sizes, instead of adding points to the batch to be acquired one at a time, we can add points in minibatches of size $L$. While this comes at the cost of possible diversity in the batch, we find that the tradeoff is acceptable for the datasets we experimented with. This gives a final computation cost of $O(\frac{|D_U| m^2 B \cdot C}{L})$, where $C$ is the number of classes. By contrast, the corresponding runtime for BatchBALD is $O(|D_U| \cdot B \cdot C \cdot m \cdot m')$, where $m'$ is the number of sampled configurations of $y_{1:n-1}$. For all experiments with ICAL, we were able to use $L = 1$ without any scaling difficulties. For ICAL-pointwise, we used $L = \frac{B}{15}$ only for CIFAR-10 and CIFAR-100. As alluded to previously, ICAL-pointwise can accommodate much larger $L$ than ICAL before its performance degrades, allowing for much greater scaling. We evaluate this aspect of ICAL-pointwise in the Appendix.
The final algorithm is given in Algorithm 1."
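The additivity in Proposition 2 is also easy to verify numerically; the snippet below (our own check, for the two-variable case $d = 2$ using the biased HSIC estimator sketched earlier) confirms that summing kernel matrices matches summing HSIC values:
```python
import numpy as np

rng = np.random.default_rng(1)
m = 40

def gram(z, ls=1.0):
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

def hsic(K, L):
    # Biased estimator tr(K H L H) / (m-1)^2, linear in its first argument K.
    H = np.eye(m) - np.ones((m, m)) / m
    return np.trace(K @ H @ L @ H) / (m - 1) ** 2

k1, k2, k3 = (gram(rng.random((m, 3))) for _ in range(3))
print(np.isclose(hsic(k1, k3) + hsic(k2, k3), hsic(k1 + k2, k3)))  # True
```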
}, { "heading": "7.2 Algorithm", "text": "Algorithm 1 Information Condensing Active Learning (ICAL) (M, T,Dtrain,DU , B,K, r, L) Train M on Dtrain repeat B “ tu while |B| ă B do Y U “ the predictive distribution for x P DU according to M R “ Set of r randomly selected points from DU x1 “ argmaxx αICALpB Y txu, HSICq with the optimizations as specified in Section 5.1 and 5.2 B “ B Y tx1u\nend while Dtrain “ Dtrain Y B Retrain M on Dtrain\nuntil T iterations reached Return M" }, { "heading": "ICAL-pointwise", "text": "To evaluate the marginal dependency increase if a candidate point x is added to batch B, we sample a set R from the pool set DU and compute the pairwise dHSIC of both B and B1 “ B Y txu with respect to each point in R. Let the resulting vectors (each of length |R|) with the dHSIC scores be dB and dB1 . Then the marginal dependency increase statistic Mx for point p is Mx “ 1 |R| ř\ni maxppdiB1{diBq, 1q where i is the ith element of the vector. When then modify the αICAL as follows - α1ICALpBYtxuq “ αICALpBYtxuq ¨ pMx´1q and use the point with the highest value of α1ICAL as the point to acquire. Note that as we want to get as accurate an estimate of Mx as possible, we ideally want to choose as large a set R as possible. In general, we also want to choose |R| to be greater than the number of classes. This makes ICAL-pointwise more memory intensive compared to ICAL. We also tried another criterion for batch selection based on the minimal-redundancy-maximalrelevance Peng et al. (2005) but that had significantly worse performance compared to ICAL and ICAL-pointwise.\nIn Figure 6, we analyze the performance of ICAL versus ICAL-pointwise when their parameters are set such that computational cost is about the same. As can be seen they are broadly similar with ICAL-pointwise having a slight advantage in earlier acquisitions and ICAL being slightly better in later ones.\nWe also analyze the relative performance as the mini-batch size L changes in Figure 7. In the Figure, iter “ BL is the number of iterations taken to build the entire acquisition batch (note that the actual acquisition happens after the entire batch has been built). ICAL-pointwise requires more computation\ntime than ICAL in small L setup, however if time is the major constraint, ICAL-pointwise is to be preferred as its performance degrades more slowly as L, the size of the minibatch, increases. As the performance usually peaks at L “ 1, if one is trying to get the best performance or if memory is a constraint, then ICAL is to be preferred.\nDiversity of acquired samples in repeated-MNIST\nTo check if ICAL’s acquisition batches are diversed enough, we plot the number of times different number of copies of a same sample has been acquired by each method. As shown in figure 8, our method (as well as BatchBALD, BayesCoreset and Random) successfully avoided acquiring redundant copies of the same sample, whereas FASS and Max Entropy acquired up to 3 copies of the same replica in most acquisitions. This proves that the batched active learning strategies are better in diversity." }, { "heading": "Further CIFAR-10 and CIFAR-100 results", "text": "Further CIFAR results are in Table 2. For CIFAR-100, Random has a high p-value but that is mainly because it performs a bit better in the beginning vs. all other methods but its performance quickly degrades and it is far below ICAL in the final iteration." 
}, { "heading": "Runtime and memory considerations", "text": "BatchBALD runs out of memory on CIFAR-10 and CIFAR-100 and thus we are unable to compare against it for those two datasets. For the MNIST-variant datasets, ICAL takes about a minute for building the batch to acquire (batch sizes of 5 and 10). For CIFAR-10 (batch size 3000), with L “ 1, the runtime is about 20 minutes but it scales linearly with 1{L (Figure 10). Thus it is only 5 minutes for L “ 30 ( iter “ 100) which is already sufficient to give comparable performance to L “ 1 (Figure 9). For CIFAR-100 (batch size 3000), the performance does degrade with high L but as we mentioned previously, ICAL-pointwise holds up a lot better in terms of performance with high L (Figure 7) and thus if time is a strong consideration, that variant should be used instead." }, { "heading": "7.2 Algorithm 20", "text": "" } ]
2020
null
SP:1b0eb87a5d014c94fb41b0e2322bb31ef2a11b78
[ "This paper is about optimizing discount offers to individual customers to maximize business value. A temporal convolutional network (TCN) is used to model the customer's purchase probability. This network is then used to estimate the customer's offer-elasticity. Finally a linear program is used to optimize offers across all customers subject to a retention-rate constraint. The method is demonstrated on a public dataset and a detailed analysis is performed on its results." ]
Lately, personalized marketing has become important for retail/e-retail firms due to the significant rise in online shopping and market competition. The increase in online shopping and the high market competition have led to an increase in promotional expenditure for online retailers, and hence rolling out optimal offers has become imperative to maintain a balance between the number of transactions and profit. In this paper, we propose our approach to solving the offer optimization problem at the intersection of consumer, item and time in the retail setting. To optimize offers, we first build a generalized non-linear model using a Temporal Convolutional Network to predict the item purchase probability at the consumer level for a given time period. Secondly, we establish the functional relationship between historical offer values and the purchase probabilities obtained from the model, which is then used to estimate the offer-elasticity of purchase probability at consumer-item granularity. Finally, using the estimated elasticities, we optimize offer values with a constraint-based optimization technique. This paper describes our detailed methodology and presents the results of modelling and optimization across categories.
[]
[ { "authors": [ "Shaojie Bai", "J Zico Kolter", "Vladlen Koltun" ], "title": "An empirical evaluation of generic convolutional and recurrent networks for sequence modeling", "venue": "arXiv preprint arXiv:1803.01271,", "year": 2018 }, { "authors": [ "Yoshua Bengio", "MONTREAL CA" ], "title": "Rmsprop and equilibrated adaptive learning rates for nonconvex optimization", "venue": "corr abs/1502.04390,", "year": 2015 }, { "authors": [ "Maxime Cohen", "Georgia Perakis" ], "title": "Promotion optimization in retail", "venue": "Available at SSRN 3194640,", "year": 2018 }, { "authors": [ "Maxime C Cohen", "Ngai-Hang Zachary Leung", "Kiran Panchamgam", "Georgia Perakis", "Anthony Smith" ], "title": "The impact of linear optimization on promotion planning", "venue": "Operations Research,", "year": 2017 }, { "authors": [ "Kris Johnson Ferreira", "Bin Hong Alex Lee", "David Simchi-Levi" ], "title": "Analytics for an online retailer: Demand forecasting and price optimization", "venue": "Manufacturing & Service Operations Management,", "year": 2016 }, { "authors": [ "Cheng Guo", "Felix Berkhahn" ], "title": "Entity embeddings of categorical variables", "venue": "arXiv preprint arXiv:1604.06737,", "year": 2016 }, { "authors": [ "Pavel Izmailov", "Dmitrii Podoprikhin", "Timur Garipov", "Dmitry Vetrov", "Andrew Gordon Wilson" ], "title": "Averaging weights leads to wider optima and better generalization", "venue": "arXiv preprint arXiv:1803.05407,", "year": 2018 }, { "authors": [ "Colin Lea", "Rene Vidal", "Austin Reiter", "Gregory D Hager" ], "title": "Temporal convolutional networks: A unified approach to action segmentation", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "Zachary C Lipton", "Charles Elkan", "Balakrishnan Naryanaswamy" ], "title": "Optimal thresholding of classifiers to maximize f1 measure", "venue": "In Joint European Conference on Machine Learning and Knowledge Discovery in Databases,", "year": 2014 }, { "authors": [ "Adam Paszke", "Sam Gross", "Soumith Chintala", "Gregory Chanan", "Edward Yang", "Zachary DeVito", "Zeming Lin", "Alban Desmaison", "Luca Antiga", "Adam Lerer" ], "title": "Automatic differentiation in pytorch", "venue": null, "year": 2017 }, { "authors": [ "Leslie N Smith" ], "title": "Cyclical learning rates for training neural networks", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2017 }, { "authors": [ "Ilya Sutskever", "Oriol Vinyals", "Quoc V Le" ], "title": "Sequence to sequence learning with neural networks. In Advances in neural information processing", "venue": null, "year": 2014 }, { "authors": [ "Jason Yosinski", "Jeff Clune", "Yoshua Bengio", "Hod Lipson" ], "title": "How transferable are features in deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Manzil Zaheer", "Sashank Reddi", "Devendra Sachan", "Satyen Kale", "Sanjiv Kumar" ], "title": "Adaptive methods for nonconvex optimization", "venue": "In Advances in neural information processing systems,", "year": 2018 } ]
[ { "heading": null, "text": "Lately, personalized marketing has become important for retail/e-retail firms due to significant rise in online shopping and market competition. Increase in online shopping and high market competition has led to an increase in promotional expenditure for online retailers, and hence, rolling out optimal offers has become imperative to maintain balance between number of transactions and profit. In this paper, we propose our approach to solve the offer optimization problem at the intersection of consumer, item and time in retail setting. To optimize offer, we first build a generalized non-linear model using Temporal Convolutional Network to predict the item purchase probability at consumer level for the given time period. Secondly, we establish the functional relationship between historical offer values and purchase probabilities obtained from the model, which is then used to estimate offer-elasticity of purchase probability at consumer item granularity. Finally, using estimated elasticities, we optimize offer values using constraint based optimization technique. This paper describes our detailed methodology and presents the results of modelling and optimization across categories." }, { "heading": "1 INTRODUCTION", "text": "In most retail settings, promotions play an important role in boosting the sales and traffic of the organisation. Promotions aim to enhance awareness when introducing new items, clear leftover inventory, bolster customer loyalty, and improve competitiveness. Also, promotions are used on a daily basis in most retail environments including online retailers, supermarkets, fashion retailers, etc. A typical retail firm sells thousands of items in a week and needs to design offer for all items for the given time period. Offer design decisions are of primary importance for most retail firms, as optimal offer roll out can significantly enhance the business’ bottom line.\nMost retailers still employ a manual process based on intuition and past experience of the category managers to decide the depth and timing of promotions. The category manager has to manually solve the promotion optimization problem at consumer-item granularity, i.e., how to select an optimal offer for each period in a finite horizon so as to maximize the retailer’s profit. It is a difficult problem to solve, given that promotion planning process typically involves a large number of decision variables, and needs to ensure that the relevant business constraints or offer rules are satisfied. The high volume of data that is now available to retailers presents an opportunity to develop machine learning based solutions that can help the category managers improve promotion decisions.\nIn this paper, we propose deep learning with multi-obective optimization based approach to solve promotion optimization problem that can help retailers decide the promotions for multiple items while accounting for many important modelling aspects observed in retail data. The ultimate goal here is to maximize net revenue and consumer retention rate by promoting the right items at the right time using the right offer discounts at consumer-item level. Our contributions in this paper include a) Temporal Convolutional Neural Network architecture with hyperparameter configurations to predict the item purchase probability at consumer level for the given time period. b) Design and implementation of F1-maximization algorithm which optimises for purchase probability cut-off at consumer level. 
c) a methodology to estimate the offer-elasticity of purchase probability at consumer-item granularity; and d) a constraint-based optimization technique to estimate optimal offers at consumer-item granularity." }, { "heading": "2 RELATED WORK", "text": "There has been a significant amount of research conducted on offer-based revenue management over the past few decades, and several notable works address the promotion optimization problem. One such work is Cohen et al. (2017), where the authors propose general classes of demand functions (including multiplicative and additive) that incorporate the post-promotion dip effect, and use linear integer programming to solve the promotion optimization problem. In another work, Cohen & Perakis (2018) lay out the different types of promotions used in retail and formulate the promotion optimization problem for multiple items; they also show the application of a discrete linearization method for solving promotion optimization. Gathering learnings from the above papers, we create our framework for offer optimization. The distinguishing features of our work in this field include (i) the use of a nonparametric neural-network-based approach to estimate the item purchase probability at the consumer level, (ii) the establishment of the functional relationship between historical offer values and purchase probabilities, and (iii) the creation of a new model and efficient algorithm to set offers by solving a multi-consumer-item promotion optimization that incorporates the offer-elasticity of purchase probability at a reference offer value." }, { "heading": "3 METHODOLOGY", "text": "We build separate models for each category, as we understand that consumer purchase patterns and personalized marketing strategies might vary across categories." }, { "heading": "3.1 MODELLING", "text": "In our model setup, we treat each relevant consumer-item as an individual object and shape it into bi-weekly time series data based on historical transactions, where the target value at each time step (2 weeks) takes a binary value, 1/0 (purchased/non-purchased). Relevancy of a consumer-item is defined by the items transacted by the consumer during the training time window. Our positive samples (purchased/1) are time steps where the consumer transacted the item, whereas negative samples (non-purchased/0) are the time steps where the consumer did not buy that item. We apply a sliding-window testing routine for generating out-of-time results. The time series is split into 3 parts – train (48 weeks), validation (2 weeks) and test (2 weeks). All our models are built in a multi-object fashion for an individual category, which allows the gradient movement to happen across all consumer-item combinations split into batches. This enables cross-learning across consumers/items. A row in the time series is represented by
$y_{cit} = h(i_t, c_t, \ldots, c_{t-n}, ic_t, \ldots, ic_{t-n}, d_t, \ldots, d_{t-n})$ (1)
where $y_{cit}$ is the purchase prediction for consumer $c$ for item $i$ at time $t$, and $n$ is the number of time lags. $i_t$ denotes attributes of item $i$, like category, department, brand, color, size, etc., at time $t$. $c_t$ denotes attributes of consumer $c$, like age, sex and transactional attributes, at time $t$; $c_{t-n}$ denotes the transactional attributes of consumer $c$ at a lag of $t-n$ time steps. $ic_t$ denotes transactional attributes, such as basket size, price, offer, etc., of consumer $c$ towards item $i$ at time $t$. $d_t$ is derived from the datetime to capture trend and seasonality at time $t$.
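As an illustration of this data shaping, here is a minimal pandas sketch (column names are our own assumptions, not the paper's schema) that turns a raw transaction log into the bi-weekly purchase/non-purchase panel behind Equation 1:
```python
import pandas as pd

def build_panel(transactions: pd.DataFrame) -> pd.DataFrame:
    """transactions: one row per purchase with columns
    ['consumer_id', 'item_id', 'date'] (names assumed for illustration)."""
    tx = transactions.copy()
    t0 = tx["date"].min()
    tx["period"] = (tx["date"] - t0).dt.days // 14          # bi-weekly time step index
    # Relevant objects: consumer-item pairs that transacted in the training window.
    pairs = tx[["consumer_id", "item_id"]].drop_duplicates()
    periods = pd.DataFrame({"period": range(tx["period"].max() + 1)})
    grid = pairs.merge(periods, how="cross")                # every pair x every step
    bought = (tx.groupby(["consumer_id", "item_id", "period"]).size()
                .rename("y").reset_index())
    panel = grid.merge(bought, on=["consumer_id", "item_id", "period"], how="left")
    panel["y"] = (panel["y"].fillna(0) > 0).astype(int)     # 1 = purchased, 0 = not
    return panel
```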
}, { "heading": "3.1.1 FEATURE ENGINEERING", "text": "Based on available dataset, we generate multiple features for the modelling activity. Some of the feature groups we perform our experiments are:\nDatetime: We use transactional metrics at various temporal cuts like week, month, etc. Datetime related features capturing seasonality and trend are also generated. Consumer-Item Profile: We use transactional metrics at different granularities like consumer, item, and consumer-item. We also create features like Time since first order, Time since last order, time gap between orders, Reorder rates, Reorder frequency, Streak - user purchased the item in a row, Average position in the cart, Total number of orders. Price/Promotions: We use relative price and historical offer discount percentage to purchase propensity at varying price, and discount values. Lagged Offsets: We use statistical rolling operations like mean, median, variance, kurtosis and skewness over temporal regressors for different lag periods to generate offsets." }, { "heading": "3.1.2 LOSS FUNCTION", "text": "Since we are solving Binary Classification problem, we believe that Binary Cross-Entropy should be the most appropriate loss function for training the models. We use the below formula to calculate Binary Cross-Entropy:\nHp = − 1N ∑N i=1 y.log(p(y)) + (1− y).log(1− p(y)) (2)\nhere Hp represents computed loss, y is the target value (label), and p(y) is the predicted probability against the target. The BCELoss takes non-negative values. We can infer from Equation 3 that Lower the BCELoss, better the Accuracy." }, { "heading": "3.1.3 MODEL ARCHITECTURE", "text": "Traditional machine learning models may not be a suitable choice for modelling h (Equation 1) due to non-linear interactions between the features. Sequence to Sequence [Sutskever et al. (2014)] neural network architectures seems to be sound choice for tackling our problem. Hence, we use Entity Embeddings [Guo & Berkhahn (2016)] + Temporal Convolutional Network (TCN) (Figure 1) architecture for building all the models across categories. Originally proposed in [Lea et al. (2016)], TCN can take a sequence of any length and map it to an output sequence of the same length. For this to accomplish, TCN uses a 1D fully-convolutional network (FCN) architecture, where each hidden layer is the same length as the input layer, and zero padding of length (kernel size-1) is added to keep subsequent layers the same length as previous ones. Also, the convolutions in the architecture are causal, meaning that there is no information leakage from future to past. To achieve this, TCN uses causal convolutions [Bai et al. (2018)], convolutions where an output at time t is convolved only with elements from time t and earlier in the previous layer. For 1-D sequence input x and filter f the dilated convolution operation DC on element k of the sequence is defined as:\nDC(k) = (x ∗ df)(k) = ∑n−1 i=0 f(i) · xk-di , where x ∈ Rn and f : {0, ..., n− 1} → R (3)\nwhere d is the dilation factor, n is the filter size, and k-di accounts for the direction of the past. When d = 1, a dilated convolution reduces to a regular convolution. Using larger dilation enables an output at the top level to represent a wider range of inputs, thus effectively expanding the receptive field of a ConvNet.\nAs can be seen in Figure 1, Our network architecture comprises of 3 dilated Convolutions combined with entity embeddings [Guo & Berkhahn (2016)]. 
After the convolutions and concatenation with the embedding tensor, the resulting tensor flows through 3 fully connected ReLU layers, ending in a sigmoid dense layer. To segregate static and temporal features, we group the input tensor into 4 separate tensors, as can be seen in Figure 1:
Static Categorical: These are categorical features that do not vary with time. This includes consumer attributes like sex, marital status and location, along with item attributes like category, department and brand. Temporal Categorical: These are categorical features that vary with time. This includes all the datetime-related features like week, month of year, etc. Static Continuous: These features are static but continuous. This includes certain consumer attributes like age and weight, item attributes like size, and certain derived features like target-encoded features. Temporal Continuous: These are time-varying continuous features. All consumer- and item-related traditional attributes, like number of orders, add-to-cart order, etc., fall under this bucket." }, { "heading": "3.1.4 HYPERPARAMETER TUNING", "text": "We use documented best practices along with our experimental results to choose the model hyperparameters. Hyperparameter optimization is performed over the validation dataset. We list some of the hyperparameters, along with the values we tune, for the deep neural network models.
Optimizer Parameters: RMSProp [Bengio & CA (2015)] and Adam are used as optimizers across model runs. The learning rate is experimentally tuned to 1e-3. We also use a weight decay of 1e-5, which helps a bit with model regularization. Scheduler Parameters: CyclicLR [Smith (2017)] and ReduceLROnPlateau [Zaheer et al. (2018)] learning-rate schedulers are used across model runs. We use 1e-3 as max lr and 1e-6 as base lr for the cyclical learning rate, with the step size being a function of the length of the train loader. ReduceLROnPlateau is tuned with 1e-6 as min lr. SWA: Stochastic
Weight Averaging (SWA) [Izmailov et al. (2018)] is used to improve generalization across the deep learning models. SWA performs an equal average of the weights traversed by SGD with a modified learning-rate schedule. We use 1e-3 as the SWA learning rate. Parameter Average: This is a method to average the neural network parameters of the n best model checkpoints post training, weighted by the validation loss of the respective checkpoints.
Apart from these parameters, we also tune network parameters like the number of epochs, batch size, number of fully connected layers, convnet parameters (kernel size, dilations, padding) and embedding sizes for the categorical features. Binary Cross-Entropy (Equation 2) is used as the loss function for all the models trained across categories. The neural network models are built using the deep learning framework PyTorch [Paszke et al. (2017)], and are trained on a GCP instance containing 6 CPUs and a single GPU." }, { "heading": "3.2 F1-MAXIMIZATION", "text": "Post stacking, we optimize the purchase probability threshold based on the probability distribution at a consumer level using F1-Maximization. This enables optimal thresholding of consumer-level probabilities to maximize the F1 measure [Lipton et al. (2014)]. To illustrate, suppose the model generated purchase probabilities for $n$ items for consumer $c$, of which $b$ items were actually purchased.
Now, let us visualize the actuals and predictions (Equation 4) of the $n$ predicted items for consumer $c$:
$A_c = [a_1, a_2, \ldots, a_n] \;\forall\; a_j \in \{0, 1\}, \quad P_c = [p_1, p_2, \ldots, p_n] \;\forall\; p_j \in [0, 1]$ (4)
$A_c$ represents the actuals for consumer $c$, with $a_j$ being 1/0 (purchased/non-purchased). $P_c$ represents the predictions for consumer $c$ for the respective items, with $p_j$ being a probability value. $n$ is the total number of items for which the model generated purchase probabilities for consumer $c$. Now we apply a decision rule $D(\cdot)$ which converts probabilities to binary predictions, as described in Equation 5:
$D(Pr_c): P_c^{1 \times n} \to P'^{\,1 \times n}_c : p'_j = \begin{cases} 1 & p_j \ge Pr_c \\ 0 & \text{otherwise} \end{cases}$ (5)
$P'_c = [p'_1, p'_2, \ldots, p'_n] \;\forall\; p'_j \in \{0, 1\}, \quad k = \sum_{i=1}^{n} p'_i$ (6)
$Pr_c$ is the probability cut-off to be optimized for maximizing the F1 measure [Lipton et al. (2014)] for consumer $c$. The decision rule $D(\cdot)$ converts the probabilities $P_c$ to binary predictions $P'_c$ such that $p'_j$ equals 0 if $p_j$ is less than $Pr_c$, and 1 otherwise. $k$ is the sum of the predictions generated after applying the decision rule $D(\cdot)$. Now we solve for the F1 measure using the equations and formulae described below:
$V_{Pr_c} = P'_c \times A_c^T \;\Rightarrow\; (p'_1 \; \ldots \; p'_n) \times (a_1 \; \ldots \; a_n)^T$ (7)
$Precision_c = \frac{V_{Pr_c}}{k}, \quad Recall_c = \frac{V_{Pr_c}}{b}, \quad F1_c = \frac{2 \times Precision_c \times Recall_c}{Precision_c + Recall_c} \Rightarrow \frac{2 \times V_{Pr_c}}{k + b}$ (8)
$V_{Pr_c}$ represents the number of items with purchase probabilities greater than $Pr_c$ that were actually purchased (true positives). As can be seen, Formula 8 is used to calculate Precision, Recall and F1-score for consumer $c$.
$\max_{Pr_c} \frac{2 \times V_{Pr_c}}{k + b}, \quad \text{subject to: } Pr_c \in (0, 1)$ (9)
Equation 9 represents the optimization we solve to generate purchase predictions (1/0) for each consumer. Figure 5 (Section 4) shows the predicted probability distributions." }, { "heading": "3.3 ELASTICITY FRAMEWORK", "text": "After modelling, we establish the functional relationship between historical offer values and the purchase probabilities obtained from the model, which is then used to estimate the offer-elasticity of purchase probability at consumer-item granularity. Given that the output layer of our deep net is a sigmoid and we are modelling probability values, the sigmoid function (Figure 2) seemed an apt choice for studying the variation of purchase probability with offer percent. We also perform multiple experiments, as described in Figure 4 (Section 4), to check the goodness of fit of the sigmoid curve over our dataset across different categories; the average $R^2$ value for the 8 categories is approximately 75 percent.
$f(x) = \frac{1}{1 + e^{-(ax + b)}}, \quad f'(x) = a \times f(x) \times (1 - f(x))$ (10)
Since the functional relationship might vary across categories, we learn separate sigmoid parameters for each category. We then use the sigmoid curve to estimate elasticities; the x-elasticity of y measures the fractional response of y to a fractional change in x, which can be written as:
$\epsilon(x, y) = \frac{dy/y}{dx/x}$ (11)
We incorporate Equation 11 to determine the offer-elasticity of purchase probability. We use historical offer percent values at consumer-item granularity, identified using the following criteria, in that order: a) the average of the last 4 weeks' non-zero offer percent values of the consumer-item combination; b) the average of the historical non-zero offer percent values of the consumer-item combination; c) the average of the last 4 weeks' non-zero offer percent values of all same-age consumer-item combinations within that category. A small sketch of fitting this sigmoid and reading off elasticities follows.
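Here is a minimal scipy-based sketch (our own illustration, with toy numbers) of fitting the Equation 10 sigmoid to observed (offer percent, purchase probability) pairs for one category, and computing the Equation 12-style elasticity $\epsilon(k, f(k)) = a \cdot k \cdot (1 - f(k))$ at a reference offer $k$:
```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, a, b):
    return 1.0 / (1.0 + np.exp(-(a * x + b)))

# offers: historical offer percents; probs: model purchase probabilities (toy data)
offers = np.array([0., 5., 10., 15., 20., 25., 30.])
probs  = np.array([0.05, 0.08, 0.15, 0.27, 0.45, 0.62, 0.75])

(a, b), _ = curve_fit(sigmoid, offers, probs, p0=(0.1, -3.0))

def elasticity(k):
    # epsilon(k, f(k)) = f'(k) * k / f(k) = a * k * (1 - f(k))
    return a * k * (1.0 - sigmoid(k, a, b))

print(round(elasticity(15.0), 3))   # offer-elasticity at a 15% reference offer
```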
Using Equations 10 and 11, we establish the offer-elasticity of purchase probability (Equation 12) as shown below, $k$ being the offer percent and $f(k)$ the purchase probability:
$\epsilon(k, f(k)) = \frac{f'(k) \times k}{f(k)} \;\Rightarrow\; \epsilon(k, f(k)) = a \times k \times (1 - f(k))$ (12)" }, { "heading": "3.4 OFFER OPTIMIZATION", "text": "After estimating the offer-elasticity of purchase probability, for each category we solve the optimization problem below (Equation 13) to maximize net revenue, with the consumer retention rate kept greater
Seafood and Bakery are the top 2 best fitted categories with R2 values of 0.83 and 0.82 respectively. Figure 5 Chart-1 and Chart-2 shows the predicted probability distribution when actual label equals 0 and 1 respectively over the validation data split. Chart-3 shows the cut-off probability\ndistribution post F1-Maximization, and, we observe that highest density of cut-off probability lies between 0.2 to 0.3. Table 1 presents the accuracy values post Modelling and F1-Maximization. It also has the average elasticity values along with weighted offer percent computed post optimization. From model performance perspective , it is observed that Grocery category has least BCELoss of 0.0283. Pharmaceutical and Meat categories followed Grocery with BCELoss of 0.0296 and 0.0299 respectively. Also, Grocery has the best F1 score of 0.512 followed by Meat and Pharmaceutical scoring 0.511 and 0.509 respectively. Vegetables is the most elastic category with elasticity value of 1.53, whereas Packaged meat and Skin and Hair care are the least elastic categories with elasticity value of 0.62. From Figure 6, we can see the graphical representation of distribution of optimal offer calculated through optimization. We find Skin and Hair care along with Pharmaceutical to be left skewed, whereas Vegetables as well as Flowers and Plants to be right skewed." }, { "heading": "5 CONCLUSION", "text": "We have presented our detailed methodology to solve the offer optimization problem at the intersection of consumer, item and time in retail setting. We have also presented the results of our models and optimization in the form of model accuracies and graphical representations at a category level. At the same time we understand that computation strategy is a key aspect in modelling millions of consumers, and we intend to further explore this aspect by building Transfer Learning framework Yosinski et al. (2014). We are also working to further improve our Sequence to Sequence neural network architectures to improve accuracy and decrease computation time." } ]
2020
null
SP:072e0bcba8ff2c75ff0ff55576ae77943dada729
[ "This paper studies the problem of imitation an expert from noisy demonstrations without interactions with the environment. It proposes an algorithm, which utilizes an ensemble of behavioral cloning policies and is analogous to the mean shift algorithm, to find the mode from noisy demonstrations. Experimentally, the proposed algorithm can almost recover the expert policy. " ]
We consider the problem of learning an optimal expert behavior policy given noisy demonstrations that contain observations from both optimal and non-optimal expert behaviors. Popular imitation learning algorithms, such as generative adversarial imitation learning, assume that (clean) demonstrations are given from optimal expert policies but not the non-optimal ones, and thus often fail to imitate the optimal expert behaviors given the noisy demonstrations. Prior works that address the problem require (1) learning policies through environment interactions in the same fashion as reinforcement learning, and (2) annotating each demonstration with confidence scores or rankings. However, such environment interactions and annotations in real-world settings require impractically long training times and significant human effort. In this paper, we propose an imitation learning algorithm that addresses the problem without any environment interactions or annotations associated with the non-optimal demonstrations. The proposed algorithm learns ensemble policies with a generalized behavioral cloning (BC) objective function in which we exploit another policy already learned by BC. Experimental results show that the proposed algorithm can learn behavior policies that are much closer to the optimal policies than those learned by BC.
[ { "affiliations": [], "name": "NOISY DEMONSTRA" }, { "affiliations": [], "name": "Fumihiro Sasaki" }, { "affiliations": [], "name": "Ryota Yamashina" } ]
[ { "authors": [ "Pieter Abbeel", "Andrew Y Ng" ], "title": "Apprenticeship learning via inverse reinforcement learning", "venue": "In Proceedings of the twenty-first international conference on Machine learning,", "year": 2004 }, { "authors": [ "Kianté Brantley", "Wen Sun", "Mikael Henaff" ], "title": "Disagreement-regularized imitation learning", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Daniel S Brown", "Wonjoon Goo", "Prabhat Nagarajan", "Scott Niekum" ], "title": "Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations", "venue": null, "year": 1904 }, { "authors": [ "Thomas Degris", "Martha White", "Richard S Sutton" ], "title": "Off-policy actor-critic", "venue": "arXiv preprint arXiv:1205.4839,", "year": 2012 }, { "authors": [ "Scott Fujimoto", "David Meger", "Doina Precup" ], "title": "Off-policy deep reinforcement learning without exploration", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Keinosuke Fukunaga", "Larry Hostetler" ], "title": "The estimation of the gradient of a density function, with applications in pattern recognition", "venue": "IEEE Transactions on information theory,", "year": 1975 }, { "authors": [ "Seyed Kamyar Seyed Ghasemipour", "Richard Zemel", "Shixiang Gu" ], "title": "A divergence minimization perspective on imitation learning methods", "venue": "In Conference on Robot Learning,", "year": 2020 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Daniel H Grollman", "Aude G Billard" ], "title": "Robot learning from failed demonstrations", "venue": "International Journal of Social Robotics,", "year": 2012 }, { "authors": [ "Jonathan Ho", "Stefano Ermon" ], "title": "Generative adversarial imitation learning", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Michael Kaiser", "Holger Friedrich", "Rudiger Dillmann" ], "title": "Obtaining good performance from a bad teacher", "venue": "In Programming by Demonstration vs. 
Learning from Examples Workshop at ML,", "year": 1995 }, { "authors": [ "Beomjoon Kim", "Amir-massoud Farahmand", "Joelle Pineau", "Doina Precup" ], "title": "Learning from limited demonstrations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2013 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Aviral Kumar", "Aurick Zhou", "George Tucker", "Sergey Levine" ], "title": "Conservative q-learning for offline reinforcement learning", "venue": "arXiv preprint arXiv:2006.04779,", "year": 2020 }, { "authors": [ "Sascha Lange", "Thomas Gabel", "Martin Riedmiller" ], "title": "Batch reinforcement learning", "venue": "In Reinforcement learning,", "year": 2012 }, { "authors": [ "Timothy P Lillicrap", "Jonathan J Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "arXiv preprint arXiv:1509.02971,", "year": 2015 }, { "authors": [ "Andrew Y Ng", "Stuart J Russell" ], "title": "Algorithms for inverse reinforcement learning", "venue": "In Icml,", "year": 2000 }, { "authors": [ "Michael Peter Perrone" ], "title": "Improving Regression Estimation: Averaging Methods for Variance Reduction with Extensions to General Convex Measure Optimization", "venue": "PhD thesis, Brown University,", "year": 1993 }, { "authors": [ "Dean A Pomerleau" ], "title": "Efficient training of artificial neural networks for autonomous navigation", "venue": "Neural computation,", "year": 1991 }, { "authors": [ "Stéphane Ross", "Drew Bagnell" ], "title": "Efficient reductions for imitation learning", "venue": "In Proceedings of the thirteenth international conference on artificial intelligence and statistics,", "year": 2010 }, { "authors": [ "Fumihiro Sasaki", "Tetsuya Yohira", "Atsuo Kawaguchi" ], "title": "Sample efficient imitation learning for continuous control", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Voot Tangkaratt", "Bo Han", "Mohammad Emtiyaz Khan", "Masashi Sugiyama" ], "title": "Vild: Variational imitation learning with diverse-quality demonstrations", "venue": "arXiv preprint arXiv:1909.06769,", "year": 2019 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Yueh-Hua Wu", "Nontawat Charoenphakdee", "Han Bao", "Voot Tangkaratt", "Masashi Sugiyama" ], "title": "Imitation learning from imperfect demonstration", "venue": "arXiv preprint arXiv:1901.09387,", "year": 2019 }, { "authors": [ "Brian D Ziebart", "Andrew L Maas", "J Andrew Bagnell", "Anind K Dey" ], "title": "Maximum entropy inverse reinforcement learning", "venue": "In Aaai,", "year": 2008 } ]
[ { "heading": "1 INTRODUCTION", "text": "Imitation learning (IL) has become a widely used approach to obtain autonomous robotics control systems. IL is often more applicable in real-world problems than reinforcement learning (RL) since expert demonstrations are often easier than designing appropriate rewards that RL requires. There have been several IL methods that involve RL (Ziebart et al., 2008; Ng et al., 2000; Abbeel & Ng, 2004; Ho & Ermon, 2016). Those IL methods inherit sample complexity from RL in terms of environment interactions during training. The complexity restricts applicabilities in real-world problems since a number of environment interactions in real-world settings often take a long time and cause damage to the robot or the environment. Therefore, we are interested in IL methods that do not require the environment interactions, such as behavioral cloning (BC) (Pomerleau, 1991) which learns an expert policy in a supervised fashion.\nBC as well as popular IL methods, such as generative adversarial imitation learning (GAIL) (Ho & Ermon, 2016), assume the expert demonstration is optimal. Unfortunately, it is often difficult to obtain optimal demonstrations for many tasks in real-world problems because the expert who tries to operate the robot so that it can achieve tasks often makes mistakes due to various reasons, such as the difficulty of the task, difficulty in handling the controller, limited observability of the environment, or the presence of distraction. The mistakes include unnecessary and/or incorrect operations to achieve the tasks. Given such noisy expert demonstrations, which contain records of both optimal and non-optimal behavior, BC as well as the popular IL methods fails to imitate the optimal policy due to the optimal assumption on the demonstrations as shown in (Wu et al., 2019).\nA naive solution to cope with the noisy demonstrations is discarding the non-optimal demonstrations among the ones that were already collected. This screening process is often impractical because it involves a significant human effort. Most of recent IL works suppose settings where a very limited number of clean expert demonstrations, which are composed of only the optimal behavior records, are available. Those methods are also vulnerable to the noisy demonstrations due to the optimal\nassumption on the demonstrations. Thus they implicitly suppose such impractical screening process if they were applied in real-world problems, where a number of the noisy demonstrations other than the clean ones can be easily obtained. There have been IL methods addressing the noisy demonstrations. Instead of the screening process, they require to annotate each demonstration with confidence scores (Wu et al., 2019) or rankings (Brown et al., 2019). Even though they cope well with the noisy demonstrations to obtain the optimal behavior policies, such annotation costs a significant human effort as it is for the screening. Hence, we desire IL methods that can cope well with the noisy demonstrations, which can be easily obtained in real-world settings, without any screening and annotation processes associated with the non-optimal behaviors.\nIn this paper, we propose a novel imitation learning algorithm to address the noisy demonstrations. The proposed algorithm does not require (1) any environment interactions during training, and (2) any screening and annotation processes associated with the non-optimality of the expert behaviors. 
Our algorithm learns ensemble policies with a generalized BC objective function where we exploit another policy already learned by BC. Experimental results show that the proposed algorithm can learn policies that are much closer to the optimal than ones learned by BC." }, { "heading": "2 RELATED WORKS", "text": "A wide variety of IL methods have been proposed in these last few decades. BC (Pomerleau, 1991) is the simplest IL method among those and thus BC could be the first IL option when enough clean demonstrations are available. Ross & Bagnell (2010) have theoretically pointed out a downside of the BC which is referred to as compounding error – the small errors of the learners trained by BC could compound over time and bring about the deterioration of their performance. On the other hand, experimental results in (Sasaki et al., 2018) show that BC given the clean demonstrations of sufficient amounts can easily obtain the optimal behavior even for complex continuous control tasks. Hence, the effect of the compounding error is negligible in practice if the amount of clean demonstrations is sufficient. However, even if the amount of the demonstrations is large, BC cannot obtain the optimal policy given the noisy demonstrations due to the optimal assumption on the demonstrations. Another widely used IL approaches are inverse reinforcement learning (IRL) (Ziebart et al., 2008; Ng et al., 2000; Abbeel & Ng, 2004) and adversarial imitation learning (AIL) (Ho & Ermon, 2016). Since those approaches also assume the optimality of the demonstrations, they are also not able to obtain the optimal policy given the noisy demonstrations, as shown in (Wu et al., 2019). As we will show in Section 6, our algorithm successfully can learn near-optimal policies if noisy demonstrations of sufficient amounts are given.\nThere have been several works that address the noisy demonstrations (Wu et al., 2019; Brown et al., 2019; Tangkaratt et al., 2019; Kaiser et al., 1995; Grollman & Billard, 2012; Kim et al., 2013). Those works address the noisy demonstrations by either screening the non-optimal demonstrations with heuristic non-optimal assessments (Kaiser et al., 1995), annotations associated with the nonoptimality (Wu et al., 2019; Brown et al., 2019; Grollman & Billard, 2012), or training through the environment interactions (Kim et al., 2013; Wu et al., 2019; Brown et al., 2019; Tangkaratt et al., 2019). Our algorithm does not require any screening processes, annotations associated with the non-optimality, and the environment interactions during training.\nOffline RL methods (Lange et al., 2012; Fujimoto et al., 2019; Kumar et al., 2020) train the learner agents without any environment interactions, and allow the training dataset to have non-optimal trajectories as in our problem setting. A drawback of offline RL methods for the real-world applications is the requirement to design reward functions, which often involves a significant human efforts for its success, since those methods assume that the reward for each state-action pair is known. Our algorithm does not require to design reward functions as in standard IL methods.\nDisagreement regularized imitation learning (DRIL) (Brantley et al., 2019) is a state-of-the-art IL algorithm which employs an ensemble of policies as our algorithm does. The aims of employing the ensemble is different between DRIL and our algorithm. 
DRIL uses the disagreement in predictions made by policies in the ensemble to evaluate whether the states observed during training the learner are ones observed in the expert demonstrations. On the other hand, our algorithm uses the ensemble to encourage the learner to take optimal actions on each state as described in 5.3. In addition, DRIL fundamentally requires the environment interactions during training whereas our algorithm does not." }, { "heading": "3 PRELIMINARIES AND PROBLEM SETUP", "text": "In this work, we consider an episodic fixed-horizon Markov decision process (MDP) which is formalized as a tuple {S,A,P, R, d0, T}, where S is a set of states, A is a set of possible actions agents can take, P : S×A×S → [0, 1] is a transition probability, R : S×A→ [0, 1] is a reward function, d0 : S → [0, 1] is a distribution over initial states, and T is an episode horizon. The agent’s behavior is defined by a stochastic policy π : S×A→ [0, 1] and Π denotes a set of the stochastic policies. The expected one-step immediate reward for a policy π given a state s is defined as Rπ(s) = Ea∼π(·|s) [ R(s, a) ] .\nLet dπt and d π = 1T ∑T t=1 d π t denote the distribution over states at time step t and the average distribution over T time steps induced by π, respectively. The distributions dπ1 at the first step correspond to d0 for any π. When following a policy π throughout an episode, the expected one-step immediate reward at time step t and the expected T -step reward are defined as Rπt = Es∼dπt ,a∼π(·|s) [ R(s, a) ] = Es∼dπt [ Rπ(s) ] and J (π,R) = ∑T t=1R π t = TEs∼dπ [ Rπ(s) ] , respectively. We refer to J (π,R) as on-policy expected T -step reward. We also consider another T -step reward defined by Jβ(π,R) = TEs∼dβ [ Rπ(s) ] , which we call off-policy expected T -step reward, where β ∈ Π is a policy that can differ from π. In our problem setting, the functions R is not given. Instead, we observe noisy demonstrations. We refer to the agent that generates the noisy demonstrations as the noisy expert. The decision process turns to be MDP\\{R} as in the common imitation learning settings, and our problem can be formalized as to find an optimal policy in MDP\\{R}. Here we refer to the true expert policy π∗e as ones being able to take the optimal (thus not noisy) behavior in episodic tasks. We make the following four assumptions to further formalize our problem setting:\nAssumption 1. The T -step expected reward of π∗e satisfies J (π,R) ≤ J (π∗e , R); Jβ(π,R) ≤ Jβ(π∗e , R); and Jβ(π∗e , R) ≤ J (π∗e , R) for any non-optimal policies π, β ∈ Π \\ {π∗e}.\nAssumption 2. With small probability , which we call non-optimal probability, the policies πe the noisy experts follow during demonstrations are sampled at each time step as πe = π ∼ pΠ if ≥ z ∼ U(0, 1), otherwise πe = π∗e , where pΠ is an unknown distribution over the set of policies, z is a random variable, and U(0, 1) is a uniform distribution with range [0, 1].\nAssumption 3. The rewardRπet is at least zero if the noisy expert has followed a policy π ∈ Π\\{π∗e} once or more so far, otherwise Rπet = Es∼dπet [ Eπ∼pΠ [Rπ(s)] + (1− )Rπ ∗ e (s) ] .\nAssumption 4. The sequence {Rπe1 , ..., R πe T } has monotonically decreasing property R πe t ≥ R πe t+1.\nAssumption 1 indicates that both on-policy and off-policy expected T -step reward following π∗e are always greater than or equal to ones following any other policies. 
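Before unpacking the assumptions further, the data-generating process of Assumption 2 can be made concrete with a short sketch. The Python code below is ours, not the authors' code: `env` is assumed to expose a Gym-style `reset`/`step` interface, and `pi_star` and `sample_nonoptimal_policy` are placeholder callables standing in for π∗e and for drawing π ∼ pΠ. Note that Assumption 2 samples the policy at every time step, whereas the experiments in Section 6 instead fix the selected non-optimal policy through an episode.

```python
import numpy as np

def rollout_noisy_expert(env, pi_star, sample_nonoptimal_policy, eps, T):
    """One episode under Assumption 2: at every time step the expert
    follows a non-optimal policy pi ~ p_Pi with probability eps,
    and the true expert policy pi_star otherwise."""
    demo, s = [], env.reset()
    for t in range(T):
        if eps >= np.random.uniform(0.0, 1.0):   # z ~ U(0, 1)
            pi = sample_nonoptimal_policy()      # pi ~ p_Pi
        else:
            pi = pi_star
        a = pi(s)
        demo.append((s, a))
        s, _, done, _ = env.step(a)
        if done:
            break
    return demo
```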
In other words, Assumption 1 says the true expert policy is an optimal one in the MDP, and the agent following that policy behaves so that the expected immediate rewards at any state are maximized. Under Assumption 1, the problem we would like to solve in this work is to learn a parameterized policy $\pi_\theta$ that maximizes its on-policy expected $T$-step reward $\mathcal{J}(\pi_\theta, R)$ towards $\mathcal{J}(\pi^*_e, R)$. Assumption 2 indicates that the noisy expert occasionally adopts non-optimal policies, which results in the noisy demonstrations, due to random events, such as the presence of distractions, associated with the random variable $z$. The noisy expert will visit states that would never be visited by the true expert if the noisy expert followed non-optimal policies even once. Assumption 3 indicates that those states are less rewarded and their rewards are at least zero. Assumption 3 also indicates that the noisy demonstrations contain a number of episodes where the noisy expert has reached the same state $s$ at which it has adopted both $\pi^*_e$ and $\pi \in \Pi \setminus \{\pi^*_e\}$ with probability $\epsilon$. Assumption 4 indicates that, since the probability that the noisy expert consecutively follows $\pi^*_e$ decreases as the time step increases according to Assumption 2, the divergence between $d^{\pi_e}_t$ and $d^{\pi^*_e}_t$ becomes greater as $t$ increases, and thus the one-step expected immediate reward $R^{\pi_e}_t = \mathbb{E}_{s \sim d^{\pi_e}_t, a \sim \pi_e(\cdot|s)}[R(s,a)]$ decreases as $t$ increases." }, { "heading": "4 ANALYSIS OF PERFORMANCE DETERIORATION", "text": "In this section, we first describe the BC objective in 4.1. Then, we analyze why a learner trained by BC deteriorates in performance when using the noisy demonstrations, from the expected T-step reward maximization and KL-divergence minimization perspectives in 4.2 and 4.3, respectively." }, { "heading": "4.1 BEHAVIORAL CLONING OBJECTIVE", "text": "Let $\pi_\theta \in \Pi$ be a learner policy parameterized by $\theta$ to be optimized by IL algorithms. The common objective of BC is as follows:
$$\arg\max_\theta \; \mathbb{E}_{s \sim d^{\pi_e},\, a \sim \pi_e(\cdot|s)}[\log \pi_\theta(a|s)]. \quad (1)$$
The objective (1) aims to mimic the expert behavior which follows $\pi_e$. It can be interpreted that (1) maximizes the expected one-step immediate reward $R^{\pi_\theta}(s)$ towards $R^{\pi_e}(s)$ at each state $s \sim d^{\pi_e}$. Since the state distribution $d^{\pi_e}$ is not induced by $\pi_\theta$, it can also be said that (1) maximizes the off-policy expected $T$-step reward $\mathcal{J}^{\pi_e}(\pi_\theta, R)$ towards $\mathcal{J}(\pi_e, R)$." }, { "heading": "4.2 THE EXPECTED T-STEP REWARD MAXIMIZATION", "text": "We obtain the lower bound of the on-policy expected $T$-step reward for the noisy expert policy in almost the same way as Theorem 2.1 in (Ross & Bagnell, 2010), where the lower bound was shown for learner policies given the “clean“ expert demonstrations. Theorem 1. If Assumptions 1 - 4 hold, $\mathcal{J}(\pi_e, R)$ has the following lower bound:
$$\mathcal{J}(\pi_e, R) \geq \Big\{ \frac{1}{T} \sum_{t=0}^{T-1} (1-\epsilon)^t \Big\} \cdot \mathbb{E}_{\pi \sim p_\Pi}[\mathcal{J}^{\pi_e}(\pi, R)]. \quad (2)$$
The detailed derivation can be found in Appendix A.1. Assume that the learner policy $\pi_\theta$ has a probability of non-optimal behavior of at most $\hat{\epsilon} = \epsilon + \zeta$ as the result of BC, where $\zeta \in [0, 1-\epsilon]$ is an additional probability of non-optimal behavior due to the loss remaining in (1). Note that $\zeta$ may become greater than zero due to the difficulty of optimizing (1) even if $\epsilon = 0$. The learner following $\pi_\theta$ with $\hat{\epsilon}$ can be deemed another noisy expert who samples a policy at each time step as $\pi_\theta = \pi \sim p_{\pi_\theta}$ if $\hat{\epsilon} \geq z \sim U(0,1)$, and $\pi_\theta = \pi^*_e$ otherwise, where $p_{\pi_\theta}$ is a (special) distribution from which the same policy is always sampled. 
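To get a feel for how quickly the multiplicative factor in (2) decays with the non-optimal probability, here is a quick numeric check (ours, not from the paper):

```python
import numpy as np

def bound_factor(eps, T):
    """The factor (1/T) * sum_{t=0}^{T-1} (1 - eps)^t from Eq. (2)."""
    return np.mean((1.0 - eps) ** np.arange(T))

for eps in [0.0, 0.1, 0.3, 0.5]:
    print(eps, round(bound_factor(eps, T=1000), 4))
# eps=0.0 -> 1.0, eps=0.1 -> ~0.01, eps=0.3 -> ~0.0033, eps=0.5 -> ~0.002:
# even a small non-optimal probability collapses the guarantee.
```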
By replacing $\epsilon$ and $p_\Pi$ in Theorem 1 with $\hat{\epsilon}$ and $p_{\pi_\theta}$, respectively, we obtain the following corollary. Corollary 1. If Assumptions 1 - 4 hold and the policy $\pi_\theta$ has a probability of non-optimal behavior $\hat{\epsilon} = \epsilon + \zeta$, then $\mathcal{J}(\pi_\theta, R)$ has the following lower bound:
$$\mathcal{J}(\pi_\theta, R) \geq \Big\{ \frac{1}{T} \sum_{t=0}^{T-1} (1-\hat{\epsilon})^t \Big\} \cdot \mathcal{J}^{\pi_e}(\pi_\theta, R). \quad (3)$$
Recall that the BC objective (1) maximizes $\mathcal{J}^{\pi_e}(\pi_\theta, R)$. If $\hat{\epsilon} = 0$, Corollary 1 indicates that the on-policy expected $T$-step reward $\mathcal{J}(\pi_\theta, R)$, which corresponds to the actual learner performance, is boosted by maximizing $\mathcal{J}^{\pi_e}(\pi_\theta, R)$ through the optimization of the BC objective (1). On the other hand, if $\epsilon > 0$ and thus $\hat{\epsilon} > 0$, the first factor on the RHS of (3) becomes much smaller as $\epsilon$ becomes larger. Corollary 1 thus shows that the probability of non-optimal behavior of the noisy expert significantly negates the improvement of the learner performance $\mathcal{J}(\pi_\theta, R)$ by BC, even if $\zeta$ can be sufficiently minimized through the optimization. Hence, the learner trained by BC is not able to boost its performance enough if the noisy demonstrations were given." }, { "heading": "4.3 KL DIVERGENCE MINIMIZATION", "text": "Let $\mathcal{S}^{\pi_e}$ be the set of states that are observed in the noisy demonstrations. $\mathcal{S}^{\pi_e}$ can be thought of as the domain of the (empirical) state distribution $d^{\pi_e}$, and can be defined as the union of two state sets, $\mathcal{S}^{\pi_e} = \mathcal{S}^{\pi_e}_e \cup \mathcal{S}^{\pi_e}_{e+*}$, where $\mathcal{S}^{\pi_e}_e$ contains states that are observed if the noisy expert has followed a policy $\pi \in \Pi \setminus \{\pi^*_e\}$ once or more so far in the episode, and $\mathcal{S}^{\pi_e}_{e+*}$ contains states at which the noisy expert follows a policy $\pi \in \Pi \setminus \{\pi^*_e\}$ for the first time in the episode. Under Assumption 3, the rewards $R^{\pi_e}_t$ for states $s \in \mathcal{S}^{\pi_e}_e$ are at least zero, whereas $R^{\pi_e}_t = \mathbb{E}_{s \sim d^{\pi_e}_t}[\epsilon\,\mathbb{E}_{\pi \sim p_\Pi}[R^\pi(s)] + (1-\epsilon)R^{\pi^*_e}(s)]$ for states $s \in \mathcal{S}^{\pi_e}_{e+*}$. Note that the noisy expert adopts $\pi \in \Pi \setminus \{\pi^*_e\}$ with probability $\epsilon$ at states $s \in \mathcal{S}^{\pi_e}_{e+*}$. Let $d^{\pi_e}_e$ and $d^{\pi_e}_{e+*}$ be the state distributions the noisy expert policy induces on $\mathcal{S}^{\pi_e}_e$ and $\mathcal{S}^{\pi_e}_{e+*}$, respectively. Then we can define $d^{\pi_e}$ as a mixture of those distributions as
$$d^{\pi_e}(s) = \alpha\, d^{\pi_e}_e(s) + \beta\, d^{\pi_e}_{e+*}(s), \quad (4)$$
where $\alpha$ and $\beta$ are the rates at which the noisy expert entered states belonging to $\mathcal{S}^{\pi_e}_e$ and $\mathcal{S}^{\pi_e}_{e+*}$ during demonstrations, respectively, and $\alpha + \beta = 1$ is satisfied. Using Equation (4), the upper bound of the objective function in Equation (1) is derived as follows:
$$\mathbb{E}_{s \sim d^{\pi_e},\, a \sim \pi_e(\cdot|s)}[\log \pi_\theta(a|s)] \leq -\alpha\,\Omega_e(\theta) - \beta\,\Omega_{e+*}(\theta), \quad (5)$$
$$\Omega_e(\theta) = \mathbb{E}_{s \sim d^{\pi_e}_e}[D_{\mathrm{KL}}[\pi_e(\cdot|s)\,||\,\pi_\theta(\cdot|s)]], \quad (6)$$
$$\Omega_{e+*}(\theta) = \epsilon\,\mathbb{E}_{s \sim d^{\pi_e}_{e+*},\, \pi \sim p_\Pi}[D_{\mathrm{KL}}[\pi(\cdot|s)\,||\,\pi_\theta(\cdot|s)]] + (1-\epsilon)\,\mathbb{E}_{s \sim d^{\pi_e}_{e+*}}[D_{\mathrm{KL}}[\pi^*_e(\cdot|s)\,||\,\pi_\theta(\cdot|s)]], \quad (7)$$
where $D_{\mathrm{KL}}$ is the forward Kullback-Leibler (KL) divergence. The full derivation can be found in Appendix A.2. The inequality (5) shows that the BC objective (1) with the noisy demonstrations minimizes a sum of KL divergences. The first term on the RHS of (7) leads the learner to imitate some non-optimal behaviors, whereas the second term is to learn $\pi^*_e$ on the same states. Maximizing the RHS of (7) is difficult because minimizing KL divergences with different target distributions at the same time is difficult in general. The first term on the RHS of (7) thus works as a “noisy” regularizer with coefficient $\epsilon$ that confuses the learner in learning $\pi^*_e$. The difficulty in the optimization due to the noisy regularizer increases $\zeta$ as $\epsilon$ increases.

As mentioned in 4.1 and 4.2, BC maximizes $\mathcal{J}^{\pi_e}(\pi_\theta, R)$ towards $\mathcal{J}(\pi_e, R)$. Hence, minimizing $\Omega_e(\theta)$ in (6) corresponds to maximizing $\mathbb{E}_{s \sim d^{\pi_e}_e}[R^{\pi_\theta}(s)]$ towards $\mathbb{E}_{s \sim d^{\pi_e}_e}[R^{\pi_e}(s)]$. 
Since the rewards Rπe(s) are at least zero for the states s ∼ dπee according to Assumption 3 and the definition of Sπee , Es∼dπee [R\nπθ (s)] becomes at least zero by minimizing Ωe(θ). Hence Jπe(πθ, R) becomes at least zero as the rate α increases, while the rate α increases as the probabilities of non-optimal behavior increases. Thus, the larger the probability is, the more difficult it is to boost the learner performance by BC.\nIf the influence of the noisy regularizer can be reduced, probabilities the learner follows π∗e at the state s ∈ Sπee+∗ will increase. In addition, as probabilities the learner follows π∗e at the states s ∈ Sπee+∗ increase, the rate (corresponding to α) for the states s ∈ Sπee will decrease. Thus, it can be said that, the more often learner follows π∗e at the states s ∈ S πe e+∗, the more rewards R\nπ∗e (s) the learner obtains according to Assumption 3. To summarize the above analysis, reducing the influence of the noisy regularizer for states s ∈ Sπee+∗, which leads the learner to imitate some non-optimal behaviors, might boost the learner performance." }, { "heading": "5 ALGORITHM", "text": "The analyses in Section 4 describe that the learner trained by standard BC deteriorates its performance when the noisy demonstrations are given. Based on both analyses in 4.2 and 4.3, the learner performance will be boosted if the learner imitates the optimal policy π∗e but not the non-optimal ones π ∈ Π \\ {π∗e} for the states s ∈ S πe e+∗. In other words, the learner performance will be boosted if ̂ of the learner can be reduced. In this section, we first propose our algorithm that avoids learning π ∈ Π\\{π∗e} while learning π∗e in 5.1. Then we describe how our algorithm works to avoid learning π ∈ Π\\{π∗e} from mode seeking and reward maximization perspectives in 5.2 and 5.3, respectively. We lastly provide limitations of our algorithm in 5.4." }, { "heading": "5.1 PROPOSED ALGORITHM", "text": "We consider a generalization of the BC objective as follows:\narg max θ Es∼dπe ,a∼πe(·|s)[log πθ(a|s) · R̂(s, a)], (8)\nAlgorithm 1 Behavioral Cloning from Noisy Demonstrations 1: Given the expert demonstrations D. 2: Set R̂(s, a) = 1 for ∀(s, a) ∈ D. 3: Split D into K disjoint sets {D1,D2, ...,DK}. 4: for iteration = 1,M do 5: for k = 1,K do 6: Initialize parameters θk. 7: for l = 1, L do 8: Sample a random minibatch of N state-action pairs (sn, an) from Dk. 9: Calculate a sampled gradient 1N ∑N n=1∇θk log πθk(sn, an) · R̂(sn, an). 10: Update θk by gradient ascent using the sampled gradient. 11: end for 12: end for 13: Copy πθold ← πθ. 14: Set R̂(s, a) = πθold(a|s) for ∀(s, a) ∈ D. 15: end for 16: return πθ.\nwhere R̂ : S×A→ [0, 1] denotes an arbitrary function which can differ from R. If R̂(s, a) = 1 for ∀(s, a) ∈ S × A, the objective (8) corresponds to the BC objective (1). If ∫ A R̂(s, a)da = 1 for ∀s ∈ S is satisfied, R̂(s, a) can be interpreted as weights for action samples obtained by the demonstrations so that the actions are sampled according to their relative weights. The objective (8) can also be deemed as that of the off-policy actor-critic (Off-PAC) algorithm1 (Degris et al., 2012) with reward functions R̂(s, a) and zero discount factors.\nLet πθ1 , πθ2 , ..., πθK beK parameterized policies with different initial parameters θ1, θ2, ..., θK , and πθ(a|s) = ∑K k=1 πθk(a|s)/K denotes an ensemble of the parameterized policies with parameters θ = {θ1, θ2, ..., θK}. Let πθold be a parameterized policy with θold which was already optimized with the noisy demonstrations. 
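Putting the generalized objective (8), the ensemble, and the old policy together, Algorithm 1 (stated above) can be summarized in code. The following PyTorch-style sketch is ours, not the authors' release: `make_policy`, `demo.sample_minibatch`, and the per-policy `log_prob` method are hypothetical helpers.

```python
import copy
import torch

def bc_from_noisy_demos(make_policy, demo_splits, M=5, L=500, N=128, lr=1e-4):
    """Minimal sketch of Algorithm 1. demo_splits are the K disjoint subsets
    D_1..D_K; make_policy() builds a freshly initialized policy network that
    exposes log_prob(s, a) (a hypothetical helper)."""
    ensemble_old = None                               # iteration 1: R_hat = 1
    for _ in range(M):
        ensemble = []
        for demo in demo_splits:
            policy = make_policy()                    # step 6: re-initialize theta_k
            opt = torch.optim.Adam(policy.parameters(), lr=lr)
            for _ in range(L):
                s, a = demo.sample_minibatch(N)       # hypothetical helper
                if ensemble_old is None:
                    r_hat = torch.ones(s.shape[0])
                else:                                 # step 14: R_hat = pi_old(a|s)
                    with torch.no_grad():
                        r_hat = torch.stack(
                            [p.log_prob(s, a).exp() for p in ensemble_old]
                        ).mean(dim=0)
                loss = -(policy.log_prob(s, a) * r_hat).mean()   # objective (8)
                opt.zero_grad(); loss.backward(); opt.step()
            ensemble.append(policy)
        ensemble_old = [copy.deepcopy(p) for p in ensemble]      # step 13
    return ensemble           # pi_theta(a|s) = mean_k pi_theta_k(a|s)
```

Note how setting R̂ = 1 in the first iteration makes the first pass plain BC; subsequent iterations reweight each demonstrated pair (s, a) by the previous ensemble's density.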
The main idea of our algorithm is to reuse the old policy πθold as R̂(s, a) in the generalized BC objective (8).\narg max θ Es∼dπe ,a∼πe(·|s)[log πθ(a|s) · πθold(a|s)]. (9)\nThe overview of our algorithm is described in Algorithm 1.\n5.2 WEIGHTED ACTION SAMPLING FOR π∗e MODE SEEKING\nSince πθold satisfies ∫ A πθold(a|s)da = 1 for ∀s ∈ S , πθold can be interpreted as the weights for the weighted action sampling. We below explain the weighted action sampling procedure in our algorithm on Sπee+∗. Figure 1 depicts a toy example of the sampling procedure. The distribution of the noisy expert actions on Sπee+∗ is a mixture of two distributions as shown in Equation (7). If is sufficiently small, πθ is optimized so that its mode is closer to that of π∗e than π ∈ Π \\ {π∗e} according to mode seeking properties of the forward KL divergence (Ghasemipour et al., 2020). Given the sampling weights πθold(a|s) = πθ(a|s) for the empirical action samples, the weighted action distribution distorts so that its mode also gets closer to the mode of π∗e . By iteratively distorting the weighted action distribution with the same procedure, its mode fits to near the mode of πe∗. The weights for actions sampled from π ∈ Π\\{π∗e} eventually become much smaller, and thus the learner will not learn π ∈ Π \\ {π∗e}. The mode seeking procedure of our algorithm is analogous to the mean shift algorithm (Fukunaga & Hostetler, 1975) so that the mode of πθ shifts towards that of π∗e by minimizing the KL divergence between πθ and the weighted action distribution.\n1Although Off-PAC multiplies log πθ(a|s) by a density ratio πe(s|a)/πθ(s|a), πθ(s|a) is empirically approximated to be one in popular off-policy RL algorithms such as DDPG (Lillicrap et al., 2015)." }, { "heading": "5.3 REWARD MAXIMIZATION", "text": "As the Off-PAC objective, the objective (9) maximizes the expected (one-step) reward R̂(s, a) = πθold(a|s). Recall that the learner policy πθ(a|s) = ∑K k=1 πθk(a|s)/K is an ensemble of the parameterized policies in our algorithm. Following the work in (Perrone, 1993), we obtain\n1\nK K∑ k=1 Es∼dπe ,a∼πe(·|s)[log πθk(a|s) · R̂(s, a)] ≤ Es∼dπe ,a∼πe(·|s) [ log πθ(a|s) · R̂(s, a) ] , (10)\nwhere we use Jensen’s inequality with the concave property of logarithm : 1K ∑K k=1 log πθk(a|s) ≤ log πθ(a|s). The inequality (10) indicates that the ensemble of policies πθ1 , πθ2 , ..., πθK , each of which was learned with (8), has greater or equal values of the objective function in (8) than the averaged values over the policies in the ensemble. As mentioned in 5.2, R̂(s, a) = πθold(a|s) becomes higher near the mode of π∗e . Thus, making πθ as the ensemble further encourages to shift its mode to that of π∗e and avoid learning π ∈ Π \\ {π∗e}." }, { "heading": "5.4 LIMITATIONS", "text": "Our algorithm has three limitations. First, K ×M times computational cost is required in comparison with BC, where M is the number of iterations in Algorithm 1. Second, the compounding error due to the probability of non-optimal behavior ζ still remains unless sufficient amounts of the demonstrations are given. Lastly, πθ is fitting to π ∈ Π \\ {π∗e} rather than π∗e if the major mode of π(a|s) + (1 − )π∗e(a|s) is nearer to the mode of π(a|s) than that of π∗e . It may be caused due to the higher kurtosis of π(a|s) or of large values." }, { "heading": "6 EXPERIMENTS", "text": "In our experiments, we aim to answer the following three questions:\nQ1. Does our algorithm improve the learner performance more than BC given the noisy demonstrations?\nQ2. 
Can the compounding error due to ζ be reduced as the number of noisy demonstrations increase?\nQ3. Is our algorithm competitive to the existing IL methods if both annotations associated with the non-optimality and environment interactions are allowed?" }, { "heading": "6.1 SETUP", "text": "To answer Q1 and Q2, we evaluated our algorithm against BC on four continuous control tasks that are simulated with MuJoCo physics simulator (Todorov et al., 2012). We train an agent on each task by proximal policy optimization (PPO) algorithm (Schulman et al., 2017) using the rewards defined in the OpenAI Gym (Brockman et al., 2016). We use the resulting stochastic policy as the true expert policy π∗e . We generate the noisy expert demonstrations using π ∗ e while randomly adopting non-optimal policies π with probabilities of the non-optimal behavior . The non-optimal policies π are selected from uniform distributions a ∼ U(−u, u), Gaussian distributions a ∼ N (a∗, I) with a ∼ π∗e(·|s), or a deterministic policy a = 0, where u ∈ R|A| denotes all-ones vectors and I ∈ R|A|×|A| denotes identity matrices. are selected from {0.0, 0.1, 0.2, 0.3, 0.4, 0.5}. The noisy expert takes actions following π∗e if z ≥ otherwise π which is fixed to a selected one through an episode, where z ∼ U(0, 1). Each noisy demonstration with the selected consists ofN state-action pairs, where N is selected from {5000, 10000, 50000, 100000}. Then we perform our algorithm as well as BC to train the learners using each noisy demonstration. We also conducted the same experiments on four low-dimensional discrete control tasks (see Appendix A.4).\nTo answer Q3, we evaluated our algorithm against IC-GAIL (Wu et al., 2019), 2IWIL (Wu et al., 2019), T-REX (Brown et al., 2019), GAIL and DRIL on three continuous control tasks. ICGAIL, 2IWIL and T-REX require both annotations associated with the non-optimality and environment interactions. GAIL and DRIL require the environment interactions for the training, but they do not address the noisy demonstration problem. The true expert policy π∗e are obtained in the same way as mentioned above. The non-optimal policies π are fixed to a ∼ U(−u, u). We generate the noisy expert demonstrations which consists of 10000 state-action pairs for each ∈ {0.05, 0.1, 0.15, ...., 1.0}. Then we perform our algorithm and the baselines using all noisy demonstrations. The detailed description of this experimental setup can be found in Appendix A.3.\nIn both experiments, the performance of the learners is measured by cumulative rewards they earned in an episode. The cumulative reward is normalized with ones earned by π∗e and a random policy a ∼ U(−u, u) so that 1.0 and 0.0 indicate the performance of π∗e and the random policy, respectively. We run five experiments on each task and setup, and measure the mean and standard deviation of the normalized cumulative rewards for each learner over the five experiments. In all experiments, we set the number of policies K = 5 in the ensemble learner policy πθ and the number of iterations M = 5. The implementation details of our algorithm can be found in Appendix A.5." }, { "heading": "6.2 RESULTS", "text": "Figure 2 depicts the experimental results against BC. Over all tasks, our algorithm obtains much better learner performance than BC-Single, which is a single (thus not an ensemble) policy learned by BC. It suggests that the policies learned by our algorithm are closer to π∗e than ones learned by BC. 
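As a reminder of the metric behind Figure 2, the normalization of cumulative rewards described in 6.1 is simply the following (a one-line sketch, ours):

```python
def normalize_return(R, R_expert, R_random):
    # 1.0 -> true-expert performance, 0.0 -> random-policy performance
    return (R - R_random) / (R_expert - R_random)
```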
The compounding error due to $\zeta$ is expected to be reduced as the number of demonstrations increases. Whereas BC-Ensemble, which denotes the ensemble of policies learned by BC, yields significant performance gains over BC-Single, increasing the number of noisy demonstrations has little effect in boosting the performance of the learner trained by BC-Ensemble, as shown in Figure 2-(D). This indicates that BC-Ensemble cannot reduce the compounding error due to $\epsilon$. On the other hand, our algorithm can boost the learner performance up to that of $\pi^*_e$ as the number of demonstrations increases. This suggests that our algorithm can reduce the compounding error due to both $\epsilon$ and $\zeta$ if sufficient amounts of the noisy expert demonstrations are given, as is the case for BC with clean expert demonstrations. The results with the deterministic non-optimal policy $\pi \in \Pi \setminus \{\pi^*_e\}$, which always takes the action $a = 0$, are worse than those with the other non-optimal policies. This corresponds to the limitation of our algorithm mentioned in 5.4, since the major mode of $\epsilon\,\pi(a|s) + (1-\epsilon)\,\pi^*_e(a|s)$ might be around $a = 0$. We also conducted ablation experiments where the number of policies $K$ is selected from $\{1, 5\}$ in our algorithm. See Appendix A.6 for details. The ablation results show that the learner obtains better performance as $K$ increases. In addition, the performance of the learner trained by our algorithm is significantly better than that of BC-Single even when $K = 1$. This suggests that our algorithm improves the learner performance not only by the ensemble approach but also by using the old policies $\pi_{\theta_{old}}$.

Table 1 shows the experimental results against IC-GAIL, 2IWIL, T-REX, GAIL and DRIL. Over all tasks, 2IWIL and our algorithm successfully reach the true expert performance while the others do not. This suggests that our algorithm obtains results competitive with those of existing IL methods even though the annotations and the environment interactions are not used." }, { "heading": "7 CONCLUSION", "text": "In this paper, we proposed an imitation learning algorithm to cope with noisy expert demonstrations. Experimental results showed that our algorithm can learn behavior policies that are much closer to the true expert policies than ones learned by BC. Since our algorithm copes well with the noisy expert demonstrations while not requiring any environment interactions or annotations associated with the non-optimal demonstrations, it is more applicable to real-world problems than the prior works. Although our algorithm has a few limitations as mentioned in 5.4, we believe that the analysis of performance deterioration detailed in Section 4 is a step forward in solving the noisy demonstration problem. In future work, we will consider the setting where the probability of non-optimal behavior is state-dependent, which occurs in the real world more often than the state-independent case that we have considered in this paper." }, { "heading": "A APPENDIX", "text": "A.1 DETAILED DERIVATION OF THEOREM 1

Proof. Let $q_t = (1-\epsilon)^t$ denote the probability that the noisy expert consecutively follows $\pi^*_e$ in the first $t$ steps, and let $\chi = \sum_{t=1}^{T} q_{t-1}$ denote the sum of $q_{t-1}$ over time steps. 
Then we obtain:
$$\mathcal{J}(\pi_e, R) \geq \sum_{t=1}^{T} \big[ q_{t-1} R^{\pi_e}_t + (1 - q_{t-1}) \cdot 0 \big] \quad (11)$$
$$\geq T \Big\{ \frac{1}{T} \sum_{t=1}^{T} q_{t-1} \Big\} \Big\{ \frac{1}{T} \sum_{t=1}^{T} R^{\pi_e}_t \Big\} \quad (12)$$
$$= \frac{\chi}{T} \Big\{ \sum_{t=1}^{T} \mathbb{E}_{s \sim d^{\pi_e}_t} \big[ \epsilon\,\mathbb{E}_{\pi \sim p_\Pi}[R^\pi(s)] + (1-\epsilon) R^{\pi^*_e}(s) \big] \Big\}$$
$$= \frac{\chi}{T} \Big\{ \epsilon\,\mathbb{E}_{\pi \sim p_\Pi}[\mathcal{J}^{\pi_e}(\pi, R)] + (1-\epsilon)\,\mathcal{J}^{\pi_e}(\pi^*_e, R) \Big\}$$
$$\geq \frac{\chi}{T} \Big\{ \epsilon\,\mathbb{E}_{\pi \sim p_\Pi}[\mathcal{J}^{\pi_e}(\pi, R)] + (1-\epsilon)\,\mathbb{E}_{\pi \sim p_\Pi}[\mathcal{J}^{\pi_e}(\pi, R)] \Big\} \quad (13)$$
$$= \Big\{ \frac{1}{T} \sum_{t=0}^{T-1} (1-\epsilon)^t \Big\} \cdot \mathbb{E}_{\pi \sim p_\Pi}[\mathcal{J}^{\pi_e}(\pi, R)]$$
The first inequality (11) follows from Assumptions 2 and 3. The second inequality (12) follows from Chebyshev's sum inequality with the monotonically decreasing property of Assumption 4. The third inequality (13) follows from Assumption 1: $\mathcal{J}^\beta(\pi, R) \leq \mathcal{J}^\beta(\pi^*_e, R)$ for any $\pi, \beta \in \Pi \setminus \{\pi^*_e\}$.

A.2 DETAILED DERIVATION OF THE KL DIVERGENCES

From the definition in (4), we obtain:
$$\mathbb{E}_{s \sim d^{\pi_e},\, a \sim \pi_e(\cdot|s)}[\log \pi_\theta(a|s)] = \alpha\,\mathbb{E}_{s \sim d^{\pi_e}_e,\, a \sim \pi_e(\cdot|s)}[\log \pi_\theta(a|s)] \quad (14)$$
$$+\, \beta\,\mathbb{E}_{s \sim d^{\pi_e}_{e+*},\, a \sim \pi_e(\cdot|s)}[\log \pi_\theta(a|s)] \quad (15)$$
The forward Kullback-Leibler (KL) divergence $D_{\mathrm{KL}}$ between $\pi_e$ and $\pi_\theta$ over a state distribution $d^{\pi_e}$ is defined as $\mathbb{E}_{s \sim d^{\pi_e}}[D_{\mathrm{KL}}(\pi_e(\cdot|s)\,||\,\pi_\theta(\cdot|s))] = -\mathbb{E}_{s \sim d^{\pi_e}}[\mathbb{E}_{a \sim \pi_e(\cdot|s)}[\log \pi_\theta(a|s)] + \mathcal{H}[\pi_e(\cdot|s)]]$, where $\mathcal{H}$ denotes the entropy. Since $\mathcal{H}[\pi_e(\cdot|s)]$ always takes a positive value and is not associated with $\theta$, we obtain the inequality $\mathbb{E}_{s \sim d^{\pi_e},\, a \sim \pi_e(\cdot|s)}[\log \pi_\theta(a|s)] \leq -\mathbb{E}_{s \sim d^{\pi_e}}[D_{\mathrm{KL}}(\pi_e(\cdot|s)\,||\,\pi_\theta(\cdot|s))]$. The same goes for (14):
$$\alpha\,\mathbb{E}_{s \sim d^{\pi_e}_e,\, a \sim \pi_e(\cdot|s)}[\log \pi_\theta(a|s)] \leq -\alpha\,\mathbb{E}_{s \sim d^{\pi_e}_e}[D_{\mathrm{KL}}(\pi_e(\cdot|s)\,||\,\pi_\theta(\cdot|s))]. \quad (16)$$
Since $\pi_e$ adopts both $\pi^*_e$ and $\pi \in \Pi \setminus \{\pi^*_e\}$ with probability $\epsilon$, the term (15) can be expanded as:
$$\beta\,\mathbb{E}_{s \sim d^{\pi_e}_{e+*},\, a \sim \pi_e(\cdot|s)}[\log \pi_\theta(a|s)] = \beta\,\mathbb{E}_{s \sim d^{\pi_e}_{e+*}} \Big\{ \epsilon\,\mathbb{E}_{\pi \sim p_\Pi,\, a \sim \pi(\cdot|s)}[\log \pi_\theta(a|s)] + (1-\epsilon)\,\mathbb{E}_{a \sim \pi^*_e(\cdot|s)}[\log \pi_\theta(a|s)] \Big\}$$
$$\leq -\beta \Big\{ \epsilon\,\mathbb{E}_{s \sim d^{\pi_e}_{e+*},\, \pi \sim p_\Pi}[D_{\mathrm{KL}}(\pi(\cdot|s)\,||\,\pi_\theta(\cdot|s))] + (1-\epsilon)\,\mathbb{E}_{s \sim d^{\pi_e}_{e+*}}[D_{\mathrm{KL}}(\pi^*_e(\cdot|s)\,||\,\pi_\theta(\cdot|s))] \Big\} \quad (17)$$

A.3 DETAILED DESCRIPTION OF THE EXPERIMENTAL SETUP

We annotate confidence scores for the noisy demonstrations so that the confidence is one if the demonstrations are obtained with $\epsilon = 0$, and zero otherwise. The confidence scores are used by IC-GAIL as well as 2IWIL. We use publicly available code2 for the implementation of both IC-GAIL and 2IWIL. We follow the training procedure of both methods as described in Section 5 of (Wu et al., 2019).

2https://github.com/kristery/Imitation-Learning-from-Imperfect-Demonstration

[Figure 3: The performance of policies vs. $\epsilon$, given 50000 state-action pairs of the noisy expert demonstrations where the non-optimal policies $\pi \in \Pi \setminus \{\pi^*_e\}$ select actions uniformly at random. Panels: CartPole-v1, Acrobot-v1, MountainCar-v0, LunarLander-v2; y-axis: normalized cumulative reward; x-axis: probability of non-optimal behavior; curves: Noisy Expert, BC-Single, BC-Ensemble, Ours. BC-Single is a policy learned by BC. BC-Ensemble is an ensemble of policies, each of which was learned by BC. Shaded regions indicate the standard deviation over five experiments.]

We annotate rankings for the noisy demonstrations so that smaller $\epsilon$ corresponds to a higher ranking. Then, we train the learner by T-REX given the ranked demonstration data. We use publicly available code3 for the implementation of T-REX.

For training the learner with GAIL and DRIL, we use all noisy demonstrations without any screening process. We use publicly available code4 for the implementation of GAIL and DRIL.

A.4 EXPERIMENTAL RESULTS ON DISCRETE CONTROL TASKS

Figure 3 shows the experimental results on four discrete control tasks. 
Over all tasks, our algorithm obtains much better results than BC.

A.5 IMPLEMENTATION DETAILS OF OUR ALGORITHM

We implement our algorithm using $K$ neural networks with two hidden layers to represent the policies $\pi_{\theta_1}, \pi_{\theta_2}, ..., \pi_{\theta_K}$ in the ensemble. The input of the networks is a vector representation of the state. Each neural network has 100 hidden units in each hidden layer followed by a hyperbolic tangent nonlinearity, and the dimensionality of its final output corresponds to that of the action space. The final output is followed by a softmax function in the discrete control tasks. For the continuous control tasks, the final output represents the mean of a Gaussian policy $\pi_{\theta_k} = \mathcal{N}(\mu_{\theta_k}(s), \sigma^2_{\theta_k})$, where $\sigma^2_{\theta_k}$ is implemented as a trainable vector independent of the networks. The neural network architecture for the policy trained by BC is the same as that of a single policy in our algorithm. We employ Adam (Kingma & Ba, 2014) for learning the parameters with a learning rate of $\eta \times 10^{-4}$, where $\eta = K / \sum_{k=1}^{K} \pi_{\theta^{old}_k}(\mu_{\theta^{old}_k}(s)|s)$ is a scaling parameter. The parameter $\eta$ rescales $\hat{R} = \pi_{\theta_{old}}(a|s)$ to avoid slow training due to small values of $\pi_{\theta_{old}}(a|s)$. The parameters in all layers are initialized by Xavier initialization (Glorot & Bengio, 2010). The mini-batch size and the number of training epochs are 128 and 500, respectively.

A.6 ABLATION EXPERIMENTS

We conducted ablation experiments to evaluate how the number of policies $K$ in the ensemble policy $\pi_\theta$, as well as the number of policies $K_{old}$ used in the old ensemble policy $\pi_{\theta_{old}}$, affect the performance. Table 2 summarizes the ablation results. Even if our algorithm uses $K = 1$ as BC-Single does, its results are better than those of BC. This indicates that the weighted action sampling described in 5.2 works to avoid learning the non-optimal policies without relying on the ensemble approach. The same holds for $K = 5$: our algorithm with $K = 5$ and $K_{old} = 1$ obtains much better performance than BC-Ensemble with $K = 5$. This result also supports that the weighted action sampling works. The learner performance with fixed $K$ increases as $K_{old}$ increases. Similarly, the learner performance with fixed $K_{old}$ increases as $K$ increases. This suggests that both $K$ and $K_{old}$ affect the performance of our algorithm.

3https://github.com/hiwonjoon/ICML2019-TREX
4https://github.com/xkianteb/dril" } ]
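The policy architecture described in A.5 can be sketched as follows. This PyTorch sketch is ours, not the authors' code; the state/action dimensions are illustrative, and the `log_prob` method is the hypothetical helper assumed by the Algorithm 1 sketch earlier.

```python
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """One ensemble member per A.5: two tanh hidden layers of 100 units
    predicting the Gaussian mean, plus a trainable state-independent log-std."""
    def __init__(self, state_dim, action_dim, hidden=100):
        super().__init__()
        self.mu = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(action_dim))
        for m in self.mu:
            if isinstance(m, nn.Linear):            # Xavier initialization
                nn.init.xavier_uniform_(m.weight)
                nn.init.zeros_(m.bias)

    def log_prob(self, s, a):
        dist = torch.distributions.Normal(self.mu(s), self.log_std.exp())
        return dist.log_prob(a).sum(dim=-1)

# K = 5 ensemble; dimensions here are illustrative placeholders
policies = [GaussianPolicy(state_dim=17, action_dim=6) for _ in range(5)]
```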
2,021
null
SP:32798b2b132f08f55733c163090f097aa0b384bf
[ "The authors propose to mix the independent embeddings of audio and visual data by a set of cross-attention layers to the task of audio-visual event localization. They weigh the features based on the correlation between the representations of both modalities to obtain the output representation. After several layers like these, combined with dense skip connections, the final output is a multimodal representation of the input where the features have been individually learned for each modality based on information from the other one. The final features are concatenated and passed through an open-max classifier. Finally, they design different losses to help enforce coherence and continuity among the proposed labels throughout the video." ]
Temporally localizing actions in videos is one of the key components of video understanding. Learning from weakly-labeled data is seen as a potential solution towards avoiding expensive frame-level annotations. Different from other works, which depend only on the visual modality, we propose to learn a richer audio-visual representation for weakly-supervised action localization. First, we propose a multi-stage cross-attention mechanism to collaboratively fuse audio and visual features, which preserves the intra-modal characteristics. Second, to model both foreground and background frames, we construct an open-max classifier which treats the background class as an open set. Third, for precise action localization, we design consistency losses to enforce temporal continuity for the action-class prediction, and also help with foreground-prediction reliability. Extensive experiments on two publicly available video datasets (AVE and ActivityNet1.2) show that the proposed method effectively fuses audio and visual modalities, and achieves state-of-the-art results for weakly-supervised action localization.
[ { "affiliations": [], "name": "Juntae Lee" }, { "affiliations": [], "name": "Mihir Jain" }, { "affiliations": [], "name": "Hyoungwoo Park" } ]
[ { "authors": [ "Yusuf Aytar", "Lluis Castrejon", "Carl Vondrick", "Hamed Pirsiavash", "Antonio Torralba" ], "title": "Cross-modal scene networks", "venue": "In IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2017 }, { "authors": [ "Abhijit Bendale", "Terrance E. Boult" ], "title": "Towards open set deep networks", "venue": "In CVPR. 2016", "year": 2016 }, { "authors": [ "Joao Carreira", "Andrew Zisserman" ], "title": "Quo vadis, action recognition? A new model and the kinetics dataset", "venue": "In CVPR. 2017", "year": 2017 }, { "authors": [ "Thomas G. Dietterich" ], "title": "Steps toward robust artificial intelligence", "venue": "In AI Mag.,", "year": 2017 }, { "authors": [ "Thomas G. Dietterich", "Richard H. Lathrop", "Tomás Lozano-Pérez" ], "title": "Solving the multiple instance problem with axis-parallel rectangles", "venue": "In Artif. Intell.,", "year": 1997 }, { "authors": [ "Andrea Frome", "Greg S. Corrado", "Jon Shlens", "Samy Bengio", "Jeff Dean", "Marc’Aurelio Ranzato", "Tomas Mikolov" ], "title": "DeVise: A deep visual-semantic embedding model", "venue": "In NIPS", "year": 2013 }, { "authors": [ "Akira Fukui", "Dong Huk Park", "Daylen Yang", "Anna Rohrbach", "Trevor Darrell", "Marcus Rohrbach" ], "title": "Multimodal compact bilinear pooling for visual question answering and visual grounding", "venue": "In arXiv preprint arXiv:1606.01847", "year": 2016 }, { "authors": [ "Jort F Gemmeke", "Daniel PW Ellis", "Dylan Freedman", "Aren Jansen", "Wade Lawrence", "R Channing Moore", "Manoj Plakal", "Marvin Ritter" ], "title": "Audio set: An ontology and human-labeled dataset for audio events", "venue": "In ICASSP. 2017", "year": 2017 }, { "authors": [ "Xavier Glorot", "Yoshua Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks", "venue": "In AISTATS", "year": 2010 }, { "authors": [ "Wenzhong Guo", "Jianwen Wang", "Shiping Wang" ], "title": "Deep multimodal representation learning: A survey", "venue": "In IEEE Access,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In CVPR. 2016", "year": 2016 }, { "authors": [ "Shawn Hershey", "Sourish Chaudhuri", "Daniel PW Ellis", "Jort F Gemmeke", "Aren Jansen", "R Channing Moore", "Manoj Plakal", "Devin Platt", "Rif A Saurous", "Bryan Seybold" ], "title": "CNN architectures for large-scale audio classification", "venue": "In ICASSP. 2017", "year": 2017 }, { "authors": [ "Ruibing Hou", "Hong Chang", "MA Bingpeng", "Shiguang Shan", "Xilin Chen" ], "title": "Cross attention network for few-shot classification", "venue": "In NeurIPS. 2019", "year": 2019 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens van der Maaten", "Kilian Q. Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In CVPR. 2017", "year": 2017 }, { "authors": [ "Xun Huang", "Ming-Yu Liu", "Serge Belongie", "Jan Kautz" ], "title": "Multimodal unsupervised image-toimage translation", "venue": "In ECCV. 2018", "year": 2018 }, { "authors": [ "Mihir Jain", "Amir Ghodrati", "Cees G.M. Snoek" ], "title": "ActionBytes: Learning from trimmed videos to localize actions", "venue": "In CVPR. 
2020", "year": 2020 }, { "authors": [ "Will Kay", "Joao Carreira", "Karen Simonyan", "Brian Zhang", "Chloe Hillier", "Sudheendra Vijayanarasimhan", "Fabio Viola", "Tim Green", "Trevor Back", "Paul Natsev" ], "title": "The kinetics human action video dataset", "venue": "In arXiv preprint arXiv:1705.06950", "year": 2017 }, { "authors": [ "Jin-Hwa Kim", "Kyoung-Woon On", "Woosang Lim", "Jeonghee Kim", "Jung-Woo Ha", "Byoung-Tak Zhang" ], "title": "Hadamard product for low-rank bilinear pooling", "venue": "In ICLR", "year": 2017 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In ICLR", "year": 2015 }, { "authors": [ "Ryan Kiros", "Ruslan Salakhutdinov", "Richard S. Zemel" ], "title": "Unifying visual-semantic embeddings with multimodal neural language models", "venue": "In NIPS", "year": 2014 }, { "authors": [ "Angeliki Lazaridou", "Nghia The Pham", "Marco Baroni" ], "title": "Combining language and vision with a multimodal skip-gram model", "venue": "In HLT-NAACL", "year": 2015 }, { "authors": [ "Kuang-Huei Lee", "Xi Chen", "Gang Hua", "Houdong Hu", "Xiaodong He" ], "title": "Stacked cross attention for image-text matching", "venue": "In ECCV. 2018", "year": 2018 }, { "authors": [ "Y. Lin", "Y. Li", "Y.F. Wang" ], "title": "Dual-modality seq2seq network for audio-visual event localization", "venue": "In ICASSP. 2019", "year": 2019 }, { "authors": [ "Venice Erin Liong", "Jiwen Lu", "Yap-Peng Tan", "Jie Zhou" ], "title": "Deep coupled metric learning for crossmodal matching", "venue": "In IEEE Trans. Multimedia,", "year": 2016 }, { "authors": [ "Daochang Liu", "Tingting Jiang", "Yizhou Wang" ], "title": "Completeness modeling and context separation for weakly supervised temporal action localization", "venue": "In CVPR. 2019a", "year": 2019 }, { "authors": [ "Ziyi Liu", "Le Wang", "Qilin Zhang", "Zhanning Gao", "Zhenxing Niu", "Nanning Zheng", "Gang Hua" ], "title": "Weakly supervised temporal action localization through contrast based evaluation networks", "venue": "In ICCV. 2019b", "year": 2019 }, { "authors": [ "Zhekun Luo", "Devin Guillory", "Baifeng Shi", "Wei Ke", "Fang Wan", "Trevor Darrell", "Huijuan Xu" ], "title": "Weakly-supervised action localization with expectation-maximization multi-instance learning", "venue": "In ECCV. 2020", "year": 2020 }, { "authors": [ "Noam Mor", "Lior Wolf", "Adam Polyak", "Yaniv Taigman" ], "title": "A universal music translation network", "venue": "In arXiv preprint arXiv:1805.07848", "year": 2018 }, { "authors": [ "Sanath Narayan", "Hisham Cholakkal", "Fahad Shahbaz Khan", "Ling Shao" ], "title": "3C-Net: Category count and center loss for weakly-supervised action localization", "venue": "In ICCV. 2019", "year": 2019 }, { "authors": [ "Phuc Nguyen", "Ting Liu", "Gautam Prasad", "Bohyung Han" ], "title": "Weakly supervised action localization by sparse temporal pooling network", "venue": "In CVPR. 2018", "year": 2018 }, { "authors": [ "Phuc Xuan Nguyen", "Deva Ramanan", "Charless C. Fowlkes" ], "title": "Weakly-supervised action localization with background modeling", "venue": "In ICCV. 2019", "year": 2019 }, { "authors": [ "Andrew Owens", "Alexei A. Efros" ], "title": "Audio-visual scene analysis with self-supervised multisensory features", "venue": "In ECCV. 
2018", "year": 2018 }, { "authors": [ "Yingwei Pan", "Tao Mei", "Ting Yao", "Houqiang Li", "Yong Rui" ], "title": "Jointly modeling embedding and translation to bridge video and language", "venue": "In CVPR", "year": 2016 }, { "authors": [ "Sujoy Paul", "Sourya Roy", "Amit K. Roy-Chowdhury" ], "title": "W-TALC: Weakly-supervised temporal activity localization and classification", "venue": "In ECCV. 2018", "year": 2018 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "In Int. J. Comput. Vis.,", "year": 2015 }, { "authors": [ "Baifeng Shi", "Qi Dai", "Yadong Mu", "Jingdong Wang" ], "title": "Weakly-supervised action localization by generative attention modeling", "venue": "In CVPR. 2020", "year": 2020 }, { "authors": [ "Yapeng Tian", "Jing Shi", "Bochen Li", "Zhiyao Duan", "Chenliang Xu" ], "title": "Audio-visual event localization in unconstrained videos", "venue": "In ECCV. 2018", "year": 2018 }, { "authors": [ "Limin Wang", "Yuanjun Xiong", "Dahua Lin", "Luc Van Gool" ], "title": "UntrimmedNets for weakly supervised action recognition and detection", "venue": "In CVPR. 2017", "year": 2017 }, { "authors": [ "Xi Wei", "Tianzhu Zhang", "Yan Li", "Yongdong Zhang", "Feng Wu" ], "title": "Multi-modality cross attention network for image and sentence matching", "venue": "In CVPR", "year": 2020 }, { "authors": [ "Yu Wu", "Linchao Zhu", "Yan Yan", "Yi Yang" ], "title": "Dual attention matching for audio-visual event localization", "venue": "In ICCV. 2019", "year": 2019 }, { "authors": [ "Ran Xu", "Caiming Xiong", "Wei Chen", "Jason J Corso" ], "title": "Jointly modeling deep video and compositional text to bridge vision and language in a unified framework", "venue": "In AAAI", "year": 2015 }, { "authors": [ "Hanyu Xuan", "Zhenyu Zhang", "Shuo Chen", "Jian Yang", "Yan Yan" ], "title": "Cross-modal attention network for temporal inconsistent audio-visual event localization", "venue": "In AAAI. 2020", "year": 2020 }, { "authors": [ "Tan Yu", "Zhou Ren", "Yuncheng Li", "Enxu Yan", "Ning Xu", "Junsong Yuan" ], "title": "Temporal structure mining for weakly supervised action detection", "venue": "In ICCV. 2019", "year": 2019 }, { "authors": [ "Zhou Yu", "Jun Yu", "Jianping Fan", "Dacheng Tao" ], "title": "Multi-modal factorized bilinear pooling with co-attention learning for visual question answering", "venue": "In ICCV. 2017", "year": 2017 }, { "authors": [ "Amir Zadeh", "Minghai Chen", "Soujanya Poria", "Erik Cambria", "Louis-Philippe Morency" ], "title": "Tensor fusion network for multimodal sentiment analysis", "venue": "In EMNLP. 2017", "year": 2017 }, { "authors": [ "Yue Zhao", "Yuanjun Xiong", "Limin Wang", "Zhirong Wu", "Xiaoou Tang", "Dahua Lin" ], "title": "Temporal action detection with structured segment networks", "venue": "In ICCV. 2017", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "The goal of this paper is to temporally localize actions and events of interest in videos with weaksupervision. In the weakly-supervised setting, only video-level labels are available during the training phase to avoid expensive and time-consuming frame-level annotation. This task is of great importance for video analytics and understanding. Several weakly-supervised methods have been developed for it (Nguyen et al., 2018; Paul et al., 2018; Narayan et al., 2019; Shi et al., 2020; Jain et al., 2020) and considerable progress has been made. However, only visual information is exploited for this task and audio modality has been mostly overlooked. Both, audio and visual data often depict actions from different viewpoints (Guo et al., 2019). Therefore, we propose to explore the joint audio-visual representation to improve the temporal action localization in videos.\nA few existing works (Tian et al., 2018; Lin et al., 2019; Xuan et al., 2020) have attempted to fuse audio and visual modalities to localize audio-visual events. These methods have shown promising results, however, these audio-visual events are essentially actions that have strong audio cues, such as playing guitar, and dog barking. Whereas, we aim to localize wider range of actions related to sports, exercises, eating etc. Such actions can also have weak audio aspect and/or can be devoid of informative audio (e.g. with unrelated background music). Therefore, it is a key challenge to fuse audio and visual data in a way that leverages the mutually complementary nature while maintaining the modality-specific information.\nIn order to address this challenge, we propose a novel multi-stage cross-attention mechanism. It progressively learns features from each modality over multiple stages. The inter-modal interaction is allowed at each stage only through cross-attention, and only at the last stage are the visuallyaware audio features and audio-aware visual features concatenated. Thus, an audio-visual feature representation is obtained for each snippet in videos.\nSeparating background from actions/events is a common problem in temporal localization. To this end, we also propose: (a) foreground reliability estimation and classification via open-max classifier and (b) temporal continuity losses. First, for each video snippet, an open-max classifier predicts ∗Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.\nscores for action and background classes, which is composed of two parallel branches for action classification and foreground reliability estimation. Second, for precise action localization with weak supervision, we design temporal consistency losses to enforce temporal continuity of actionclass prediction and foreground reliability.\nWe demonstrate the effectiveness of the proposed method for weakly-supervised localization of both audio-visual events and actions. Extensive experiments are conducted on two video datasets for localizing audio-visual events (AVE1) and actions (ActivityNet1.22). To the best of our knowledge, it is the first attempt to exploit audio-visual fusion for temporal localization of unconstrained actions in long videos." }, { "heading": "2 RELATED WORK", "text": "Our work relates to the tasks of localizing of actions and events in videos, as well as to the regime of multi-model representation learning.\nWeakly-supervised action localization: Wang et al. (2017) and Nguyen et al. 
(2018) employed multiple instance learning (Dietterich et al., 1997) along with attention mechanism to localize actions in videos. Paul et al. (2018) introduced a co-activity similarity loss that looks for similar temporal regions in a pair of videos containing a common action class. Narayan et al. (2019) proposed center loss for the discriminability of action categories at the global-level and counting loss for separability of instances at the local-level. To alleviate the confusion due to background (nonaction) segments, Nguyen et al. (2019) developed the top-down class-guided attention to model background, and (Yu et al., 2019) exploited temporal relations among video segments. Jain et al. (2020) segmented a video into interpretable fragments, called ActionBytes, and used them effectively for action proposals. To distinguish action and context (near-action) snippets, Shi et al. (2020) designed the class-agnostic frame-wise probability conditioned on the attention using conditional variational auto-encoder. Luo et al. (2020) proposed an expectation-maximization multi-instance learning framework where the key instance is modeled as a hidden variable. All these works have explored various ways to temporally differentiate action instances from the near-action background by exploiting only visual modality, whereas we additionally utilize audio modality for the same objective.\nAudio-visual event localization: The task of audio-visual event localization, as defined in the literature, is to classify each time-step into one of the event classes or background. This is different from action localization, where the goal is to determine the start and the end of each instance of the given action class. In (Tian et al., 2018), a network with audio-guided attention was proposed, which showed prototypical results for audio-visual event localization, and cross-modality synchronized event localization. To utilize both global and local cues in event localization, Lin et al. (2019) conducted audio-visual fusion in both of video-level and snippet-level using multiple LSTMs. Assuming single event videos, Wu et al. (2019) detected the event-related snippet by matching the video-level feature of one modality with the snippet-level feature sequence of the other modality. Contrastingly, our cross-attention is over the temporal sequences from both the modalities and does not assume single-action videos. In order to address the temporal inconsistency between audio and visual modalities, Xuan et al. (2020) devised the modality sentinel, which filters out the eventunrelated modalities. Encouraging results have been reported, however, the localization capability of these methods has been shown only for the short fixed-length videos with distinct audio cues. Differently, we aim to fuse audio and visual modalities in order to also localize actions in long, untrimmed and unconstrained videos.\nDeep multi-modal representation learning: Multi-modal representation learning methods aim to obtain powerful representation ability from multiple modalities (Guo et al., 2019). With the advancement of deep-learning, many deep multi-modal representation learning approaches have been developed. Several methods fused features from different modalities in a joint subspace by outerproduct (Zadeh et al., 2017), bilinear pooling (Fukui et al., 2016), and statistical regularization (Aytar et al., 2017). 
The encoder-decoder framework has also been exploited for multi-modal learning, for image-to-image translation (Huang et al., 2018) and to produce musical translations (Mor et al., 2018). Another category of approaches aims to disjointly learn the features of each modality under cross-modality constraints such as cross-modal ranking (Frome et al., 2013; Lazaridou et al., 2015; Kiros et al., 2014) or feature distance (Pan et al., 2016; Xu et al., 2015; Liong et al., 2016). Our approach belongs to this category and uses cross-correlation as the cross-modality constraint. Cross-correlation has been exploited to generate visual features attended by text for visual question answering (Kim et al., 2017; Yu et al., 2017). It has also been used to obtain cross-attention for few-shot learning (Hou et al., 2019) and image-text matching (Lee et al., 2018; Wei et al., 2020). In our work, we adopt cross-correlation to generate audio and visual features attended by each other. The most similar to our cross-attention mechanism is the cross-attention module of Hou et al. (2019), which computes cross-correlation spatially between the feature maps of two images (sample and query). In contrast, our cross-attention is designed for video and is computed between two temporal sequences of different modalities." }, { "heading": "3 METHODOLOGY", "text": "In this section, we introduce the proposed framework for weakly-supervised action and event localization. Fig. 1 illustrates the complete framework. We first present the multi-stage cross-attention mechanism that generates the audio-visual features in Sec. 3.1. Then, we explain the open-max classification that robustly distinguishes actions³ from the unknown background in Sec. 3.2. Finally, in Sec. 3.3, we describe the training loss, including two consistency losses designed to enforce temporal continuity of the actions and background.

³For brevity, we refer to both actions and events as actions.

Problem statement: We assume that a set of videos with only the corresponding video-level labels is given for training. For each video, we uniformly sample $L$ non-overlapping snippets, and then extract the audio features $U = (u^l)_{l=1}^{L} \in \mathbb{R}^{d_u \times L}$ with a pre-trained network, where $u^l$ is the $d_u$-dimensional audio feature representation of snippet $l$. Similarly, the snippet-wise visual features $V = (v^l)_{l=1}^{L} \in \mathbb{R}^{d_v \times L}$ are also extracted. The video-level label is represented as $c \in \{0, 1, \ldots, C\}$, where $C$ is the number of action classes and $0$ denotes the background class. Starting from the audio and visual features, our approach learns to categorize each snippet into $C+1$ classes and hence localizes actions in a weakly-supervised manner." }, { "heading": "3.1 MULTI-STAGE CROSS-ATTENTION MECHANISM", "text": "While multiple modalities can provide more information than a single one, the modality-specific information may be diluted when fusing different modalities. To reliably fuse the two modalities, we develop the multi-stage cross-attention mechanism, where features are separately learned for each modality under constraints from the other modality. In this way, the learned features for each modality encode the inter-modal information while preserving the exclusive and meaningful intra-modal characteristics.

As illustrated in Fig. 1, we first encode the input features $U$ and $V$ to $X_u = (x_u^l)_{l=1}^{L}$ and $X_v = (x_v^l)_{l=1}^{L}$ via the modality-specific fully-connected (FC) layers $f_u$ and $f_v$, where $x_u^l$ and $x_v^l$ are in $\mathbb{R}^{d_x}$.
After that, we compute the cross-correlation of $X_u$ and $X_v$ to measure inter-modal relevance. To bridge the heterogeneity gap between the two modalities, we use a learnable weight matrix $W \in \mathbb{R}^{d_x \times d_x}$ and compute the cross-correlation as

$$\Lambda = X_u^{\top} W X_v \quad (1)$$

where $\Lambda \in \mathbb{R}^{L \times L}$. Note that $x_u^l$ and $x_v^l$ are $l_2$-normalized before computing the cross-correlation. In the cross-correlation matrix, a high correlation coefficient means that the corresponding audio and visual snippet features are highly relevant. Accordingly, the $l$-th column of $\Lambda$ gives the relevance of $x_v^l$ to the $L$ audio snippet features. Based on this, we generate the cross-attention weights $A_u$ and $A_v$ by the column-wise soft-max of $\Lambda$ and $\Lambda^{\top}$, respectively. Then, for each modality, the attention weights are used to re-weight the snippet features to make them more discriminative given the other modality. Formally, the attention-weighted features $\tilde{X}_u$ and $\tilde{X}_v$ are obtained by

$$\tilde{X}_u = X_u A_u \quad \text{and} \quad \tilde{X}_v = X_v A_v. \quad (2)$$

Note that each modality guides the other through the attention weights. This ensures that the meaningful intra-modal information is well preserved while applying the cross-attention.

To delve more deeply into the cross-modal information, we apply the cross-attention repeatedly, over multiple stages. However, during the multi-stage cross-attention, the original modality-specific characteristics may be over-suppressed. To prevent this, we adopt dense skip connections (Huang et al., 2017). More specifically, at stage $t$, we obtain the attended audio features by

$$X_{att,u}^{(t)} = \tanh\Big(\sum_{i=0}^{t-1} X_{att,u}^{(i)} + \tilde{X}_u^{(t)}\Big) \quad (3)$$

where $X_{att,u}^{(0)}$ is $X_u$, and $\tanh(\cdot)$ denotes the hyperbolic tangent activation function. Analogously to $X_{att,u}^{(t)}$, the attended visual features $X_{att,v}^{(t)}$ are generated for the visual modality.

At the last stage $t_e$, we concatenate the attended audio and visual features to yield the audio-visual features

$$X_{att} = [X_{att,u}^{(t_e)};\, X_{att,v}^{(t_e)}] \quad (4)$$

where $t_e$ is empirically set to 2, as discussed in the ablation studies in Sec. 4.3.

Discussion: Applying the cross-attention (Eq. 2) brings the audio and visual embeddings closer, while the skip connections (Eq. 3) enforce modality-specific information, all the more so with dense skip connections. Using both the cross-attention and the dense skip connections alternately over multiple stages, we progressively learn optimal embeddings for fusion. Learning in this way, we aim to achieve the right amount of compatibility between the two embeddings while preserving the modality-specific information, in order to optimize for the training objective." }, { "heading": "3.2 OPEN-MAX CLASSIFICATION", "text": "Video segments can be dichotomized into foreground actions and background. For precise action localization, distinguishing the background from the actions is as important as categorizing the action classes. However, unlike the action classes, the background class comprises extremely diverse types of non-actions. Therefore, it is not possible to train for the wide range of background content that the model may confront at test time.

To resolve this problem, we treat the background as an open set (Dietterich, 2017; Bendale & Boult, 2016). As illustrated in Fig. 1, we construct an open-max classifier on top of the multi-stage cross-attentional feature fusion. Specifically, the open-max classifier consists of two parallel FC layers for action classification and foreground reliability estimation. The attended audio-visual feature $x_{att}^l$, where $l = 1, \ldots, L$, is fed snippet-wise into the open-max classifier. The first FC layer outputs a snippet-wise activation vector $h^l = [h^l(1), \ldots, h^l(C)]$ for the $C$ action classes, which is converted to probability scores $p_{ac}^l$ by a soft-max function.

Simultaneously, the second FC layer is applied on $x_{att}^l$, followed by a sigmoid function, to estimate its foreground reliability $\mu^l \in [0, 1]$. The foreground reliability $\mu^l$ is the probability that snippet $l$ belongs to any action class. A low reliability indicates that no action occurs in the snippet. Therefore, we compute the probability for the background class as the complement of $\mu^l$, i.e., $p_{bg}^l = 1 - \mu^l$. Lastly, the open-max classifier outputs the probability distribution $p^l$ over the $C+1$ classes, including the background and the $C$ actions, as

$$p^l = [p_{bg}^l;\, \mu^l p_{ac}^l]. \quad (5)$$" }, { "heading": "3.3 TRAINING LOSS", "text": "Next, we describe the loss functions used to train our model. The actions, and the foreground in general, do not change abruptly over time. To impose this constraint, we devise two types of temporal continuity losses.

Foreground continuity loss: Foreground continuity implies two important properties for neighboring snippets: (a) similar foreground reliability in a class-agnostic way, and (b) consistent open-max probabilities for a target foreground class.

The first constraint is imposed via the class-agnostic foreground continuity

$$\mu_{ag}^l = \frac{1}{B+1} \sum_{i=-B/2}^{B/2} G(i)\, \mu^{l-i} \quad (6)$$

where $G(i)$ is a Gaussian window of width $B+1$ that applies temporal smoothing over the foreground reliability around the $l$-th snippet. For the second constraint, temporal Gaussian smoothing is applied over the open-max probability of the video-level ground-truth action class $\hat{c}$ to obtain the class-specific foreground continuity

$$\mu_{sp}^l = \frac{1}{B+1} \sum_{i=-B/2}^{B/2} G(i)\, p^{l-i}(\hat{c}). \quad (7)$$

Finally, the foreground continuity loss is defined as

$$L_{cont} = \frac{1}{L} \sum_{l=1}^{L} |\mu^l - \mu_{ag}^l| + |\mu^l - \mu_{sp}^l|. \quad (8)$$

The foreground continuity loss imposes temporal continuity of the foreground, and hence also helps in separating the background from the action classes.

Pseudo localization loss: Here, we consider the action or background class continuity, which implies that the open-max probabilities $p^l$ agree with the classification of neighboring snippets. This can be used to obtain a pseudo label for snippet $l$. We first average the open-max predictions of the $N$ neighboring snippets and the snippet itself, $q^l = \frac{1}{N+1} \sum_{i=l-N/2}^{l+N/2} p^i$. We set $\hat{c}^l = \arg\max_c q^l(c)$ as the pseudo label, but only retain it when the largest class probability of $q^l$ is higher than a predefined threshold $\tau$. Accordingly, the pseudo localization loss is formulated as

$$L_{pseu} = \frac{1}{L} \sum_{l=1}^{L} \mathbb{1}(\max(q^l) \ge \tau)\, \big(-\log p^l(\hat{c}^l)\big). \quad (9)$$

Total loss: Additionally, we employ the multiple instance learning (MIL) and co-activity similarity (CAS) losses (Paul et al., 2018). The final loss $L$ is defined as

$$L = L_{mil} + \alpha L_{cas} + \beta L_{cont} + \gamma L_{pseu} \quad (10)$$

where $L_{mil}$ and $L_{cas}$ denote the MIL and CAS losses, respectively. For details, see Appendix D. Figs. 2 (b) and (c) compare the class activation sequences along the temporal axis for the target classes between the models trained without and with the two consistency losses, respectively. We see that the class activations are more continuous in the model with the consistency losses." }, { "heading": "4 EXPERIMENTS", "text": "In this section, we provide experimental analysis and comparative evaluation to show the effectiveness of the proposed method. More experiments and qualitative results are in the Appendix."
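To make the fusion described in Secs. 3.1-3.2 concrete, the following is a minimal PyTorch-style sketch of the multi-stage cross-attention module (Eqs. 1-4). It is our own illustrative reconstruction, not the authors' released code; in particular, the class and variable names are ours, keeping one correlation matrix $W$ per stage is an assumption (the paper does not state whether $W$ is shared across stages), and re-weighting the latest stage's unnormalized features in Eq. 2 is likewise our reading.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiStageCrossAttention(nn.Module):
    """Sketch of the multi-stage cross-attention fusion of Eqs. 1-4.

    Per the problem statement, each video gives U of shape (L, d_u) and
    V of shape (L, d_v); a batch dimension B is added here.
    """

    def __init__(self, d_u, d_v, d_x=1024, num_stages=2):
        super().__init__()
        self.f_u = nn.Linear(d_u, d_x)  # modality-specific FC layers
        self.f_v = nn.Linear(d_v, d_x)
        # one learnable cross-correlation matrix per stage (an assumption)
        self.W = nn.ParameterList(
            [nn.Parameter(torch.empty(d_x, d_x)) for _ in range(num_stages)])
        for w in self.W:
            nn.init.xavier_uniform_(w)
        self.num_stages = num_stages

    def forward(self, U, V):
        # (B, L, d_*) -> (B, d_x, L); LeakyReLU per the implementation details
        X_u = F.leaky_relu(self.f_u(U)).transpose(1, 2)
        X_v = F.leaky_relu(self.f_v(V)).transpose(1, 2)
        hist_u, hist_v = [X_u], [X_v]  # X^(0)_att = X, kept for dense skips
        for t in range(self.num_stages):
            # l2-normalize snippet features, then Lambda = X_u^T W X_v (Eq. 1)
            Xu_n = F.normalize(hist_u[-1], dim=1)
            Xv_n = F.normalize(hist_v[-1], dim=1)
            corr = torch.einsum('bdl,de,bem->blm', Xu_n, self.W[t], Xv_n)
            A_u = F.softmax(corr, dim=1)                 # column-wise soft-max
            A_v = F.softmax(corr.transpose(1, 2), dim=1)
            # re-weight the current snippet features by the attention (Eq. 2)
            Xu_att = torch.bmm(hist_u[-1], A_u)
            Xv_att = torch.bmm(hist_v[-1], A_v)
            # dense skip connections over all earlier stages (Eq. 3)
            hist_u.append(torch.tanh(sum(hist_u) + Xu_att))
            hist_v.append(torch.tanh(sum(hist_v) + Xv_att))
        # concatenate the attended audio and visual features (Eq. 4)
        return torch.cat([hist_u[-1], hist_v[-1]], dim=1)  # (B, 2*d_x, L)
```

The open-max classifier of Sec. 3.2 would then consume the resulting $2d_x$-dimensional per-snippet features; for AVE, $d_u$ and $d_v$ would correspond to the VGG-like audio and ResNet-152 visual feature dimensions described in Sec. 4.2.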
}, { "heading": "4.1 DATASETS AND EVALUATION METHOD", "text": "Datasets: We evaluate our approach on Audio-Visual Event (AVE) and ActivityNet1.2 datasets.\nAVE dataset is constructed for audio-visual event localization, which contains 3,339 training and 804 testing videos, each lasting 10 seconds with event annotation per second. There are 28 audiovisual event categories covering a wide range of domains, such as animal and human actions, vehicle sounds, and music performance. Each event category has both audio and visual aspects, e.g. church bell, baby crying, man speaking etc.\nActivityNet1.2 is a temporal action localization dataset with 4,819 train and 2,383 validation videos, which in the literature is used for evaluation. It has 100 action classes of wider variety than AVE dataset, with on an average 1.5 instances per video. The average length of the videos in this dataset is 115 seconds, often with weak audio cues, which makes action localization as well as leveraging audio harder.\nEvaluation metric: We follow the standard evaluation protocol of each dataset. For the AVE dataset, we report snippet-wise event prediction accuracy. For the ActivityNet1.2 dataset, we generate the action segments (start and end time) from snippet-wise prediction (details are described in the following section), and then measure mean average precision (mAP) at different intersection over union (IoU) thresholds." }, { "heading": "4.2 FEATURE EXTRACTION AND IMPLEMENTATION DETAILS", "text": "Feature extraction: We use the I3D network (Carreira & Zisserman, 2017) and the ResNet152 architecture (He et al., 2016) to extract the visual features for ActivityNet1.2 and AVE, respectively. The I3D network is pre-trained on Kinetics-400 (Kay et al., 2017), and the features consist of two components: RGB and optical flow. The ResNet 152 is pre-trained on the ImageNet (Russakovsky et al., 2015), and the features are extracted from the last global pooling layer. To extract the audio features, we use the VGG-like network (Hershey et al., 2017), which is pre-trained on the AudioSet (Gemmeke et al., 2017), for both AVE and ActivityNet1.2 datasets.\nImplementation Details: We set dx to 1,024, and the LeakyRelu and hyperbolic tangent functions are respectively used for the activation of modality-specific layers and cross-attention modules. In training, the parameters are initialized with Xavier method (Glorot & Bengio, 2010) and updated by Adam optimizer (Kingma & Ba, 2015) with the learning rate of 10−4 and the batch size of 30. Also, the dropout with a ratio of 0.7 is applied for the final attended audio-visual features. In the loss, the hyper parameters are set as B = 4, α = 0.8, β = 0.8 and γ = 1.\nLocalization at test time: For event localization at test time, i.e. snippet classification, each snippet l is classified into one of event classes (including background) by arg maxc p\nl(c), where pl is the open-max probability of snippet l. For action localization, we follow the two-stage thresholding scheme of (Paul et al., 2018). The first threshold is applied to filter out the classes that have videolevel scores less than the average over all the classes. The second threshold is applied along the temporal axis to obtain the start and the end of each action instance." 
}, { "heading": "4.3 ABLATION ANALYSIS", "text": "Multi-stage cross-attention: To evaluate the effectiveness of the multi-stage cross-attention in audio-visual fusion, we compare two uni-modal methods (audio or visual) and four multi-modal methods with different stages (0-3 stages) on the AVE and ActivityNet1.2 datasets in Table 1. The pseudo-label losses and the open-max classifiers are used in all six cases. In the uni-modal methods, the input feature is embedded using an FC layer, and then fed into the open-max classifier. The 0-stage method denotes a naive fusion, where audio and visual features are fused by simple con-\ncatenation. Even this naive fusion yields higher performance than the uni-modal methods on the AVE dataset. However, that is not the case with more challenging task of the action localization on ActivityNet1.2 dataset. Furthermore, all the later stages improve considerably over 0-stage and the uni-modal cases, for the both datasets. The 2-stage cross-attention achieves the best performance for the both datasets (more in Appendix A). Interestingly, even with the minimal audio cue in ActivityNet1.2 (avg. mAP of audio only is 7.8%), the proposed audio-visual features improve the avg. mAP over visual-only and naive fusion (0-stage) models by 4%.\nFig. 3 shows the qualitative results of the proposed and visual-only models given an example of the ActivityNet1.2 dataset. At the beginning of the video, a performer is shown without any activity. The visual-only model incorrectly predicts the beginning part as a target action while our proposed model correctly predicts it as background. Also, the visual-only model cannot catch the action at the last part of the video since it is visually similar across the frames and has minimal visual activity. Whereas, our model correctly recognizes the last part as an action, owing to the multi-stage crossattention of effective fusion of the two modalities. More qualitative results are in Appendix E.\nConsistency losses: We show the ablation over the two proposed losses, Lcont and Lpseu, while using Open-Max classifier and 2-stage cross-attention, in the lower part of the Table 2. We denote the method with only Lcont loss by O-I and with only Lpseu loss by O-II. The proposed method (OIII) with both of the losses performs the best suggesting the importance of both of the losses. Further, O-II outperforms O-I by a big margin on both the datasets, implying that the pseudo localization loss is more critical for the action localization (more in Appendix B.1). This result demonstrates that guiding temporal continuity is essential in the long untrimmed videos as well as the short ones.\nOpen-max classifier: We compare the open-max classifier with the soft-max classifier where the last FC layer outputs activations for C + 1 classes are normalized by the soft-max function. As the background is considered a closed set in the soft-max approach, the foreground continuity loss is not available. The soft-max is denoted by S-I in Table 2. Both O-II and O-III versions of the open-max outperform the S-I method with the soft-max. The O-III method improves the accuracy by 8.6% on the AVE dataset and the avg. mAP by 2.2% on the ActivityNet1.2 dataset. For further analysis see Appendix B.2. This shows the advantage of modelling background with the open-max classifier.\nDense skip connections: We evaluate the impact of dense skip connections in Table 3 for 2-stage model on the ActivityNet1.2. 
Compared to no skip connection, performance is improved with the skip connections, and further boosted with the dense skip connection to avg. mAP of 26.0%. This shows preserving the modality specific information leads to better fusion and action localization." }, { "heading": "4.4 MODEL EFFICIENCY", "text": "Though we successfully leverage the audio modality to improve action localization performance, the added modality leads to increased computational cost. The trade-off between efficiency and performance due to the fusion with audio modality is demonstrated in Table 4. When using feature dimension, dx =1024, the fusion increases the computation over visual-only method by about 52% and 74% after 1-stage and 2-stage, respectively. When we reduce dx to 512, the visual-only model gets affected while the 2-stage model maintains its performance at 25.9%. Thanks to the effectiveness of the proposed fusion, even with smaller dx its avg. mAP is well above that of video-only model with dx = 1024, while using about 26% less computation (1.7 MFLOPS vs 2.3 MFLOPS)." }, { "heading": "4.5 COMPARISON WITH THE STATE-OF-THE-ART", "text": "Audio-visual event localization: In Table 5, we compare the proposed method with the recent fully and weakly-supervised methods on the AVE dataset for audio-visual event localization task. In the weakly-supervised setting, our method performs better than all of the existing methods at least by 1.4%. Note that, even though learned in weak-supervision, our approach achieves a comparable accuracy (77.1%) to the fully-supervised accuracy of the state-of-the-art method (Xuan et al., 2020).\nTemporal action localization: In Table 6, we apply the proposed method to weakly-supervised action localization in long duration videos of the ActivityNet1.2 dataset. We report results for our method as well as its efficient version from Section 4.4. The mAP scores at varying IoU thresholds are compared with the current state-of-the-art methods. Both our method and its efficient version achieve the highest mAPs for 8 out of 10 IoU thresholds, and outperform all of the previous methods with the avg. mAP of 26.0%. We also significantly outperform the audio-visual based method of Tian et al. (2018) by the avg. mAP of 17.2%. Additionally, we compare with two naive fusions without the cross-attention (0-stage, SoftMax) with and without the continuity losses (denoted as CL in the Table), both are bettered comfortably by our method. This demonstrates that the effective fusion of audio and visual modalities is critical for action localization. Furthermore, our approach is even comparable to the fully-supervised method in (Zhao et al., 2017)." }, { "heading": "5 CONCLUSION", "text": "We presented a novel approach for weakly-supervised temporal action localization in videos. In contrast to other methods, we leveraged both audio and visual modalities for this task. This is the first attempt at audio-visual localization of unconstrained actions in long videos. To collaboratively fuse audio and visual features, we developed the multi-stage cross-attention mechanism that also preserves the characteristics specific to each modality. We proposed to use the open-max classifier to model the action foreground and background, in absence of temporal annotations. Our model learns to classify video snippets via two consistency losses that enforce continuity for foreground reliability and open-max probabilities for action classes and the background. 
We conducted extensive experiments to analyze each of the proposed components and demonstrate their importance. Our method outperforms the state-of-the-art results on both AVE and ActivityNet1.2 datasets." }, { "heading": "A ANALYSIS ON MULTI-STAGE CROSS-ATTENTION", "text": "In this section, we conduct extensive analysis for the impact of the multiple stages and dense skip connection of the proposed cross-attention mechanism. Tables 7 and 8 show the experimental results.\nTraining multiple stages of cross-attention: As shown in the Table 1, the 3-stage model suffers from performance drop. To analyze this, in Table 7, we compare 2- and 3-stage models on each of ‘w/o skip connection’, ‘w/ skip connection’, and ‘w/ dense skip connection’. Without the skip connection, 3-stage model improves over 2-stage model, which is intuitively expected. With the skip connection, avg. mAP of 3-stage model drops compared to 2-stage model, from 24.9% to 23.2%. But, when the third stage is appended to the trained (and now frozen) stages of 2-stage model, the avg. mAP is maintained at 24.9%. Similarly, with the dense skip connection, training the entire 3-stage model end-to-end leads to degraded performance. But, when training the model frozen till the second stage the drop is much less. The fact that, in 3-stage model, better performance is obtained when training with first two stages frozen compared to training end-to-end, shows that the optimization gets hard in the latter. Therefore, we conclude that though the third stage helps without the skip connections, due to harder optimization with more stages and (dense) skip connections, 2-stage model is the optimal choice.\nNeed for multi-stage cross-attention: In Table 8, we experiment with 1-stage model, varying the size of dimensions (dx,u and dx,v) of the cross-correlation matrix W on the AVE dataset. We tried several hyper-parameter settings in 1-stage model, but none of them could outperform the default\nsetting (dx,u = 1024, dx,v = 1024) of 2-stage model even with more parameters. Instead of increasing the parameters in 1-stage model itself, when an additional stage is added (i.e. a weight matrix learned with a non-linear activation function) better performance is achieved. Indeed, it is often not trivial to replace a sequence of non-linear functions with another non-linear function as we experimentally observe here. The intention behind the multi-stage is also to extensively delve into cross-modal information, progressively learning the embeddings for each modality." }, { "heading": "B ANALYSIS FOR CONSISTENCY LOSSES AND OPEN-MAX CLASSIFICATION", "text": "B.1 ANALYSIS OF CONSISTENCY LOSSES ON DIFFERENT STAGE MODELS\nIn Table 9, we conduct the analysis for the consistency losses for 0, 1 and 3-stage models as well as the chosen 2-stage model.\nEffect of losses on different stage models: The impact of continuity losses is analogous on 1-, 2- and 3-stage models. Each of the two continuity losses help, but the pseudo localization loss (Lpseu) is more effective. Also, there is further benefit of using them together for almost all the IoU thresholds and stages. In 0-stage model, i.e. without the cross-attention, O-II shows the highest snippet-level performance on the AVE dataset, but the lowest temporal action localization performance on the ActivityNet1.2 dataset. From this, we understand that Lpseu has difficulty in achieving continuity when audio and visual features are overly heterogeneous. 
Consequently, clear benefit is observed when the cross-attention is used.\nInterdependence of cross-attention and pseudo localization loss: When comparing the O-I of all 0-3 stage models, we see that the performance improvement by stacking the cross-attention is marginal, and the pseudo localization is critical to the performance. This follows from Eq. 9, where Lpseu is only activated at snippet lwhen classification over its neighboring snippets does not strongly agree on the action class or background. To analyze this, we check how frequently Lpseu is activated when cross-attention is not used and when it is used. For 0-stage model, without continuity losses, Lpseu is activated on 11.1% snippets of the ActivityNet1.2 training set. The same frequency is 38.2% for 2-stage model, again without the continuity losses. This shows that when the cross-attention is used, more often the open-max classification of a snippet fails to strongly agree with its neighbors. Therefore, the pseudo localization loss is much needed to enforce the continuity.\nB.2 ANALYSIS OF LOSSES AND OPEN-MAX CLASSIFICATION ON 2-STAGE MODEL\nIn Table 10, we conduct more extensive analysis for the consistency losses and the open-max classifier. Specifically, we replace the open-max classification approach with soft-max one. Then, for both classifiers with the 2-stage cross-attention, we ablate the foreground continuity or pseudo local-\nization losses where CAS and MIL losses are commonly used. First, the performance gap between S-0 and O-0, where only CAS and MIL losses are used, shows the difficulty of learning two parallel branches in weakly-supervised manner. However, when adding the pseudo localization loss, (S-I and O-II), the open-max classification approach is further improved than the soft-max. Hence, the pseudo labels reduce the fallacious action classification of snippets and are more effective on the open-set background modeling than the closed-set modeling.\nNext, O-I and O-II shows higher performance than O-0. Similarly, S-I is superior to S-0. This indicates that erroneous classifications are suppressed by the correctly classified neighbors when using the consistency losses. Also, comparing O-I and O-II, the pseudo localization loss gives more performance improvement. This is because the pseudo localization loss addresses the consistency of classification scores of all the classes including background, while the foreground continuity loss smoothens foreground reliability being class-agnostic or only for the target class. For all of the IoU thresholds (except 0.7), O-III, open-max classification with both of the consistency losses, yields the highest performance. Therefore, all of the proposed open-max classification and consistency losses are effective to temporal action or event localization in videos." }, { "heading": "C DETAILS OF THE PROPOSED ARCHITECTURE", "text": "" }, { "heading": "D MULTIPLE INSTANCE LOSS AND CO-ACTIVITY SIMILARITY LOSS", "text": "We apply multiple-instance learning loss for classification. The prediction score corresponding to a class is computed as the average of its top k activations over the temporal dimension. Co-activity similarity loss (CASL) (Paul et al., 2018) is computed over two snippet sequences from a pair of videos, to have higher similarity when the videos have a common class." }, { "heading": "E QUALITATIVE EVALUATION", "text": "We provide additional qualitative results for action localization on the ActivityNet1.2 dataset. Fig. 
4 compares the proposed method with the method trained on visual modality (‘Visual-only’). The\nopen-max classifier and total loss function are commonly used for both. In Figs. 4(a) and (b), because the videos are static in visual modality, the background segments in early parts of videos are miss-localized as actions in the visual-only model. Contrarily, proposed method distinguishes the background based on the action-related audio (cheerleading music and violin sound). In Fig. 4(c), the brushing sound is overlapped with the loud human narration lasting throughout videos. Nevertheless, the proposed method effectively extracts the crucial audio cues and fuses them with the visual ones. In Fig. 4(d), even though the early part of the action is visually occluded by large logos, our method exactly localizes the action. Also, for all of the class activation sequences, the activations by the proposed method are more consistently high for actions. This means that our collaboration of audio and visual modalities is more robust in distinguishing foreground from background.\nFig. 5 illustrates the cases where audio degrades the performance. Fig. 5 (a) shows an example video for action class ‘playing violin’. The violin sound of the soloist and the band is intermingled in the video. In the end, the sound of violin continues making our model predict the action but since camera focuses on the band, the ground-truth does not include those frames. Fig. 5 (b) shows an example of action ‘using parallel bars’. Here the repeated background music is irrelevant to action,\ntherefore the class activation is bit off in the last part. However, thanks to the visual modality, the prediction is still reasonable." } ]
2021
WEAKLY-SUPERVISED ACTION LOCALIZATION
SP:c477e488dfaa82ea6698a52c6677b74135fecd12
[ "This paper focuses on self-supervised contrastive learning. Previous contrastive learning methods heavily rely on a large number of negative samples. This paper proposed a novel method with an additional margin term, and mathematically investigate the relationship among the margin term, the temperature, and the number of negative samples. The number of negative samples can be significantly reduced by tuning the margin term, while the performance remains more stable compared to previous contrastive learning methods. Furthermore, this paper proposed a MoCo-based strong baseline that can achieve comparable results with an extremely small number of negative samples." ]
In this paper, we propose a method, named EqCo (Equivalent Rules for Contrastive Learning), to make self-supervised learning irrelevant to the number of negative samples in the contrastive learning framework. Inspired by the InfoMax principle, we point out that the margin term in the contrastive loss needs to be adaptively scaled according to the number of negative pairs in order to keep the mutual information bound and gradient magnitude steady. EqCo bridges the performance gap among a wide range of negative sample sizes, so that we can use only a few negative pairs (e.g. 16 per query) to perform self-supervised contrastive training on large-scale vision datasets like ImageNet, with almost no accuracy drop. This is quite a contrast to the widely used large batch training or memory bank mechanisms in current practice. Equipped with EqCo, our simplified MoCo (SiMo) achieves accuracy comparable to MoCo v2 on ImageNet (linear evaluation protocol) while involving only 16 negative pairs per query instead of 65536, suggesting that large quantities of negative samples might not be a critical factor in contrastive learning frameworks.
[]
[ { "authors": [ "Sanjeev Arora", "Hrishikesh Khandeparkar", "Mikhail Khodak", "Orestis Plevrakis", "Nikunj Saunshi" ], "title": "A theoretical analysis of contrastive unsupervised representation learning", "venue": null, "year": 1902 }, { "authors": [ "Philip Bachman", "R Devon Hjelm", "William Buchwalter" ], "title": "Learning representations by maximizing mutual information across views", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yue Cao", "Zhenda Xie", "Bin Liu", "Yutong Lin", "Zheng Zhang", "Han Hu" ], "title": "Parametric instance classification for unsupervised visual feature learning", "venue": "arXiv preprint arXiv:2006.14618,", "year": 2020 }, { "authors": [ "Mathilde Caron", "Ishan Misra", "Julien Mairal", "Priya Goyal", "Piotr Bojanowski", "Armand Joulin" ], "title": "Unsupervised learning of visual features by contrasting cluster assignments", "venue": "arXiv preprint arXiv:2006.09882,", "year": 2020 }, { "authors": [ "Mathilde Caron", "Ishan Misra", "Julien Mairal", "Priya Goyal", "Piotr Bojanowski", "Armand Joulin" ], "title": "Unsupervised learning of visual features by contrasting cluster assignments", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Kevin Swersky", "Mohammad Norouzi", "Geoffrey E Hinton" ], "title": "Big self-supervised models are strong semi-supervised learners", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Xinlei Chen", "Haoqi Fan", "Ross Girshick", "Kaiming He" ], "title": "Improved baselines with momentum contrastive learning", "venue": "arXiv preprint arXiv:2003.04297,", "year": 2020 }, { "authors": [ "Ching-Yao Chuang", "Joshua Robinson", "Lin Yen-Chen", "Antonio Torralba", "Stefanie Jegelka" ], "title": "Debiased contrastive learning", "venue": "arXiv preprint arXiv:2007.00224,", "year": 2020 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Jiankang Deng", "Jia Guo", "Niannan Xue", "Stefanos Zafeiriou" ], "title": "Arcface: Additive angular margin loss for deep face recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Alexey Dosovitskiy", "Jost Tobias Springenberg", "Martin Riedmiller", "Thomas Brox" ], "title": "Discriminative unsupervised feature learning with convolutional neural networks. 
In Advances in neural information processing", "venue": null, "year": 2014 }, { "authors": [ "Aleksandr Ermolov", "Aliaksandr Siarohin", "Enver Sangineto", "Nicu Sebe" ], "title": "Whitening for selfsupervised representation learning", "venue": "arXiv preprint arXiv:2007.06346,", "year": 2020 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch sgd: Training imagenet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Jean-Bastien Grill", "Florian Strub", "Florent Altché", "Corentin Tallec", "Pierre H Richemond", "Elena Buchatskaya", "Carl Doersch", "Bernardo Avila Pires", "Zhaohan Daniel Guo", "Mohammad Gheshlaghi Azar" ], "title": "Bootstrap your own latent: A new approach to self-supervised learning", "venue": "arXiv preprint arXiv:2006.07733,", "year": 2020 }, { "authors": [ "Raia Hadsell", "Sumit Chopra", "Yann LeCun" ], "title": "Dimensionality reduction by learning an invariant mapping", "venue": "IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06),", "year": 2006 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Olivier J Hénaff", "Aravind Srinivas", "Jeffrey De Fauw", "Ali Razavi", "Carl Doersch", "SM Eslami", "Aaron van den Oord" ], "title": "Data-efficient image recognition with contrastive predictive coding", "venue": null, "year": 1905 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "arXiv preprint arXiv:1502.03167,", "year": 2015 }, { "authors": [ "Alex Krizhevsky" ], "title": "One weird trick for parallelizing convolutional neural networks", "venue": "arXiv preprint arXiv:1404.5997,", "year": 2014 }, { "authors": [ "Tsung-Yi Lin", "Piotr Dollár", "Ross Girshick", "Kaiming He", "Bharath Hariharan", "Serge Belongie" ], "title": "Feature pyramid networks for object detection", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Ralph Linsker" ], "title": "Self-organization in a perceptual network", "venue": null, "year": 1988 }, { "authors": [ "Ishan Misra", "Laurens van der Maaten" ], "title": "Self-supervised learning of pretext-invariant representations", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Chao Peng", "Tete Xiao", "Zeming Li", "Yuning Jiang", "Xiangyu Zhang", "Kai Jia", "Gang Yu", "Jian Sun" ], "title": "Megdet: A large mini-batch object detector", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Ben Poole", 
"Sherjil Ozair", "Aaron van den Oord", "Alexander A Alemi", "George Tucker" ], "title": "On variational bounds of mutual information", "venue": null, "year": 1905 }, { "authors": [ "Yifan Sun", "Changmao Cheng", "Yuhan Zhang", "Chi Zhang", "Liang Zheng", "Zhongdao Wang", "Yichen Wei" ], "title": "Circle loss: A unified perspective of pair similarity optimization", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "arXiv preprint arXiv:1906.05849,", "year": 2019 }, { "authors": [ "Yonglong Tian", "Chen Sun", "Ben Poole", "Dilip Krishnan", "Cordelia Schmid", "Phillip Isola" ], "title": "What makes for good views for contrastive learning", "venue": "arXiv preprint arXiv:2005.10243,", "year": 2020 }, { "authors": [ "Michael Tschannen", "Josip Djolonga", "Paul K Rubenstein", "Sylvain Gelly", "Mario Lucic" ], "title": "On mutual information maximization for representation learning", "venue": null, "year": 1907 }, { "authors": [ "Hao Wang", "Yitong Wang", "Zheng Zhou", "Xing Ji", "Dihong Gong", "Jingchao Zhou", "Zhifeng Li", "Wei Liu" ], "title": "Cosface: Large margin cosine loss for deep face recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Tongzhou Wang", "Phillip Isola" ], "title": "Understanding contrastive representation learning through alignment and uniformity on the hypersphere", "venue": "arXiv preprint arXiv:2005.10242,", "year": 2020 }, { "authors": [ "Xun Wang", "Haozhi Zhang", "Weilin Huang", "Matthew R Scott" ], "title": "Cross-batch memory for embedding learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella X Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via nonparametric instance discrimination", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Jiahao Xie", "Xiaohang Zhan", "Ziwei Liu", "Yew Soon Ong", "Chen Change Loy" ], "title": "Delving into interimage invariance for unsupervised visual representations", "venue": "arXiv preprint arXiv:2008.11702,", "year": 2020 }, { "authors": [ "Yang You", "Igor Gitman", "Boris Ginsburg" ], "title": "Large batch training of convolutional networks", "venue": "arXiv preprint arXiv:1708.03888,", "year": 2017 } ]
[ { "heading": null, "text": "In this paper, we propose a method, named EqCo (Equivalent Rules for Contrastive Learning), to make self-supervised learning irrelevant to the number of negative samples in the contrastive learning framework. Inspired by the InfoMax principle, we point that the margin term in contrastive loss needs to be adaptively scaled according to the number of negative pairs in order to keep steady mutual information bound and gradient magnitude. EqCo bridges the performance gap among a wide range of negative sample sizes, so that we can use only a few negative pairs (e.g. 16 per query) to perform self-supervised contrastive training on large-scale vision datasets like ImageNet, while with almost no accuracy drop. This is quite a contrast to the widely used large batch training or memory bank mechanism in current practices. Equipped with EqCo, our simplified MoCo (SiMo) achieves comparable accuracy with MoCo v2 on ImageNet (linear evaluation protocol) while only involves 16 negative pairs per query instead of 65536, suggesting that large quantities of negative samples might not be a critical factor in contrastive learning frameworks." }, { "heading": "1 INTRODUCTION AND BACKGROUND", "text": "Self-supervised learning has recently received much attention in the field of visual representation learning (Hadsell et al. (2006); Dosovitskiy et al. (2014); Oord et al. (2018); Bachman et al. (2019); Hénaff et al. (2019); Wu et al. (2018); Tian et al. (2019); He et al. (2020); Misra & Maaten (2020); Grill et al. (2020); Cao et al. (2020); Tian et al. (2020)), as its potential to learn universal representations from unlabeled data. Among various self-supervised methods, one of the most promising research paths is contrastive learning (Oord et al. (2018)), which has been demonstrated to achieve comparable or even better performances than supervised training for many downstream tasks such as image classification, object detection, and semantic segmentation (Chen et al., 2020c; He et al., 2020; Chen et al., 2020a;b).\nThe core idea of contrastive learning is briefly summarized as follows: first, extracting a pair of embedding vectors (q(I),k(I)) (named query and key respectively) from the two augmented views of each instance I; then, learning to maximize the similarity of each positive pair (q(I),k(I)) while pushing the negative pairs (q(I),k(I ′)) (i.e., query and key extracted from different instances accordingly) away from each other. To learn the representation, an InfoNCE loss (Oord et al. (2018); Wu et al. (2018)) is conventionally employed in the following formulation (slightly modified with an additional margin term):\nLNCE = E q∼D,k0∼D′(q),ki∼D′\n[ −log e (q>k0−m)/τ\ne(q>k0−m)/τ + ∑K i=1 e q>ki/τ\n] , (1)\nwhere q and ki (i = 0, . . . ,K) stand for the query and keys sampled from the two (augmented) data distributions D and D′ respectively. Specifically, k0 is associated to the same instance as q’s while other kis not; hence we name k0 and ki (i > 0) positive sample and negative samples respectively in the remaining text, in which K is the number of negative samples (or pairs) for each query. The temperature τ and the margin m are hyper-parameters. In most previous works, m is trivially set to zero (e.g. Oord et al. (2018); He et al. (2020); Chen et al. (2020a); Tian et al. (2020)) or some\nhandcraft values (e.g. Xie et al. (2020)). In the following text, we mainly study contrastive learning frameworks with InfoNCE loss as in Eq. 1 unless otherwise specified. 
In contrastive learning research, it has been widely believed that enlarging the number of negative samples $K$ boosts the performance (Hénaff et al. (2019); Tian et al. (2019); Bachman et al. (2019)). For example, in MoCo (He et al. (2020)) the ImageNet accuracy rises from 54.7% to 60.6% under the linear classification protocol when $K$ grows from 256 to 65536. This observation has further driven a line of studies on how to effectively optimize with large numbers of negative pairs, such as memory bank methods (Wu et al. (2018); He et al. (2020)) and large batch training (Chen et al. (2020a)), both of which empirically report superior performance when $K$ becomes large. Analogously, in the field of supervised metric learning (Deng et al. (2019); Wang et al. (2018); Sun et al. (2020); Wang et al. (2020)), losses of a similar form to Eq. 1 are often applied to many negative pairs for hard negative mining. Besides, there are also a few theoretical studies supporting this viewpoint. For instance, Oord et al. (2018) point out that the mutual information between the positive pair tends to increase with the number of negative pairs $K$; Wang & Isola (2020) find that the negative pairs encourage features' uniformity on the hypersphere; Chuang et al. (2020) suggest that a large $K$ leads to a more precise estimation of the debiased contrastive loss; etc.

Despite the above empirical and theoretical evidence, however, we point out that the rationale for using many negative pairs is still not fully convincing. First, unlike the metric learning mentioned above, in self-supervised learning the negative terms $k_i$ in Eq. 1 include both "true negative" samples (whose underlying class label differs from the query's, similarly hereinafter) and "false negative" samples, since the actual ground-truth label is not available. So, intuitively, a large $K$ should not always be beneficial, because the risk of false negative samples also increases (known as the class collision problem). Arora et al. (2019) thus theoretically conclude that a large number of negative samples does not necessarily help. Second, some recent works have shown that by introducing new architectures (e.g., a predictor network in BYOL (Grill et al., 2020)) or designing new loss functions (e.g., Caron et al. (2020a); Ermolov et al. (2020)), state-of-the-art performance can still be obtained even without any explicit negative pairs. In conclusion, it is still an open question whether large quantities of negative samples are essential to contrastive learning.

In view of these two aspects, we raise a question: is a large $K$ really essential in the contrastive learning framework? We propose to rethink the question from a different view: note that in Eq. 1 there are three hyper-parameters, the number of negative samples $K$, the temperature $\tau$, and the margin $m$. In most previous empirical studies (He et al. (2020); Chen et al. (2020a)), only $K$ is changed, while $\tau$ and $m$ are usually kept constant. Do the optimal values of $\tau$ and $m$ vary with $K$? If so, the performance gains observed with larger $K$s may be misinterpreted: they may merely result from suboptimal hyper-parameter choices for small $K$s, rather than reflecting anything essential.

In this paper, we investigate the relationship among the three hyper-parameters and suggest an equivalent rule:

$$m = \tau \log \frac{\alpha}{K},$$

where $\alpha$ is a constant. We find that if the margin $m$ is adaptively adjusted based on the above rule, the performance of contrastive learning is irrelevant to the size of $K$ over a very large range (e.g. $K \ge 16$).
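As a quick numerical illustration of the rule (our own sketch; the values of $\alpha$ and $\tau$ below are assumptions chosen to mirror typical MoCo settings, not numbers taken from this paper):

```python
import math

def eqco_margin(alpha, K, tau):
    # equivalent rule: m = tau * log(alpha / K)
    return tau * math.log(alpha / K)

# alpha = 65536 mimics MoCo's best reported negative-sample count, and
# tau = 0.2 is a typical MoCo v2 temperature; illustration only.
for K in (16, 256, 65536):
    print(f"K={K:>5d}  m={eqco_margin(alpha=65536, K=K, tau=0.2):.3f}")
# K=65536 gives m=0 (the ordinary margin-free loss), while a smaller K
# yields a positive margin that compensates for the missing negatives.
```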
For example, in the MoCo framework, by introducing EqCo the performance gap between $K = 256$ and $K = 65536$ (the best configuration reported in He et al. (2020)) almost disappears (from a 6.1% drop to 0.2%). We call this method "Equivalent Rules for Contrastive learning" (EqCo). For completeness, as the other part of EqCo, we point out that adjusting the learning rate according to the conventional linear scaling rule preserves the equivalence for different numbers of queries per batch.

Theoretically, following the InfoMax principle (Linsker (1988)) and the derivation in CPC (Oord et al. (2018)), we prove that in EqCo the lower bound of the mutual information stays steady under various numbers of negative samples $K$. Moreover, from the back-propagation perspective, we further prove that in this configuration the upper bound of the gradient norm is also free of $K$'s scale. The proposed equivalent rule implies that, by assigning $\alpha = K_0$, it can "mimic" the optimization behavior under $K_0$ negative samples even if the physical number of negatives $K \neq K_0$.

¹Recently, some self-supervised learning algorithms have achieved new state-of-the-art results using frameworks other than the conventional InfoNCE loss as in Eq. 1, e.g. the mean teacher in BYOL (Grill et al. (2020)) and online clustering in SwAV (Caron et al. (2020b)). We will investigate them in the future.

The "equivalent" methodology of EqCo follows the well-known linear scaling rule (Krizhevsky (2014); Goyal et al. (2017)), which suggests scaling the learning rate proportionally to the batch size if the loss has the linearly averaged form $L = \frac{1}{N}\sum_{i=1}^{N} f(x_i; \theta)$. However, the linear scaling rule cannot be directly applied to the InfoNCE loss (Eq. 1), partially because the InfoNCE loss involves two batch sizes (the numbers of queries and keys, respectively) while the linear scaling rule involves only one, in addition to the nonlinearity of the keys in the InfoNCE loss. In the experiments of SimCLR (Chen et al. (2020a)), learning rates under different batch sizes are adjusted with the linear scaling rule, but the accuracy gap is still very large (57.5%@batch=256 vs. 64+%@batch=8192, 100 epochs of training).

EqCo challenges the belief that self-supervised contrastive learning requires large quantities of negative pairs to obtain competitive performance, making it possible to design simpler algorithms. We thus present SiMo, a simplified contrastive learning framework based on MoCo v2 (Chen et al. (2020c)). SiMo is elegant and efficient, free of large batch training and memory banks; moreover, it can achieve performance superior to the state of the art even if the number of negative pairs is extremely small (e.g. 16), without bells and whistles.

The contributions of our paper are summarized as follows:

• We challenge the widely accepted belief that, on large-scale vision datasets like ImageNet, a large number of negative samples is critical for contrastive learning. We interpret it from a different view: it may be because the hyper-parameters are not set to the optimum.

• We propose EqCo, an equivalent rule to adaptively set hyper-parameters between small and large numbers of negative samples, which proves to bridge the performance gap.

• We present SiMo, a simpler but stronger baseline for contrastive learning." }, { "heading": "2 EQCO: EQUIVALENT RULES FOR CONTRASTIVE LEARNING", "text": "In this section we introduce EqCo. We mainly consider the circumstance of optimizing the InfoNCE loss (Eq. 1) with SGD.
For each batch of training, there are two meanings of the concept "batch size", i.e., the number of negative samples/pairs $K$ per query, and the number of queries (or positive pairs) $N$ per batch. Hence, our equivalent rules consist of two parts accordingly, which are introduced in the next subsections." }, { "heading": "2.1 THE CASE OF NEGATIVE PAIRS", "text": "Our derivation is mainly inspired by the model of Contrastive Predictive Coding (CPC) (Oord et al. (2018)), in which the InfoNCE loss is interpreted as a mutual information estimator. We further extend the method so that it is applicable to the InfoNCE loss with a margin term (Eq. 1), which is not considered in Oord et al. (2018).

Following the concept in Oord et al. (2018), suppose we are given a query embedding $q$ (namely, the context in Oord et al. (2018)) and $K+1$ random key embeddings $x = \{x_i\}_{i=0,\ldots,K}$, where exactly one entry (e.g., $x_i$) is sampled from the conditional distribution $P(x_i|q)$ while the others (e.g., $x_j$) are sampled independently from the "proposal" distribution $P(x_j)$. According to which entry corresponds to the conditional distribution, we define $K+1$ candidate distributions for $x$ (denoted by $\{\mathcal{H}_i\}_{i=0,\ldots,K}$), where the probability density of $x$ under $\mathcal{H}_i$ is $P_{\mathcal{H}_i}(x) = P(x_i|q)\prod_{j\neq i} P(x_j)$. So, given the observed data $X = \{k_0, \ldots, k_K\}$ of $x$, the probability that $x$ is sampled from $\mathcal{H}_0$ rather than from the other candidates is derived with Bayes' theorem:

$$\Pr[x \sim \mathcal{H}_0 \mid q, X] = \frac{P^+ P_{\mathcal{H}_0}(X)}{P^+ P_{\mathcal{H}_0}(X) + P^- \sum_{i=1}^{K} P_{\mathcal{H}_i}(X)} = \frac{\frac{P^+}{P^-}\frac{P(k_0|q)}{P(k_0)}}{\frac{P^+}{P^-}\frac{P(k_0|q)}{P(k_0)} + \sum_{i=1}^{K}\frac{P(k_i|q)}{P(k_i)}}, \quad (2)$$

where we denote by $P^+$ and $P^-$ the prior probabilities of $\mathcal{H}_0$ and $\mathcal{H}_i$ ($i > 0$), respectively. We point out that Eq. 2 generalizes the form in Oord et al. (2018) by taking the priors into account. Referring to the notations in Eq. 1, we suppose that $\mathcal{H}_0$ is the ground-truth distribution of $x$ (since $k_0$ is the only positive sample). By modeling the density ratio $P(k_i|q)/P(k_i) \propto e^{q^{\top}k_i/\tau}$ ($i = 0, \ldots, K$) and letting $P^+/P^- = e^{-m/\tau}$, the negative log-likelihood $\mathcal{L}_{opt} \triangleq \mathbb{E}_{q,X}\big[-\log \Pr[x \sim \mathcal{H}_0 \mid q, X]\big]$ can be regarded as the optimal value of $\mathcal{L}_{NCE}$. Similar to the methodology of Oord et al. (2018), we explore the lower bound of $\mathcal{L}_{opt}$:

$$\begin{aligned} \mathcal{L}_{opt} &= \mathop{\mathbb{E}}_{q\sim\mathcal{D},\,k_0\sim\mathcal{D}'(q),\,k_i\sim\mathcal{D}'} \log\left(1 + e^{m/\tau}\frac{P(k_0)}{P(k_0|q)}\sum_{i=1}^{K}\frac{P(k_i|q)}{P(k_i)}\right) \\ &\approx \mathop{\mathbb{E}}_{q\sim\mathcal{D},\,k_0\sim\mathcal{D}'(q)} \log\left(1 + Ke^{m/\tau}\frac{P(k_0)}{P(k_0|q)}\left(\mathop{\mathbb{E}}_{k_i\sim\mathcal{D}'}\frac{P(k_i|q)}{P(k_i)}\right)\right) = \mathop{\mathbb{E}}_{q\sim\mathcal{D},\,k_0\sim\mathcal{D}'(q)} \log\left(1 + Ke^{m/\tau}\frac{P(k_0)}{P(k_0|q)}\right) \\ &\ge \log(1 + Ke^{m/\tau}) - I(k_0, q), \end{aligned} \quad (3)$$

where $I(\cdot,\cdot)$ denotes mutual information. The approximation in the second row is guaranteed by the Law of Large Numbers, together with the fact $P(k_i|q) \approx P(k_i)$, since $k_i$ ($i > 0$) and $q$ are "almost" independent. The inequality in the last row results from $P(k_0|q) \ge P(k_0)$, as $k_0$ and $q$ are extracted from the same instance. Therefore, the lower bound of the mutual information (denoted $f_{bound}(m, K)$) between the positive pair $(k_0, q)$ is:

$$I(k_0, q) \ge f_{bound}(m, K) \triangleq \log(1 + Ke^{m/\tau}) - \mathcal{L}_{opt} \approx \log(1 + Ke^{m/\tau}) - \mathop{\mathbb{E}}_{q\sim\mathcal{D},\,k_0\sim\mathcal{D}'(q)} \log\left(1 + Ke^{m/\tau}\frac{P(k_0)}{P(k_0|q)}\right). \quad (4)$$

So, minimizing $\mathcal{L}_{NCE}$ (Eq. 1) towards $\mathcal{L}_{opt}$ implies maximizing the lower bound of the mutual information, which also holds when $m \neq 0$. In the case of $m = 0$, the result is consistent with that in Oord et al. (2018). Oord et al. (2018) further point out that the bound increases with $K$, which indicates that a larger $K$ encourages learning more mutual information and thus could help improve performance.

Nevertheless, different from Oord et al. (2018), our model does not require $m$ to be zero, so the lower bound in Eq. 4 is also a function of $e^{m/\tau}$.
Thus we have the following theorem:

Theorem 1. (Main, EqCo for negative pairs) The mutual information lower bound of the InfoNCE loss in Eq. 1 is irrelevant to the number of negative pairs K if

$$m = \tau \log\frac{\alpha}{K}, \qquad (5)$$

where α is a constant coefficient. In this circumstance the bound is given by:

$$f_{bound}\Big(\tau\log\frac{\alpha}{K},\, K\Big) \approx \log(1+\alpha) - \mathbb{E}_{q\sim D,\,k_0\sim D'(q)} \log\Big(1 + \alpha\,\frac{P(k_0)}{P(k_0|q)}\Big) \approx f_{bound}(0, \alpha), \qquad (6)$$

which is obtained immediately by substituting Eq. 5 into Eq. 4. We name Eq. 5 the “equivalent condition”.

Theorem 1 suggests a property of equivalency: under the condition of Eq. 5, no matter what the number of physical negative pairs K is, the optimal solution of L_NCE (Eq. 1) is “equivalent” in the sense of the same mutual information lower bound. The bound is controlled by a hyper-parameter α rather than K. Eq. 6 further implies that the lower bound also corresponds to the configuration of K = α without margin, which suggests we can “mimic” the behavior of the InfoNCE loss at K = K₀ under a different physical negative sample size K₁, simply by applying Eq. 5 with α = K₀. This inspires us to simplify existing state-of-the-art frameworks (e.g. MoCo (He et al. (2020))) with fewer negative samples while remaining as accurate as the original configurations, which will be introduced next.

We empirically validate Theorem 1 as follows. Notice that f_bound is difficult to calculate directly because L_opt is not known. Instead, we plot the empirical mutual information lower bound $\hat{f}_{bound}(m, K) \triangleq \log(1 + K e^{m/\tau}) - L_{NCE}$. We have $\hat{f}_{bound} \leq f_{bound}$; when L_NCE converges to the optimum L_opt, $\hat{f}_{bound}$ is an approximation of f_bound. In Fig. 1, we plot the evolution of $\hat{f}_{bound}$ during the training of MoCo v2 under different configurations. Clearly, at convergence, without EqCo $\hat{f}_{bound}$ keeps increasing with the number of negative pairs K; in contrast, after applying the equivalent condition (Eq. 5), $\hat{f}_{bound}$ converges to almost the same value under different Ks. The empirical results are thus consistent with Theorem 1.

Remarks 1. The equivalent condition in Eq. 5 suggests that the margin m is inversely correlated with K. This is intuitive: the larger K is, the more risk of class collision (Arora et al. (2019)) it suffers from, so we need to avoid over-penalizing negative samples near the query, hence a smaller m is used; in contrast, if K is very small, we use a larger m to exploit more “hard” negative samples.

Besides, recall that the margin term $e^{m/\tau}$ is defined as the ratio of the prior probabilities P⁻/P⁺ in Eq. 2. If the equivalent condition Eq. 5 holds, i.e., P⁻/P⁺ = α/K, we have P⁺ = 1/(1 + α) (notice that KP⁻ + P⁺ ≡ 1), suggesting that the prior probability of the ground-truth distribution H_0 is supposed to be a constant, ignoring the number of negative samples K. In previous works (usually without the margin term, i.e. m = 0) we instead have P⁺ = 1/(K + 1). It is hard to tell which prior is more reasonable. At the least, we intuitively suppose that keeping a constant prior for the ground-truth distribution may help keep the optimal choices of hyper-parameters steady under different Ks, which is also consistent with our empirical observations.

Remarks 2. In Theorem 1, it is worth noting that K refers to the number of negative samples per query. In the conventional batched training scheme, negative samples for different queries could be either (fully or partially) shared or isolated, i.e., the total number of distinct negative samples per batch could differ, which is not governed by Theorem 1. However, we empirically find that these implementation differences do not result in much performance variation.
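A hedged sketch of the two quantities just introduced, the equivalent condition (Eq. 5) and the empirical bound used for the Fig. 1 curves; the helper names are ours, and the loss value is assumed to come from a routine such as the info_nce_margin sketch above.

```python
import math

def eqco_margin(alpha, K, tau=0.2):
    # Equivalent condition (Eq. 5): m = tau * log(alpha / K)
    return tau * math.log(alpha / K)

def empirical_mi_bound(loss_nce, K, m, tau=0.2):
    # \hat{f}_bound(m, K) = log(1 + K e^{m/tau}) - L_NCE
    return math.log(1.0 + K * math.exp(m / tau)) - float(loss_nce)
```

With α = K₀, eqco_margin lets a run with only K physical negatives mimic the K₀ configuration; feeding the resulting m into the Eq. 1 loss and tracking empirical_mi_bound over training reproduces Fig. 1-style curves.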
The following theorem further supports the equivalent rule (Theorem 1) from the back-propagation view:

Theorem 2. Given the equivalent condition (Eq. 5), a query embedding q, and the corresponding positive sample k_0, the expectation of the gradient norm of L_NCE in Eq. 1 w.r.t. q is bounded by:²

$$\mathbb{E}_{k_i\sim D'}\left\|\frac{dL_{NCE}}{dq}\right\| \leq \frac{2}{\tau}\left(1 - \frac{\exp(q^{\top}k_0/\tau)}{\exp(q^{\top}k_0/\tau) + \alpha\,\mathbb{E}_{k_i\sim D'}[\exp(q^{\top}k_i/\tau)]}\right). \qquad (7)$$

²Some works (e.g., He et al. (2020)) only use dL_NCE/dq for optimization. In contrast, other works (Chen et al. (2020a)) also involve dL_NCE/dk_i (i = 0, ..., K), which we will investigate in the future.

Please refer to Appendix A.1 for the detailed proof. Note that we assume the embedding vectors are normalized, i.e., ‖k_i‖ = 1 (i = 0, ..., K), which is also a convention in recent contrastive learning works.

Theorem 2 indicates that, equipped with the equivalent rule (Eq. 5), the upper bound of the gradient norm is irrelevant to the number of negative samples K. Fig. 4 (see Appendix A.2) further validates the theory: the gradient norm becomes much more steady after using EqCo under different Ks. Since the size of K affects the gradient magnitude little, gradient scaling techniques such as the linear scaling rule are not required specifically for different Ks. Eq. 7 also implies that the temperature τ significantly affects the gradient norm even when EqCo is applied; this is why we only recommend modifying m for equivalence (Eq. 5), even though the mutual information lower bound is determined by $e^{m/\tau}$ as a whole." }, { "heading": "2.2 THE CASE OF POSITIVE PAIRS", "text": "In practice the InfoNCE loss (Eq. 1) is usually optimized with batched SGD, which can be represented as empirical risk minimization:

$$L^{batch}_{NCE} = \frac{1}{N}\sum_{j=1}^{N} L^{(j)}_{NCE}(q_j, k_{j,0}), \qquad (8)$$

where N is the number of queries (or positive pairs) per batch, (q_j, k_{j,0}) ∼ (D, D'(q_j)) is the j-th positive pair, and $L^{(j)}_{NCE}(q_j, k_{j,0})$ is the corresponding loss. For different j, $L^{(j)}_{NCE}$ is (almost) independent of each other, because q_j is sampled independently. Hence, Eq. 8 satisfies the form of the linear scaling rule (Krizhevsky (2014); Goyal et al. (2017)), suggesting that the learning rate should be adjusted proportionally to the number of queries N per batch.

Remarks 3. Previous work such as SimCLR (Chen et al. (2020a)) also proposes to apply the linear scaling rule.³ The difference is that SimCLR does not clarify whether the concept of “batch size” refers to the number of queries or the number of keys. In our paper, we explicitly point out that the linear scaling rule should be applied according to the number of queries per batch (N) rather than K.

³In SimCLR, the authors find that square-root learning-rate scaling is more desirable with the LARS optimizer (You et al. (2017)) than the linear scaling rule, and their experiments suggest that the performance gap between large and small batch sizes becomes smaller under that configuration. We point out that this direction is orthogonal to our equivalent rule. Besides, SimCLR does not explore the case of very small Ks (e.g. K ≤ 128).
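To make the two rules concrete, here is a small sketch (our own helper, not code from the paper) that derives the margin from (α, K) per Eq. 5 and the learning rate from N per the linear scaling rule; base_lr and base_N are assumed reference values, not values prescribed by the paper.

```python
import math

def eqco_hyperparams(K, N, alpha=65536, tau=0.2, base_lr=0.03, base_N=256):
    """Set hyper-parameters so a (K, N) run mimics the (alpha, base_N) reference.

    K: physical number of negatives per query; N: number of queries per batch.
    Returns the Eq. 1 margin and the scaled learning rate.
    """
    m = tau * math.log(alpha / K)   # equivalent condition for negatives (Eq. 5)
    lr = base_lr * N / base_N       # linear scaling rule for queries (Sec. 2.2)
    return m, lr
```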
" }, { "heading": "2.3 EMPIRICAL EVALUATION", "text": "In this subsection we conduct experiments on three state-of-the-art self-supervised contrastive learning frameworks, MoCo (He et al. (2020)), MoCo v2 (Chen et al. (2020c)) and SimCLR (Chen et al. (2020a)), to verify our theory in Sec. 2.1 and Sec. 2.2. We alter K and N separately to examine the correctness of our equivalent rules.

Implementation details. We follow most of the training and evaluation settings recommended in the respective original papers. The only difference is that for SimCLR we adopt SGD with momentum rather than LARS (You et al. (2017)) as the optimizer. We use ResNet-50 (He et al. (2016)) as the default network architecture. 128-d features are employed for query and key embeddings. Unless specially mentioned, all models are trained on ImageNet (Deng et al. (2009)) for 200 epochs without using the ground-truth labels. We report the top-1 accuracy under the conventional linear evaluation protocol of the respective original papers. The number of queries per batch (N) is set to 256 by default. All models are trained with 8 GPUs.

It is worth noting how we alter the number of negative samples K independently of N during training. For MoCo and MoCo v2, we simply set the size of the memory bank to K. Specially, if K < N, in the current batch the memory bank is actually composed of K random keys sampled from the previous batch. For SimCLR, if K < N we randomly sample K negative keys for each query independently. We do not study the case K > N for SimCLR. We mainly consider ease of implementation in designing these strategies; as mentioned in Remarks 2 (Sec. 2.1), this does not affect the empirical conclusion.

Quantitative results. Fig. 2 illustrates the effect of our equivalent rule under different Ks. Our experiments start with the best configurations (i.e. K = 65536 for MoCo and MoCo v2, and K = 256 for SimCLR⁴), then we gradually reduce K and benchmark the performance. The results in Fig. 2 indicate that, without EqCo, the accuracy drops significantly when K becomes very small (e.g. K < 64). With EqCo, by setting α to “mimic” the optimal K, the performance surprisingly keeps steady under a wide range of Ks. Fig. 2(b) further shows that in SimCLR, by setting α to a number larger than the physical batch size (e.g. 4096 vs. 256), the accuracy significantly improves from 62.0% to 65.3%,⁵ suggesting the benefit of EqCo especially when memory is limited. The comparison fully demonstrates that EqCo is essential, especially when the number of negative pairs is small.

⁴In the original SimCLR paper (Chen et al. (2020a)), the best number of negative pairs is around 4096. However, the largest K we can use in our experiments is 256 due to the GPU memory limit.

⁵Our “mimicking” result (65.3%, α = 4096, K = 256) is slightly lower than the counterpart score reported in the original SimCLR paper (66.6%, with a physical batch size of K = 4096), which we think may result from the extra benefits of SyncBN along with the LARS optimizer used in SimCLR, especially when the physical batch size is large.

Besides, Table 1 compares the results of MoCo v2 under different numbers of queries N, with K = 65536 fixed. With the linear scaling rule (Krizhevsky (2014); Goyal et al. (2017)), the final performance is almost unchanged under different N, suggesting the effectiveness of our equivalent rule for N.
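A hedged sketch of the K < N strategy described above, i.e. drawing K negative keys per query independently; it is our own illustration (the same-instance key is assumed to have been excluded from the candidate pool already).

```python
import torch

def subsample_negatives(q, keys, K):
    """For each of the N queries, draw K negative keys independently.

    q:    (N, d) query embeddings
    keys: (M, d) candidate negative keys (own positive key excluded)
    Returns (N, K, d) negatives, one independent draw per query.
    """
    N = q.size(0)
    idx = torch.randint(0, keys.size(0), (N, K))  # independent sampling per query
    return keys[idx]
```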
" }, { "heading": "3 SIMO: A SIMPLER BUT STRONGER BASELINE", "text": "EqCo inspires us to rethink the design of contrastive learning frameworks. Previous state-of-the-art methods like MoCo and SimCLR heavily rely on large quantities of negative pairs to obtain high performance, hence implementation tricks such as the memory bank and large-batch training are introduced, which makes the systems complex and costly. Thanks to EqCo, we are able to design a simpler contrastive learning framework with fewer negative pairs.

Table 2: State-of-the-art InfoNCE-based frameworks.

Method | Epochs | Top-1 (%)
CPC v2 (Hénaff et al., 2019) | 200 | 63.8
CMC (Tian et al., 2019) | 240 | 66.2
SimCLR (Chen et al., 2020a) | 200 | 66.6
MoCo v2 (Chen et al., 2020c) | 200 | 67.5
InfoMin Aug. (Tian et al., 2020) | 200 | 70.1
SiMo (K = 16, α = 256) | 200 | 68.1
SiMo (K = 256, α = 256) | 200 | 68.0
SiMo (K = 256, α = 65536) | 200 | 68.5
PIRL (Misra & Maaten, 2020) | 800 | 63.6
SimCLR (Chen et al., 2020a) | 1000 | 69.3
MoCo v2 (Chen et al., 2020c) | 800 | 71.1
InfoMin Aug. (Tian et al., 2020) | 800 | 73.0
SiMo (K = 256, α = 256) | 800 | 71.8
SiMo (K = 256, α = 65536) | 800 | 72.1

We propose SiMo, a simplified variant of MoCo v2 (Chen et al. (2020c)) equipped with EqCo. We follow most of the design in Chen et al. (2020c); the key differences are as follows:

Memory bank. MoCo, MoCo v2 and SimCLR v2⁶ (Chen et al. (2020b)) employ a memory bank to maintain a large number of negative embeddings k_i, which has a side effect: every positive embedding k_0 is always extracted from a “newer” network than the negatives' in the same batch, which could harm performance. In SiMo, we therefore remove the memory bank, as we only rely on a few negative samples per batch. Instead, we use the momentum encoder to extract both positive and negative key embeddings from the current batch.

⁶SimCLR v2 compares settings with and without a memory bank, and suggests employing the memory bank as the best configuration.

Shuffling BN vs. Sync BN. In MoCo v1/v2, shuffling BN (He et al. (2020)) is proposed to remove the obvious dissimilarities of the BN (Ioffe & Szegedy (2015)) statistics between the positive (from the current mini-batch) and the negatives (from the memory bank), so that the model makes predictions based on the semantic information of images rather than the BN statistics. In contrast, since the positive and negatives come from the same batch in SiMo, we use sync BN (Peng et al. (2018)) for simplicity and more stable statistics. Sync BN is also used in SimCLR (Chen et al. (2020a)) and SimCLR v2 (Chen et al. (2020b)).

There are a few other differences: 1) we attach a BN to each of the fully-connected layers; 2) we introduce a warm-up stage at the beginning of training, following the methodology of SimCLR (Chen et al. (2020a)). Apart from the differences mentioned above, the architecture and training details (including data augmentations) in SiMo are exactly the same as MoCo v2's. In the following text, the number of queries per batch (N) is set to 256, and the backbone network is ResNet-50 by default.

Quantitative results. First, we empirically demonstrate the necessity of EqCo in the SiMo framework. We choose the number of negative samples K = 256 as the baseline, then reduce K and evaluate the performance. Fig. 3 shows the results on ImageNet using the linear evaluation protocol. Without EqCo, the accuracy drops significantly when K is very small. In contrast, using EqCo to “mimic” the case of large K (by setting α to 256), the accuracy keeps almost steady even under very small Ks.

Table 2 further compares SiMo with state-of-the-art self-supervised contrastive learning methods on ImageNet.⁷ Using only 16 negative samples per query, SiMo outperforms MoCo v2 (68.1% vs. 67.5%). If we increase α to 65536 to “simulate” the case of a huge number of negative pairs, the accuracy further increases to 68.5%. Moreover, when we extend training to 800 epochs, we obtain an accuracy of 72.1%, surpassing the baseline MoCo v2 by 1.0%. The only entry that surpasses our results is InfoMin Aug. (Tian et al. (2020)), which mainly focuses on data generation and is orthogonal to ours. The experiments indicate that SiMo is a simpler but more powerful baseline for self-supervised contrastive learning. Readers can refer to Appendix B for more experimental results on SiMo.

⁷We mainly compare methods using the InfoNCE loss (Eq. 1) here, though recently BYOL (Grill et al. (2020)) and SwAV achieve better results using different loss functions.
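The following is a minimal, hedged sketch of one SiMo-style training step as described above (momentum encoder for all keys, in-batch negatives, EqCo margin, EMA update); every function and variable name is ours, and details such as BN placement and the projection MLP are omitted.

```python
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(enc_q, enc_k, beta=0.999):
    # EMA update of the key encoder: theta_k = beta * theta_k + (1 - beta) * theta_q
    for p_q, p_k in zip(enc_q.parameters(), enc_k.parameters()):
        p_k.mul_(beta).add_(p_q, alpha=1.0 - beta)

def simo_step(enc_q, enc_k, view1, view2, K=16, alpha=256, tau=0.2):
    q = F.normalize(enc_q(view1), dim=1)              # (N, d) queries
    with torch.no_grad():
        k = F.normalize(enc_k(view2), dim=1)          # (N, d) keys, momentum encoder
    m = tau * math.log(alpha / K)                     # EqCo margin (Eq. 5)
    pos = (q * k).sum(1, keepdim=True) - m            # positive: same instance's key
    N = q.size(0)
    off = torch.randint(1, N, (N, K))                 # nonzero offsets exclude own key
    idx = (torch.arange(N).unsqueeze(1) + off) % N
    neg = torch.einsum('nd,nkd->nk', q, k[idx])       # K in-batch negatives per query
    logits = torch.cat([pos, neg], dim=1) / tau
    labels = torch.zeros(N, dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```

In a training loop one would backpropagate the returned loss through enc_q, step the optimizer, and then call momentum_update(enc_q, enc_k).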
" }, { "heading": "4 LIMITATIONS AND FUTURE WORK", "text": "Theorem 1 suggests that under the equivalent condition (Eq. 5), InfoNCE losses with various Ks are “equivalent” in the sense of the same mutual information lower bound, which is also backed up by the experiments in Fig. 1. However, Fig. 2(a) shows that if K is smaller than a certain value (e.g. K ≤ 16), some frameworks like MoCo v2 start to degrade significantly even with EqCo, while for other frameworks like SiMo (Fig. 3) the accuracy keeps almost steady for very small Ks. Tschannen et al. (2019) also point out that the InfoMax principle cannot explain all the phenomena in contrastive learning. We will investigate this problem in the future, e.g. from other viewpoints such as the gradient noise introduced by small Ks (Fig. 4 in Appendix A.2 gives some insights).

The formulation of Eq. 1 is very common in the field of supervised metric learning, where it is usually named the margin softmax cross-entropy loss (Deng et al., 2019; Wang et al., 2018; Sun et al., 2020). Unfortunately, our equivalent rule does not seem to generalize to those problems (e.g. face recognition). The major issue lies in the approximation in Eq. 3: we need the negative samples k_i to be independent of the query q, which is not satisfied in supervised tasks.

According to Fig. 2 and Fig. 3, the benefits of EqCo become significant when K is sufficiently small (e.g. K < 64). In practice, however, it is not that difficult for modern computing devices (e.g. GPUs) to use ~256 negative pairs per query. Applying EqCo to “simulate” more negative pairs by adjusting α can further boost performance, but the accuracy gains become relatively marginal: for example, in Table 2 under 200-epoch training, SiMo with α = 65536 outperforms that with α = 256 by only 0.5%. This could be a fundamental limitation of the InfoNCE loss. We will investigate this problem in the future." }, { "heading": "A.1 PROOF OF EQ. 7", "text": "Given the equivalent condition (Eq. 5) and a query embedding q as well as the corresponding positive sample k_0, the expectation of the gradient norm of L_NCE in Eq. 1 w.r.t. q is bounded by:

$$\mathbb{E}_{k_i\sim D'}\left\|\frac{dL_{NCE}}{dq}\right\| \leq \frac{2}{\tau}\left(1 - \frac{\exp(q^{\top}k_0/\tau)}{\exp(q^{\top}k_0/\tau) + \alpha\,\mathbb{E}_{k_i\sim D'}[\exp(q^{\top}k_i/\tau)]}\right). \qquad (9)$$

Proof. For simplicity, we denote the term $\exp(q^{\top}k_i/\tau)$ by $s_i$ (i = 0, ..., K). Then L_NCE can be rewritten as:

$$L_{NCE} = -\log\frac{s_0}{s_0 + \frac{\alpha}{K}\sum_{i=1}^{K} s_i}. \qquad (10)$$

The gradient of L_NCE with respect to q is easily derived:

$$\frac{dL_{NCE}}{dq} = -\frac{1}{\tau}\left(1 - \frac{s_0}{s_0 + \frac{\alpha}{K}\sum_{i=1}^{K} s_i}\right)k_0 + \frac{\alpha}{\tau K}\sum_{i=1}^{K}\frac{s_i}{s_0 + \frac{\alpha}{K}\sum_{i=1}^{K} s_i}\,k_i. \qquad (11)$$

Owing to the Triangle Inequality and the fact that k_i (i = 0, ..., K) is normalized, the norm of the gradient is bounded by:

$$\begin{aligned} \left\|\frac{dL_{NCE}}{dq}\right\| &\leq \left|\frac{1}{\tau}\left(1 - \frac{s_0}{s_0 + \frac{\alpha}{K}\sum_{i=1}^{K} s_i}\right)\right|\cdot\|k_0\| + \sum_{i=1}^{K}\left|\frac{\alpha}{\tau K}\,\frac{s_i}{s_0 + \frac{\alpha}{K}\sum_{i=1}^{K} s_i}\right|\cdot\|k_i\| \\ &= \frac{1}{\tau}\left(1 - \frac{s_0}{s_0 + \frac{\alpha}{K}\sum_{i=1}^{K} s_i}\right) + \frac{1}{\tau}\,\frac{\sum_{i=1}^{K}\frac{\alpha}{K}\, s_i}{s_0 + \frac{\alpha}{K}\sum_{i=1}^{K} s_i} \\ &= \frac{2}{\tau}\left(1 - \frac{s_0}{s_0 + \frac{\alpha}{K}\sum_{i=1}^{K} s_i}\right). \end{aligned} \qquad (12)$$

Since the cosine similarity between q and k_i (i = 1, ..., K) is bounded in [−1, 1], the expectation $\mathbb{E}_{k_i\sim D'}[s_i]$ exists. According to Inequality (12) and Jensen's Inequality, we have:

$$\mathbb{E}_{k_i\sim D'}\left[\frac{2}{\tau}\left(1 - \frac{s_0}{s_0 + \frac{\alpha}{K}\sum_{i=1}^{K} s_i}\right)\right] = \frac{2}{\tau}\left(1 - \mathbb{E}_{k_i\sim D'}\left[\frac{s_0}{s_0 + \frac{\alpha}{K}\sum_{i=1}^{K} s_i}\right]\right) \leq \frac{2}{\tau}\left(1 - \frac{s_0}{s_0 + \alpha\,\mathbb{E}_{k_i\sim D'}[s_i]}\right). \qquad (13)$$

Replacing $s_i$ by $\exp(q^{\top}k_i/\tau)$ completes the proof of Theorem 2.
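The per-sample inequality in Eq. 12 can be checked numerically with autograd; the following self-contained snippet is our own sanity check, not part of the paper.

```python
import torch

# Numerically check the gradient-norm bound of Eq. 12 on random embeddings.
torch.manual_seed(0)
d, K, tau, alpha = 128, 16, 0.2, 256.0
q = torch.nn.functional.normalize(torch.randn(d), dim=0).requires_grad_(True)
k = torch.nn.functional.normalize(torch.randn(K + 1, d), dim=1)  # k[0] is positive

s = torch.exp(k @ q / tau)                        # s_i = exp(q^T k_i / tau)
denom = s[0] + (alpha / K) * s[1:].sum()
loss = -torch.log(s[0] / denom)                   # Eq. 10
loss.backward()

grad_norm = q.grad.norm().item()
bound = (2 / tau) * (1 - s[0] / denom).item()     # right-hand side of Eq. 12
assert grad_norm <= bound + 1e-5
print(grad_norm, bound)
```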
" }, { "heading": "A.2 EMPIRICAL EVALUATION ON THE MAGNITUDE OF GRADIENTS", "text": "" }, { "heading": "B MORE EXPERIMENTS ON SIMO", "text": "For the following experiments of this section, we report the top-1 accuracy of SiMo on ImageNet (Deng et al., 2009) under the linear evaluation protocol. The backbone of SiMo is ResNet-50 (He et al., 2016) and we train SiMo for 200 epochs unless noted otherwise." }, { "heading": "B.1 ABLATION ON MOMENTUM UPDATE", "text": "In MoCo (He et al., 2020) and MoCo v2 (Chen et al., 2020c), the key encoder is updated by the following rule:

$$\theta_k = \beta\,\theta_k + (1-\beta)\,\theta_q,$$

where θ_q and θ_k stand for the weights of the query encoder and the key encoder respectively, and β is the momentum coefficient. For SiMo, we also adopt the momentum update and use the key encoder to compute the features of the positive sample and the negative samples.

In Table 3, we report the results of SiMo with different momentum coefficients. The number of training epochs is set to 100, so the top-1 accuracy of the baseline (β = 0.999) drops to 64.4%. Compared to the baseline, SiMo without momentum update (β = 0) is inferior, showing the advantage of the momentum update." }, { "heading": "B.2 ABLATION ON BN", "text": "Table 4 shows the performance of SiMo equipped with shuffling BN or sync BN. Likewise, we train SiMo for 100 epochs. SiMo with shuffling BN clearly struggles to perform well. Moreover, compared to MoCo v2, SiMo with shuffling BN degrades significantly; we conjecture this is because the MLP structure of SiMo is more suitable for sync BN than for shuffling BN." }, { "heading": "B.3 SIMO WITH DIFFERENT α", "text": "As shown in Sec. 2.1, α is related to the lower bound of the mutual information. Table 5 reveals how the accuracy of SiMo varies with the choice of α. As we increase α to 65536, the accuracy tends to improve, in accordance with Eq. 6. However, when α is too large (e.g., 262144), the performance slightly drops by 0.2%.

Similar results can be found in MoCo v2: when we increase K to 262144 in MoCo v2, the accuracy also descends (Table 6)." }, { "heading": "B.4 SIMO WITH WIDER MODELS", "text": "Results using wider models are presented in Table 7. For SiMo, performance is further boosted with wider models (more channels). For instance, SiMo with ResNet-50 (2x) and ResNet-50 (4x) outperforms the baseline (68.5%) by 2% and 3.8% respectively." }, { "heading": "B.5 TRANSFER TO OBJECT DETECTION", "text": "Setup. We utilize FPN (Lin et al., 2017) with a stack of four 3 × 3 convolution layers in the R-CNN head to validate the effectiveness of SiMo. Following the MoCo training protocol, we fine-tune with synchronized batch normalization (Peng et al., 2018) across GPUs. The additionally initialized layers are also equipped with BN for stable training.
To effectively validate the transferability of the features, the training schedule is set to 12 epochs (known as 1×), in which the learning rate is initialized to 0.2 and decreased at epochs 7 and 11 by a factor of 0.1. Image scales are randomly sampled from [640, 800] pixels during training and fixed to 800 at inference.

Results. Table 8 summarizes the fine-tuning results on COCO val2017 of different pre-training methods. Random initialization indicates training COCO from scratch, and supervised represents conventional pre-training with ImageNet labels. Compared with MoCo, SiMo achieves competitive performance without large quantities of negative pairs. It is also on par with the supervised counterpart and significantly outperforms the randomly initialized one." }, { "heading": "C A TOY EVALUATION OF EQCO", "text": "To evaluate the effectiveness of EqCo as a mutual information (MI) estimator, following the configuration of Poole et al. (2019), we estimate the MI lower bound between two simple random vectors.

Specifically, given that (X, Y) are drawn from a known correlated Gaussian distribution, we calculate the lower bound of the MI between X and Y based on their embeddings. X is a 20-dimensional random variable drawn from a standard Gaussian distribution, and we sample Y with the following rule:

$$Y = \rho X + \sqrt{1-\rho^{2}}\,\epsilon, \qquad (14)$$

where ρ is a given correlation coefficient and ε is a random variable sampled from a standard Gaussian distribution, independent of X. With ρ known, the ground-truth MI between X and Y is easy to compute:

$$I(X, Y) = -\frac{d}{2}\log\left(1-\rho^{2}\right), \qquad (15)$$

where d is the dimension of X and Y; as mentioned above, we set d = 20.

To embed X and Y, we adopt two MLPs respectively; each MLP has 1 hidden layer of 256 units followed by a ReLU activation. We use the Adam optimizer with a learning rate of 0.0005 to optimize InfoNCE or EqCo for 5000 steps. In each training iteration, K pairs of (X, Y) are sampled independently, which means there are K − 1 negative samples for each query. After training, the weights of the MLPs are frozen and we repeat the estimation of the MI lower bound 1000 times to reduce the estimation variance. For the experiments with EqCo, we set α = 512.

As shown in Table 9, I_NCE varies with K, while I_EqCo remains steady. Especially when the ground-truth MI is relatively large (e.g., 8, 10), significant differences between EqCo and InfoNCE can be observed. This experiment further validates the effectiveness of EqCo.
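A hedged sketch of the toy setup in Eqs. 14-15 (data generation and ground-truth MI only; the MLP critics and training loop are omitted, and all names are ours):

```python
import math
import torch

def sample_correlated_gaussians(n, d=20, rho=0.8):
    # Eq. 14: Y = rho * X + sqrt(1 - rho^2) * eps, with X, eps ~ N(0, I_d)
    x = torch.randn(n, d)
    eps = torch.randn(n, d)
    y = rho * x + math.sqrt(1.0 - rho ** 2) * eps
    return x, y

def ground_truth_mi(d=20, rho=0.8):
    # Eq. 15: I(X, Y) = -(d / 2) * log(1 - rho^2)
    return -(d / 2.0) * math.log(1.0 - rho ** 2)
```

Embedding x and y with two small MLPs and training the Eq. 1 loss with m = τ log(α/K) then yields the I_EqCo estimates compared against this ground truth in Table 9.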
2020
null
SP:4b87425197bb556af13c6aee324bc5ea2b82fc45
[ "This paper performs an ablative study on the two components involved in training unsupervised MT systems: 1) back-translation loss, 2) denoising autoencoding loss. It links the reconstruction loss to ELBO (where the q distribution is a back-translation model). It shows that the original loss with both the components is important for unsupervised MT and ELBO needs to be augmented with denoising autoencoding loss to be effective at training unsupervised MT models." ]
Unsupervised Neural Machine Translation, or UNMT, has received great attention in recent years. Though tremendous empirical improvements have been achieved, there is still a lack of theory-oriented investigation, and thus some fundamental questions, such as why certain training protocols work or fail under particular circumstances, are not yet well understood. This paper attempts to provide theoretical insights into these questions. Specifically, following the methodology of comparative study, we leverage two perspectives, i) marginal likelihood maximization and ii) mutual information from information theory, to understand the different learning effects of the standard training protocol and its variants. Our detailed analyses reveal several critical conditions for the successful training of UNMT.
[ { "affiliations": [], "name": "DEMYSTIFYING LEARNING" } ]
[ { "authors": [ "Mikel Artetxe", "Gorka Labaka", "Eneko Agirre" ], "title": "Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", "venue": "URL https://www.aclweb. org/anthology/P17-1042", "year": 2017 }, { "authors": [ "Mikel Artetxe", "Gorka Labaka", "Eneko Agirre", "Kyunghyun Cho" ], "title": "Unsupervised neural machine", "venue": "translation. ArXiv,", "year": 2018 }, { "authors": [ "A. Blum", "T. Mitchell" ], "title": "Combining labeled and unlabeled data with co-training", "venue": "In COLT’", "year": 1998 }, { "authors": [ "Piotr Bojanowski", "Edouard Grave", "Armand Joulin", "Tomas Mikolov" ], "title": "Enriching word vectors with subword information", "venue": "Transactions of the Association for Computational Linguistics,", "year": 2017 }, { "authors": [ "Gino Brunner", "Y. Liu", "Damián Pascual", "Oliver Richter", "Massimiliano Ciaramita", "Roger Wattenhofer" ], "title": "On identifiability in transformers", "venue": "arXiv: Computation and Language,", "year": 2020 }, { "authors": [ "R. Caruana" ], "title": "Multitask learning", "venue": "In Encyclopedia of Machine Learning and Data Mining,", "year": 1998 }, { "authors": [ "Alexis Conneau", "Guillaume Lample", "Marc’Aurelio Ranzato", "Ludovic Denoyer", "Hervé Jégou" ], "title": "Word translation without parallel data", "venue": "arXiv preprint arXiv:1710.04087,", "year": 2017 }, { "authors": [ "Alexis Conneau", "Germán Kruszewski", "Guillaume Lample", "Loı̈c Barrault", "M. Baroni" ], "title": "What you can cram into a single vector: Probing sentence embeddings for linguistic properties", "venue": null, "year": 2018 }, { "authors": [ "T. Cover", "J. Thomas" ], "title": "Elements of information theory", "venue": "In Elements of Information Theory,", "year": 1991 }, { "authors": [ "Xiangyu Duan", "Baijun Ji", "Hao Jia", "Min Tan", "Min Zhang", "Boxing Chen", "Weihua Luo", "Yue Zhang" ], "title": "Bilingual dictionary based neural machine translation without using parallel sentences", "venue": null, "year": 2007 }, { "authors": [ "Yansong Gao", "Pratik Chaudhari" ], "title": "A free-energy principle for representation learning", "venue": "Proceedings of Machine Learning Research. PMLR,", "year": 2020 }, { "authors": [ "Zoubin Ghahramani" ], "title": "Unsupervised Learning, pp. 72–112", "venue": "URL https: //doi.org/10.1007/978-3-540-28650-9_5", "year": 2004 }, { "authors": [ "Di He", "Yingce Xia", "Tao Qin", "L. Wang", "N. Yu", "T. Liu", "W. Ma" ], "title": "Dual learning for machine translation", "venue": null, "year": 2016 }, { "authors": [ "Junxian He", "X. Wang", "Graham Neubig", "Taylor Berg-Kirkpatrick" ], "title": "A probabilistic formulation of unsupervised text style transfer", "venue": "ArXiv, abs/2002.03912,", "year": 2020 }, { "authors": [ "Yunsu Kim", "M. Graça", "H. Ney" ], "title": "When and why is unsupervised neural machine translation useless? ArXiv", "venue": null, "year": 2004 }, { "authors": [ "Diederik P. Kingma", "M. Welling" ], "title": "Auto-encoding variational bayes", "venue": "CoRR, abs/1312.6114,", "year": 2014 }, { "authors": [ "Diederik P. Kingma", "M. Welling" ], "title": "An introduction to variational autoencoders", "venue": "Found. Trends Mach. 
Learn.,", "year": 2019 }, { "authors": [ "Guillaume Lample", "Alexis Conneau" ], "title": "Cross-lingual language model pretraining", "venue": "ArXiv, abs/1901.07291,", "year": 2019 }, { "authors": [ "Guillaume Lample", "Ludovic Denoyer", "Marc’Aurelio Ranzato" ], "title": "Unsupervised machine translation using monolingual corpora", "venue": "only. ArXiv,", "year": 2018 }, { "authors": [ "Guillaume Lample", "Myle Ott", "Alexis Conneau", "Ludovic Denoyer", "Marc’Aurelio Ranzato" ], "title": "Phrase-based & neural unsupervised machine translation", "venue": "In EMNLP,", "year": 2018 }, { "authors": [ "M. Lewis", "Yinhan Liu", "Naman Goyal", "Marjan Ghazvininejad", "A. Mohamed", "Omer Levy", "V. Stoyanov", "Luke Zettlemoyer" ], "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "venue": "ArXiv, abs/1910.13461,", "year": 2020 }, { "authors": [ "Mike Lewis", "Marjan Ghazvininejad", "Gargi Ghosh", "Armen Aghajanyan", "Sida Wang", "Luke Zettlemoyer" ], "title": "Pre-training via paraphrasing", "venue": "arXiv preprint arXiv:2006.15020,", "year": 2020 }, { "authors": [ "Ke Li", "Jitendra Malik" ], "title": "Implicit maximum likelihood estimation", "venue": "arXiv preprint arXiv:1809.09087,", "year": 2018 }, { "authors": [ "Yinhan Liu", "Jiatao Gu", "Naman Goyal", "Xiongmin Li", "Sergey Edunov", "Marjan Ghazvininejad", "Mike Lewis", "Luke Zettlemoyer" ], "title": "Multilingual denoising pre-training for neural machine", "venue": "translation. ArXiv,", "year": 2020 }, { "authors": [ "Kelly Marchisio", "Kevin Duh", "Philipp Koehn" ], "title": "When does unsupervised machine translation work? ArXiv", "venue": null, "year": 2004 }, { "authors": [ "Kishore Papineni", "S. Roukos", "T. Ward", "Wei-Jing Zhu" ], "title": "Bleu: a method for automatic evaluation of machine translation", "venue": "In ACL,", "year": 2002 }, { "authors": [ "Tiago Pimentel", "Josef Valvoda", "Rowan Hall Maudslay", "Ran Zmigrod", "Adina Williams", "Ryan Cotterell" ], "title": "Information-theoretic probing for linguistic structure", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 4609–4622,", "year": 2020 }, { "authors": [ "Victor Prokhorov", "Ehsan Shareghi", "Yingzhen Li", "Nigel Collier" ], "title": "On the importance of the kullback-leibler divergence term in variational autoencoders for text", "venue": "generation. ArXiv,", "year": 2019 }, { "authors": [ "Shuo Ren", "Yu Wu", "Shujie Liu", "Ming Zhou", "Shuai Ma" ], "title": "Explicit cross-lingual pre-training for unsupervised machine translation", "venue": null, "year": 1909 }, { "authors": [ "Rico Sennrich", "B. Haddow", "Alexandra Birch" ], "title": "Neural machine translation of rare words with subword", "venue": "units. ArXiv,", "year": 2016 }, { "authors": [ "Kaitao Song", "Xu Tan", "Tao Qin", "Jianfeng Lu", "Tie-Yan Liu" ], "title": "Mass: Masked sequence to sequence pre-training for language generation", "venue": null, "year": 2019 }, { "authors": [ "Chau B. 
Tran", "Yuqing Tang", "Xiongmin Li", "Jiatao Gu" ], "title": "Cross-lingual retrieval for iterative selfsupervised training", "venue": "ArXiv, abs/2006.09526,", "year": 2020 }, { "authors": [ "Jiawei Wu", "Xin Wang", "William Yang Wang" ], "title": "Extract and edit: An alternative to back-translation for unsupervised neural machine translation", "venue": null, "year": 2019 }, { "authors": [ "Zhen Yang", "Wei Chen", "Feng Wang", "Bo Xu" ], "title": "Unsupervised neural machine translation with weight sharing", "venue": "In ACL,", "year": 2018 }, { "authors": [ "Lewis" ], "title": "2020a) to the multilingual setting. They pre-train sequenceto-sequence model on monolingual corpora of 25 languages, and only use BT loss to finetune the model for UNMT. In their paper, they claim that when relying only on BT, they use constrained decoding to obtain sentence on the other language at initial epochs to overcome the copy issue. Conceptually, mBART actually redefines the role of DAE loss as a pre-training objective", "venue": null, "year": 2020 }, { "authors": [ "Marchisio" ], "title": "Although they do not apply their method on sequence-to-sequence pre-training, and their method could be applied directly to MASS", "venue": "(Conneau et al.,", "year": 2017 }, { "authors": [ "Kim" ], "title": "and the dissimilarity of the language pair correlate well with performance degradation", "venue": null, "year": 2020 }, { "authors": [ "Tran" ], "title": "2020) proposes a novel cross-lingual retrieval method for finding comparative sentence pairs from monolingual corpora of the two language. They use the multilingual pre-trained encoder of mBART (Liu et al., 2020) to get universal semantic representations of sentences (by doing this, they just average the token-level vectors from mBART as a single vector for nearest neighbor search", "venue": null, "year": 2020 }, { "authors": [ "Wu" ], "title": "2019) who use the UNMT model’s own encoder representation instead of a self-supervised pre-trained encoder, and can be seen as its multilingual pre-training extension. All of the above proposed specific training methods for UNMT together with recent paraphrase-based pre-training objective (Lewis et al., 2020b) can all be thought of as implicit maximum likelihood", "venue": null, "year": 2020 }, { "authors": [ "L’ avocat de Manning" ], "title": "déposé une plainte formelle pour les traitements subis par Manning en janvier", "venue": null, "year": 2011 } ]
[ { "heading": "1 INTRODUCTION", "text": "Unsupervised Neural Machine Translation or UNMT have grown from its infancy (Artetxe et al., 2018; Lample et al., 2018a) to close-to-supervised performance recently on some translation scenarios (Lample & Conneau, 2019; Song et al., 2019). Early UNMT works (Artetxe et al., 2017; Lample et al., 2018a; Yang et al., 2018) adopt complex training strategies including model initialization, synthetic parallel data for warming up the model, adversarial loss for making encoder universal, different weight sharing mechanisms etc. Then Lample et al. (2018b) simplifies all these and establishes a two-components framework, involving an initialization strategy followed by iterative training on two tasks, i.e. denoising auto-encoding with the DAE loss and online back-translation with the BT loss. Works afterwards mainly focus on developing better initialization strategies (Lample & Conneau, 2019; Ren et al., 2019; Song et al., 2019; Liu et al., 2020). Although obtaining impressive performance, it is unclear why this standard training protocol is possible to be successful. Kim et al. (2020) and Marchisio et al. (2020) consider the standard training as a black-box and empirically analyze its success or failure regarding different data settings (i.e. text domains and language pairs). Unfortunately, due to the lack of theoretical guidelines, some fundamental questions are still remained unknown: what standard training tries to minimize under the general unsupervised training paradigm (Ghahramani, 2004) and when a certain training protocol can work for training UNMT? In this paper, we attempt to open the back-box training of UNMT and understand its theoretical essence from two angles: i) a marginal likelihood maximization view; and ii) an information-theoretic view by ablating standard training protocol with other variants. Our contributions are as follows.\nA. By making an analogy of standard training protocol with marginal likelihood or Evidence Lower BOund (ELBO) optimization, we visualize the learning curves of the two terms in ELBO objective, and found that optimizing ELBO is not sufficient for training a successful UNMT model, indicating that specific regularization design i.e. the DAE loss, quite matters.\nB. By leveraging information theory, we present a formal definition on what does it mean to successfully train an UNMT model, and then readily derive a sufficient condition and a necessary condition for successfully training UNMT in principle. In addition, we validate both sufficient and necessary conditions through empirical experiments, and find that both conditions indeed explain why standard training protocol works while others suffer from degeneration to learning sub-optimal tasks.\nC. Based on explanations for those failed protocols, we continue experiments to settle the role played by DAE and BT. Firstly, BT is the main task while DAE is a critical auxiliary. Then we clarify that DAE has more important role than just learning word order, accepted as common knowledge in almost all previous works, but also preserving the mutual information between encoder input and\nencoder output, which is necessary for successful training. Furthermore, DAE also functions as a behavior regularizer for decoding with online BT, and prevents BT from yielding degenerated data." }, { "heading": "2 UNDERSTANDING UNMT FROM TWO PERSPECTIVES", "text": "In this section, we first introduce background about the standard training protocol proposed in Lample et al. 
" }, { "heading": "2.1 STANDARD TRAINING PROTOCOL", "text": "The standard training protocol involves a standard initialization strategy and a standard iterative training procedure, and both are built upon a specific design of encoder-decoder parameterization.

Parameterization and initialization. The UNMT model adopts a shared embedding matrix for a shared vocabulary with joint BPE (Sennrich et al., 2016), and the two languages share the same encoder and decoder, with only a language embedding for distinguishing inputs from different languages. As a result, unconstrained decoding might generate tokens from the same language as the input. Standard initialization means using fastText (Bojanowski et al., 2017) to initialize the embedding matrix, denoted as JointEmb. XLM (Lample & Conneau, 2019) uses a trained encoder to initialize both the encoder and decoder of the UNMT model. We also consider random initialization for completeness.

Iterative training strategy. The iterative training strategy involves optimizing two critical losses by turns, i.e. the DAE loss and the BT loss as defined in Eq. 1 and Eq. 2, where s and t denote the two languages. The DAE loss is constructed by sampling a monolingual sentence x (or y), constructing its noisy version C(x) (C(y)) and minimizing the reconstruction error, or RecErr:

$$L_{dae} = -\log p_{s\to s}(x\,|\,C(x)) - \log p_{t\to t}(y\,|\,C(y)), \qquad (1)$$

The BT loss is constructed by sampling a monolingual sentence x (or y), constructing its corresponding translation via the current model M(x) (M(y)) through back-translation, and minimizing the RecErr:

$$L_{bt} = \mathbb{E}_{\hat{y}\sim M(x)}[-\log p_{t\to s}(x\,|\,\hat{y})] + \mathbb{E}_{\hat{x}\sim M(y)}[-\log p_{s\to t}(y\,|\,\hat{x})], \qquad (2)$$

The online BT process involved in the iterative training strategy can be seen as Co-Training (Blum & Mitchell, 1998), where two models (with shared weights) constructed on two views (source/target sentence) generate pseudo labels as the other view (pseudo translations) for training the corresponding dual model. We summarize the whole standard training protocol in Algorithm 1 in appendix A.2.

Constrained decoding. Beyond the basics, we further introduce the concept of constrained decoding, where the model is constrained to decode tokens only in the target language regardless of the shared embedding parameterization. This gives us a simple definition of cross-lingual RecErr beyond the naive RecErr in Eq. 2. Details of the algorithm and the definition are shown in appendix A.3.
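A hedged pseudocode-style sketch of one iteration over the two losses (Eqs. 1-2); `model`, `noise`, and the greedy `translate` helper are our own stand-ins, not the authors' API.

```python
import torch

def dae_loss(model, x, y, noise):
    # Eq. 1: reconstruct each monolingual sentence from its noisy version C(.)
    return (model.nll(src=noise(x), tgt=x, direction="s->s")
            + model.nll(src=noise(y), tgt=y, direction="t->t"))

def bt_loss(model, x, y):
    # Eq. 2: back-translate with the current model (no gradient through
    # generation), then minimize the reconstruction error
    with torch.no_grad():
        y_hat = model.translate(x, direction="s->t")   # pseudo target for x
        x_hat = model.translate(y, direction="t->s")   # pseudo source for y
    return (model.nll(src=y_hat, tgt=x, direction="t->s")
            + model.nll(src=x_hat, tgt=y, direction="s->t"))
```

The standard protocol alternates updates on these two losses; the variants studied below drop one of them or replace both with the ELBO.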
" }, { "heading": "2.2 A MARGINAL MAXIMIZATION VIEW", "text": "The standard training of the UNMT model takes advantage of monolingual corpora Ds, Dt alone, which resembles the generative modeling setting where only unlabeled data is available (Ghahramani, 2004). Here we draw an analogy between standard UNMT training and implicitly maximizing the marginal likelihood of the monolingual data. Due to the duality of translation (He et al., 2016), the target sentence plays not only the role of the label, but also that of the input in the reverse translation direction. So in essence the standard UNMT training can be seen as maximizing the marginal log-likelihood of Ds and Dt simultaneously. However, since the marginals involve an infinite summation over a certain view (target/source), a lower bound is often optimized via Monte Carlo approximation (Kingma & Welling, 2014).

In the following derivation of the ELBO (Kingma & Welling, 2019), qφ(y|x) is the posterior distribution of y when taking y as the latent variable. Here we only derive the bound for x ∈ Ds:

$$\log p(x) \geq \mathbb{E}_{q_{\phi}(y|x)}[\log p(x\,|\,y)] - \mathrm{KL}\big(q_{\phi}(y\,|\,x)\,\|\,p(y)\big) \triangleq \mathrm{ELBO}. \qquad (3)$$

A detailed analogy between the standard UNMT objective and the ELBO objective is presented in Table 1. As can be seen, both objectives have the same reconstruction error terms but different regularization terms: for the ELBO, the model is optimized to stay close to the language model via the KL loss.

Worth noting, we are not the first to make a connection between marginal maximization and standard UNMT training. He et al. (2020) have already proposed an ELBO formulation for the unsupervised sequence transduction task. However, they focus on replacing the standard UNMT-training-style objective function with the ELBO objective, and propose several critical tricks such as Gumbel softmax and self-reconstruction to make the ELBO really work. Instead, we leverage the ELBO mainly as an analogy to the standard UNMT training objective; through a comparative study with other protocol variants, we can further understand why the standard objective and its variants work or not, even though they all tend to reach similar ELBO values. Details are in appendix A.4.
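A hedged sketch of the Monte Carlo ELBO estimate (k samples from qφ(y|x), as used later for the Sec. 3.2 learning curves); the model helpers here are our own stand-ins for the posterior sampler, the reverse translation model and the language-model prior.

```python
def elbo_estimate(model, x, k=2):
    """Monte Carlo estimate of Eq. 3 with k samples y ~ q(y|x).

    Assumed stand-in helpers: model.sample_translation (draws y and returns
    log q(y|x)), model.log_prob (log p(x|y)), model.lm_log_prob (log p(y)).
    """
    total = 0.0
    for _ in range(k):
        y, log_q = model.sample_translation(x, direction="s->t")
        rec = model.log_prob(src=y, tgt=x, direction="t->s")  # reconstruction term
        kl = log_q - model.lm_log_prob(y)                     # single-sample KL term
        total += rec - kl
    return total / k
```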
" }, { "heading": "2.3 AN INFORMATION-THEORETIC VIEW", "text": "Denote by Y′ = M(X) the random variable (r.v.) generated by the model M over X. If (Y′, X) gradually contains more and more bilingual knowledge, the model will be able to generate better translations, eventually leading to the success of UNMT training. Suppose c is a constant, predefined by users, which controls the satisfactory level of translation performance; we give the following definition to formalize the success of UNMT training from an information-theoretic viewpoint.

Definition 2.1. If I(Y′, X) > c after training, we say that UNMT training is successful; otherwise, we say that UNMT training fails. (Caveat: c is a conceptual quantity; we never instantiate its value.)

Suppose p(x, y′) is the true distribution of ⟨x, y′⟩, and p_{t→s}(x | y′) is an estimator of p(x | y′). We obtain the following two conditions for the success of UNMT training based on Definition 2.1.

Proposition 1. (Sufficient condition) If $\mathbb{E}_{p(x,y')} \log p_{t\to s}(x\,|\,y') \geq c - H(X)$, then UNMT training will be successful.

Proof. Based on Definition 2.1, the definition of mutual information (MI) and Jensen's inequality, we can derive the following inequality (Pimentel et al., 2020):

$$I(X, Y') = H(X) - H(X \mid Y') \geq H(X) - H_{p_{t\to s}}(X \mid Y') = H(X) + \mathbb{E}_{p(x,y')}\log p_{t\to s}(x \mid y') \geq c. \qquad (4)$$

Since the sufficient condition relies on the true distribution p(x, y′), which is unknown in practice, we sample (x, y′) from the empirical distribution of X and p_{s→t}(y′|x) as an approximation. The ideal sufficient condition is then reduced to a practical one: if $\sum_{x}\mathbb{E}_{p_{s\to t}(y'|x)}\log p_{t\to s}(x\,|\,y') \geq c - H(X)$, then UNMT training will be successful. Since MI is symmetric, we can obtain a similar formula for the s→t direction: $\mathbb{E}_{p(x',y)}\log p_{s\to t}(y\,|\,x') \geq c - H(Y)$. Together, these connect the success of training to the BT loss in Eq. 2: a lower BT loss is more likely to make UNMT training successful.

Furthermore, if we denote the encoder output as a r.v., Z = enc(X), we can obtain the following necessary condition:

Proposition 2. (Necessary condition) If UNMT training is successful, then I(X, Z) ≥ c.

Proof. Following the Data Processing Inequality (Cover & Thomas, 1991), the following inequality always holds:

$$I(X, Z) \geq I(X, Y') \geq c, \qquad (5)$$

with the Markov chain (or data processing order) $X \xrightarrow{\text{enc}} Z \xrightarrow{\text{dec}} Y'$.

In subsequent experiments, we follow Pimentel et al. (2020) and estimate I(X, Z) = H(X) − H(X|Z) by computing H(X) through a statistical 1-gram language model and H(X|Z) through probing (Conneau et al., 2018), respectively. For estimating I(X, Y′), we use the token-by-token point-wise mutual information (PMI) over some pseudo bitext as a surrogate for the sentence-by-sentence MI. The detailed estimation methods are presented in appendix A.5.
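A hedged sketch of the I(X, Z) estimate just described: H(X) from unigram statistics and H(X|Z) taken as the cross-entropy of a probing classifier; the probe itself is assumed to be trained elsewhere, and the helper names are ours.

```python
import math
from collections import Counter

def unigram_entropy(corpus):
    # H(X) from a statistical 1-gram language model over tokens (in nats)
    counts = Counter(tok for sent in corpus for tok in sent)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def mi_x_z(corpus, probe_ce):
    # I(X, Z) = H(X) - H(X|Z); probe_ce is the per-token cross-entropy (nats)
    # of a probe predicting tokens of X from the encoder output Z.
    return unigram_entropy(corpus) - probe_ce
```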
" }, { "heading": "3 EXPERIMENT", "text": "Table 2 lists the training protocols we compare:

Table 2: Training protocols and their losses.

Protocol | Loss used
standard | DAE (Eq. 1) + BT (Eq. 2)
dae-only | DAE (Eq. 1)
bt-only | BT (Eq. 2)
elbo | ELBO (Eq. 3)
elbo-dae | ELBO (Eq. 3) + DAE (Eq. 1)

In this section: i) we report the overall performance of the standard training protocol and its variants; ii) we visualize the learning curves of the two terms in the negative ELBO, with the conclusion that marginal maximization is only a necessary but not sufficient condition for successfully learning translation; iii) we explain both quantitatively and qualitatively, from an information-theoretic perspective, why dae-only and bt-only, which implicitly optimize the ELBO, cannot work, and highlight the importance of task-specific regularization such as the DAE loss; iv) we clarify the main and auxiliary relationship between the BT and DAE losses, and further investigate the critical regularization effects of the DAE loss." }, { "heading": "3.1 EXPERIMENTAL SETTINGS AND OVERALL PERFORMANCE", "text": "Dataset and Reproducibility. We adopt the publicly accessible WMT14 En-Fr and En-De datasets for our experiments. We strictly follow the data pre-processing pipeline and training instructions in the official XLM code repository.¹ The monolingual data for each language is set to about 5M sentences from the NewsCrawl monolingual collection.² Though we do not focus on improving over the state of the art, adding more monolingual data can indeed largely improve the final performance. For XLM initialization, we download the pretrained models from the XLM repo; for JointEmb initialization, we use fastText (Bojanowski et al., 2017) to learn word embeddings on the concatenated monolingual corpora of each language pair (about 10M sentences).

¹https://github.com/facebookresearch/XLM
²Please refer to the ‘get-data-nmt.sh’ script in the XLM repo for more details; we use the default setting.

Outline of Overall Performance. In Table 3, we report the overall performance of the standard training protocol and two of its variants, dae-only and bt-only, under the three initialization strategies (random, JointEmb, XLM); the performance of optimizing elbo is also shown with XLM initialization. Several observations are highlighted here. i) Only the standard and elbo-dae training protocols lead to decent performance; the latter requires using the DAE loss as well, and it is necessary to set the coefficient of the KL regularization term lower than 0.05. ii) Simply optimizing elbo leads to training failure. iii) Although dae-only seems to lead to very low performance through largely copying the input, if we continue training with the BT loss alone, we can surprisingly obtain performance similar to (or sometimes even better than) the standard training protocol (the +BT loss row), even though the initial performance is very low (about 2 BLEU points); this indicates that an initial model with decent performance is not necessary for making Co-Training successful. iv) For bt-only, if we continue with standard training, the final performance still struggles to reach that of standard; in fact, for weak initialization methods (random, JointEmb), the model can hardly even learn to copy, and stays a failure all the time. This may imply that bt-only is learning a poisoned inner representation. Indeed, according to the information inequality (5), bt-only makes I(X, Z) very low, i.e. the output of the encoder can hardly identify the input sentence (Brunner et al., 2020), so the data quality of Co-Training stays low all the time. We design an experiment to verify this in Sec. 3.4.2." }, { "heading": "3.2 VISUALIZING ELBO LEARNING CURVES", "text": "In this subsection, we visualize the learning curves of the ELBO, together with the two terms in the ELBO, i.e. the reconstruction error and the KL divergence. The actual ELBO values are negative, but learning commonly means minimizing a certain loss, so we visualize the negative ELBO here: the lower the value, the better the ELBO is being optimized. We first demonstrate the learning curves of standard training and describe some observed phenomena, then turn to the curves of the other, failing training protocols. Note that, since we use two samples (k = 2) in the Monte Carlo approximation, all terms are twice the value they should be.

Figure 1(a) demonstrates the ELBO learning curves of standard UNMT training under the three initialization strategies. The overall ELBO on the two monolingual datasets is the sum of the En⇒Fr and Fr⇒En directions. Across all initialization strategies, even though there is a clear mismatch between the standard UNMT objective and the ELBO objective in the regularization term, we can conclude that standard training implicitly minimizes the negative ELBO.

Figure 1(b) visualizes the reconstruction error term within the ELBO. It is self-evident that for standard training the reconstruction loss represents the cross-lingual translation ability of the model; this is why, in the original paper (Lample et al., 2018a), the reconstruction BLEU, which correlates well with the reconstruction error, is used for model selection in the absence of a bitext development set. Figure 1(c) shows the KL divergence term. Interestingly, for all initialization strategies the KL value first goes down and then goes up quite a bit until convergence. This learning phenomenon can be summarized as: the standard training protocol tends to make the model first fit the behavior of the language models of the two languages, and then fit the translation model in a later stage. The “going-up” reflects the large distance between the distribution of a language model p(y) and that of the translation model p(y|x), given the same target y.

Next, we visualize the ELBO curves of some failing variants of the training protocol. In Figure 2, we draw the ELBO learning curves for those training protocols presented in Table 2 that fail to train a well-performing model under XLM initialization. As shown in Figure 2(a), other than standard, before (30×50 =) 1500 updates all variants seem to reach similar ELBO values. As training goes on, dae-only and bt-only tend to have exactly the same ELBO value as the standard training protocol. However, the reconstruction error of bt-only is much higher than standard's, while the KL divergence of dae-only is much higher than that of standard as well; both a low reconstruction error and a low KL distance are necessary for successful training. For elbo-only, there is a quick posterior collapse at the beginning of training (before 1k updates; in Figure 2(c) the KL becomes very low), but then the KL slowly goes up, which might result from the instability of ELBO optimization with REINFORCE (He et al., 2020). This indicates that requiring the ELBO to be optimized as a whole is only a necessary but not a sufficient condition for successfully learning the target translation task.
" }, { "heading": "3.3 WHY CAN A MINIMIZED ELBO STILL LEAD TO TRAINING FAILURE?", "text": "An intuitive explanation is that unsupervised learning through marginal likelihood maximization is under-determined. There are many plausible tasks, such as language modeling, paraphrasing, simple sequence copying, and translation, that satisfy the inductive bias of the parameterization, and freely learning with objectives like the ELBO can make the model learn any of these plausible tasks if optimization finally converges; learning any one of them can induce a minimized ELBO. So which tasks have dae-only and bt-only finally learned, respectively? Table 7 in the appendix demonstrates the decoding behavior of the final models given certain source inputs. We can conclude that dae-only degenerates to the sequence copy task while bt-only degenerates to the language modeling task." }, { "heading": "3.3.1 ANALYSIS ON FAILURE OF DAE-ONLY", "text": "Learned copy. Figure 2(b) informs us that dae-only has an even lower reconstruction error than standard, which means that even though dae-only is never trained with the language embedding of the other language fed on the target side, it can still minimize the reconstruction loss when fed with the target language embedding. However, Table 7 demonstrates that dae-only has learned almost perfect sequence copying. Here we use the definition of cross-lingual RecErr to clarify this phenomenon: since unconstrained decoding might generate tokens from the source language, the naive RecErr cannot distinguish monolingual reconstruction (copying, paraphrasing) from cross-lingual reconstruction (translation).

In the previous subsection, we drew all the learning curves in Figure 2(b) based on the naive RecErr, without considering the above situation. In Figure 3, following the modified Definition A.2, we draw the cross-lingual RecErr for standard, dae-only and bt-only. As shown, for dae-only the cross-lingual RecErr curve is much higher than the naive RecErr, and it is the highest among all protocols, indicating that, essentially, the target translation task is learned only by achieving a low cross-lingual reconstruction loss. That is why later work such as Liu et al. (2020) directly uses constrained decoding for BT to accelerate training. We also compute the correlation between RecErr / cross-lingual RecErr and BLEU in Figure 5 (see the appendix); the latter correlates much better with final performance (−0.87 versus −0.37). Moreover, in Table 4, we calculate the mutual information in the pseudo-bitext generated by dae-only under constrained decoding, and surprisingly find that the MI is very high. However, since dae-only never exposes the model to such data, it never learns the cross-lingual alignment in the data, which indicates the important role of BT: it is BT that actually learns from such cross-lingual MI.
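A hedged sketch of constrained decoding as discussed above: at each step, logits of tokens belonging to the source language's (sub)vocabulary are masked out so that only target-language tokens can be generated; the vocabulary partition is an assumed input, and the helper name is ours.

```python
import torch

def constrain_to_target_language(logits, target_vocab_mask):
    """Mask one decoding step's logits to the target language.

    logits: (batch, vocab) next-token scores over the shared joint-BPE vocabulary
    target_vocab_mask: (vocab,) bool, True for tokens allowed in the target language
    """
    return logits.masked_fill(~target_vocab_mask, float("-inf"))
```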
These findings support our practical sufficient condition: a lower cross-lingual RecErr is more likely to make UNMT training successful." }, { "heading": "3.3.2 ANALYSIS ON FAILURE OF BT-ONLY", "text": "Degeneration to LM. As shown in Figure 2(c), the sentence-level KL distance between bt-only and the language model is very small, much lower than for standard and the other protocols. This indicates that the learned behavior of bt-only may resemble the behavior of a language model; that is, the UNMT model learned with bt-only largely ignores the potentially predictive information on the source side, and relies only on the decoder's LM prior.

MI of Pseudo-Bitext. Why such degeneration happens during the training process can be intuitively visualized by the mutual information contained in the pseudo-bitext generated with online iterative BT. Table 4 shows the mutual information of the final checkpoints obtained by standard, bt-only and dae-only. As can be seen, bt-only has the lowest mutual information between the source and target of the generated bitext; even if we instead use sampling for generation, the mutual information is still lower than that of random bitext (0.12 < 0.27). We also run an experiment where, at the BT phase, the model uses sampling instead of greedy decoding: this alleviates the degeneration a little, but learning still fails. We further draw the mutual information of the pseudo-bitext along training in Figure 4: bt-only stays low all the time, while standard has growing values.
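A hedged sketch of the token-level PMI surrogate behind Table 4 and Figure 4, estimating the MI of the pseudo-bitext from token co-occurrence counts; all helpers are ours, and marginals are approximated by unigram frequencies.

```python
import math
from collections import Counter

def pseudo_bitext_token_mi(bitext):
    """Token-level MI surrogate; bitext: list of (src_tokens, tgt_tokens) pairs."""
    src_c, tgt_c, joint_c = Counter(), Counter(), Counter()
    n_cooc = 0
    for src, tgt in bitext:
        for s in src:
            for t in tgt:                  # co-occurrence within each pseudo pair
                joint_c[(s, t)] += 1
                n_cooc += 1
        src_c.update(src)
        tgt_c.update(tgt)
    n_s, n_t = sum(src_c.values()), sum(tgt_c.values())
    mi = 0.0
    for (s, t), c in joint_c.items():      # sum p(s,t) * PMI(s,t)
        p_st = c / n_cooc
        p_s, p_t = src_c[s] / n_s, tgt_c[t] / n_t
        mi += p_st * math.log(p_st / (p_s * p_t))
    return mi
```

A bitext generated by a degenerate model (e.g. bt-only) yields values near or below those of randomly paired sentences, matching the 0.12 < 0.27 observation above.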
{ "heading": "3.4.2 DAE NOT JUST HELPS WITH LANGUAGE MODELING, BUT PRESERVES MI", "text": "In previous works, the DAE loss has been recognized as learning word order, i.e., the language modeling ability of the UNMT model (Lample et al., 2018a; Artetxe et al., 2018; Lample et al., 2018b; Yang et al., 2018; Kim et al., 2020). Here we would like to clarify that the most critical functionality of DAE is not just learning a language model, but at least preserving the MI between the encoder input and output, which matches the necessary condition we introduced in Sec. 2.3. As a result, it can prevent the model from degenerating during online BT. To this end, we first experiment with a different version of the DAE loss that ignores word order: when constructing the DAE loss from x, we first permute the order of x, denoted by Perm[x], and then optimize −log Ps→s(Perm[x]|C(Perm[x])) instead. For XLM initialization, the final performance only drops from 33.12 to 31.02. This indicates that DAE is not only learning word order, but something more critical, i.e., preserving the MI between the input X and the encoder output Z.\nWe verify this by estimating the MI between X and Z for encoders trained with standard, bt-only, elbo-only, and a baseline encoder initialized from XLM. We then train only a randomly initialized decoder over each encoder. In Table 6, each entry consists of two terms, the first an estimate of H(X) and the second an estimate of H(X|Z). As shown, without the regularization effect of DAE, bt-only and elbo-only have very large entropy of X|Z, even much larger than the XLM-initialized encoder. This explains the phenomenon previously highlighted in Table 3: after bt-only, if we continue training with standard, only the XLM initialization can recover some performance while the other two remain failed. The reason is that without DAE, the encoder representation is contaminated and no longer contains any useful information about X. Moreover, DAE does not only preserve the MI between the encoder input and output: in Table 4 and Figure 4, we have plotted the cross-lingual MI, i.e., I(X, Y′), contained in the pseudo-bitext generated by the dae-only trained model, and it seems that dae-only with XLM initialization has already learned an initial word-to-word translation ability. This can be further leveraged by online BT to learn towards real sentence-by-sentence translation." }, { "heading": "4 CONCLUSION", "text": "This paper conducts thorough comparative studies of the standard UNMT training protocol and its variants from two theoretical views: i) marginal likelihood maximization and ii) mutual information. We find that standard training implicitly optimizes the ELBO, as do the other, failed variants, indicating the importance of DAE as a regularization that helps the model learn the correct target task. A low BT loss (cross-lingual reconstruction loss) is a self-evident sufficient condition for successful training of UNMT, and high mutual information between X and Z = enc(X) is a necessary condition for preventing the model from degenerating. In addition, the DAE loss plays the role of preserving I(X, Z) as well as I(X, Y′); meanwhile, online BT is the main task that enables the model to actually learn from emerging cross-lingual signals unveiled by DAE in the pseudo-bitext." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 RELATED LITERATURE", "text": "" }, { "heading": "A.1.1 BETTER INITIALIZATION STRATEGIES FOR UNMT", "text": "In our introduction, we mentioned that if the encoder-decoder model has already achieved a certain decent initial performance, then using the BT loss alone can reach comparable or even better performance than XLM initialization.
MASS (Song et al., 2019) is the first work that achieves this: they pre-train a sequence-to-sequence model to predict a span of a sentence given the span-dropped sentence as input. They find in their experiments that dropping half of the sentence as a contiguous span achieves the best result. Recently, mBART (Liu et al., 2020) extends the idea of the denoising pre-training of Lewis et al. (2020a) to the multilingual setting. They pre-train a sequence-to-sequence model on monolingual corpora of 25 languages, and only use the BT loss to fine-tune the model for UNMT. In their paper, they claim that when relying only on BT, they use constrained decoding to obtain sentences in the other language at the initial epochs to overcome the copy issue. Conceptually, mBART actually redefines the role of the DAE loss as a pre-training objective, and this largely matches our findings in Table 3’s third row (+BT loss), where we use the DAE loss alone to train the model under a certain initialization and then continue to train it solely with BT; this matches the standard training protocol. However, when using DAE as the pre-training loss from random initialization, the model could only achieve 10+ BLEU, far less than 30+. Some reasons for this gap might be: 1) the noising function C of the DAE loss in standard training is slightly different from that in BART; 2) we have not used such large corpora for pre-training. We think that finding out the reason could make a significant contribution to the community on questions like: a) what is the data scale needed for pre-training to actually work? and b) what kind of self-supervision is more effective than others? Other works like Ren et al. (2019) directly conduct masked language model pre-training with explicitly constructed cross-lingual prediction signals, which are obtained from cross-lingual word translation techniques (Conneau et al., 2017). Although they do not apply their method to sequence-to-sequence pre-training, it could be applied directly to MASS." }, { "heading": "A.1.2 PRACTICAL ISSUES OF UNMT", "text": "Recently, several works have started to criticize the practicality of standard UNMT training. Kim et al. (2020) and Marchisio et al. (2020) both claim that domain mismatch between the two monolingual corpora and the dissimilarity of the language pair correlate well with performance degradation. Kim et al. (2020) investigate practical scenarios with three factors: i) linguistic distance; ii) availability of large-scale bitext; iii) availability of large-scale monolingual text. They instantiate the factors with 5 chosen language pairs and find that the standard UNMT training protocol only works for pairs with close linguistic distance and abundant monolingual text. Similar to Kim et al. (2020), Marchisio et al. (2020) also conduct an extensive empirical evaluation of unsupervised machine translation using dissimilar language pairs, domains and authentic low-resource languages. However, instead of using a pure NMT model, they also rely on a statistical machine translation model for warming up the NMT model, which is not the standard training protocol that we have investigated. In fact, although using a different training protocol, they report observations similar to those in Kim et al. (2020).\nA.1.3 IMPROVED TRAINING PROTOCOL\nTran et al. (2020) propose a novel cross-lingual retrieval method for finding comparable sentence pairs from the monolingual corpora of the two languages.
They use the multilingual pre-trained encoder of mBART (Liu et al., 2020) to obtain universal semantic representations of sentences (concretely, they average the token-level vectors from mBART into a single vector for nearest-neighbor search) for the retrieval of potentially aligned sentence pairs for iterative self-supervised training. This method resembles that of Wu et al. (2019), who use the UNMT model’s own encoder representation instead of a self-supervised pre-trained encoder, and it can be seen as its multilingual pre-training extension. All of the above training methods proposed specifically for UNMT, together with the recent paraphrase-based pre-training objective (Lewis et al., 2020b), can be thought of as implicit maximum likelihood training (Li & Malik, 2018), since the retrieval phase is a certain instantiation of k-nearest-neighbor search. Duan et al. (2020) also propose a new training method that constructs mixed-code pseudo-bitext. Their method proves the effectiveness of using an unsupervisedly induced bilingual lexicon as an ‘anchor’ to better prevent BT from learning from self-generated noisy bitext." }, { "heading": "A.2 THE STANDARD TRAINING PROTOCOL", "text": "Please refer to Algorithm 1 for a detailed description of the standard training protocol.\nAlgorithm 1: The Standard UNMT Training Protocol" }, { "heading": "Input:", "text": "A large-scale pre-training corpus Dpt; two monolingual fine-tuning corpora Ds and Dt; an untrained encoder-decoder Mθe,d with θe and θd, and specifically θee ⊂ θe, θed ⊂ θd as the embeddings.\nOutput: The estimated UNMT model Mθe,d.\n1: // initialization\n2: Learn a joint BPE code on Ds ∪ Dt;\n3: Apply BPE to the pre-training corpus Dpt;\n4: if pretrain = ’JointEmb’ then\n5: Apply fastText on Dpt to learn embeddings;\n6: Initialize θee and θed with the learned joint embeddings;\n7: else if pretrain = ’XLM’ then\n8: Train Mθe,d with self-supervised loss(es) on Dpt;\n9: Initialize θe,d with the learned parameters;\n10: else\n11: Initialize θe and θd randomly;\n12: end if\n13: // fine-tuning\n14: step = 0;\n15: Sample monolingual batches bs ∈ Ds, bt ∈ Dt;\n16: Construct the denoising language modeling loss according to Eq. 1;\n17: Update model parameters using ADAM by back-propagating Eq. 1;\n18: Sample monolingual batches bs ∈ Ds, bt ∈ Dt;\n19: Use Mθe,d to translate each batch to the other language side as b̂t and b̂s;\n20: Construct the back-translation loss according to Eq. 2 on the two paired bilingual batches (b̂t, bs), (b̂s, bt);\n21: Update model parameters using ADAM by back-propagating Eq. 2;\n22: step += 1;\n23: if step = MAX_STEP then\n24: End training;\n25: else\n26: Go to line 15;\n27: end if\n28: return Mθe,d;" }, { "heading": "A.3 CONSTRAINED DECODING AND CROSS-LINGUAL RECERR", "text": "Let us take the En-Fr translation task as an example. English and French share a large portion of their vocabulary, and the sharing is further enhanced by subword tokenization, i.e., BPE; the percentage of shared vocabulary between En and Fr is above 70% in our data setting. Thus, we leverage a simple but effective heuristic to divide the shared vocabulary into En-dominant and Fr-dominant sub-vocabularies. The idea is to use a token’s frequency ratio over the English and French monolingual corpora as an indicator of the language it most likely belongs to. Given a token t, we compute its frequency ratio as r = freqen(t)/freqfr(t). If the ratio r is larger than a certain threshold τ, we say t belongs to English since it is more frequently used in English than in French, and vice versa. In our experiments, we set τ = 2 to obtain a reasonable vocabulary division. Then, during decoding, we set the logits of tokens in the other language to −∞ to enforce the constraint.
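The following is a minimal PyTorch-style sketch of this heuristic (the function names are hypothetical, and the symmetric 1/τ threshold for Fr-dominant tokens is our assumption, as only the En side is stated explicitly above):

```python
import torch

def vocab_masks(freq_en, freq_fr, vocab_size, tau=2.0):
    """freq_en/freq_fr: dicts mapping token id -> corpus frequency."""
    en_dom = torch.zeros(vocab_size, dtype=torch.bool)
    fr_dom = torch.zeros(vocab_size, dtype=torch.bool)
    for t in range(vocab_size):
        # Add-1 smoothing so unseen tokens do not divide by zero.
        r = (freq_en.get(t, 0) + 1) / (freq_fr.get(t, 0) + 1)
        en_dom[t] = r > tau          # mostly used in English
        fr_dom[t] = r < 1.0 / tau    # mostly used in French (our assumption)
    return en_dom, fr_dom

def constrain_logits(logits, en_dom, fr_dom, target_lang):
    """Mask out tokens dominant in the source language before softmax."""
    banned = en_dom if target_lang == "fr" else fr_dom
    return logits.masked_fill(banned, float("-inf"))
```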
Definition A.1. (RecErr) Given an UNMT model M, its reconstruction error on a monolingual text D is defined as: −(1/|D|) Σ_{x∈D} (1/|x|) log P(x|M[x]). Here M[x] denotes the model’s output sequence obtained through greedy decoding or sampling.\nDefinition A.2. (Cross-Lingual RecErr) Given an UNMT model M, its cross-lingual reconstruction error on a monolingual text D is defined as: −(1/|D|) Σ_{x∈D} (1/|x|) log P(x|M^c[x]). Here M^c[x] denotes the result of constrained decoding for predicting a sequence of tokens in the other language." }, { "heading": "A.4 COMPUTE ELBO", "text": "Since in the following experiments we are going to visualize the learning curve of the ELBO along the training life cycle, we should be able to empirically compute the two terms of the ELBO, which both involve an expectation over qφ(y|x). Instead of using greedy decoding to obtain samples, we use sampling (k=2) to compute the reconstruction error and the KL divergence, and both terms are computed via the Monte Carlo method. For the KL term, pθ(y) is a language model trained on Dt. Instead of using a token-level ELBO like that used in He et al. (2016), we do not normalize the ELBO values by the number of tokens in y, and use the sentence-level ELBO value for visualization.\nA.5 ESTIMATING H(X), H(X|Z) AND I(X,Y′)\nFor estimating I(X, Z), which is the MI between a discrete and a continuous random variable, we use the equality I(X, Z) = H(X) − H(X|Z), and then estimate H(X) and H(X|Z) respectively. For the entropy of X, we use a 1-gram language model on the same corpus used for training the UNMT model, and use the average token-level entropy as a surrogate; for the entropy of X|Z, we train an extra reconstruction model over a fixed UNMT encoder to make sure we are using the representation from a certain UNMT model, and then use the reconstruction model (a decoder)’s token-level entropy as a surrogate. This is motivated by Gao & Chaudhari (2020), who leverage the reconstruction error as a measure of how much information has been discarded in the hidden representation, and is also similar to the recent probing methodology (Conneau et al., 2018).\nFor estimating I(X, Y′), we use the token-by-token point-wise mutual information (PMI) over some pseudo-bitext as a surrogate for the sentence-by-sentence MI. Note that we use different estimators (continuous vs. discrete) for computing I(X, Y′) and I(X, Z); moreover, according to Pimentel et al. (2020), the model-based estimate of H(X) − H(X|Z) is a lower bound, so it is hard to compare I(X, Y′) against I(X, Z). However, values within one estimator are comparable.\nHere we give a detailed introduction of how we estimate the above statistics. For estimating the first two terms, we follow the formula introduced in Pimentel et al. (2020) to estimate the entropy:\nH_{qθ}(X; C) ≈ −(1/N) Σ_{i=1}^{N} log qθ(x_i|c_i). (6)\nIf C = ∅ is null, we use Eq. 6 to estimate H(X); if C = Z, we use it to estimate H(X|Z).\nEstimate H(X) We estimate the token-level entropy instead of the sentence-level one; that is, X denotes a token random variable. The qθ we use is a 1-gram language model on the concatenated En and Fr corpora.\nEstimate H(X|Z) The qθ we use to estimate H(X|Z) is a Transformer decoder over the fixed encoder that provides the hidden representations Z. We first train this decoder on the training corpus, and then use it to provide the log-likelihood of every token x.
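The following is a minimal sketch of the Monte Carlo entropy estimate in Eq. (6); the unigram model here is only an illustrative stand-in for H(X), and for H(X|Z) the log-probabilities would instead come from the decoder trained over the frozen encoder as described above.

```python
import math
from collections import Counter

def entropy_estimate(token_logprobs):
    """Eq. (6): average negative log-likelihood over held-out tokens."""
    lps = list(token_logprobs)
    return -sum(lps) / len(lps)

def unigram_logprobs(train_tokens, held_out_tokens):
    """log q(x) under an add-1-smoothed 1-gram LM, used for H(X)."""
    counts = Counter(train_tokens)
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 for unseen tokens under add-1 smoothing
    return [math.log((counts[t] + 1) / (total + vocab)) for t in held_out_tokens]

# I(X, Z) is then estimated as
#   entropy_estimate(unigram_logprobs(...)) - entropy_estimate(decoder_logprobs)
# where decoder_logprobs are token log-likelihoods from the fixed-encoder decoder.
```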
Here, to make H(X) and H(X|Z) comparable, the estimates of H(X) and H(X|Z) are calculated on the same held-out split of the training set, about 50k sentences.\nEstimate I(X,Y′) Given a large amount of pseudo-bitext, we use the point-wise mutual information as a token-level estimate of the actual mutual information in paired sentences, that is:\nPMI(X, Y′) = (1/N) · (1/(l_x · l_y)) Σ_{x_i, y_j} log [P(x_i, y_j) / (P(x_i) P(y_j))], (7)\nwhere N is the number of sentence pairs and l_x, l_y are the sentence lengths." } ]
2020
null
SP:df623838a4d93f4a6c518f23424d5d2ce2cbf704
[ "In this paper, the authors propose a general approach for image completion with large-scale missing regions. The key is to combine image-conditional and modulated unconditional generative architectures via co-modulation. The presented approach has demonstrated strong performance in the image painting with large-scale missing pixels and some image-to-image translation tasks. A new metric P-IDS/U-IDS is proposed to evaluate the perceptual fidelity of inpainted images." ]
Numerous task-specific variants of conditional generative adversarial networks have been developed for image completion. Yet, a serious limitation remains that all existing algorithms tend to fail when handling large-scale missing regions. To overcome this challenge, we propose a generic new approach that bridges the gap between image-conditional and recent modulated unconditional generative architectures via co-modulation of both conditional and stochastic style representations. Also, due to the lack of good quantitative metrics for image completion, we propose the new Paired/Unpaired Inception Discriminative Score (P-IDS/U-IDS), which robustly measures the perceptual fidelity of inpainted images compared to real images via linear separability in a feature space. Experiments demonstrate superior performance in terms of both quality and diversity over state-of-the-art methods in free-form image completion and easy generalization to image-to-image translation. Code is available at https://github.com/zsyzzsoft/co-mod-gan.
[ { "affiliations": [], "name": "Shengyu Zhao" }, { "affiliations": [], "name": "Jonathan Cui" }, { "affiliations": [], "name": "Yilun Sheng" }, { "affiliations": [], "name": "Yue Dong" }, { "affiliations": [], "name": "Yan Xu" } ]
[ { "authors": [ "Fazil Altinel", "Mete Ozay", "Takayuki Okatani" ], "title": "Deep structured energy-based image inpainting", "venue": "24th International Conference on Pattern Recognition (ICPR),", "year": 2018 }, { "authors": [ "Coloma Ballester", "Marcelo Bertalmio", "Vicent Caselles", "Guillermo Sapiro", "Joan Verdera" ], "title": "Filling-in by joint interpolation of vector fields and gray levels", "venue": "IEEE transactions on image processing,", "year": 2001 }, { "authors": [ "Connelly Barnes", "Eli Shechtman", "Adam Finkelstein", "Dan B Goldman" ], "title": "Patchmatch: A randomized correspondence algorithm for structural image editing", "venue": "In ACM Transactions on Graphics (ToG),", "year": 2009 }, { "authors": [ "Mikołaj Bińkowski", "Dougal J Sutherland", "Michael Arbel", "Arthur Gretton" ], "title": "Demystifying mmd gans", "venue": "arXiv preprint arXiv:1801.01401,", "year": 2018 }, { "authors": [ "Yochai Blau", "Tomer Michaeli" ], "title": "The perception-distortion tradeoff", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Léon Bottou", "Chih-Jen Lin" ], "title": "Support vector machine solvers", "venue": "Large scale kernel machines,", "year": 2007 }, { "authors": [ "Holger Caesar", "Jasper Uijlings", "Vittorio Ferrari" ], "title": "Coco-stuff: Thing and stuff classes in context", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Weiwei Cai", "Zhanguo Wei" ], "title": "Diversity-generated image inpainting with style extraction", "venue": "arXiv preprint arXiv:1912.01834,", "year": 2019 }, { "authors": [ "Ting Chen", "Mario Lucic", "Neil Houlsby", "Sylvain Gelly" ], "title": "On self modulation for generative adversarial networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yunjey Choi", "Minje Choi", "Munyoung Kim", "Jung-Woo Ha", "Sunghun Kim", "Jaegul Choo" ], "title": "Stargan: Unified generative adversarial networks for multi-domain image-to-image translation", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Soheil Darabi", "Eli Shechtman", "Connelly Barnes", "Dan B Goldman", "Pradeep Sen" ], "title": "Image melding: Combining inconsistent images using patch-based synthesis", "venue": "ACM Trans. 
Graph.,", "year": 2012 }, { "authors": [ "Terrance DeVries", "Adriana Romero", "Luis Pineda", "Graham W Taylor", "Michal Drozdzal" ], "title": "On the evaluation of conditional gans", "venue": null, "year": 1907 }, { "authors": [ "Ding Ding", "Sundaresh Ram", "Jeffrey J Rodríguez" ], "title": "Image inpainting using nonlocal texture matching and nonlinear filtering", "venue": "IEEE Transactions on Image Processing,", "year": 2018 }, { "authors": [ "Vincent Dumoulin", "Jonathon Shlens", "Manjunath Kudlur" ], "title": "A learned representation for artistic style", "venue": "arXiv preprint arXiv:1610.07629,", "year": 2016 }, { "authors": [ "Alexei A Efros", "William T Freeman" ], "title": "Image quilting for texture synthesis and transfer", "venue": "In Proceedings of the 28th annual conference on Computer graphics and interactive techniques,", "year": 2001 }, { "authors": [ "Alexei A Efros", "Thomas K Leung" ], "title": "Texture synthesis by non-parametric sampling", "venue": "In Proceedings of the seventh IEEE international conference on computer vision,", "year": 1999 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Swaminathan Gurumurthy", "Ravi Kiran Sarvadevabhatla", "R Venkatesh Babu" ], "title": "Deligan: Generative adversarial networks for diverse and limited data", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Martin Heusel", "Hubert Ramsauer", "Thomas Unterthiner", "Bernhard Nessler", "Sepp Hochreiter" ], "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Xun Huang", "Serge Belongie" ], "title": "Arbitrary style transfer in real-time with adaptive instance normalization", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Xun Huang", "Ming-Yu Liu", "Serge Belongie", "Jan Kautz" ], "title": "Multimodal unsupervised image-to-image translation", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Satoshi Iizuka", "Edgar Simo-Serra", "Hiroshi Ishikawa" ], "title": "Globally and locally consistent image completion", "venue": "ACM Transactions on Graphics (ToG),", "year": 2017 }, { "authors": [ "Phillip Isola", "Jun-Yan Zhu", "Tinghui Zhou", "Alexei A Efros" ], "title": "Image-to-image translation with conditional adversarial networks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Jianmin Jiang", "Hossam M Kasem", "Kwok-Wai Hung" ], "title": "Robust image completion via deep feature transformations", "venue": "IEEE Access,", "year": 2019 }, { "authors": [ "Youngjoo Jo", "Jongyoul Park" ], "title": "Sc-fegan: Face editing generative adversarial network with user’s sketch and color", "venue": "arXiv preprint arXiv:1902.06838,", "year": 2019 }, { "authors": [ "Tero Karras", "Samuli Laine", "Timo Aila" ], "title": "A style-based generator architecture for generative adversarial networks", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Tero Karras", 
"Samuli Laine", "Miika Aittala", "Janne Hellsten", "Jaakko Lehtinen", "Timo Aila" ], "title": "Analyzing and improving the image quality of stylegan", "venue": "arXiv preprint arXiv:1912.04958,", "year": 2019 }, { "authors": [ "Junho Kim", "Minjae Kim", "Hyeonwoo Kang", "Kwanghee Lee" ], "title": "U-gat-it: unsupervised generative attentional networks with adaptive layer-instance normalization for image-to-image translation", "venue": null, "year": 1907 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Rolf Köhler", "Christian Schuler", "Bernhard Schölkopf", "Stefan Harmeling" ], "title": "Mask-specific inpainting with deep neural networks", "venue": "In German Conference on Pattern Recognition,", "year": 2014 }, { "authors": [ "Tuomas Kynkäänniemi", "Tero Karras", "Samuli Laine", "Jaakko Lehtinen", "Timo Aila" ], "title": "Improved precision and recall metric for assessing generative models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Avisek Lahiri", "Arnav Jain", "Prabir Kumar Biswas", "Pabitra Mitra" ], "title": "Improving consistency and correctness of sequence inpainting using semantically guided generative adversarial network", "venue": "arXiv preprint arXiv:1711.06106,", "year": 2017 }, { "authors": [ "Avisek Lahiri", "Arnav Kumar Jain", "Sanskar Agrawal", "Pabitra Mitra", "Prabir Kumar Biswas" ], "title": "Prior guided gan based semantic inpainting", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Justin Lazarow", "Long Jin", "Zhuowen Tu" ], "title": "Introspective neural networks for generative modeling", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 }, { "authors": [ "Jingyuan Li", "Ning Wang", "Lefei Zhang", "Bo Du", "Dacheng Tao" ], "title": "Recurrent feature reasoning for image inpainting", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Guilin Liu", "Fitsum A Reda", "Kevin J Shih", "Ting-Chun Wang", "Andrew Tao", "Bryan Catanzaro" ], "title": "Image inpainting for irregular holes using partial convolutions", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Hongyu Liu", "Bin Jiang", "Yi Xiao", "Chao Yang" ], "title": "Coherent semantic attention for image inpainting", "venue": "arXiv preprint arXiv:1905.12384,", "year": 2019 }, { "authors": [ "Hongyu Liu", "Bin Jiang", "Yibing Song", "Wei Huang", "Chao Yang" ], "title": "Rethinking image inpainting via a mutual encoder-decoder with feature equalizations", "venue": "arXiv preprint arXiv:2007.06929,", "year": 2020 }, { "authors": [ "Ming-Yu Liu", "Thomas Breuel", "Jan Kautz" ], "title": "Unsupervised image-to-image translation networks", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Ming-Yu Liu", "Xun Huang", "Arun Mallya", "Tero Karras", "Timo Aila", "Jaakko Lehtinen", "Jan Kautz" ], "title": "Few-shot unsupervised image-to-image translation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "David Lopez-Paz", "Maxime Oquab" ], "title": "Revisiting classifier two-sample tests", "venue": "arXiv preprint arXiv:1610.06545,", "year": 2016 }, { "authors": [ 
"Yuqing Ma", "Xianglong Liu", "Shihao Bai", "Lei Wang", "Aishan Liu", "Dacheng Tao", "Edwin Hancock" ], "title": "Region-wise generative adversarial imageinpainting for large missing areas", "venue": null, "year": 1909 }, { "authors": [ "Qi Mao", "Hsin-Ying Lee", "Hung-Yu Tseng", "Siwei Ma", "Ming-Hsuan Yang" ], "title": "Mode seeking generative adversarial networks for diverse image synthesis", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Lars Mescheder", "Andreas Geiger", "Sebastian Nowozin" ], "title": "Which training methods for gans do actually converge", "venue": "arXiv preprint arXiv:1801.04406,", "year": 2018 }, { "authors": [ "Kamyar Nazeri", "Eric Ng", "Tony Joseph", "Faisal Qureshi", "Mehran Ebrahimi" ], "title": "Edgeconnect: Generative image inpainting with adversarial edge learning", "venue": null, "year": 1901 }, { "authors": [ "Augustus Odena", "Jacob Buckman", "Catherine Olsson", "Tom B Brown", "Christopher Olah", "Colin Raffel", "Ian Goodfellow" ], "title": "Is generator conditioning causally related to gan performance", "venue": "arXiv preprint arXiv:1802.08768,", "year": 2018 }, { "authors": [ "Taesung Park", "Ming-Yu Liu", "Ting-Chun Wang", "Jun-Yan Zhu" ], "title": "Semantic image synthesis with spatiallyadaptive normalization", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Deepak Pathak", "Philipp Krahenbuhl", "Jeff Donahue", "Trevor Darrell", "Alexei A Efros" ], "title": "Context encoders: Feature learning by inpainting", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Pengda Qin", "Weiran Xu", "William Yang Wang" ], "title": "Dsgan: Generative adversarial training for distant supervision relation extraction", "venue": "arXiv preprint arXiv:1805.09929,", "year": 2018 }, { "authors": [ "Jimmy SJ Ren", "Li Xu", "Qiong Yan", "Wenxiu Sun" ], "title": "Shepard convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Yurui Ren", "Xiaoming Yu", "Ruonan Zhang", "Thomas H Li", "Shan Liu", "Ge Li" ], "title": "Structureflow: Image inpainting via structure-aware appearance flow", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Mark Sabini", "Gili Rusak" ], "title": "Painting outside the box: Image outpainting with gans", "venue": "arXiv preprint arXiv:1808.08483,", "year": 2018 }, { "authors": [ "Min-cheol Sagong", "Yong-goo Shin", "Seung-wook Kim", "Seung Park", "Sung-jea Ko" ], "title": "Pepsi: Fast image inpainting with parallel decoding network", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Mehdi SM Sajjadi", "Olivier Bachem", "Mario Lucic", "Olivier Bousquet", "Sylvain Gelly" ], "title": "Assessing generative models via precision and recall", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Tim Salimans", "Durk P Kingma" ], "title": "Weight normalization: A simple reparameterization to accelerate training of deep neural networks", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Tim Salimans", "Ian Goodfellow", "Wojciech Zaremba", "Vicki Cheung", "Alec Radford", "Xi Chen" ], "title": "Improved techniques for training gans", 
"venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Jake Snell", "Karl Ridgeway", "Renjie Liao", "Brett D Roads", "Michael C Mozer", "Richard S Zemel" ], "title": "Learning to generate images with perceptual similarity metrics", "venue": "IEEE International Conference on Image Processing (ICIP),", "year": 2017 }, { "authors": [ "Yi Wang", "Xin Tao", "Xiaojuan Qi", "Xiaoyong Shen", "Jiaya Jia" ], "title": "Image inpainting via generative multi-column convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Yi Wang", "Xin Tao", "Xiaoyong Shen", "Jiaya Jia" ], "title": "Wide-context semantic image extrapolation", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ze Wang", "Xiuyuan Cheng", "Guillermo Sapiro", "Qiang Qiu" ], "title": "Stochastic conditional generative networks with basis decomposition", "venue": "arXiv preprint arXiv:1909.11286,", "year": 2019 }, { "authors": [ "Sitao Xiang", "Hao Li" ], "title": "On the effects of batch and weight normalization in generative adversarial networks", "venue": "arXiv preprint arXiv:1704.03971,", "year": 2017 }, { "authors": [ "Chaohao Xie", "Shaohui Liu", "Chao Li", "Ming-Ming Cheng", "Wangmeng Zuo", "Xiao Liu", "Shilei Wen", "Errui Ding" ], "title": "Image inpainting with learnable bidirectional attention maps", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Junyuan Xie", "Linli Xu", "Enhong Chen" ], "title": "Image denoising and inpainting with deep neural networks. In Advances in neural information processing", "venue": null, "year": 2012 }, { "authors": [ "Wei Xiong", "Jiahui Yu", "Zhe Lin", "Jimei Yang", "Xin Lu", "Connelly Barnes", "Jiebo Luo" ], "title": "Foreground-aware image inpainting", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2019 }, { "authors": [ "Zhaoyi Yan", "Xiaoming Li", "Mu Li", "Wangmeng Zuo", "Shiguang Shan" ], "title": "Shift-net: Image inpainting via deep feature rearrangement", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Chao Yang", "Xin Lu", "Zhe Lin", "Eli Shechtman", "Oliver Wang", "Hao Li" ], "title": "High-resolution image inpainting using multi-scale neural patch synthesis", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Yang Yang", "Xiaojie Guo", "Jiayi Ma", "Lin Ma", "Haibin Ling" ], "title": "Lafin: Generative landmark guided face inpainting", "venue": "arXiv preprint arXiv:1911.11394,", "year": 2019 }, { "authors": [ "Zongxin Yang", "Jian Dong", "Ping Liu", "Yi Yang", "Shuicheng Yan" ], "title": "Very long natural scenery image prediction by outpainting", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Zili Yi", "Hao Zhang", "Ping Tan", "Minglun Gong" ], "title": "Dualgan: Unsupervised dual learning for image-to-image translation", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Jiahui Yu", "Zhe Lin", "Jimei Yang", "Xiaohui Shen", "Xin Lu", "Thomas S Huang" ], "title": "Generative image inpainting with contextual attention", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", 
"year": 2018 }, { "authors": [ "Jiahui Yu", "Zhe Lin", "Jimei Yang", "Xiaohui Shen", "Xin Lu", "Thomas S Huang" ], "title": "Free-form image inpainting with gated convolution", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Tao Yu", "Zongyu Guo", "Xin Jin", "Shilin Wu", "Zhibo Chen", "Weiping Li", "Zhizheng Zhang", "Sen Liu" ], "title": "Region Normalization for Image Inpainting", "venue": "arXiv e-prints, art", "year": 2019 }, { "authors": [ "Yanhong Zeng", "Jianlong Fu", "Hongyang Chao", "Baining Guo" ], "title": "Learning pyramid-context encoder network for high-quality image inpainting", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Lei Zhao", "Qihang Mo", "Sihuan Lin", "Zhizhong Wang", "Zhiwen Zuo", "Haibo Chen", "Wei Xing", "Dongming Lu" ], "title": "Uctgan: Diverse image inpainting based on unsupervised cross-space translation", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Shengyu Zhao", "Zhijian Liu", "Ji Lin", "Jun-Yan Zhu", "Song Han" ], "title": "Differentiable augmentation for data-efficient gan training", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Chuanxia Zheng", "Tat-Jen Cham", "Jianfei Cai" ], "title": "Pluralistic image completion", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Bolei Zhou", "Agata Lapedriza", "Aditya Khosla", "Aude Oliva", "Antonio Torralba" ], "title": "Places: A 10 million image database for scene recognition", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2017 }, { "authors": [ "Tong Zhou", "Changxing Ding", "Shaowen Lin", "Xinchao Wang", "Dacheng Tao" ], "title": "Learning oracle attention for high-fidelity face completion", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Jun-Yan Zhu", "Taesung Park", "Phillip Isola", "Alexei A Efros" ], "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Jun-Yan Zhu", "Richard Zhang", "Deepak Pathak", "Trevor Darrell", "Alexei A Efros", "Oliver Wang", "Eli Shechtman" ], "title": "Toward multimodal image-to-image translation", "venue": "In Advances in neural information processing systems,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Generative adversarial networks (GANs) have received a great amount of attention in the past few years, during which a fundamental problem emerges from the divergence of development between image-conditional and unconditional GANs. Image-conditional GANs have a wide variety of computer vision applications (Isola et al., 2017). As vanilla U-Net-like generators cannot achieve promising performance especially in free-form image completion (Liu et al., 2018; Yu et al., 2019), a multiplicity of task-specific approaches have been proposed to specialize GAN frameworks, mostly focused on hand-engineered multi-stage architectures, specialized operations, or intermediate structures like edges or contours (Altinel et al., 2018; Ding et al., 2018; Iizuka et al., 2017; Jiang et al., 2019; Lahiri et al., 2020; Li et al., 2020; Liu et al., 2018; 2019a; 2020; Nazeri et al., 2019; Ren et al., 2019; Wang et al., 2018; Xie et al., 2019; Xiong et al., 2019; Yan et al., 2018; Yu et al., 2018; 2019; Yu et al., 2019; Zeng et al., 2019; Zhao et al., 2020a; Zhou et al., 2020). These branches of works have made significant progress in reducing the generated artifacts like color discrepancy and blurriness. However, a serious challenge remains that all existing algorithms tend to fail when handling large-scale missing regions. This is mainly due to their lack of the underlying generative capability — one can never learn to complete a large proportion of an object so long as it does not have the capability of generating a completely new one. We argue that the key to overcoming this challenge is to bridge the gap between image-conditional and unconditional generative architectures.\n∗Corresponding author\nRecently, the performance of unconditional GANs has been fundamentally advanced, chiefly owing to the success of modulation approaches (Chen et al., 2019; Karras et al., 2019a;b) with learned style representations produced by a latent vector. Researchers also extend the application of modulation approaches to image-conditional GANs with the style representations fully determined by an input image (Park et al., 2019; Huang et al., 2018; Liu et al., 2019b); however, the absence of stochasticity makes them hardly generalizable to the settings where only limited conditional information is available. This limitation is fatal especially in large scale image completion. Although some multi-modal unpaired image-to-image translation methods propose to encode the style from another reference image (Huang et al., 2018; Liu et al., 2019b), this unreasonably assumes that the style representations are entirely independent of the conditional input and hence compromises the consistency.\nTherefore, we propose co-modulated generative adversarial networks, a generic approach that leverages the generative capability from unconditional modulated architectures, embedding both conditional and stochastic style representations via co-modulation. Co-modulated GANs are thus able to generate diverse and consistent contents and generalize well to not only small-scale inpainting but also extremely large-scale image completion, supporting both regular and irregular masks even with only little conditional information available. See Fig. 1 for qualitative examples. 
Due to the effectiveness of co-modulation, we do not encounter any of the problems suffered in the image completion literature (Liu et al., 2018; Yu et al., 2019), successfully bridging the long-existing divergence.\nAnother major barrier in the image completion literature is the lack of good quantitative metrics. The vast majority of works in this literature seek to improve their performance in terms of similarity-based metrics that heavily prefer blurry results, e.g., L1, L2, PSNR, and SSIM, and many of these works state that there are as yet no good quantitative metrics for image completion (Liu et al., 2018; Yu et al., 2018; 2019). The only gold standard in this literature is the user study, which conducts a real-vs-fake test by giving a pair of images to subjects (i.e., the users). However, the user study is subject to large variance and is costly, and therefore lacks reproducibility. Inspired by the user study, we propose the new Paired/Unpaired Inception Discriminative Score (P-IDS/U-IDS). Besides its intuitiveness and scalability, we demonstrate that P-IDS/U-IDS is robust to sampling size, effective at capturing subtle differences, and further correlates well with human preferences.\nOur contributions are summarized as follows:\n• We propose co-modulated GANs, a generic approach that bridges the gap between image-conditional and recent modulated unconditional generative architectures.\n• We propose the new P-IDS/U-IDS for robust assessment of the perceptual fidelity of GANs.\n• Experiments demonstrate superior performance in terms of both quality and diversity in free-form image completion and easy generalization to image-to-image translation." }, { "heading": "2 RELATED WORK", "text": "Image-Conditional GANs. Image-conditional GANs can be applied to a variety of image-to-image translation tasks (Isola et al., 2017). The unpaired setting is also investigated when paired training data is not available (Choi et al., 2018; Huang et al., 2018; Kim et al., 2019; Lazarow et al., 2017; Liu et al., 2017; Yi et al., 2017; Zhao et al., 2020b; Zhu et al., 2017a). Recent works exploit normalization layers with learned style representations embedded from the conditional input or another reference image to enhance the output fidelity (Huang et al., 2018; Kim et al., 2019; Liu et al., 2019b; Park et al., 2019). They can be regarded as a set of conditional modulation approaches, but still lack stochastic generative capability and hence poorly generalize when limited conditional information is available. Isola et al. (2017) initially find that the generator tends to ignore the noise input although they try to feed it, in contrast to unconditional or class-conditional GANs. A branch of works aims to enforce the intra-conditioning diversity using VAE-based latent sampling strategies (Zhu et al., 2017b) or imposing distance-based loss terms (Huang et al., 2018; Mao et al., 2019; Qin et al., 2018). Wang et al. (2019b) also propose to decompose the convolution kernels into a stochastic basis. However, the enforcement of diversity conversely results in the deterioration of image quality. Our co-modulation approach not only learns the stochasticity inherently but also makes the trade-off easily controllable.\nImage Completion. Image completion, also referred to as image inpainting when incapable of completing large-scale missing regions, has received a significant amount of attention. It is a constrained image-to-image translation problem but exposes more serious challenges.
Traditional methods (Ballester et al., 2001; Barnes et al., 2009; Darabi et al., 2012; Efros & Freeman, 2001; Efros & Leung, 1999) utilize only low-level features and fail to generate semantically consistent contents. Later, Köhler et al. (2014), Ren et al. (2015), and Xie et al. (2012) adopt deep neural networks for image completion; Pathak et al. (2016) first exploit conditional GANs. Numerous follow-up works focus on the semantic context and texture, edges and contours, or hand-engineered architectures (Altinel et al., 2018; Ding et al., 2018; Iizuka et al., 2017; Jiang et al., 2019; Jo & Park, 2019; Lahiri et al., 2017; Liu et al., 2019a; Nazeri et al., 2019; Ren et al., 2019; Sagong et al., 2019; Wang et al., 2018; Xie et al., 2019; Xiong et al., 2019; Yan et al., 2018; Yang et al., 2017; 2019a; Yu et al., 2018; Yu et al., 2019; Zeng et al., 2019; Lahiri et al., 2020; Zhao et al., 2020a; Li et al., 2020; Zhou et al., 2020), among which Liu et al. (2018) and Yu et al. (2019) introduce partial convolution and gated convolution, respectively, to address free-form image completion. The lack of stochasticity is also observed in image completion (Cai & Wei, 2019; Ma et al., 2019; Zheng et al., 2019). Other works address the so-called outpainting subtasks (Sabini & Rusak, 2018; Wang et al., 2019a; Yang et al., 2019b). To our knowledge, none of these methods produce promising results in the presence of free-form large-scale missing regions.\nEvaluation Metrics. Great research interest has been drawn to the evaluation of GANs (DeVries et al., 2019; Gurumurthy et al., 2017; Sajjadi et al., 2018; Snell et al., 2017; Xiang & Li, 2017). Inception Score (IS) (Salimans et al., 2016), and some other metrics like FCN-Score (Isola et al., 2017), are specialized to the pre-trained task and thus cannot generalize. While FID (Heusel et al., 2017) is generally acceptable, few promising metrics for image completion exist.
Previous works heavily rely on similarity-based metrics such as L1, L2, PSNR, and SSIM, which fail to capture stochastic regions and are ill-fitted for GANs. Our proposed metric is also related to the classifier-based tests (Blau & Michaeli, 2018; Lopez-Paz & Oquab, 2016). However, previous classifier-based metrics require separate sets for training and testing the classifier, making them sensitive to the underlying generalizability of the trained classifier. We formulate the discriminability as a simple scalable metric for both the paired and unpaired versions without relying on the generalizability.\nFigure 2: Illustration from modulation to co-modulation: (a) unconditional modulated generator; (b) vanilla image-conditional generator; (c) conditional modulated generator; and (d) co-modulated generator. y, z represent the conditional input and the latent vector respectively; E, D, M represent the conditional encoder, the generative decoder, and the mapping network, respectively." }, { "heading": "3 CO-MODULATED GENERATIVE ADVERSARIAL NETWORKS", "text": "Image-conditional GANs address the problem of translating an image-form conditional input y to an output image x (Isola et al., 2017). We assume the setting where paired correspondence between input conditions and output images is available in the training data. The generator takes as input an image y along with the latent vector z and produces the output x; the discriminator takes as input a pair (x, y) and seeks to distinguish fake generated pairs from the real distribution. Image completion can be regarded as a constrained image-conditional generation problem where known pixels are restricted to be unchanged. In contrast to the extensive literature on specialized image completion frameworks, we introduce a generic approach that bridges between image-conditional GANs and the recent success of unconditional modulated architectures." }, { "heading": "3.1 REVISITING MODULATION APPROACHES", "text": "Modulation approaches emerge from the style transfer literature (Dumoulin et al., 2016; Huang & Belongie, 2017) and are well exploited in state-of-the-art unconditional or class-conditional GANs. They generally apply scalar denormalization factors (e.g., bias and scaling) to the normalized feature maps, while the learned denormalization factors are conditioned on side information such as a class label (Odena et al., 2018) or the latent vector (Chen et al., 2019).
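As a concrete illustration of such a modulation block, the following is a minimal PyTorch sketch (hypothetical, not from the paper's codebase) in which normalized feature maps are denormalized by a per-channel scale and bias produced from the side information:

```python
import torch
import torch.nn as nn

class ModulationBlock(nn.Module):
    """Normalize feature maps, then denormalize with learned scalar factors."""
    def __init__(self, channels, cond_dim):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.affine = nn.Linear(cond_dim, 2 * channels)  # predicts (scale, bias)

    def forward(self, h, cond):
        # cond: [B, cond_dim] side information (e.g., class embedding or latent).
        scale, bias = self.affine(cond).chunk(2, dim=1)
        scale = scale[:, :, None, None]                  # broadcast to [B, C, 1, 1]
        bias = bias[:, :, None, None]
        return self.norm(h) * (1 + scale) + bias
```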
Typical normalization layers used in the modulation blocks include batch normalization (Chen et al., 2019; Odena et al., 2018), adaptive instance normalization (Huang & Belongie, 2017; Karras et al., 2019a), and weight demodulation (Karras et al., 2019b), which is related to weight normalization (Salimans & Kingma, 2016).\nHere we take StyleGAN2 (Karras et al., 2019b) as an example to show how intermediate activations are modulated as a function of the latent vector. As illustrated in Fig. 2(a), the decoder D simply originates from a learned constant, while the latent vector z is passed through a multi-layer fully connected mapping network M. The mapped latent vector linearly generates a style vector s for each subsequent modulation via a learned affine transformation A (i.e., a dense layer without activation):\ns = A(M(z)). (1)\nConsider a vanilla convolutional layer with kernel weights w_{ijk}, where i, j, k enumerate the input channels, the output channels, and the spatial footprint of the convolution, respectively. Given the style vector s, the input feature maps are first channel-wise multiplied by s, passed through the convolution, and finally channel-wise multiplied by s′, where s′_j = √(1 / Σ_{i,k} (s_i w_{ijk})²) acts as the weight demodulation step that normalizes the feature maps to statistically unit variance.\nWhile modulation approaches have significantly improved the performance of unconditional or class-conditional generators, we wonder whether they could similarly work for image-conditional generators. An intuitive extension to the vanilla image-conditional generator (Fig. 2(b)) would be the conditional modulated generator (see Fig. 2(c)), where the modulation is conditioned on the learned flattened features from the image encoder E. Similar structures also exist in well-conditioned image-to-image translation tasks (Huang et al., 2018; Liu et al., 2019b; Park et al., 2019). In this case, the style vector can be rewritten as\ns = A(E(y)). (2)\nHowever, a significant drawback of the conditional modulation approach would be the lack of stochastic generative capability. This problem emerges more apparently with respect to large-scale image completion. In most cases, the outputs should be weakly conditioned, i.e., they are not sufficiently determined by the conditional input. As a result, such a model not only cannot produce diverse outputs but also generalizes poorly to settings where limited conditional information is available." }, { "heading": "3.2 CO-MODULATION", "text": "To overcome this challenge, we propose co-modulation, a generic new approach that easily adapts the generative capability of unconditional modulated generators to image-conditional generators. We rewrite the co-modulated style vector as (see Fig. 2(d)):\ns = A(E(y), M(z)), (3)\ni.e., a joint affine transformation conditioned on both style representations. Generally, the style vector could be a non-linear learned mapping of both inputs, but here we simply assume that they can be linearly correlated in the style space and already observe considerable improvements. The linear correlation facilitates the inherent stochasticity: as we will see in §5.1, co-modulated GANs can easily trade off between quality and intra-conditioning diversity without imposing any external losses; moreover, co-modulation contributes not only to stochasticity but also to visual quality, especially at large-scale missing regions.
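The following is a minimal PyTorch sketch of a co-modulated convolution combining Eq. (3) with the weight demodulation of Sec. 3.1; it is only an illustrative sketch with hypothetical names, not the released implementation (available at the repository cited in the abstract).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoModulatedConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel, enc_dim, latent_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel, kernel))
        self.affine = nn.Linear(enc_dim + latent_dim, in_ch)  # A in Eq. (3)

    def forward(self, x, e_y, m_z):
        # Style vector from both conditional and stochastic branches, Eq. (3).
        s = self.affine(torch.cat([e_y, m_z], dim=1))          # [B, in_ch]
        w = self.weight[None] * s[:, None, :, None, None]      # modulate kernels
        demod = torch.rsqrt((w ** 2).sum(dim=[2, 3, 4]) + 1e-8)
        w = w * demod[:, :, None, None, None]                  # demodulate (s')
        b, c, h, wd = x.shape
        x = x.reshape(1, b * c, h, wd)                         # batch -> groups
        w = w.reshape(-1, *w.shape[2:])                        # [B*out_ch, in_ch, k, k]
        out = F.conv2d(x, w, padding=self.weight.shape[-1] // 2, groups=b)
        return out.reshape(b, -1, *out.shape[2:])
```

Here e_y would be the flattened encoder features E(y) and m_z the mapped latent vector M(z); the grouped-convolution reshape applies a different modulated kernel to each sample in the batch.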
Co-modulated GANs should be trained with regular discriminator losses alone, without any direct guidance such as the L1 term (Isola et al., 2017), to fully exploit their stochastic generative capability." }, { "heading": "4 PAIRED/UNPAIRED INCEPTION DISCRIMINATIVE SCORE", "text": "Our proposed Paired/Unpaired Inception Discriminative Score (P-IDS/U-IDS) aims to reliably measure the linear separability in a pre-trained feature space, inspired by the “human discriminators” of the user study. Let I(·) be the pre-trained Inception v3 model that maps an input image to output features of 2048 dimensions. We sample the same number of real images and their correspondingly generated fake images (drawn from the joint distribution $(x, x') \in X$, where x corresponds to the real image and x' corresponds to the fake image), from which the features are extracted and then fitted by a linear SVM. The linear SVM reflects the linear separability in the feature space and is known to be numerically stable in training. Let f(·) be the (linear) decision function of the SVM, where f(I(x)) > 0 if and only if x is considered real. The P-IDS is given by\n$\text{P-IDS}(X) = \Pr_{(x, x') \in X}\{f(I(x')) > f(I(x))\}$, (4)\ni.e., the probability that a fake sample is considered more realistic than the corresponding real sample.\nWe also provide an unpaired alternative that generalizes to settings where no paired information is available. We similarly sample the same number of real images (drawn from distribution X) and fake images (drawn from distribution X') and fit the linear SVM f(·). We directly calculate the misclassification rate instead:\n$\text{U-IDS}(X, X') = \frac{1}{2}\Pr_{x \in X}\{f(I(x)) < 0\} + \frac{1}{2}\Pr_{x' \in X'}\{f(I(x')) > 0\}$. (5)\nIn addition to being highly intuitive, P-IDS/U-IDS have three major advantages over FID that we would like to emphasize: robustness to sampling size, effectiveness at capturing subtle differences, and good correlation with human preferences.\nRobustness to Sampling Size. We test the response of P-IDS, U-IDS, FID, and KID to four manipulation strategies: masking the image (to zeros) with a random square of width w = 1, 2, 4, 8, respectively. Images are sampled from the FFHQ dataset (Karras et al., 2019a) at 512×512 resolution. The reference distribution for calculating FID is measured using 50k samples. As plotted in Fig. 3, both P-IDS and U-IDS converge quickly within a small number of samples and successfully distinguish the manipulation strategies; FID fails to converge within 10k samples, while the highest convergence line (1.13 when w = 8, measured using 50k samples) is even below the lowest FID at 10k samples (1.63 when w = 1). Although KID addresses the “biased” problem of FID (Bińkowski et al., 2018), we find that its estimates are still subject to large variance, like FID, especially when the two distributions are close. KID requires a fixed block size (Bińkowski et al., 2018) to achieve unbiased estimates; even with a block size of 1000 that minimizes its variance, the estimates are still hardly distinguishable, especially between w = 1 and w = 2, as plotted in Fig. 4.\nEffectiveness of Capturing Subtle Differences. Capturing subtle differences is particularly important in image completion, since the difference between inpainted and real images only exists in a partial region. We construct subtle image manipulation strategies by masking n random pixels, which are then nearest-point interpolated from the neighboring pixels, using the same environment as the last experiment.
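Before turning to the results in Fig. 5, here is a minimal sketch of computing both scores (Eqs. 4 and 5) from pre-extracted features; the feature extraction is assumed to be done separately by a pre-trained Inception v3, and the SVM settings are our assumptions rather than the exact configuration used here.

```python
import numpy as np
from sklearn.svm import LinearSVC

def p_ids_u_ids(real_feats, fake_feats):
    """Fit a linear SVM on (e.g., Inception-v3) features and compute
    P-IDS (Eq. 4) and U-IDS (Eq. 5). For P-IDS, real_feats[i] and
    fake_feats[i] are assumed to form a paired (real, completed) example."""
    X = np.concatenate([real_feats, fake_feats])
    y = np.concatenate([np.ones(len(real_feats)), np.zeros(len(fake_feats))])
    svm = LinearSVC().fit(X, y)              # f(.) > 0  <=>  considered real
    f_real = svm.decision_function(real_feats)
    f_fake = svm.decision_function(fake_feats)
    p_ids = np.mean(f_fake > f_real)          # fake judged more real than pair
    u_ids = 0.5 * np.mean(f_real < 0) + 0.5 * np.mean(f_fake > 0)
    return p_ids, u_ids
```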
As plotted in Fig. 5, P-IDS successfully distinguishes the number of manipulated pixels, while FID and KID fail to respond within 29 noisy pixels. We note that U-IDS is still more robust in this case, since the central tendency of FID and KID is significantly dominated by their variance.\nCorrelation to Human Preferences. P-IDS imitates the “human discriminators” and is expected to correlate well with human preferences. While this already seems clear in Fig. 6, we quantitatively measure the correlation using these data points (20 in total): the correlation coefficient between P-IDS and the human preference rate is 0.870, significantly better than the −0.765 of FID. Table 3 further provides a case analysis where our P-IDS/U-IDS coincides with clear human preferences, as opposed to FID.\nComputational Analysis. The time complexity of training a linear SVM is between $O(n^2)$ and $O(n^3)$ (Bottou & Lin, 2007), compared to $O(nd^2 + d^3)$ for FID (Heusel et al., 2017) and $O(n^2 d)$ for KID (Bińkowski et al., 2018), where n is the sampling size and d is the dimension of the feature space. In practice, P-IDS/U-IDS incurs mild computational overhead in addition to the feature extraction process. For example, with 10k samples, extracting the Inception features on an NVIDIA P100 GPU takes 221s, and fitting the SVM (which only uses the CPU) takes an extra 88s; with 50k samples, the feature extraction process and the SVM take 1080s and 886s, respectively." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 IMAGE COMPLETION", "text": "We conduct image completion experiments at 512×512 resolution on the FFHQ dataset (Karras et al., 2019a) and the Places2 dataset (Zhou et al., 2017). Implementation details are provided in Appendix A. FFHQ is augmented with horizontal flips; Places2 is center-cropped or padded. The sampling strategy of free-form masks for training and evaluation is specified in the appendix. We hold out 10k of the 70k images of the FFHQ dataset for validation. Places2 has its own validation set of 36.5k images and a large training set of 8M images. We train our model for 25M images on FFHQ and 50M images on Places2. Our model is compared against RFR (Li et al., 2020) and DeepFillv2 (Yu et al., 2019), the state-of-the-art algorithms for free-form image completion, using both their official pre-trained models and our retrained version of DeepFillv2 (using the official code, our datasets, and our sampling strategy) at 1M iterations (i.e., 32M images). We sample the output once per validation image for all the metrics (P-IDS, U-IDS, and FID). The overall results are summarized in Table 1. Fig. 6 plots the user study results, P-IDS, and FID of DeepFillv2 (retrained) and ours w.r.t. different masked ratios. See Fig. 7 for a qualitative comparison. All these results demonstrate our superior performance. More qualitative examples, numerical user study results, complete tables w.r.t. the masked ratio, and details of the user study are provided in the appendix.\nThe Inherent Stochasticity. Co-modulated GANs are inherently stochastic, i.e., they naturally learn to utilize the stochastic style representations without imposing any external losses, and they are able to produce diverse results even when both the input image and the input mask are fixed. Furthermore, by tuning the truncation ψ (Karras et al., 2019b; Kynkäänniemi et al., 2019), which explicitly amplifies the stochastic branch by ψ times, co-modulated GANs can easily trade off between quality and diversity (see Fig. 8).\nAblation Study.
Co-modulation promotes not only stochasticity but also image quality. We compare vanilla, conditional modulated, and co-modulated GANs as illustrated in Figs. 2(b) to 2(d). Experiments are run on the FFHQ dataset with the same setting as §5.1. While the vanilla version fails completely, our co-modulation approach dominates the conditional modulated version, especially when the masked ratio becomes large (see Fig. 10). We refer the readers to the appendix for the complete results (Table 6). Qualitatively, we often observe unusual artifacts from the conditional modulated version in large missing regions (see Fig. 9), which we hypothesize is due to its lack of stochastic generative capability." }, { "heading": "5.2 IMAGE-TO-IMAGE TRANSLATION", "text": "Edges to Photos. Co-modulated GANs are generic image-conditional models that can be easily adapted to image-to-image translation tasks. We follow the common setting (DeVries et al., 2019; Wang et al., 2019b) on the edges-to-photos datasets (Isola et al., 2017) at 256×256 resolution, where FID samples once per validation image (200 in total) and the training set is used as the reference distribution; LPIPS measures the intra-conditioning diversity, for which we sample 2k pairs. As summarized in Table 2, our approach easily achieves superior fidelity (FID) over state-of-the-art methods (Huang et al., 2018; Isola et al., 2017; Wang et al., 2019b; Zhu et al., 2017b), despite the fact that MUNIT assumes the different, unpaired setting (Huang et al., 2018), and also achieves superior diversity on the Edges2Handbags dataset by simply tuning the truncation ψ, as well as in the trade-off view (see Fig. 11). Our model does not learn to produce diverse outputs on the Edges2Shoes dataset\nFigure 8: The inherent stochasticity. Co-modulated GANs can easily trade off between quality and diversity by tuning the truncation ψ.\ndespite its high fidelity, which we hypothesize is due to the strong correspondence learned between the input edge map and the color information extracted from the limited training set.\nLabels to Photos (COCO-Stuff). We further experiment on the COCO-Stuff dataset (Caesar et al., 2018) at 256×256 resolution, following the experimental setting of SPADE (Park et al., 2019). The real images are resized to a short edge of 256 and then randomly cropped. The input label map has 182 classes; an embedding layer is used before feeding it into the network. We sample the output once per validation image (5k in total) for all the evaluation metrics. Table 3 shows that our method matches the FID of SPADE but significantly outperforms its P-IDS and U-IDS, without any direct supervision like the perceptual loss used in SPADE. We further conduct a user study between SPADE and ours. The user study indicates consistent human preference for ours over SPADE, in accordance with our proposed P-IDS/U-IDS. Qualitative results and the user study details are provided in the appendix."
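As a usage note on the quality-diversity knob above: one plausible reading of the truncation ψ, consistent with "amplifies the stochastic branch by ψ times", is to scale the latent-branch representation before the joint affine. The sketch below (reusing the illustrative `style_vector` helper from §3.1) is a hedged interpretation, not the paper's exact truncation implementation.

```python
# Hedged sketch: psi = 1 favors fidelity; larger psi (e.g., 3 or 5) favors diversity.
def truncated_co_modulated_style(A, enc_y, map_z, psi=1.0):
    return style_vector(A, enc_y=enc_y, map_z=psi * map_z)
```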
}, { "heading": "6 CONCLUSION", "text": "We propose the co-modulated generative adversarial networks, a generic approach that bridges the gap between conditional and unconditional modulated generative architectures, significantly improves free-form large scale image completion, and easily generalizes to image-to-image translation. We also propose the intuitive new metric — P-IDS/U-IDS — for robustly assessing the perceptual fidelity for GANs. We expect our approach to be a fundamental solution to the image completion literature and contribute as reliable quantitative benchmarks." }, { "heading": "ACKNOWLEDGEMENTS", "text": "This work is supported by the National Science and Technology Major Project of the Ministry of Science and Technology in China under Grant 2017YFC0110903, Microsoft Research under the eHealth program, the National Natural Science Foundation in China under Grant 81771910, the\nFundamental Research Funds for the Central Universities of China under Grant SKLSDE-2017ZX-08 from the State Key Laboratory of Software Development Environment in Beihang University in China, the 111 Project in China under Grant B13003." }, { "heading": "APPENDIX A IMPLEMENTATION DETAILS", "text": "We mostly borrow the network details and hyperparameters from StyleGAN2 (Karras et al., 2019b), including the number of convolutional layers (2) at each level, the number of channels (64 at 512×512 resolution, doubled at each coarser level with a maximum of 512), architecture of the mapping network M (8-layer MLP), layer-wise noise injection, style mixing regularization (with a probability of 0.5 instead), non-saturating logistic loss (Goodfellow et al., 2014) with R1 regularization (Mescheder et al., 2018) of γ = 10, and the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 0.002.\nOur conditional encoder E imitates a similar architecture as the discriminator but without the crosslevel residual connections. Skip residual connections are used between each level of E and D. To produce the conditional style representation, the final 4×4 feature map of E is flattened and passed through a fully connected layer of 1024 channels with a dropout rate of 0.5. The dropout layer keeps enabled during testing since we observe that it partially correlates to the inherent stochasticity.\nOur model has 109M parameters in total. All the experiments are run on 8 cards of NVIDIA Tesla V100 GPUs. The batch size is 4 per GPU, 32 in total. The training length is 25M images unless specified, which takes about 1 week at 512×512 resolution." }, { "heading": "APPENDIX B FREE-FORM MASK SAMPLING", "text": "We sample free-form masks for training by simulating random brush strokes and rectangles. The algorithm of generating brush strokes is borrowed from DeepFillv2 (Yu et al., 2019), while the width of the brush is uniformly sampled within [12, 48], the number of vertices is uniformly sampled within [4, 18], and the number of strokes is uniformly sampled within [0, 20]. We then generate multiple rectangles with uniformly random widths, heights, and locations, while the number of up to full-size rectangles is uniformly sampled within [0, 5] and the number of up to half-size rectangles is uniformly sampled within [0, 10]. See Fig. 12 for the sampled free-form masks. During evaluation, we use the same sampling strategy of free-form masks as used in training if no masked ratio is specified; otherwise, we repeatedly apply the same algorithm until the specified range is satisfied." 
}, { "heading": "APPENDIX C USER STUDY", "text": "For the user study of image completion, we randomly sample the same number (256) of validation images, free-form masks (using the algorithm above), and the corresponding outputs from each\nPublished as a conference paper at ICLR 2021\nOurs ( )\nFormulas: Instructed by Due on Nov, 4 2019 Yue Dong YaoClass 70 2017011407 E(x) M(z) A E M x y\nz\nD\nA\ns\ns′\nψ = 1 ψ = 3 ψ = 5\nFormulas: Instructed by Due on Nov, 4 2019 Yue Dong YaoClass 70 2017011407 E(x) M(z) A E M\nx\ny\nz\nD\nA\ns\ns′\nψ = 1 ψ = 3 ψ = 5Ours ( )\nFigure 13: Samples on Edges2Handbags. Masked Original Ours\nFigure 14: Failure cases.\nmethod, for each dataset and each range of masked ratio. The user is given a pair of fake and the corresponding real images in each round and has 5 seconds to decide which one is fake or “don’t know”; overtime rounds are also treated as “don’t know”. No user will see a real for more than once. To compute the user preference rate of fakes over reals, we regard a correct answer as 0, an incorrect answer as 1, and a “don’t know” as 0.5. We have received totally 14336 rounds of answers from 28 participants. See Table 4 for the numerical results.\nWe adopt a similar protocol for the user study on COCO-Stuff. In each round, the user is given a pair of generated images of SPADE (Park et al., 2019) and ours using the same validation input. The user has 5 seconds to decide which one is preferred or “don’t know”; overtime rounds are also treated as “don’t know”. We regard a “don’t know” as 0.5. We have received 720 rounds of answers from 12 participants, among which 319 prefer ours, 189 prefer SPADE, and 212 “don’t know”." }, { "heading": "APPENDIX D MORE QUANTITATIVE RESULTS", "text": "Table 5 presents the quantitative results for image completion across methods and masked ratios. Table 6 presents the quantitative results of the ablation experiment. Experiments demonstrate our superior performance at all masked ratios." }, { "heading": "APPENDIX E MORE QUALITATIVE RESULTS", "text": "Fig. 13 presents our generated samples for image-to-image translation on the Edges2Handbags dataset under both ψ = 1 (which achieves superior fidelity) and ψ = 3 (which achieves superior diversity). See Fig. 15 for a qualitative comparison for image-to-image translation on the COCO-Stuff dataset. Extensive examples for free-form image completion are presented in Figs. 18-23." }, { "heading": "APPENDIX F DISCUSSION", "text": "Large scale image completion is a challenging task that requires not only generative but also recognition capability. Although our model generates promising results in most of the cases, it sometimes fails to recognize the semantic information in the surrounding areas hence produces strange artifacts (see Fig. 14), especially in the challenging Places2 dataset that contains millions of scenes under various style and quality. The readers are encouraged to discover more examples from our interactive demo." } ]
2021
null
SP:9e72893f6675196c62be20b31e686364f690479a
[ "This paper studies a family of Markov Decision Process (MDP) models with a low-dimensional unobserved state, called the block MDP. The authors assume that system dynamics of the underlying MDP is sufficiently summarized by a parameter $\\theta$. This learning setting could be seen as a combination of Block MDP and the Hidden Parameter MDP; hence the name HiP-BMDP." ]
Many control tasks exhibit similar dynamics that can be modeled as having common latent structure. Hidden-Parameter Markov Decision Processes (HiP-MDPs) explicitly model this structure to improve sample efficiency in multi-task settings. However, this setting makes strong assumptions on the observability of the state that limit its application in real-world scenarios with rich observation spaces. In this work, we leverage ideas of common structure from the HiP-MDP setting, and extend it to enable robust state abstractions inspired by Block MDPs. We derive instantiations of this new framework for both multi-task reinforcement learning (MTRL) and meta-reinforcement learning (Meta-RL) settings. Further, we provide transfer and generalization bounds based on task and state similarity, along with sample complexity bounds that depend on the aggregate number of samples across tasks, rather than the number of tasks, a significant improvement over prior work that uses the same environment assumptions. To further demonstrate the efficacy of the proposed method, we empirically compare and show improvement over multi-task and meta-reinforcement learning baselines. The code for the proposed PCVAE is available at:https://github.com/xguo7/PCVAE.
[ { "affiliations": [], "name": "Amy Zhang" }, { "affiliations": [], "name": "Shagun Sodhani" }, { "affiliations": [], "name": "Khimya Khetarpal" }, { "affiliations": [], "name": "Joelle Pineau" } ]
[ { "authors": [ "David Abel", "Dilip Arumugam", "Lucas Lehnert", "Michael Littman" ], "title": "State abstractions for lifelong reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Ron Amit", "Ron Meir" ], "title": "Meta-learning by adjusting priors based on extended PAC-Bayes theory", "venue": "In International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Haitham Bou Ammar", "Eric Eaton", "Paul Ruvolo", "Matthew Taylor" ], "title": "Online multi-task learning for policy gradient methods", "venue": "In International conference on machine learning,", "year": 2014 }, { "authors": [ "Richard Bellman" ], "title": "Dynamic Programming", "venue": null, "year": 1957 }, { "authors": [ "Dimitri P. Bertsekas", "John N. Tsitsiklis" ], "title": "Neuro-Dynamic Programming", "venue": "Athena Scientific, 1st edition,", "year": 1996 }, { "authors": [ "Emma Brunskill", "Lihong Li" ], "title": "Sample complexity of multi-task reinforcement learning. Uncertainty in Artificial Intelligence ", "venue": "Proceedings of the 29th Conference, UAI 2013,", "year": 2013 }, { "authors": [ "Daniele Calandriello", "Alessandro Lazaric", "Marcello Restelli" ], "title": "Sparse multi-task reinforcement learning", "venue": "Advances in neural information processing systems", "year": 2014 }, { "authors": [ "Pablo Samuel Castro", "Doina Precup" ], "title": "Using bisimulation for policy transfer in mdps", "venue": "In TwentyFourth AAAI Conference on Artificial Intelligence,", "year": 2010 }, { "authors": [ "Zhao Chen", "Vijay Badrinarayanan", "Chen-Yu Lee", "Andrew Rabinovich" ], "title": "Gradnorm: Gradient normalization for adaptive loss balancing in deep multitask networks", "venue": "Proceedings of the 35th International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Carlo D’Eramo", "Davide Tateo", "Andrea Bonarini", "Marcello Restelli", "Jan Peters" ], "title": "Sharing knowledge in multi-task deep reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Simon S. Du", "Akshay Krishnamurthy", "Nan Jiang", "Alekh Agarwal", "Miroslav Dudı́k", "John Langford" ], "title": "Provably efficient RL with rich observations via latent state decoding", "venue": null, "year": 1901 }, { "authors": [ "Norm Ferns", "Prakash Panangaden", "Doina Precup" ], "title": "Metrics for finite markov decision processes", "venue": "In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence,", "year": 2004 }, { "authors": [ "Norm Ferns", "Prakash Panangaden", "Doina Precup" ], "title": "Bisimulation metrics for continuous markov decision processes", "venue": "SIAM J. 
Comput.,", "year": 2011 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Carles Gelada", "Saurabh Kumar", "Jacob Buckman", "Ofir Nachum", "Marc G Bellemare" ], "title": "Deepmdp: Learning continuous latent space models for representation learning", "venue": null, "year": 1906 }, { "authors": [ "Robert Givan", "Thomas Dean", "Matthew Greig" ], "title": "Equivalence notions and model minimization in markov decision processes", "venue": "Artificial Intelligence,", "year": 2003 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "Nan Jiang", "Alex Kulesza", "Satinder Singh" ], "title": "Abstraction selection in model-based reinforcement learning", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "Leslie Pack Kaelbling", "Michael L. Littman", "Anthony R. Cassandra" ], "title": "Planning and acting in partially observable stochastic domains", "venue": "Artif. Intell.,", "year": 1998 }, { "authors": [ "Nicholas C. Landolfi", "Garrett Thomas", "Tengyu Ma" ], "title": "A model-based approach for sample-efficient multi-task reinforcement learning, 2019", "venue": null, "year": 2019 }, { "authors": [ "Lihong Li", "Thomas J. Walsh", "Michael L. Littman" ], "title": "Towards a unified theory of state abstraction for mdps", "venue": "In Proceedings of the Ninth International Symposium on Artificial Intelligence and Mathematics,", "year": 2006 }, { "authors": [ "Yuping Luo", "Huazhe Xu", "Yuanzhi Li", "Yuandong Tian", "Trevor Darrell", "Tengyu Ma" ], "title": "Algorithmic framework for model-based deep reinforcement learning with theoretical guarantees", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Andreas Maurer", "Massimiliano Pontil", "Bernardino Romera-Paredes" ], "title": "The benefit of multitask representation learning", "venue": "J. Mach. Learn. Res.,", "year": 2016 }, { "authors": [ "Rémi Munos" ], "title": "Error bounds for approximate value iteration", "venue": "In Proceedings of the 20th National Conference on Artificial Intelligence - Volume 2,", "year": 2005 }, { "authors": [ "Alfred Müller" ], "title": "Integral probability metrics and their generating classes of functions", "venue": "Advances in Applied Probability,", "year": 1997 }, { "authors": [ "Emilio Parisotto", "Jimmy Ba", "Ruslan Salakhutdinov" ], "title": "Actor-mimic: Deep multitask and transfer reinforcement learning", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Christian F. 
Perez", "Felipe Petroski Such", "Theofanis Karaletsos" ], "title": "Generalized Hidden Parameter MDPs Transferable Model-based RL in a Handful of Trials", "venue": null, "year": 2020 }, { "authors": [ "Martin L Puterman" ], "title": "Markov decision processes: Discrete stochastic dynamic programming", "venue": "Journal of the Operational Research Society,", "year": 1995 }, { "authors": [ "Kate Rakelly", "Aurick Zhou", "Chelsea Finn", "Sergey Levine", "Deirdre Quillen" ], "title": "Efficient off-policy meta-reinforcement learning via probabilistic context variables", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Jonas Rothfuss", "Dennis Lee", "Ignasi Clavera", "Tamim Asfour", "Pieter Abbeel" ], "title": "ProMP: Proximal meta-policy search", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yanchao Sun", "Xiangyu Yin", "Furong Huang" ], "title": "Temple: Learning template of transitions for sample efficient multi-task rl, 2020", "venue": null, "year": 2020 }, { "authors": [ "Yuval Tassa", "Yotam Doron", "Alistair Muldal", "Tom Erez", "Yazhe Li", "Diego de Las Casas", "David Budden", "Abbas Abdolmaleki", "Josh Merel", "Andrew Lefrancq", "Timothy Lillicrap", "Martin Riedmiller" ], "title": "DeepMind control suite", "venue": null, "year": 2018 }, { "authors": [ "Yee Teh", "Victor Bapst", "Wojciech M. Czarnecki", "John Quan", "James Kirkpatrick", "Raia Hadsell", "Nicolas Heess", "Razvan Pascanu" ], "title": "Distral: Robust multitask reinforcement learning", "venue": "Advances in neural information processing systems", "year": 2017 }, { "authors": [ "Andrea Tirinzoni", "Riccardo Poiani", "Marcello Restelli" ], "title": "Sequential transfer in reinforcement learning with a generative model", "venue": "In Proceedings of the 35th International Conference on Machine Learning,", "year": 2020 }, { "authors": [ "Denis Yarats", "Amy Zhang", "Ilya Kostrikov", "Brandon Amos", "Joelle Pineau", "Rob Fergus" ], "title": "Improving sample efficiency in model-free reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Mingzhang Yin", "George Tucker", "Mingyuan Zhou", "Sergey Levine", "Chelsea Finn" ], "title": "Meta-learning without memorization", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Tianhe Yu", "Saurabh Kumar", "Abhishek Gupta", "Sergey Levine", "Karol Hausman", "Chelsea Finn" ], "title": "Gradient surgery for multi-task learning, 2020", "venue": null, "year": 2020 }, { "authors": [ "Amy Zhang", "Zachary C. 
Lipton", "Luis Pineda", "Kamyar Azizzadenesheli", "Anima Anandkumar", "Laurent Itti", "Joelle Pineau", "Tommaso Furlanello" ], "title": "Learning causal state representations of partially observable environments", "venue": "The Multi-disciplinary Conference on Reinforcement Learning and Decision Making,", "year": 2019 }, { "authors": [ "Amy Zhang", "Clare Lyle", "Shagun Sodhani", "Angelos Filos", "Marta Kwiatkowska", "Joelle Pineau", "Yarin Gal", "Doina Precup" ], "title": "Invariant causal prediction for block mdps", "venue": "In International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Amy Zhang", "Rowan McAllister", "Roberto Calandra", "Yarin Gal", "Sergey Levine" ], "title": "Learning invariant representations for reinforcement learning without reconstruction", "venue": "arXiv preprint arXiv:2006.10742,", "year": 2020 }, { "authors": [ "Luo" ], "title": "First, we let Zk denote the discounted sum of rewards if the first k steps are", "venue": null, "year": 2019 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nA key open challenge in AI research that remains is how to train agents that can learn behaviors that generalize across tasks and environments. When there is common structure underlying the tasks, we have seen that multi-task reinforcement learning (MTRL), where the agent learns a set of tasks simultaneously, has definite advantages (in terms of robustness and sample efficiency) over the singletask setting, where the agent independently learns each task. There are two ways in which learning multiple tasks can accelerate learning: the agent can learn a common representation of observations, and the agent can learn a common way to behave. Prior work in MTRL has also\nleveraged the idea by sharing representations across tasks (D’Eramo et al., 2020) or providing pertask sample complexity results that show improved sample efficiency from transfer (Brunskill & Li, 2013). However, explicit exploitation of the shared structure across tasks via a unified dynamics has been lacking. Prior works that make use of shared representations use a naive unification approach that posits all tasks lie in a shared domain (Figure 1, left). On the other hand, in the single-task setting, research on state abstractions has a much richer history, with several works on improved generalization through the aggregation of behaviorally similar states (Ferns et al., 2004; Li et al., 2006; Luo et al., 2019; Zhang et al., 2020b).\nIn this work, we propose to leverage rich state abstraction models from the single-task setting, and explore their potential for the more general multi-task setting. We frame the problem as a structured super-MDP with a shared state space and universal dynamics model conditioned on a task-specific hidden parameter (Figure 1, right). This additional structure gives us better sample efficiency, both\n∗Corresponding author: amy.x.zhang@mail.mcgill.ca\ntheoretically, compared to related bounds (Brunskill & Li, 2013; Tirinzoni et al., 2020) and empirically against relevant baselines (Yu et al., 2020; Rakelly et al., 2019; Chen et al., 2018; Teh et al., 2017). We learn a latent representation with smoothness properties for better few-shot generalization to other unseen tasks within this family. This allows us to derive new value loss bounds and sample complexity bounds that depend on how far away a new task is from the ones already seen.\nWe focus on multi-task settings where dynamics can vary across tasks, but the reward function is shared. We show that this setting can be formalized as a hidden-parameter MDP (HiP-MDP) (DoshiVelez & Konidaris, 2013), where the changes in dynamics can be defined by a latent variable, unifying dynamics across tasks as a single global function. This setting assumes a global latent structure over all tasks (or MDPs). Many real-world scenarios fall under this framework, such as autonomous driving under different weather and road conditions, or even different vehicles, which change the dynamics of driving. Another example is warehouse robots, where the same tasks are performed in different conditions and warehouse layouts. The setting is also applicable to some cases of RL for medical treatment optimization, where different patient groups have different responses to treatment, yet the desired outcome is the same. With this assumed structure, we can provide concrete zero-shot generalization bounds to unseen tasks within this family. 
Further, we explore the setting where the state space is latent and we have access only to high-dimensional observations, and we show how to recover robust state abstractions in this setting. This is, again, a highly realistic setting in robotics, where we do not always have an amenable, Lipschitz low-dimensional state space. Cameras are a convenient and inexpensive way to acquire state information, and handling pixel observations is key to approaching these problems. A block MDP (Du et al., 2019) provides a concrete way to formalize this observation-based setting. Leveraging this property of the block MDP framework, in combination with the assumed unified dynamical structure of HiP-MDPs, we introduce the hidden-parameter block MDP (HiP-BMDP) to handle settings with high-dimensional observations and structured, changing dynamics.\nKey contributions of this work are a new viewpoint of the multi-task setting with a shared reward function as a universal MDP under the HiP-BMDP setting, which naturally leads to a gradient-based representation learning algorithm. Further, this framework allows us to derive theoretical generalization results that incorporate a learned state representation. Finally, empirical results show that our method outperforms other multi-task and meta-learning baselines in both fast adaptation and zero-shot transfer settings." }, { "heading": "2 BACKGROUND", "text": "In this section, we introduce the base environment as well as notation and additional assumptions about the latent structure of the environments and the multi-task setup considered in this work.\nA finite, discrete-time Markov Decision Process (MDP) (Bellman, 1957; Puterman, 1995) is a tuple 〈S, A, R, T, γ〉, where S is the set of states, A is the set of actions, R : S × A → R is the reward function, T : S × A → Dist(S) is the environment transition probability function, and γ ∈ [0, 1) is the discount factor (we assume finiteness only for the theoretical results, but our method can be applied to continuous domains). At each time step, the learning agent perceives a state st ∈ S, takes an action at ∈ A drawn from a policy π : S × A → [0, 1], and with probability T(st+1|st, at) enters the next state st+1, receiving a numerical reward Rt+1 from the environment. The value function of policy π is defined as $V_\pi(s) = \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t R_{t+1} \mid S_0 = s\right]$. The optimal value function $V^*$ is the maximum value function over the class of stationary policies.\nHidden-Parameter MDPs (HiP-MDPs) (Doshi-Velez & Konidaris, 2013) are defined by a tuple 〈S, A, Θ, Tθ, R, γ, PΘ〉, where S is a finite state space, A a finite action space, Tθ the transition distribution for a specific task described by task parameter θ ∼ PΘ, R the reward function, γ the discount factor, and PΘ the distribution over task parameters. This defines a family of MDPs, where each MDP is described by the parameter θ ∼ PΘ. We assume that this parameter θ is fixed for an episode and indicated by an environment id given at the start of the episode.\nBlock MDPs (Du et al., 2019) are described by a tuple 〈S, A, X, p, q, R〉 with an unobservable state space S, action space A, and observable space X. p denotes the latent transition distribution p(s′|s, a) for s, s′ ∈ S, a ∈ A; q is the (possibly stochastic) emission mapping that emits the observations, q(x|s) for x ∈ X, s ∈ S; and R is the reward function. We are interested in the setting where this
This is common in many real world problems with many tasks where the underlying states and dynamics are the same, but the observation space that the agent perceives can be quite different, e.g. navigating a house of the same layout but different decorations and furnishings.\nAssumption 1 (Block structure (Du et al., 2019)). Each observation x uniquely determines its generating state s. That is, the observation space X can be partitioned into disjoint blocks Xs, each containing the support of the conditional distribution q(·|s).\nAssumption 1 gives the Markov property inX , a key difference from partially observable MDPs (Kaelbling et al., 1998; Zhang et al., 2019), which has no guarantee of determining the generating state from the history of observations. This assumption allows us to compute reasonable bounds for our algorithm in k-order MDPs2 (which describes many real world problems) and avoiding the intractability of true POMDPs, which have no guarantees on providing enough information to sufficiently predict future rewards. A relaxation of this assumption entails providing less information in the observation for predicting future reward, which will degrade performance. We show empirically that our method is still more robust to a relaxation of this assumption compared to other MTRL methods.\nBisimulation is a strict form of state abstraction, where two states are bisimilar if they are behaviorally equivalent. Bisimulation metrics (Ferns et al., 2011) define a distance between states as follows: Definition 1 (Bisimulation Metric (Theorem 2.6 in Ferns et al. (2011))). Let (S,A, P, r) be a finite MDP and met the space of bounded pseudometrics on S equipped with the metric induced by the uniform norm. Define F : met 7→ met by\nF (d)(s, s′) = max a∈A (|ras − ras′ |+ γW (d)(P as , P as′)),\nwhere W (d) is the Wasserstein distance between transition probability distributions. Then F has a unique fixed point d̃ which is the bisimulation metric.\nA nice property of this metric d̃ is that difference in optimal value between two states is bounded by their distance as defined by this metric.\nTheorem 1 (V ∗ is Lipschitz with respect to d̃ (Ferns et al., 2004)). Let V ∗ be the optimal value function for a given discount factor γ. Then V ∗ is Lipschitz continuous with respect to d̃ with Lipschitz constant 11−γ ,\n|V ∗(s)− V ∗(s′)| ≤ 1 1− γ d̃(s, s′).\nTherefore, we see that bisimulation metrics give us a Lipschitz value function with respect to d̃.\nFor downstream evaluation of the representations we learn, we use Soft Actor Critic (SAC) (Haarnoja et al., 2018), an off-policy actor-critic method that uses the maximum entropy framework for soft policy iteration. At each iteration, SAC performs soft policy evaluation and improvement steps. The policy evaluation step fits a parametric soft Q-function Q(st, at) using transitions sampled from the replay buffer D by minimizing the soft Bellman residual,\nJ(Q) = E(st,st,rt,st+1)∼D [( Q(st, at)− rt − γV̄ (xt+1) )2] .\nThe target value function V̄ is approximated via a Monte-Carlo estimate of the following expectation, V̄ (xt+1) = Eat+1∼π [ Q̄(xt+1, at+1)− α log π(at+1|st+1) ] ,\nwhere Q̄ is the target soft Q-function parameterized by a weight vector obtained from an exponentially moving average of the Q-function weights to stabilize training. 
The policy improvement step then attempts to project a parametric policy π(at|st) by minimizing KL divergence between the policy and a Boltzmann distribution induced by the Q-function, producing the following objective,\nJ(π) = Est∼D [ Eat∼π[α log(π(at|st))−Q(st, at)] ] .\n2Any k-order MDP can be made Markov by stacking the previous k observations and actions together." }, { "heading": "3 THE HIP-BMDP SETTING", "text": "The HiP-MDP setting (as defined in Section 2) assumes full observability of the state space. However, in most real-world scenarios, we only have access to high-dimensional, noisy observations, which often contain irrelevant information to the reward. We combine the Block MDP and HiP-MDP settings to introduce the Hidden-Parameter Block MDP setting (HiP-BMDP), where states are latent, and transition distributions change depending on the task parameters θ. This adds an additional dimension of complexity to our problem – we first want to learn an amenable state space S, and a universal dynamics model in that representation3. In this section, we formally define the HiP-BMDP family in Section 3.1, propose an algorithm for learning HiP-BMDPs in Section 3.2, and finally provide theoretical analysis for the setting in Section 3.3." }, { "heading": "3.1 THE MODEL", "text": "A HiP-BMDP family can be described by tuple 〈S,A,Θ, Tθ, R, γ, PΘ,X , q〉, with a graphical model of the framework found in Figure 2. We are given a label k ∈ {1, ..., N} for each of N environments. We plan to learn a candidate Θ that unifies the transition dynamics across all environments, effectively finding T (·, ·, θ). For two environment settings θi, θj ∈ Θ, we define a distance metric:\nd(θi, θj) := max s,a∈{S,A}\n[ W ( Tθi(s, a), Tθj (s, a) )] . (1)\nThe Wasserstein-1 metric can be written as Wd(P,Q) = supf∈Fd ∥∥Ex∼P f(x) − Ey∼Qf(y)∥∥1, where Fd is the set of 1-Lipschitz functions under metric d (Müller, 1997). We omit d but use d(x, y) = ‖x − y‖1 in our setting. This ties distance between θ to the maximum difference in the next state distribution of all state-action pairs in the MDP.\nGiven a HiP-BMDP familyMΘ, we assume a multi-task setting where environments with specific θ ∈ Θ are sampled from this family. We do not have access to θ, and instead get environment labels I1, I2, ..., IN . The goal is to learn a latent space for the hyperparameters θ4. We want θ to be smooth with respect to changes in dynamics from environment to environment, which we can set explicitly through the following objective:\n||ψ(I1)− ψ(I2)||1 = max s∈S a∈A\n[ W2 ( p(st+1|st, at, ψ(I1)), p(st+1|st, at, ψ(I2)) )] , (2)\ngiven environment labels I1, I2 and ψ : Z+ 7→ Rd, the encoder that maps from environment label, the set of positive integers, to θ." }, { "heading": "3.2 LEARNING HIP-BMDPS", "text": "The premise of our work is that the HiP-BMDP formulation will improve sample efficiency and generalization performance on downstream tasks. We examine two settings, multi-task reinforcement\n3We overload notation here since the true state space is latent. 4We again overload notation here to refer to the learned hyperparameters as θ, as the true ones are latent.\nlearning (MTRL) and meta-reinforcement learning (meta-RL). In both settings, we have access to N training environments and a held-out set of M evaluation environments, both drawn from a defined family. In the MTRL setting, we evaluate model performance across all N training environments and ability to adapt to new environments. 
Adaptation performance is evaluated in both the few-shot regime, where we collect a small number of samples from the evaluation environments to learn each hidden parameter θ, and the zero-shot regime, where we average θ over all training tasks. We evaluate against ablations and other MTRL methods. In the meta-RL setting, the goal for the agent is to leverage knowledge acquired from previous tasks to adapt quickly to a new task. We evaluate performance in terms of how quickly the agent can achieve a minimum threshold score in the unseen evaluation environments (by learning the correct θ for each new environment).\nLearning a HiP-BMDP approximation of a family of MDPs requires the following components: i) an encoder that maps observations to a learned latent representation, φ : X 7→ Z; ii) an environment encoder ψ that maps an environment identifier to a hidden parameter θ; iii) a universal dynamics model T conditioned on the task parameter θ. Figure 2 shows how the components interact during training. In practice, computing the maximum Wasserstein distance over the entire state-action space is computationally infeasible. Therefore, we relax this requirement by taking the expectation of the Wasserstein distance with respect to the marginal state distribution of the behavior policy. We train a probabilistic universal dynamics model T to output the desired next-state distributions as Gaussians (this is not a restrictive assumption, as any distribution can be mapped to a Gaussian by an encoder of sufficient capacity), for which the 2-Wasserstein distance has a closed form:\n$W_2(\mathcal{N}(m_1, \Sigma_1), \mathcal{N}(m_2, \Sigma_2))^2 = \|m_1 - m_2\|_2^2 + \|\Sigma_1^{1/2} - \Sigma_2^{1/2}\|_F^2$,\nwhere $\|\cdot\|_F$ is the Frobenius norm. Given that we do not have access to the true universal dynamics function across all environments, it must be learned. The objective in Equation (2) is accompanied by an additional objective to learn T, giving the final loss function:\n$\mathcal{L}(\psi, T) = \underbrace{\text{MSE}\Big(\big\|\psi(I_1) - \psi(I_2)\big\|_2,\; W_2\big(T(s_t^{I_1}, \pi(s_t^{I_1}), \psi(I_1)),\; T(s_t^{I_2}, \pi(s_t^{I_2}), \psi(I_2))\big)\Big)}_{\Theta \text{ learning error}} + \underbrace{\text{MSE}\big(T(s_t^{I_1}, a_t^{I_1}, \psi(I_1)),\; s_{t+1}^{I_1}\big) + \text{MSE}\big(T(s_t^{I_2}, a_t^{I_2}, \psi(I_2)),\; s_{t+1}^{I_2}\big)}_{\text{model learning error}}$, (3)\nwhere gradients are stopped (marked in red in the paper) through the dynamics model inside the $W_2$ term, so that the Θ learning error updates only ψ. Transitions $\{s_t^{I_1}, a_t^{I_1}, s_{t+1}^{I_1}, I_1\}$ and $\{s_t^{I_2}, a_t^{I_2}, s_{t+1}^{I_2}, I_2\}$ from two different environments ($I_1 \neq I_2$) are sampled randomly from a replay buffer. In practice, we scale the Θ learning error, our task bisimulation metric loss, by a scalar denoted $\alpha_\psi$." }, { "heading": "3.3 THEORETICAL ANALYSIS", "text": "In this section, we provide value bounds and a sample complexity analysis of the HiP-BMDP approach. Additional new theoretical analysis of the simpler HiP-MDP setting is given in Appendix B. We first define three additional error terms associated with learning an $(\epsilon_R, \epsilon_T, \epsilon_\theta)$-approximate bisimulation abstraction:\n$\epsilon_R := \sup_{a \in A,\; x_1, x_2 \in X,\; \phi(x_1) = \phi(x_2)} \left|R(x_1, a) - R(x_2, a)\right|$,\n$\epsilon_T := \sup_{a \in A,\; x_1, x_2 \in X,\; \phi(x_1) = \phi(x_2)} \left\|\Phi T(x_1, a) - \Phi T(x_2, a)\right\|_1$,\n$\epsilon_\theta := \|\hat{\theta} - \theta\|_1$.\n$\Phi T$ denotes the lifted version of T, where we take the next-step transition distribution from observation space X and lift it to the latent space S. We can think of $\epsilon_R, \epsilon_T$ as describing a new MDP which is close — but not necessarily the same, if $\epsilon_R, \epsilon_T > 0$ — to the original Block MDP. These two error terms can be computed empirically over all training environments and are therefore not task-specific.\n$\epsilon_\theta$, on the other hand, is measured as a per-task error. Similar methods are used in Jiang et al.
(2015) to bound the loss of a single abstraction, which we extend to the HiP-BMDP setting with a family of tasks.\nValue Bounds. We first evaluate how the error in θ prediction and the learned bisimulation representation affect the optimal $Q^*_{\bar{M}_{\hat{\theta}}}$ of the learned MDP, by first bounding its distance from the optimal $Q^*$ of the true MDP for a single task.\nTheorem 2 (Q error). Given an MDP $\bar{M}_{\hat{\theta}}$ built on an $(\epsilon_R, \epsilon_T, \epsilon_\theta)$-approximate bisimulation abstraction of an instance of a HiP-BMDP $M_\theta$, we denote the evaluation of the optimal Q function of $\bar{M}_{\hat{\theta}}$ on $M_\theta$ as $[Q^*_{\bar{M}_{\hat{\theta}}}]_{M_\theta}$. The value difference with respect to the optimal $Q^*_{M_\theta}$ is upper bounded by\n$\|Q^*_{M_\theta} - [Q^*_{\bar{M}_{\hat{\theta}}}]_{M_\theta}\|_\infty \leq \epsilon_R + \gamma(\epsilon_T + \epsilon_\theta)\frac{R_{\max}}{2(1-\gamma)}$.\nProof in Appendix C. As in the HiP-MDP setting, we can measure the transferability of a specific policy π learned on one task to another, now taking into account error from the learned representation.\nTheorem 3 (Transfer bound). Given two MDPs $M_{\theta_i}$ and $M_{\theta_j}$, we can bound the difference in $Q^\pi$ between the two MDPs for a given policy π learned under an $(\epsilon_R, \epsilon_T, \epsilon_{\theta_i})$-approximate abstraction of $M_{\theta_i}$ and applied to $M_{\theta_j}$:\n$\|Q^*_{M_{\theta_j}} - [Q^*_{\bar{M}_{\hat{\theta}_i}}]_{M_{\theta_j}}\|_\infty \leq \epsilon_R + \gamma(\epsilon_T + \epsilon_{\theta_i} + \|\theta_i - \theta_j\|_1)\frac{R_{\max}}{2(1-\gamma)}$.\nThis result follows directly from Theorem 2. Given a policy learned for task i, Theorem 3 bounds how far from optimal that policy is when applied to task j. Intuitively, the more similar in behavior tasks i and j are, as measured by $\|\theta_i - \theta_j\|_1$, the better π performs on task j.\nFinite Sample Analysis. In MDPs (or families of MDPs) with large state spaces, it can be unrealistic to assume that all states are visited at least once in the finite sample regime. Abstractions are useful in this regime for their generalization capabilities. We can instead perform a counting analysis based on the number of samples of each abstract state-action pair.\nWe compute a loss bound with abstraction φ which depends on the size of the replay buffer D, collected over all tasks. Specifically, we define the minimal number of visits to an abstract state-action pair, $n_\phi(D) = \min_{x \in \phi(X),\, a \in A} |D_{x,a}|$. This sample complexity bound relies on a Hoeffding-style inequality, and therefore requires that the samples in D be independent, which is usually not the case when trajectories are sampled.\nTheorem 4 (Sample Complexity). For any φ which defines an $(\epsilon_R, \epsilon_T, \epsilon_\theta)$-approximate bisimulation abstraction on a HiP-BMDP family $M_\Theta$, we define the empirical measurement of $Q^*_{\bar{M}_{\hat{\theta}}}$ over D to be $Q^*_{\bar{M}^D_{\hat{\theta}}}$. Then, with probability $\geq 1 - \delta$,\n$\|Q^*_{M_\theta} - [Q^*_{\bar{M}^D_{\hat{\theta}}}]_{M_\theta}\|_\infty \leq \epsilon_R + \gamma(\epsilon_T + \epsilon_\theta)\frac{R_{\max}}{2(1-\gamma)} + \frac{R_{\max}}{(1-\gamma)^2}\sqrt{\frac{1}{2 n_\phi(D)} \log \frac{2|\phi(X)||A|}{\delta}}$. (4)\nThis performance bound applies to all tasks in the family and has two terms that are affected by using a state abstraction: the number of samples $n_\phi(D)$ and the size of the state space $|\phi(X)|$. We know that $|\phi(X)| \leq |X|$, as behaviorally equivalent states are grouped together under bisimulation, and $n_\phi(D)$ is the minimal number of visits to any abstract state-action pair, in aggregate over all training environments. This is an improvement over the sample complexity of applying single-task learning without transfer over all tasks, as well as over the method proposed in Brunskill & Li (2013), both of which rely on the number of tasks or number of MDPs seen.
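To make the training objective and the few-shot adaptation protocol of Section 3.2 concrete, here is a hedged PyTorch-style sketch of the loss in Equation (3) and of adapting only θ on a new environment. The function names, batch layout, and optimizer settings are our assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def hip_bmdp_loss(psi, T, policy, batch1, batch2, alpha_psi=1.0):
    """Sketch of Eq. (3). batch_i = (s, a, s_next, env_id); T(s, a, theta)
    returns the (mean, std) of a diagonal-Gaussian next-state distribution."""
    (s1, a1, s1n, id1), (s2, a2, s2n, id2) = batch1, batch2
    th1, th2 = psi(id1), psi(id2)
    # Model learning error: fit the universal dynamics model in latent space.
    model_loss = (F.mse_loss(T(s1, a1, th1)[0], s1n)
                  + F.mse_loss(T(s2, a2, th2)[0], s2n))
    # Theta learning error: match ||psi(I1) - psi(I2)||_2 to the closed-form
    # W2 between predicted Gaussians; gradients stopped through T (Eq. 3).
    with torch.no_grad():
        m1, sd1 = T(s1, policy(s1), th1)
        m2, sd2 = T(s2, policy(s2), th2)
        w2 = (((m1 - m2) ** 2).sum(-1) + ((sd1 - sd2) ** 2).sum(-1)).sqrt()
    theta_loss = F.mse_loss((th1 - th2).norm(dim=-1), w2)
    return model_loss + alpha_psi * theta_loss

def adapt_theta(T, transitions, theta_init, steps=100, lr=1e-2):
    """Few-shot regime: freeze T and optimize only theta on new-task data."""
    theta = theta_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        loss = sum(((T(s, a, theta)[0] - s_next) ** 2).mean()
                   for s, a, s_next in transitions)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return theta.detach()
```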
}, { "heading": "4 EXPERIMENTS & RESULTS", "text": "We use environments from Deepmind Control Suite (DMC) (Tassa et al., 2018) to evaluate our method for learning HiP-BMDPs for both multi-task RL and meta-reinforcement learning settings.\nWe consider two setups for evaluation: i) an interpolation setup and ii) an extrapolation setup where the changes in the dynamics function are interpolations and extrapolations between the changes in the dynamics function of the training environment respectively. This dual-evaluation setup provides a more nuanced understanding of how well the learned model transfers across the environments. Implementation details can be found in Appendix D and sample videos of policies at https://sites.google.com/view/hip-bmdp.\nEnvironments. We create a family of MDPs using the existing environment-task pairs from DMC and change one environment parameter to sample different MDPs. We denote this parameter as the perturbation-parameter. We consider the following HiP-BMDPs: 1. Cartpole-Swingup-V0: the mass of the pole varies, 2. Cheetah-Run-V0: the size of the torso varies, 3.\nWalker-Run-V0: the friction coefficient between the ground and the walker’s legs varies, 4. Walker-Run-V1: the size of left-foot of the walker varies, and 5. Finger-Spin-V0: the size of the finger varies. We show an example of the different pixel observations for Cheetah-Run-V0 in Figure 3. Additional environment details are in Appendix D.\nWe sample 8 MDPs from each MDP family by sampling different values for the perturbationparameter. The MDPs are arranged in order of increasing values of the perturbation-parameter such that we can induce an order over the family of MDPs. We denote the ordered MDPs as A − H . MDPs {B,C, F,G} are training environments and {D,E} are used for evaluating the model in the interpolation setup (i.e. the value of the perturbation-parameter can be obtained by interpolation). MDPs {A,H} are for evaluating the model in the extrapolation setup (i.e. the value of the perturbation-parameter can be obtained by extrapolation). We evaluate the learning agents by computing average reward (over 10 episodes) achieved by the policy after training for a fixed number of steps. All experiments are run for 10 seeds, with mean and standard error reported in the plots.\nMulti-Task Setting. We first consider a multi-task setup where the agent is trained on four related, but different environments with pixel observations. We compare our method, HiP-BMDP, with the following baselines and ablations: i) DeepMDP (Gelada et al., 2019) where we aggregate data across all training environments, ii) HiP-BMDP-nobisim, HiP-BMDP without the task bisimulation metric loss on task embeddings, iii) Distral, an ensemble of policies trained using the Distral algorithm (Teh et al., 2017) with SAC-AE (Yarats et al., 2019) as the underlying policy, iv) PCGrad (Yu et al., 2020), and v) GradNorm (Chen et al., 2018). For all models, the agent sequentially performs one update per environment. For fair comparison, we ensure that baselines have at least as many parameters as HiP-BMDP. Distral has more parameters as it trains one policy per environment. Additional implementation details about baselines are in Appendix D.1.\nIn Figures 4, and 10 (in Appendix), we observe that for all the models, performance deteriorates when evaluated on interpolation/extrapolation environments. We only report extrapolation results in the main paper because of space constraints, as they were very similar to the interpolation performance. 
The gap between the HiP-BMDP model and other baselines also widens, showing that the proposed approach is relatively more robust to changes in environment dynamics.\nAt training time (Figure 9 in the Appendix), we observe that HiP-BMDP consistently outperforms other baselines on all the environments. The success of our proposed method cannot be attributed to task embeddings alone, as HiP-BMDP-nobisim also uses task embeddings. Moreover, only incorporating the task embeddings is not guaranteed to improve performance in all the environments (as can be seen in the case of Cheetah-Run-V0). We also note that multi-task learning baselines like Distral, PCGrad, and GradNorm sometimes lag behind even the DeepMDP baseline, perhaps because they do not leverage a shared global dynamics model.\nMeta-RL Setting. We consider the meta-RL setup for evaluating the few-shot generalization capabilities of our proposed approach on proprioceptive state, as meta-RL techniques are too time-intensive to train on pixel observations directly. Specifically, we use PEARL (Rakelly et al., 2019), an off-policy meta-learning algorithm that uses probabilistic context variables, and is shown to outperform common meta-RL baselines like MAML-TRPO (Finn et al., 2017) and ProMP (Rothfuss et al., 2019) on proprioceptive state. We incorporate our proposed approach into PEARL by training the inference network qφ(z|c) with our additional HiP-BMDP loss. The algorithm pseudocode can be found in Appendix D. In Figure 5 we see that the proposed approach (blue) converges faster to a threshold reward (green) than the baseline for Cartpole-Swingup-V0 and Walker-Walk-V1. We provide additional results in Appendix E.\nEvaluating the Universal Transition Model. We investigate how well the transition model performs in an unseen environment by adapting only the task parameter θ. We instantiate a new MDP, sampled from the family of MDPs, and use a behavior policy to collect transitions. These transitions are used to update only the θ parameter, and the transition model is evaluated by unrolling it for k steps. We report the average per-step model error in latent space, averaged over 10 environments. While we expect both the proposed setup and the baseline setups to adapt to the new environment, we expect the proposed setup to adapt faster because it exploits the underlying structure. In Figure 6, we indeed observe that the proposed HiP-BMDP model adapts much faster than the ablation HiP-BMDP-nobisim.\nRelaxing the Block MDP Assumption. We incorporate sticky observations into the environment to determine how HiP-BMDP behaves when the Block MDP assumption is relaxed. With some probability p (set to 0.1 in practice), the current observation is dropped, and the agent sees the previous observation again. In Figure 7, we see that even in this setting the proposed HiP-BMDP model outperforms the other baseline models." }, { "heading": "5 RELATED WORK", "text": "Multi-task learning has been extensively studied in RL with assumptions around common properties of different tasks, e.g., reward and transition dynamics. A lot of work has focused on considering tasks as MDPs and learning optimal policies for each task while maximizing shared knowledge. However, in most real-world scenarios, the parameters governing the dynamics are not observed. Moreover, it is not explicitly clear how changes in dynamics across tasks are controlled.
The HiP-BMDP setting provides a principled way to change dynamics across tasks via a latent variable.\nMuch existing work in the multi-task reinforcement learning (MTRL) setting focuses on learning shared representations (Ammar et al., 2014; Parisotto et al., 2016; Calandriello et al., 2014; Maurer et al., 2016; Landolfi et al., 2019). D'Eramo et al. (2020) extend approximate value iteration bounds from the single-task setting to the multi-task setting by computing the average loss across tasks, and Brunskill & Li (2013) offer sample complexity results, which still depend on the number of tasks, unlike ours. Sun et al. (2020); Tirinzoni et al. (2020) also obtain PAC bounds on sample complexity for the MTRL setting, but Sun et al. (2020) relies on a constructed state-action abstraction that assumes a discrete and tractably small state space. Tirinzoni et al. (2020) assumes access to a generative model for any state-action pair and scales with the minimum of the number of tasks and the size of the state space. In the rich observation setting, this minimum will almost always be the number of tasks. Similar to our work, Perez et al. (2020) also treats the multi-task setting as a HiP-MDP by explicitly designing latent variable models to model the latent parameters, but requires knowledge of the structure upfront, whereas our approach does not make any such assumptions.\nMeta-learning, or learning to learn, is also a related framework with a different approach. We focus here on context-based approaches, which are more similar to the shared representation approaches of MTRL and to our own method. Rakelly et al. (2019) model and learn latent contexts upon which a universal policy is conditioned. However, no explicit assumption of a universal structure is leveraged. Amit & Meir (2018); Yin et al. (2020) give a PAC-Bayes bound for meta-learning generalization that relies on the number of tasks n. Our setting is quite different from the typical assumptions of the meta-learning framework, which stresses that the tasks must be mutually exclusive to ensure a single model cannot solve all tasks. Instead, we assume a shared latent structure underlying all tasks, and seek to exploit that structure for generalization. We find that under this setting, our method indeed outperforms policies initialized through meta-learning.\nThe ability to extract meaningful information through state abstractions provides a means to generalize across tasks with a common structure. Abel et al. (2018) learn transitive and PAC state abstractions for a distribution over tasks, but they concentrate on finite, tabular MDPs. One approach to forming such abstractions is via bisimulation metrics (Givan et al., 2003; Ferns et al., 2004), which formalize a concrete way to group behaviorally equivalent states. Prior work also leverages bisimulation for transfer (Castro & Precup, 2010), but at the policy level. Our work instead focuses on learning a latent state representation and establishes theoretical results for the MTRL setting. Recent work (Gelada et al., 2019) also learns a latent dynamics model and demonstrates connections to bisimulation metrics, but does not address multi-task learning." }, { "heading": "6 DISCUSSION", "text": "In this work, we advocate for a new framework, HiP-BMDP, to address the multi-task reinforcement learning setting. Like previous methods, HiP-BMDP assumes a shared state and action space across tasks, but additionally assumes latent structure in the dynamics.
We exploit this structure by learning a universal dynamics model with latent parameter $\theta$, which captures the behavioral similarity across tasks. We provide error and value bounds for the HiP-MDP (in the appendix) and HiP-BMDP settings, showing improvements in sample complexity over prior work by producing a bound that depends on the number of samples in aggregate over tasks, rather than on the number of tasks seen at training time. Our work relies on the assumption that we have access to an environment id, or knowledge of when we have switched environments. This assumption could be relaxed by incorporating an environment identification procedure at training time to cluster incoming data into separate environments. Further, our bounds rely on $L_\infty$ norms for measuring error in the value and transfer bounds. In future work we will investigate tightening these bounds with $L_p$ norms." }, { "heading": "A BISIMULATION BOUNDS", "text": "We first look at the Block MDP case only (Zhang et al., 2020a), which can be thought of as the single-task setting in a HiP-BMDP. We can compute approximate error bounds in this setting by denoting $\phi$ an $(\epsilon_R, \epsilon_T)$-approximate bisimulation abstraction, where\n$$\epsilon_R := \sup_{\substack{a\in\mathcal{A},\, x_1,x_2\in\mathcal{X},\\ \phi(x_1)=\phi(x_2)}} \big|R(x_1,a) - R(x_2,a)\big|, \qquad \epsilon_T := \sup_{\substack{a\in\mathcal{A},\, x_1,x_2\in\mathcal{X},\\ \phi(x_1)=\phi(x_2)}} \big\|\Phi T(x_1,a) - \Phi T(x_2,a)\big\|_1.$$\n$\Phi T$ denotes the lifted version of $T$, where we take the next-step transition distribution from observation space $\mathcal{X}$ and lift it to latent space $\mathcal{S}$.\nTheorem 5. Given an MDP $\bar{\mathcal{M}}$ built on an $(\epsilon_R, \epsilon_T)$-approximate bisimulation abstraction of a Block MDP $\mathcal{M}$, we denote the evaluation of the optimal $Q$ function of $\bar{\mathcal{M}}$ on $\mathcal{M}$ as $[Q^*_{\bar{\mathcal{M}}}]_{\mathcal{M}}$. The value difference with respect to the optimal $Q^*_{\mathcal{M}}$ is upper bounded by\n$$\big\|Q^*_{\mathcal{M}} - [Q^*_{\bar{\mathcal{M}}}]_{\mathcal{M}}\big\|_\infty \le \epsilon_R + \gamma\,\epsilon_T\,\frac{R_{\max}}{2(1-\gamma)}.$$\nProof. From Theorem 2 in Jiang (2018)." }, { "heading": "B THEORETICAL RESULTS FOR THE HIP-MDP SETTING", "text": "We explore the HiP-MDP setting, where a low-dimensional state space is given, to highlight the results that can be obtained just from assuming this hierarchical structure of the dynamics.\nB.1 VALUE BOUNDS\nGiven a family of environments $\mathcal{M}_\Theta$, we bound the difference in expected value between two sampled MDPs $\mathcal{M}_{\theta_i}, \mathcal{M}_{\theta_j} \in \mathcal{M}_\Theta$ using $d(\theta_i, \theta_j)$. Additionally, we assume that we have a behavior policy $\pi$ that is near both optimal policies $\pi^*_{\theta_i}, \pi^*_{\theta_j}$. We use the KL divergence to define this neighborhood for $\pi^*_{\theta_i}$:\n$$d_{KL}(\pi, \pi^*_{\theta_i}) = \mathbb{E}_{s\sim\rho^\pi}\big[\mathrm{KL}(\pi(\cdot|s), \pi^*_{\theta_i}(\cdot|s))^{1/2}\big]. \quad (5)$$\nWe start with a bound for a specific policy $\pi$. One way to measure the difference between two tasks $\mathcal{M}_{\theta_i}, \mathcal{M}_{\theta_j}$ is to measure the difference in value when that policy is applied in both settings. We show the relationship between the learned $\theta$ and this difference in value. The following results are similar to error bounds in approximate value iteration (Munos, 2005; Bertsekas & Tsitsiklis, 1996), but instead of tracking model error, we apply these methods to compare tasks with differences in dynamics.\nTheorem 6. Given a policy $\pi$, the difference in expected value between two MDPs drawn from the family of MDPs, $\mathcal{M}_{\theta_i}, \mathcal{M}_{\theta_j} \in \mathcal{M}_\Theta$, is bounded by\n$$|V^\pi_{\theta_i} - V^\pi_{\theta_j}| \le \frac{\gamma}{1-\gamma}\,\|\theta_i - \theta_j\|_1. \quad (6)$$\nProof. We use a telescoping sum to prove this bound, similar to Luo et al. (2019). First, we let $Z_k$ denote the discounted sum of rewards if the first $k$ steps are in $\mathcal{M}_{\theta_i}$ and all steps $t \ge k$ are in $\mathcal{M}_{\theta_j}$:\n$$Z_k := \mathbb{E}_{\substack{a_t\sim\pi(s_t)\;\forall t\ge 0\\ s_{t+1}\sim T_{\theta_i}(s_t,a_t)\;\forall\, k>t\ge 0\\ s_{t+1}\sim T_{\theta_j}(s_t,a_t)\;\forall t\ge k}}\Big[\sum_{t=0}^\infty \gamma^t R(s_t, a_t)\Big].$$\nBy definition, we have $Z_\infty = V^\pi_{\theta_i}$ and $Z_0 = V^\pi_{\theta_j}$.
Now the value function difference can be written as a telescoping sum:\n$$V^\pi_{\theta_i} - V^\pi_{\theta_j} = \sum_{k=0}^\infty (Z_{k+1} - Z_k). \quad (7)$$\nEach term can be simplified to\n$$Z_{k+1} - Z_k = \gamma^{k+1}\,\mathbb{E}_{s_k,a_k\sim\pi,T_{\theta_i}}\Big[\mathbb{E}_{\substack{s_{k+1}\sim T_{\theta_j}(\cdot|s_k,a_k)\\ s'_{k+1}\sim T_{\theta_i}(\cdot|s_k,a_k)}}\big[V^\pi_{\theta_j}(s_{k+1}) - V^\pi_{\theta_j}(s'_{k+1})\big]\Big].$$\nPlugging this back into Equation (7),\n$$V^\pi_{\theta_i} - V^\pi_{\theta_j} = \frac{\gamma}{1-\gamma}\,\mathbb{E}_{s\sim\rho^\pi_{\theta_i},\,a\sim\pi(s)}\Big[\mathbb{E}_{s'\sim T_{\theta_i}(\cdot|s,a)} V^\pi_{\theta_j}(s') - \mathbb{E}_{s'\sim T_{\theta_j}(\cdot|s,a)} V^\pi_{\theta_j}(s')\Big].$$\nThis expected value difference is bounded by the Wasserstein distance between $T_{\theta_i}$ and $T_{\theta_j}$:\n$$|V^\pi_{\theta_i} - V^\pi_{\theta_j}| \le \frac{\gamma}{1-\gamma}\,W(T_{\theta_i}, T_{\theta_j}) = \frac{\gamma}{1-\gamma}\,\|\theta_i - \theta_j\|_1 \quad \text{using Equation (1).}$$\nAnother comparison to make is how different the optimal policies of two tasks are with respect to the distance $\|\theta_i - \theta_j\|$.\nTheorem 7. The difference in expected optimal value between two MDPs $\mathcal{M}_{\theta_i}, \mathcal{M}_{\theta_j} \in \mathcal{M}_\Theta$ is bounded by\n$$|V^*_{\theta_i} - V^*_{\theta_j}| \le \frac{\gamma}{(1-\gamma)^2}\,\|\theta_i - \theta_j\|_1. \quad (8)$$\nProof.\n$$|V^*_{\theta_i}(s) - V^*_{\theta_j}(s)| = \big|\max_a Q^*_{\theta_i}(s,a) - \max_{a'} Q^*_{\theta_j}(s,a')\big| \le \max_a \big|Q^*_{\theta_i}(s,a) - Q^*_{\theta_j}(s,a)\big|.$$\nWe can bound the right-hand side with\n$$\sup_{s,a} \big|Q^*_{\theta_i}(s,a) - Q^*_{\theta_j}(s,a)\big| \le \sup_{s,a} \big|r_{\theta_i}(s,a) - r_{\theta_j}(s,a)\big| + \gamma \sup_{s,a} \big|\mathbb{E}_{s'\sim T_{\theta_i}(\cdot|s,a)} V^*_{\theta_i}(s') - \mathbb{E}_{s''\sim T_{\theta_j}(\cdot|s,a)} V^*_{\theta_j}(s'')\big|.$$\nAll MDPs in $\mathcal{M}_\Theta$ have the same reward function, so the first term is 0. Then\n$$\begin{aligned} \sup_{s,a} \big|Q^*_{\theta_i}(s,a) - Q^*_{\theta_j}(s,a)\big| &\le \gamma \sup_{s,a} \big|\mathbb{E}_{s'\sim T_{\theta_i}(\cdot|s,a)} V^*_{\theta_i}(s') - \mathbb{E}_{s''\sim T_{\theta_j}(\cdot|s,a)} V^*_{\theta_j}(s'')\big| \\ &= \gamma \sup_{s,a} \Big|\mathbb{E}_{s'\sim T_{\theta_i}(\cdot|s,a)}\big[V^*_{\theta_i}(s') - V^*_{\theta_j}(s')\big] + \mathbb{E}_{\substack{s''\sim T_{\theta_j}(\cdot|s,a)\\ s'\sim T_{\theta_i}(\cdot|s,a)}}\big[V^*_{\theta_j}(s') - V^*_{\theta_j}(s'')\big]\Big| \\ &\le \gamma \sup_{s,a} \big|\mathbb{E}_{s'\sim T_{\theta_i}(\cdot|s,a)}\big[V^*_{\theta_i}(s') - V^*_{\theta_j}(s')\big]\big| + \gamma \sup_{s,a} \big|\mathbb{E}_{\substack{s''\sim T_{\theta_j}(\cdot|s,a)\\ s'\sim T_{\theta_i}(\cdot|s,a)}}\big[V^*_{\theta_j}(s') - V^*_{\theta_j}(s'')\big]\big| \\ &\le \gamma \sup_{s,a} \big|\mathbb{E}_{s'\sim T_{\theta_i}(\cdot|s,a)}\big[V^*_{\theta_i}(s') - V^*_{\theta_j}(s')\big]\big| + \frac{\gamma}{1-\gamma}\,\|\theta_i - \theta_j\|_1 \\ &\le \gamma \max_s \big|V^*_{\theta_i}(s) - V^*_{\theta_j}(s)\big| + \frac{\gamma}{1-\gamma}\,\|\theta_i - \theta_j\|_1 \\ &= \gamma \max_s \big|\max_a Q^*_{\theta_i}(s,a) - \max_{a'} Q^*_{\theta_j}(s,a')\big| + \frac{\gamma}{1-\gamma}\,\|\theta_i - \theta_j\|_1 \\ &\le \gamma \sup_{s,a} \big|Q^*_{\theta_i}(s,a) - Q^*_{\theta_j}(s,a)\big| + \frac{\gamma}{1-\gamma}\,\|\theta_i - \theta_j\|_1. \end{aligned}$$\nSolving for $\sup_{s,a}|Q^*_{\theta_i}(s,a) - Q^*_{\theta_j}(s,a)|$:\n$$\sup_{s,a} \big|Q^*_{\theta_i}(s,a) - Q^*_{\theta_j}(s,a)\big| \le \frac{\gamma}{(1-\gamma)^2}\,\|\theta_i - \theta_j\|_1.$$\nPlugging this back in,\n$$|V^*_{\theta_i}(s) - V^*_{\theta_j}(s)| \le \frac{\gamma}{(1-\gamma)^2}\,\|\theta_i - \theta_j\|_1.$$\nBoth of these results lend more intuition for casting the multi-task setting under the HiP-MDP formalism. The difference in optimal performance between any two environments is controlled by the distance between the corresponding hidden parameters. One can interpret the hidden parameter as a knob that allows precise changes across the tasks." }, { "heading": "B.2 EXPECTED ERROR BOUNDS", "text": "In MTRL, we are concerned with the performance over a family of tasks. The empirical risk is typically defined as follows for $T$ tasks (Maurer et al., 2016):\n$$\epsilon_{avg}(\theta) = \frac{1}{T}\sum_{t=1}^T \mathbb{E}\big[\ell(f_t(h(w_t(X))), Y)\big]. \quad (9)$$\nConsequently, we bound the expected loss over the family of environments $\mathcal{E}$ with respect to $\theta$. In particular, we are interested in the average approximation error, defined as the absolute model error averaged across all environments:\n$$\epsilon_{avg}(\theta) = \frac{1}{|\mathcal{E}|}\sum_{i=1}^{E} \big|V^*_{\hat\theta_i}(s) - V^*_{\theta_i}(s)\big|. \quad (10)$$\nTheorem 8. Given a family of environments $\mathcal{M}_\Theta$, each parameterized with an underlying true hidden parameter $\theta_1, \theta_2, \cdots, \theta_E$, let $\hat\theta_1, \hat\theta_2, \cdots, \hat\theta_E$ be their respective approximations, where each environment's parameter $\theta_i$ is $\epsilon_\theta$-close to its approximation $\hat\theta_i$, i.e. $d(\hat\theta_i, \theta_i) \le \epsilon_\theta$ with $d$ the distance metric defined in Eq. 1. Then the average approximation error across all environments is bounded as follows:\n$$\epsilon_{avg}(\theta) \le \frac{\gamma\,\epsilon_\theta}{(1-\gamma)^2}. \quad (11)$$\nProof.
We consider the approximation error averaged across all environments:\n$$\epsilon_{avg}(\theta) = \frac{1}{E}\sum_{i=1}^E \big|V^*_{\hat\theta_i}(s) - V^*_{\theta_i}(s)\big| = \frac{1}{E}\sum_{i=1}^E \big|\max_a Q^*_{\hat\theta_i}(s,a) - \max_{a'} Q^*_{\theta_i}(s,a')\big| \le \frac{1}{E}\sum_{i=1}^E \max_a \big|Q^*_{\hat\theta_i}(s,a) - Q^*_{\theta_i}(s,a)\big|. \quad (12)$$\nConsider an environment $\theta_i \in \mathcal{M}_E$, for which we can bound the right-hand side with\n$$\sup_{s,a} \big|Q^*_{\hat\theta_i}(s,a) - Q^*_{\theta_i}(s,a)\big| \le \sup_{s,a} \big|r_{\hat\theta_i}(s,a) - r_{\theta_i}(s,a)\big| + \gamma \sup_{s,a} \big|\mathbb{E}_{s'\sim T_{\hat\theta_i}(\cdot|s,a)} V^*_{\hat\theta_i}(s') - \mathbb{E}_{s''\sim T_{\theta_i}(\cdot|s,a)} V^*_{\theta_i}(s'')\big|.$$\nSince all environments in the family $\mathcal{M}_E$ share the same, known reward function, the first term is 0. Then\n$$\begin{aligned} \sup_{s,a} \big|Q^*_{\hat\theta_i}(s,a) - Q^*_{\theta_i}(s,a)\big| &\le \gamma \sup_{s,a} \big|\mathbb{E}_{s'\sim T_{\hat\theta_i}(\cdot|s,a)} V^*_{\hat\theta_i}(s') - \mathbb{E}_{s''\sim T_{\theta_i}(\cdot|s,a)} V^*_{\theta_i}(s'')\big| \\ &= \gamma \sup_{s,a} \Big|\mathbb{E}_{s'\sim T_{\hat\theta_i}(\cdot|s,a)}\big[V^*_{\hat\theta_i}(s') - V^*_{\theta_i}(s')\big] + \mathbb{E}_{\substack{s''\sim T_{\theta_i}(\cdot|s,a)\\ s'\sim T_{\hat\theta_i}(\cdot|s,a)}}\big[V^*_{\theta_i}(s') - V^*_{\theta_i}(s'')\big]\Big| \\ &\le \gamma \sup_{s,a} \big|\mathbb{E}_{s'\sim T_{\hat\theta_i}(\cdot|s,a)}\big[V^*_{\hat\theta_i}(s') - V^*_{\theta_i}(s')\big]\big| + \gamma \sup_{s,a} \big|\mathbb{E}_{\substack{s''\sim T_{\theta_i}(\cdot|s,a)\\ s'\sim T_{\hat\theta_i}(\cdot|s,a)}}\big[V^*_{\theta_i}(s') - V^*_{\theta_i}(s'')\big]\big| \\ &\le \gamma \sup_{s,a} \big|\mathbb{E}_{s'\sim T_{\hat\theta_i}(\cdot|s,a)}\big[V^*_{\hat\theta_i}(s') - V^*_{\theta_i}(s')\big]\big| + \frac{\gamma}{1-\gamma}\,\|\hat\theta_i - \theta_i\|_1 \\ &\le \gamma \max_s \big|V^*_{\hat\theta_i}(s) - V^*_{\theta_i}(s)\big| + \frac{\gamma}{1-\gamma}\,\|\hat\theta_i - \theta_i\|_1 \\ &= \gamma \max_s \big|\max_a Q^*_{\hat\theta_i}(s,a) - \max_{a'} Q^*_{\theta_i}(s,a')\big| + \frac{\gamma}{1-\gamma}\,\|\hat\theta_i - \theta_i\|_1 \\ &\le \gamma \sup_{s,a} \big|Q^*_{\hat\theta_i}(s,a) - Q^*_{\theta_i}(s,a)\big| + \frac{\gamma}{1-\gamma}\,\|\hat\theta_i - \theta_i\|_1. \end{aligned}$$\nSolving for $\sup_{s,a}|Q^*_{\hat\theta_i}(s,a) - Q^*_{\theta_i}(s,a)|$:\n$$\sup_{s,a} \big|Q^*_{\hat\theta_i}(s,a) - Q^*_{\theta_i}(s,a)\big| \le \frac{\gamma}{(1-\gamma)^2}\,\|\hat\theta_i - \theta_i\|_1. \quad (13)$$\nPlugging Eq. 13 back into Eq. 12,\n$$\epsilon_{avg}(\theta) \le \frac{1}{E}\sum_{i=1}^E \frac{\gamma}{(1-\gamma)^2}\,\|\hat\theta_i - \theta_i\|_1 = \frac{\gamma}{E(1-\gamma)^2}\Big[\|\hat\theta_1 - \theta_1\|_1 + \|\hat\theta_2 - \theta_2\|_1 + \cdots + \|\hat\theta_E - \theta_E\|_1\Big].$$\nWe now use the assumption that the distance between the approximation $\hat\theta_i$ and the underlying hidden parameter $\theta_i \in \mathcal{M}_E$, defined as in Eq. 1, satisfies $d(\hat\theta_i, \theta_i) \le \epsilon_\theta$. Plugging this back in concludes the proof:\n$$\epsilon_{avg}(\theta) \le \frac{\gamma\,\epsilon_\theta}{(1-\gamma)^2}.$$\nIt is interesting to note that the average approximation error across all environments is independent of the number of environments and is primarily governed by the error in approximating the hidden parameter $\theta$ for each environment." }, { "heading": "C ADDITIONAL RESULTS AND PROOFS FOR HIP-BMDP RESULTS", "text": "We first compute $L_\infty$ norm bounds for the $Q$ error under approximate abstractions, as well as transfer bounds.\nTheorem 9 ($Q$ error). Given an MDP $\bar{\mathcal{M}}_{\hat\theta}$ built on an $(\epsilon_R, \epsilon_T, \epsilon_\theta)$-approximate bisimulation abstraction of an instance of a HiP-BMDP $\mathcal{M}_\theta$, we denote the evaluation of the optimal $Q$ function of $\bar{\mathcal{M}}_{\hat\theta}$ on $\mathcal{M}_\theta$ as $[Q^*_{\bar{\mathcal{M}}_{\hat\theta}}]_{\mathcal{M}_\theta}$. The value difference with respect to the optimal $Q^*_{\mathcal{M}_\theta}$ is upper bounded by\n$$\big\|Q^*_{\mathcal{M}_\theta} - [Q^*_{\bar{\mathcal{M}}_{\hat\theta}}]_{\mathcal{M}_\theta}\big\|_\infty \le \epsilon_R + \gamma(\epsilon_T + \epsilon_\theta)\,\frac{R_{\max}}{2(1-\gamma)}.$$\nProof. In the HiP-BMDP setting, we have a global encoder $\phi$ over all tasks, but the difference in transition distributions also involves $\theta$. The reward functions are the same across tasks, so there is no change to $\epsilon_R$. However, we must now incorporate the difference in dynamics into $\epsilon_T$. Assuming we have two environments with hidden parameters $\theta_i, \theta_j \in \Theta$, we can compute $\epsilon_T^{\theta_i,\theta_j}$ across those two environments by joining them into a super-MDP:\n$$\begin{aligned} \epsilon_T^{\theta_i,\theta_j} &= \sup_{\substack{a\in\mathcal{A},\, x_1,x_2\in\mathcal{X},\\ \phi(x_1)=\phi(x_2)}} \big\|\Phi T_{\theta_i}(x_1,a) - \Phi T_{\theta_j}(x_2,a)\big\|_1 \\ &\le \sup_{\substack{a\in\mathcal{A},\, x_1,x_2\in\mathcal{X},\\ \phi(x_1)=\phi(x_2)}} \Big(\big\|\Phi T_{\theta_i}(x_1,a) - \Phi T_{\theta_i}(x_2,a)\big\|_1 + \big\|\Phi T_{\theta_i}(x_2,a) - \Phi T_{\theta_j}(x_2,a)\big\|_1\Big) \\ &\le \sup_{\substack{a\in\mathcal{A},\, x_1,x_2\in\mathcal{X},\\ \phi(x_1)=\phi(x_2)}} \big\|\Phi T_{\theta_i}(x_1,a) - \Phi T_{\theta_i}(x_2,a)\big\|_1 + \sup_{\substack{a\in\mathcal{A},\, x_1,x_2\in\mathcal{X},\\ \phi(x_1)=\phi(x_2)}} \big\|\Phi T_{\theta_i}(x_2,a) - \Phi T_{\theta_j}(x_2,a)\big\|_1 \\ &= \epsilon_T^{\theta_i} + \|\theta_i - \theta_j\|_1. \end{aligned}$$\nThis result is intuitive: with a shared encoder learning a per-task bisimulation relation, the distance between bisimilar states from another task depends on the change in transition distribution between those two tasks.
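As a quick numerical illustration of how the three error terms combine, the following minimal sketch (ours, not from the paper) evaluates the Theorem 9 bound for given $(\epsilon_R, \epsilon_T, \epsilon_\theta, \gamma, R_{\max})$:

```python
def hip_bmdp_q_error_bound(eps_r, eps_t, eps_theta, gamma, r_max):
    """Theorem 9 upper bound on ||Q*_M - [Q*_Mbar]_M||_inf:
    eps_R + gamma * (eps_T + eps_theta) * R_max / (2 * (1 - gamma))."""
    assert 0.0 <= gamma < 1.0, "discount factor must be in [0, 1)"
    return eps_r + gamma * (eps_t + eps_theta) * r_max / (2.0 * (1.0 - gamma))

# Example: small abstraction errors, gamma = 0.99, rewards in [0, 1].
print(hip_bmdp_q_error_bound(0.01, 0.05, 0.02, 0.99, 1.0))  # ~3.475
```

Note how the $\frac{1}{1-\gamma}$ factor dominates: for long horizons, even small dynamics-approximation errors translate into a loose value bound.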
We can now extend the single-task bisimulation bound (Theorem 5) to the HiP-BMDP setting by denoting the approximation error of $\theta$ as $\|\theta - \hat\theta\|_1 < \epsilon_\theta$.\nTheorem 4. For any $\phi$ which defines an $(\epsilon_R, \epsilon_T, \epsilon_\theta)$-approximate bisimulation abstraction on a HiP-BMDP family $\mathcal{M}_\Theta$, we define the empirical measurement of $Q^*_{\bar{\mathcal{M}}_{\hat\theta}}$ over $D$ to be $Q^*_{\bar{\mathcal{M}}^D_{\hat\theta}}$. Then, with probability $\ge 1 - \delta$,\n$$\big\|Q^*_{\mathcal{M}_\theta} - [Q^*_{\bar{\mathcal{M}}^D_{\hat\theta}}]_{\mathcal{M}_\theta}\big\|_\infty \le \epsilon_R + \gamma(\epsilon_T + \epsilon_\theta)\,\frac{R_{\max}}{2(1-\gamma)} + \frac{R_{\max}}{(1-\gamma)^2}\sqrt{\frac{1}{2 n_\phi(D)} \log\frac{2|\phi(\mathcal{X})||\mathcal{A}|}{\delta}}. \quad (14)$$\nProof.\n$$\big\|Q^*_{\mathcal{M}_\theta} - [Q^*_{\bar{\mathcal{M}}^D_{\hat\theta}}]_{\mathcal{M}_\theta}\big\|_\infty \le \big\|Q^*_{\mathcal{M}_\theta} - [Q^*_{\bar{\mathcal{M}}_{\hat\theta}}]_{\mathcal{M}_\theta}\big\|_\infty + \big\|[Q^*_{\bar{\mathcal{M}}_{\hat\theta}}]_{\mathcal{M}_\theta} - [Q^*_{\bar{\mathcal{M}}^D_{\hat\theta}}]_{\mathcal{M}_\theta}\big\|_\infty = \big\|Q^*_{\mathcal{M}_\theta} - [Q^*_{\bar{\mathcal{M}}_{\hat\theta}}]_{\mathcal{M}_\theta}\big\|_\infty + \big\|Q^*_{\bar{\mathcal{M}}_{\hat\theta}} - Q^*_{\bar{\mathcal{M}}^D_{\hat\theta}}\big\|_\infty.$$\nThe first term is handled by Theorem 2, so we only need to bound the second term, using McDiarmid's inequality and the fact (from Theorem 1) that the value function of a bisimulation representation is $\frac{1}{1-\gamma}$-Lipschitz.\nFirst, we write this difference as a deviation from an expectation in order to apply the concentration inequality:\n$$\begin{aligned} \big\|Q^*_{\bar{\mathcal{M}}_{\hat\theta}} - Q^*_{\bar{\mathcal{M}}^D_{\hat\theta}}\big\|_\infty &= \big\|Q^*_{\bar{\mathcal{M}}_{\hat\theta}} - \mathcal{T}^\phi_D Q^*_{\bar{\mathcal{M}}_{\hat\theta}} + \mathcal{T}^\phi_D Q^*_{\bar{\mathcal{M}}_{\hat\theta}} - \mathcal{T}^\phi_D Q^*_{\bar{\mathcal{M}}^D_{\hat\theta}}\big\|_\infty \\ &\le \big\|Q^*_{\bar{\mathcal{M}}_{\hat\theta}} - \mathcal{T}^\phi_D Q^*_{\bar{\mathcal{M}}_{\hat\theta}}\big\|_\infty + \gamma\,\big\|Q^*_{\bar{\mathcal{M}}_{\hat\theta}} - Q^*_{\bar{\mathcal{M}}^D_{\hat\theta}}\big\|_\infty \\ &\le \frac{1}{1-\gamma}\,\big\|\mathcal{T}^\phi_D Q^*_{\bar{\mathcal{M}}_{\hat\theta}} - \mathcal{T}^\phi Q^*_{\bar{\mathcal{M}}_{\hat\theta}}\big\|_\infty. \end{aligned}$$\nNow we can apply McDiarmid's inequality:\n$$P_D\Big[\big|Q^*_{\bar{\mathcal{M}}_{\hat\theta}} - Q^*_{\bar{\mathcal{M}}^D_{\hat\theta}}\big| \ge t\Big] \le 2\exp\Big(-\frac{2 t^2 |D_{\phi(x),a}|}{R^2_{\max}/(1-\gamma)^2}\Big).$$\nSolving for the $t$ that makes this inequality hold for all $(\phi(x), a) \in \mathcal{X}\times\mathcal{A}$, with a union bound over all $|\phi(\mathcal{X})||\mathcal{A}|$ abstract states,\n$$t > \frac{R_{\max}}{1-\gamma}\sqrt{\frac{1}{2 n_\phi(D)} \log\frac{2|\phi(\mathcal{X})||\mathcal{A}|}{\delta}}.$$\nCombining the two terms gives\n$$\big\|Q^*_{\mathcal{M}_\theta} - [Q^*_{\bar{\mathcal{M}}^D_{\hat\theta}}]_{\mathcal{M}_\theta}\big\|_\infty \le \epsilon_R + \gamma(\epsilon_T + \epsilon_\theta)\,\frac{R_{\max}}{2(1-\gamma)} + \frac{R_{\max}}{(1-\gamma)^2}\sqrt{\frac{1}{2 n_\phi(D)} \log\frac{2|\phi(\mathcal{X})||\mathcal{A}|}{\delta}}.$$\nAlgorithm 1 HiP-BMDP training for the multi-task RL setting.\nRequire: Along with the DeepMDP components (Actor, Critic, Dynamics Model $M$), an additional environment encoder $\psi$ to generate task-specific $\theta$ parameters.\n1: for each timestep $t = 1..T$ do\n2: for each $T_i$ do\n3: $a^i_t \sim \pi^i(\cdot|s^i_t)$\n4: $s'^i_t \sim p^i(\cdot|s^i_t, a^i_t)$\n5: $D \leftarrow D \cup (s^i_t, a^i_t, r(s^i_t, a^i_t), s'^i_t), i$\n6: UPDATECRITIC($D, i$) (uses data only from the $i$th task)\n7: UPDATEACTOR($D, i$) (uses data only from the $i$th task)\n8: UPDATEUSINGHIP-BMDPLOSS($D, i$)\n9: end for\n10: end for\nAlgorithm 2 UpdateModelUsingHiP-BMDPLoss\nRequire: Batches of data for the different tasks $\{T_i\}_{i=1...T}$ sampled from the replay buffer $D$, learning rates $\alpha_1$ and $\alpha_2$, index of the current task $i$, transition model $M$, environment encoder $\psi$.\n1: for each batch of dataset $t = 1..T$, $t \ne i$ do\n2: Compute $L(\psi, M) = L_i(\psi, M, i, t)$ using Equation (3)\n3: $\psi \leftarrow \psi - \alpha_1 \nabla_\theta \sum_i L$\n4: $M \leftarrow M - \alpha_2 \nabla_\theta \sum_i L$\n5: end for" }, { "heading": "D ADDITIONAL IMPLEMENTATION DETAILS", "text": "In Figure 8, we show the variation in the left foot of the walker.\nMTRL Algorithm. The multi-task RL algorithm for the HiP-BMDP setting can be found in Algorithm 1. We take the DeepMDP baseline (Gelada et al., 2019) and incorporate our HiP-BMDP objective (text shown in red color in the original listing).\nMeta-RL Algorithm. The meta-RL algorithm for the HiP-MDP setting can be found in Algorithm 3. We take the PEARL algorithm (Rakelly et al., 2019) and incorporate our HiP-MDP objective (text shown in red color in the original listing)." }, { "heading": "D.1 BASELINES", "text": "For PCGrad, the authors recommend projecting the gradient with respect to all previous tasks. In practice, that leads to very poor training. Instead, we observe that it is better to project the gradients with respect to any one task (randomly selected per update); we use this scheme in all the experiments, and a sketch of the projection is given below. For GradNorm, we observe that the learned weights $w_i$ (for weighing the per-task losses) can become negative for some tasks, which means the model tries to unlearn those tasks. In practice, we clamp the $w_i$ values so that they do not fall below a threshold.
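The two implementation choices above can be sketched as follows (hypothetical helper functions, not the authors' code; the PCGrad-style projection against one randomly selected task follows the standard conflicting-gradient rule, and the clamp threshold value is an assumption):

```python
import random
import numpy as np

def pcgrad_project_single(g_i, other_task_grads, rng=random):
    """Project gradient g_i to remove its conflict with one randomly
    selected other task's gradient g_j (PCGrad-style projection)."""
    g_j = rng.choice(other_task_grads)
    dot = np.dot(g_i, g_j)
    if dot < 0:  # only project when the gradients actually conflict
        g_i = g_i - (dot / (np.dot(g_j, g_j) + 1e-12)) * g_j
    return g_i

def clamp_gradnorm_weights(w, min_weight=1e-3):
    """Keep learned per-task loss weights from going below a threshold,
    so no task ends up being actively unlearned."""
    return np.maximum(w, min_weight)
```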
" }, { "heading": "D.2 HYPERPARAMETERS", "text": "" }, { "heading": "D.2.1 MTRL ALGORITHM", "text": "All the hyperparameters (for the MTRL algorithm) are listed in Table 1." }, { "heading": "D.2.2 METARL ALGORITHM", "text": "For MetaRL, we use the same hyperparameters as PEARL (Rakelly et al., 2019). We set $\alpha_\phi = 0.01$ for all environments, other than the Walker-Stand environments, where $\alpha_\phi = 0.001$." }, { "heading": "E ADDITIONAL RESULTS", "text": "Along with the environments described in Section 4, we considered the following additional environments:\n1. Walker-Stand-V0: Walker-Stand task where the friction coefficient between the ground and the walker's leg varies across different environments.\n2. Walker-Walk-V0: Walker-Walk task where the friction coefficient between the ground and the walker's leg varies across different environments.\n3. Walker-Stand-V1: Walker-Stand task where the size of the left foot of the walker varies across different environments.\n4. Walker-Walk-V1: Walker-Walk task where the size of the left foot of the walker varies across different environments.\nAlgorithm 3 HiP-MDP training for the meta-RL setting.\nRequire: Batch of training tasks $\{T_i\}_{i=1...T}$ from $p(T)$, learning rates $\alpha_1, \alpha_2, \alpha_3, \alpha_\phi$\n1: Initialize replay buffers $B_i$ for each training task\n2: while not done do\n3: for each $T_i$ do\n4: Initialize context $C_i = \{\}$\n5: for $k = 1, \ldots, K$ do\n6: Sample $z \sim q_\phi(z|C_i)$\n7: Gather data from $\pi_\theta(a|s,z)$ and add to $B_i$\n8: Update $C_i = \{(s_j, a_j, s'_j, r_j)\}_{j=1...N} \sim B_i$\n9: end for\n10: end for\n11: for step in training steps do\n12: for each $T_i$ do\n13: Sample context $C_i \sim S_c(B_i)$ and RL batch $b_i \sim B_i$\n14: Sample $z \sim q_\phi(z|C_i)$\n15: $L^i_{actor} = L_{actor}(b_i, z)$\n16: $L^i_{critic} = L_{critic}(b_i, z)$\n17: $L^i_{KL} = \beta\, D_{KL}(q(z|C_i)\,\|\,r(z))$\n18: Sample an RL batch $b_j$ from any other task $j$\n19: Compute $L^i_{BiSim} = L_i(q, T, i, j)$ using Equation (3)\n20: end for\n21: $\phi \leftarrow \phi - \alpha_1 \nabla_\phi \sum_i (L^i_{critic} + L^i_{KL} + \alpha_\phi \times L^i_{BiSim})$\n22: $\theta_\pi \leftarrow \theta_\pi - \alpha_2 \nabla_\theta \sum_i L^i_{actor}$\n23: $\theta_Q \leftarrow \theta_Q - \alpha_3 \nabla_\theta \sum_i L^i_{critic}$\n24: end for\n25: end while" }, { "heading": "E.1 MULTI-TASK SETTING", "text": "In Figure 10, we observe that the HiP-BMDP method consistently outperforms other baselines when evaluated on the interpolation environments (zero-shot transfer). As noted previously, the effectiveness of our proposed model cannot be attributed to task embeddings alone, as the HiP-BMDP-nobisim model uses the same architecture as the HiP-BMDP model but does not include the task bisimulation metric loss. We hypothesise that the Distral-Ensemble baseline behaves poorly because it cannot leverage a shared global dynamics model." }, { "heading": "E.2 META-RL SETTING", "text": "We provide the Meta-RL results for the additional environments. Recall that we extend the PEARL algorithm (Rakelly et al., 2019) by training the inference network $q_\phi(z|c)$ with our additional HiP-BMDP loss. The algorithm pseudocode can be found in Appendix D. In Figure 13, we show the results for the interpolation setup, and in Figure 14, we show the results for the extrapolation setup. In some environments (e.g., Walker-Walk-V1), the proposed approach (blue) converges faster to a threshold reward (green) than the baseline. In the other environments, the gains are quite small." }, { "heading": "E.3 EVALUATING THE UNIVERSAL TRANSITION MODEL.", "text": "We investigate how well the transition model performs in an unseen environment by only adapting the task parameter $\theta$.
We instantiate a new MDP, sampled from the family of MDPs, and use a behavior policy to collect transitions. These transitions are used to update only the $\theta$ parameter, and the transition model is evaluated by unrolling it for $k$ steps. In Figures 11 and 12, we report the average per-step model error in latent space, averaged over 10 environments, for 5-step and 100-step unrolls respectively. While we expect both the proposed setup and the baseline setup to adapt to the new environment, we expect the proposed setup to adapt faster because it exploits the underlying structure. We indeed observe that for both 5-step and 100-step unrolls, the proposed HiP-BMDP model adapts much faster than the baseline HiP-BMDP-nobisim (Figures 11 and 12)." } ]
2021
HIDDEN-PARAMETER BLOCK MDPS
SP:dc75166137ad902cb0b08966bc25914e0f141c63
[ "The paper considers the question of quantifying the uncertainty that arises from the optimiser used to perform inference in a given model. Taking a Bayesian approach, the authors aim to deduce the posterior over the space of optimisers. The form of the posterior is chosen to be a Boltzmann distribution, which is then approximated with a multivariate Gaussian via a KL divergence. The parameterisation of the posterior is defined using an LSTM neural network." ]
Optimizing an objective function with uncertainty awareness is well known to improve the accuracy and confidence of optimization solutions. Meanwhile, another relevant but very different question remains open: how can we model and quantify the uncertainty of an optimization algorithm itself? To close this gap, the prerequisite is to consider optimizers as sampled from a distribution, rather than as a few pre-defined and fixed update rules. We first take the novel angle of considering the algorithmic space of optimizers, each being parameterized by a neural network. We then propose a Boltzmann-shaped posterior over this optimizer space, and approximate the posterior locally as Gaussian distributions through variational inference. Our novel model, Bayesian learning to optimize (BL2O), is the first study to recognize and quantify the uncertainty of the optimization algorithm. Our experiments on optimizing test functions, energy functions in protein-protein interactions and loss functions in image classification and data privacy attack demonstrate that, compared to state-of-the-art methods, BL2O improves optimization and uncertainty quantification (UQ) in the aforementioned problems, as well as calibration and out-of-domain detection in image classification.
[]
[ { "authors": [ "Martı́n Abadi", "Paul Barham", "Jianmin Chen", "Zhifeng Chen", "Andy Davis", "Jeffrey Dean", "Matthieu Devin", "Sanjay Ghemawat", "Geoffrey Irving", "Michael Isard" ], "title": "Tensorflow: A system for largescale machine learning", "venue": "In 12th USENIX Symposium on Operating Systems Design and Implementation", "year": 2016 }, { "authors": [ "Mohamed Osama Ahmed", "Bobak Shahriari", "Mark Schmidt" ], "title": "Do we need “harmless” bayesian optimization and “first-order” bayesian optimization", "venue": "NIPS BayesOpt,", "year": 2016 }, { "authors": [ "Marcin Andrychowicz", "Misha Denil", "Sergio Gomez", "Matthew W Hoffman", "David Pfau", "Tom Schaul", "Brendan Shillingford", "Nando De Freitas" ], "title": "Learning to learn by gradient descent by gradient descent", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Yoshua Bengio", "Jérôme Louradour", "Ronan Collobert", "Jason Weston" ], "title": "Curriculum learning", "venue": "In Proceedings of the 26th annual international conference on machine learning,", "year": 2009 }, { "authors": [ "Hildo Bijl", "Thomas B Schön", "Jan-Willem van Wingerden", "Michel Verhaegen" ], "title": "A sequential monte carlo approach to thompson sampling for bayesian optimization", "venue": "arXiv preprint arXiv:1604.00169,", "year": 2016 }, { "authors": [ "Charles Blundell", "Julien Cornebise", "Koray Kavukcuoglu", "Daan Wierstra" ], "title": "Weight uncertainty in neural networks", "venue": "arXiv preprint arXiv:1505.05424,", "year": 2015 }, { "authors": [ "Shahin Boluki", "Randy Ardywibowo", "Siamak Zamani Dadaneh", "Mingyuan Zhou", "Xiaoning Qian" ], "title": "Learnable bernoulli dropout for bayesian deep learning", "venue": "arXiv preprint arXiv:2002.05155,", "year": 2020 }, { "authors": [ "Eric Brochu", "Vlad M Cora", "Nando De Freitas" ], "title": "A tutorial on bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning", "venue": "arXiv preprint arXiv:1012.2599,", "year": 2010 }, { "authors": [ "Yue Cao", "Yang Shen" ], "title": "Bayesian active learning for optimization and uncertainty quantification in protein docking", "venue": "Journal of chemical theory and computation,", "year": 2020 }, { "authors": [ "Yue Cao", "Tianlong Chen", "Zhangyang Wang", "Yang Shen" ], "title": "Learning to optimize in swarms", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Yue Cao", "Yuanfei Sun", "Mostafa Karimi", "Haoran Chen", "Oluwaseyi Moronfoye", "Yang Shen" ], "title": "Predicting pathogenicity of missense variants with weakly supervised regression", "venue": "Human mutation,", "year": 2019 }, { "authors": [ "Yutian Chen", "Matthew W Hoffman", "Sergio Gómez Colmenarejo", "Misha Denil", "Timothy P Lillicrap", "Matt Botvinick", "Nando de Freitas" ], "title": "Learning to learn without gradient descent by gradient descent", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Jeremy S De Bonet", "Charles Lee Isbell Jr.", "Paul A Viola" ], "title": "Mimic: Finding optima by estimating probability densities", "venue": "In Advances in neural information processing systems,", "year": 1997 }, { "authors": [ "Matt Fredrikson", "Somesh Jha", "Thomas Ristenpart" ], "title": "Model inversion attacks that exploit confidence information and basic countermeasures", "venue": "In Proceedings of the 22nd ACM SIGSAC 
Conference on Computer and Communications Security,", "year": 2015 }, { "authors": [ "David E Goldenberg" ], "title": "Genetic algorithms in search, optimization and machine learning", "venue": null, "year": 1989 }, { "authors": [ "Chuan Guo", "Geoff Pleiss", "Yu Sun", "Kilian Q Weinberger" ], "title": "On calibration of modern neural networks", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Philipp Hennig", "Christian J Schuler" ], "title": "Entropy search for information-efficient global optimization", "venue": "The Journal of Machine Learning Research,", "year": 2012 }, { "authors": [ "José Miguel Henrández-Lobato", "Matthew W. Hoffman", "Zoubin Ghahramani" ], "title": "Predictive Entropy Search for Efficient Global Optimization of Black-box Functions", "venue": "In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 1,", "year": 2014 }, { "authors": [ "José Miguel Hernández-Lobato", "Matthew W Hoffman", "Zoubin Ghahramani" ], "title": "Predictive entropy search for efficient global optimization of black-box functions", "venue": "In Advances in neural information processing systems,", "year": 2014 }, { "authors": [ "Howook Hwang", "Thom Vreven", "Joël Janin", "Zhiping Weng" ], "title": "Protein-Protein Docking Benchmark", "venue": "Version 4.0. Proteins,", "year": 2010 }, { "authors": [ "Momin Jamil", "Xin-She Yang" ], "title": "A literature survey of benchmark functions for global optimisation problems", "venue": "International Journal of Mathematical Modelling and Numerical Optimisation,", "year": 2013 }, { "authors": [ "Alex Kendall", "Yarin Gal" ], "title": "What uncertainties do we need in bayesian deep learning for computer", "venue": null, "year": 2017 }, { "authors": [ "J Kennedy", "R Eberhart" ], "title": "Particle swarm optimization, proceedings of ieee international conference on neural networks", "venue": null, "year": 1995 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Scott Kirkpatrick", "C Daniel Gelatt", "Mario P Vecchi" ], "title": "Optimization by simulated annealing", "venue": null, "year": 1983 }, { "authors": [ "Marc F. Lensink", "Raúl Méndez", "Shoshana J. Wodak" ], "title": "Docking and scoring protein complexes: CAPRI 3rd Edition", "venue": "Proteins: Structure, Function, and Bioinformatics,", "year": 2007 }, { "authors": [ "Daniel James Lizotte" ], "title": "Practical bayesian optimization", "venue": "University of Alberta,", "year": 2008 }, { "authors": [ "Kaifeng Lv", "Shunhua Jiang", "Jian Li" ], "title": "Learning gradient descent: Better generalization and longer horizons", "venue": "In Proceedings of the 34th International Conference on Machine Learning-Volume", "year": 2017 }, { "authors": [ "Iain H. Moal", "Paul A. 
Bates" ], "title": "SwarmDock and the Use of Normal Modes in Protein-Protein Docking", "venue": "International Journal of Molecular Sciences,", "year": 2010 }, { "authors": [ "Milad Nasr", "Reza Shokri", "Amir Houmansadr" ], "title": "Comprehensive privacy analysis of deep learning: Stand-alone and federated learning under passive and active white-box inference attacks", "venue": "arXiv preprint arXiv:1812.00910,", "year": 2018 }, { "authors": [ "Pedro Ortega", "Jordi Grau-Moya", "Tim Genewein", "David Balduzzi", "Daniel Braun" ], "title": "A nonparametric conjugate prior distribution for the maximizing argument of a noisy function", "venue": "In Advances in Neural Information Processing Systems,", "year": 2012 }, { "authors": [ "Michael A Osborne", "Roman Garnett", "Stephen J Roberts" ], "title": "Gaussian processes for global optimization", "venue": "In 3rd international conference on learning and intelligent optimization", "year": 2009 }, { "authors": [ "Martin Pelikan", "David E Goldberg", "Erick Cantú-Paz" ], "title": "Boa: The bayesian optimization algorithm", "venue": "In Proceedings of the genetic and evolutionary computation conference GECCO-99,", "year": 1999 }, { "authors": [ "Brian G. Pierce", "Kevin Wiehe", "Howook Hwang", "Bong-Hyun Kim", "Thom Vreven", "Zhiping Weng" ], "title": "ZDOCK server: interactive docking prediction of protein–protein complexes and symmetric multimers", "venue": "Bioinformatics, 30(12):1771–1773,", "year": 2014 }, { "authors": [ "K.A. Porter", "I. Desta", "D. Kozakov", "S. Vajda" ], "title": "What method to use for protein-protein docking", "venue": "Curr. Opin. Struct. Biol., 55:1–7,", "year": 2019 }, { "authors": [ "B. Shahriari", "K. Swersky", "Z. Wang", "R.P. Adams", "N. de Freitas" ], "title": "Taking the Human Out of the Loop: A Review of Bayesian Optimization", "venue": "Proceedings of the IEEE,", "year": 2016 }, { "authors": [ "Alexander Shapiro" ], "title": "Probabilistic constrained optimization: Methodology and applications. Statistical inference of stochastic optimization problems, pp", "venue": null, "year": 2000 }, { "authors": [ "Graham R Smith", "Michael JE Sternberg" ], "title": "Prediction of protein–protein interactions by docking methods", "venue": "Current opinion in structural biology,", "year": 2002 }, { "authors": [ "Niranjan Srinivas", "Andreas Krause", "Sham M Kakade", "Matthias Seeger" ], "title": "Gaussian process optimization in the bandit setting: No regret and experimental design", "venue": "arXiv preprint arXiv:0912.3995,", "year": 2009 }, { "authors": [ "Emmanuel Vazquez", "Julien Bect" ], "title": "Convergence properties of the expected improvement algorithm with fixed mean and covariance functions", "venue": "Journal of Statistical Planning and inference,", "year": 2010 }, { "authors": [ "Zi Wang", "Stefanie Jegelka" ], "title": "Max-value entropy search for efficient bayesian optimization", "venue": "arXiv preprint arXiv:1703.01968,", "year": 2017 }, { "authors": [ "Jian Wu", "Matthias Poloczek", "Andrew G Wilson", "Peter Frazier" ], "title": "Bayesian optimization with gradients", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Anatoly A Zhigljavsky" ], "title": "Theory of global random search, volume 65", "venue": "Springer Science & Business Media,", "year": 2012 } ]
[ { "heading": "1 INTRODUCTION", "text": "Computational models of many real-world applications involve optimizing non-convex objective functions. As the non-convex optimization problem is NP-hard, no optimization algorithm (or optimizer) can guarantee the global optimum in general; instead, the usefulness of its solutions (sometimes based on their proximity to the optima), when the optima are unknown, can be very uncertain. Being able to quantify such uncertainty is important not only for assessing the solution uncertainty after optimization but also for enhancing the search efficiency during optimization. For instance, reliable and trustworthy machine learning models demand uncertainty awareness and quantification while training (optimizing) such models, whereas in reality deep neural networks without proper modeling of uncertainty suffer from overconfidence and miscalibration (Guo et al., 2017). In another application example, protein docking, although there exist epistemic uncertainty of the objective function and aleatoric uncertainty of the protein structure data (Cao & Shen, 2020), state-of-the-art methods only predict several single solutions (Porter et al., 2019) without any associated uncertainty, which makes those predictions hard for end users to interpret.\nVarious optimization methods have been proposed in response to the need for uncertainty awareness. Stochastic optimization methods like random search (Zhigljavsky, 2012), simulated annealing (Kirkpatrick et al., 1983), genetic algorithms (Goldenberg, 1989) and particle swarm optimization (Kennedy & Eberhart, 1995) inject randomness into the algorithms in order to reduce uncertainties. However, these methods do not provide the uncertainty quantification (UQ) of solutions. Recently, there has been growing interest in applying inference-based methods to optimization problems (Brochu et al., 2010; Shapiro, 2000; Pelikan et al., 1999). Generally, they transfer the uncertainties within the data and model into the final solution by modelling the posterior distribution over the global optima. For instance, Bijl et al. (2016) use sequential Monte Carlo to approximate the distribution over the optima, with Thompson sampling as the search strategy. Hernández-Lobato et al. (2014) use kernel approximation for modelling the posterior over the optimum under a Gaussian process. Ortega et al. (2012) and Cao & Shen (2020) directly model the posterior over the optimum as a Boltzmann distribution. They not only surpass the previous methods in accuracy and efficiency, but also provide easy-to-interpret uncertainty quantification.\nDespite progress in optimization with uncertainty-awareness, significant open questions remain. Existing methods consider uncertainty either within the data or the model (including objective functions) (Kendall & Gal, 2017; Ortega et al., 2012; Cao & Shen, 2020). However, no attention was ever paid to the uncertainty arising from the optimizer that is directly responsible for deriving the end solutions with the given data and models. The optimizer is usually pre-defined and fixed in the optimization algorithm space. For instance, there are several popular update rules in Bayesian optimization, such as expected improvement (Vazquez & Bect, 2010) or the upper confidence bound (Srinivas et al., 2009), that are chosen and fixed for the entire process. For Bayesian neural network training, the update rule is usually chosen off-the-shelf, such as Adam, SGD, or RMSProp.
The uncertainty in the optimizer is intrinsically defined over the optimizer space and is important to the optimization and UQ solutions. However, such uncertainty is unwittingly ignored when the optimizer is treated as a fixed sample in the space.\nTo fill the aforementioned gap, the core intellectual value of this work is to recognize and quantify a new form of uncertainty that lies in the optimization algorithm (optimizer), besides the classical data- or model-based uncertainties (also known as epistemic and aleatoric uncertainties). The underlying innovation is to treat an optimizer as a random sample from the algorithmic space, rather than as one of a few hand-crafted update rules. The key enabling technique is to parameterize the algorithmic space by a neural network. We then leverage a Boltzmann-shaped posterior over the optimizers, and approximate the posterior locally as Gaussian distributions through variational inference. Our approach, Bayesian learning to optimize (BL2O), for the first time addresses the modeling of optimizer-based uncertainty. Extensive experiments on optimizing test functions, energy functions in a bioinformatics application, and loss functions in image classification and data privacy attack demonstrate that, compared to state-of-the-art methods, BL2O substantially improves the performance of optimization and uncertainty quantification, as well as calibration and out-of-domain detection in classification.\nIn the following sections, we first review related methods in detail and reveal the remaining gap. We then formally define the problem of optimization with uncertainty quantification and point out the optimizer as a source of uncertainty. After formally defining the optimizer space, the optimal optimizer as a random vector in the space, and the optimizer uncertainty, we propose our novel model, BL2O. Lastly, we compare our BL2O with both Bayesian and non-Bayesian competing methods on extensive test functions and real-world applications." }, { "heading": "2 RELATED WORK", "text": "Many works (Wang & Jegelka, 2017; Hennig & Schuler, 2012) have studied optimization with uncertainty quantification under the framework of Bayesian optimization (Shahriari et al., 2016; Brochu et al., 2010). In these studies, multiple objectives are sampled from the posterior over the objectives ($p(f|D)$), where $D$ is the observed data. Each sampled objective is optimized to obtain samples of the global optimum $w^*$, so that the empirical distribution over $w^*$ can be built. Approximation is much needed, since those approaches require an optimization for every sample. For instance, Hernández-Lobato et al. (2014) use kernel approximation to approximate the posterior distribution.\nAnother line of work uses various sampling schemes for estimating the density of posterior distributions. For instance, Bijl et al. (2016) use sequential Monte Carlo sampling. De Bonet et al. (1997) design a randomized optimization algorithm that directly samples global optima. These methods are much more efficient, but their performance heavily depends on the objective landscapes. Moreover, a few studies (Ahmed et al., 2016; Lizotte, 2008; Osborne et al., 2009; Wu et al., 2017) in Bayesian optimization utilize first-order information to boost the performance of optimization. For instance, Osborne et al. (2009) use gradient information to improve the covariance matrix in the Gaussian process.
Wu et al. (2017) embed the derivative knowledge into the acquisition function, which is optimized in every iteration.\nFinally, there are approaches (Ortega et al., 2012; Cao & Shen, 2020) that directly model the posterior as a Boltzmann distribution: $p(w^*|D) \propto \exp(-\alpha f(w^*))$, where $\alpha$ is a scheduled temperature constant. They automatically adjust $\alpha$ during the search in order to balance the exploration-exploitation tradeoff, and beat previous work in terms of both efficiency and accuracy.\nHowever, as revealed earlier in the Introduction, none of the methods above consider the uncertainty within the optimizer." }, { "heading": "3 METHODS", "text": "Notation. We use a bold-faced uppercase letter to denote a matrix (e.g. $W$), a bold-faced lowercase letter to denote a vector (e.g. $w$), and a normal lowercase letter to denote a scalar (e.g. $w$)." }, { "heading": "3.1 PROBLEM STATEMENT", "text": "The goal of optimization is to find the global optimum of an objective function $f(w)$ w.r.t. $w$:\n$$w^* = \arg\min_w f(w). \quad (1)$$\n$w^*$ is assumed unknown and treated as a random vector in this study. Once an optimizer obtains $\hat{w}$, its estimate of $w^*$, it is important to assess the quality and the uncertainty of the solution. Considering that many real-world objective functions are nonconvex and noisy around $\hat{w}$, solution quality is often measured by $\|\hat{w} - w^*\|$, the proximity to the global optimum rather than to the optimal function value. Examples include energy functions as the objective and RMSDs as the proximity measure in protein docking (Lensink et al., 2007). Therefore, the goal of uncertainty quantification (UQ) here is the following:\n$$P(\|\hat{w} - w^*\| \le r_\sigma \,|\, D) = \sigma, \quad (2)$$\nwhere $r_\sigma$ is the upper bound of $\|\hat{w} - w^*\|$ at confidence level $\sigma$, and $D$ denotes the samples collected during optimization. Such UQ results additionally provide confidence in the solution $\hat{w}$ and improve model reliability for end users.\nTo calculate the probability defined in Eq. 2 and perform UQ, a direct albeit challenging way is to model the posterior over $w^*$ ($p(w^*|D)$) and then sample from the posterior. When the optimizer $g$ is regarded as fixed, as in the existing literature, the posterior is actually $p(w^*|D, g)$. A central contribution of ours is to further consider the optimizer as a source of uncertainty, model it as a random vector in an optimizer space, and perform posterior estimation of $p(w^*|D)$." }, { "heading": "3.2 OPTIMIZER UNCERTAINTY: A FRAMEWORK", "text": "An optimizer is directly responsible for optimization and thus naturally a source of solution uncertainty. To address this often-neglected uncertainty source, we first define the space of optimizers and then model an optimizer as a point in this space. Considering that many widely-used optimizers are iterative and use first-order derivatives, we restrict the optimizer space as follows:\nDefinition 3.1 ((First-order Iterative) Optimizer Space) We define a first-order, iterative algorithmic space $\mathcal{G}$, where each point $g \in \mathcal{G}$ is an iterative optimizer with the following mapping: $g(\{\nabla f(w^\tau)\}_{\tau=1}^{t}) = \delta w^t$, where $\nabla f(w^\tau)$ and $\delta w^t$ are the gradient and the update vector at the $\tau$th and $t$th iterations, respectively.\nHere we use $g(\cdot)$ to denote a pre-defined update rule and the resulting optimizer. For instance, in gradient descent, $g(\{\nabla f(w^\tau)\}_{\tau=1}^{t}) = -\alpha\nabla f(w^t)$, where $\alpha$ is the step size.
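As a concrete (illustrative, not from the paper) reading of Definition 3.1, the sketch below expresses two classical update rules as points $g$ in this space, each mapping the gradient history to an update vector $\delta w^t$:

```python
import numpy as np

def gradient_descent_rule(grad_history, alpha=0.01):
    """g({grad f(w^tau)}_{tau=1..t}) = -alpha * grad f(w^t):
    uses only the most recent gradient."""
    return -alpha * grad_history[-1]

def momentum_rule(grad_history, alpha=0.01, beta=0.9):
    """A momentum-style rule: an exponentially weighted sum of the
    whole gradient history, another point in the same optimizer space."""
    update = np.zeros_like(grad_history[-1])
    for g in grad_history:
        update = beta * update + g
    return -alpha * update
```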
Now that the optimizer space is defined, we next define the (unknown) optimal optimizer and its uncertainty.\nDefinition 3.2 (Optimal Optimizer) We define the optimal optimizer $g^* \in \mathcal{G}$ as the optimizer that obtains the lowest cumulative function value within a fixed budget $T$:\n$$g^* = \arg\min_{g\in\mathcal{G}} \Big(\sum_{t=1}^T f(w^t_g)\Big), \quad (3)$$\nwhere $w^t_g = w^{t-1}_g + g(\{\nabla f(w^\tau_g)\}_{\tau=1}^{t-1})$ is the parameter value at the $t$th iteration updated through the optimizer $g$.\nIn practice, the optimal optimizer $g^*$ is unknown, so we treat $g^*$ as a random vector and formally define the optimizer uncertainty as follows:\nDefinition 3.3 (Optimizer Uncertainty) Let $\mathcal{G}$ be the algorithmic space, where each point $g \in \mathcal{G}$ is an optimizer. We assume there is a prior distribution over the optimal optimizer $g^*$ as $p(g^*)$. We also assume a likelihood distribution $p(D|g^*)$, where $D$ is the observed data (sample trajectory) given $g^*$. Then we define the optimizer uncertainty through $p(g^*|D) \propto p(D|g^*)\,p(g^*)$.\nTo inject the optimizer uncertainty into $p(w^*|D)$, it is straightforward to have the following integration for posterior estimation:\n$$p(w^*|D) = \int p(g^*|D)\,p(w^*|D, g^*)\,dg^*. \quad (4)$$" }, { "heading": "3.3 PARAMETERIZING THE OPTIMIZER SPACE", "text": "The optimizer uncertainty $p(g^*|D)$ as defined in Def. 3.3 can be intractable when there is no proper parameterization of the optimizer space $\mathcal{G}$. Therefore, we next introduce possible ways of parameterizing $\mathcal{G}$ as defined in Def. 3.1.\nParameterization through Hyperparameters of Specific Optimizers. A simple way to parameterize the optimizer space for classical optimizers (e.g. gradient descent, Adam) is based on their hyperparameters: $\mathcal{G} = \mathcal{H}$, where $\mathcal{H}$ is the hyperparameter space. For instance, for gradient descent, we have $\mathcal{H} = (\alpha)$, where $\alpha$ is the learning rate. For Adam, we have $\mathcal{H} = (\alpha, \beta_1, \beta_2)$, where $\beta_1$ and $\beta_2$ are the coefficients used for computing running averages of the gradient and its square.\nHowever, such parameterization has significant drawbacks. The resulting algorithmic space $\mathcal{G}$ is very limited and heavily depends on the specific optimizer. The $\mathcal{G}$ (a 1D space) parameterized by the hyperparameters of gradient descent is different from that (a 3D space) parameterized by the hyperparameters of Adam. In fact, each is a rather restricted region of the actual $\mathcal{G}$. The intrinsic flexibility (uncertainty) that lies in an iterative optimizer's update rule is not explored at all in this parameterization. These drawbacks are empirically demonstrated in Sec. 4.\nParameterization through Neural Networks. In order to reasonably and accurately model the intrinsic uncertainty within the update rule, we need a much more flexible way of modelling $g$. We thus parameterize the optimizer space as a neural network: $\mathcal{G} = \Theta$, where each $\theta \in \Theta$ is the parameter vector of the neural network. Overcoming the drawbacks of the hyperparameter-based space $\mathcal{H}$, the space $\Theta$ of neural network parameters generalizes update rules, since neural networks can represent a wide variety of functions. We note that this is also the space of meta-optimizers that learn to optimize (L2O) iterative update rules from data on a given task (Andrychowicz et al., 2016; Chen et al., 2017; Lv et al., 2017; Cao et al., 2019a). However, there has been no notion of uncertainty, let alone the task of UQ, for the learned optimizer in these L2O methods, which is to be addressed in our Bayesian L2O (BL2O).
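To illustrate the neural parameterization $\mathcal{G} = \Theta$, here is a minimal sketch, assuming a coordinate-wise recurrent update rule in the spirit of Andrychowicz et al. (2016); the layer sizes, the plain-NumPy recurrent cell, and the initialization are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

class CoordinatewiseNeuralOptimizer:
    """A tiny neural update rule g_theta: every parameter coordinate's
    update is computed from its own (gradient, recurrent state) by the
    same small network, so theta is shared across all coordinates."""

    def __init__(self, hidden=20, seed=0):
        rng = np.random.default_rng(seed)
        # theta = (W_in, W_rec, w_out): one point in the optimizer space.
        self.W_in = rng.normal(0, 0.1, (hidden, 1))
        self.W_rec = rng.normal(0, 0.1, (hidden, hidden))
        self.w_out = rng.normal(0, 0.1, (1, hidden))

    def init_state(self, n_params):
        return np.zeros((self.W_in.shape[0], n_params))

    def step(self, grad, state):
        # Recurrent state lets the rule use the whole gradient history.
        state = np.tanh(self.W_in @ grad[None, :] + self.W_rec @ state)
        delta_w = (self.w_out @ state).ravel()  # update vector delta w^t
        return delta_w, state
```

A step of optimization is then `w = w + delta_w`; because the same small $\theta$ is applied to every coordinate, the parameter count of such optimizers stays small regardless of the objective's dimension.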
}, { "heading": "3.4 MODELING AN OPTIMIZER AS A RANDOM VECTOR", "text": "Now that we have the optimizer space $\mathcal{G}$ properly defined and parameterized, we proceed to model an optimizer $g$ as a random vector in the space.\nBoltzmann-shaped Posterior. Since we have $\mathcal{G} = \Theta$, we can rewrite each $g \in \mathcal{G}$ as $g_\theta$ with $\theta \in \Theta$, and the optimal optimizer $g^*$ as $g_{\theta^*}$. Therefore, $p(g^*|D)$ becomes $p(\theta^*|D)$. We consider a Gaussian prior over the parameters of the neural network: $p(\theta^*) \propto \exp(-\lambda\|\theta^*\|_2^2)$, where $\lambda$ is a constant controlling the variance. We use the chain rule to decompose the likelihood function $p(D|\theta^*)$ at a fixed budget $T$:\n$$p(D|\theta^*) = \prod_{t=1}^T p\big(f(w^t_{\theta^*}), w^t_{\theta^*} \,\big|\, \theta^*, \{f(w^\tau_{\theta^*}), w^\tau_{\theta^*}\}_{\tau=0}^{t-1}\big) = \prod_{t=1}^T p\big(f(w^t_{\theta^*}) \,\big|\, w^t_{\theta^*}, \theta^*, \{f(w^\tau_{\theta^*}), w^\tau_{\theta^*}\}_{\tau=0}^{t-1}\big). \quad (5)$$\nThe second equality holds because $w^t_{\theta^*}$ is fixed given $\theta^*$ and the past data points. For the single-sample likelihood, we apply the results from Ortega et al. (2012); Cao & Shen (2020) and obtain\n$$p\big(f(w^t_{\theta^*}) \,\big|\, w^t_{\theta^*}, \theta^*, \{f(w^\tau_{\theta^*}), w^\tau_{\theta^*}\}_{\tau=0}^{t-1}\big) \propto \exp(-f(w^t_{\theta^*})). \quad (6)$$\nWe multiply the likelihood functions of all samples together and obtain the Boltzmann-shaped likelihood $p(D|\theta^*) \propto \exp(-\sum_{t=1}^T f(w^t_{\theta^*}))$. We finally multiply the prior by the likelihood and obtain the Boltzmann-shaped posterior:\n$$p(\theta^*|D) \propto \exp\Big(-\sum_{t=1}^T f(w^t_{\theta^*})\Big) \cdot \exp(-\lambda\|\theta^*\|_2^2) = \exp(-F(\theta^*)), \quad (7)$$\nwhere $F(\theta^*) = \sum_{t=1}^T f(w^t_{\theta^*}) + \lambda\|\theta^*\|_2^2$, which is the objective in Eq. 3 plus an L2 regularization term.\nLocal Approximation and Bayesian Loss. However, the above posterior distribution involves an integral in the normalization constant which is computationally intractable. Moreover, the architecture of $F(\theta^*)$ is so complicated that it is impossible to directly sample from the posterior distribution. In order to overcome these challenges, we would like to learn a distribution $q(\theta^*|\phi)$ that has an analytic form and is easy to sample from, where $\phi$ is the parameter vector of $q(\theta^*|\phi)$, to approximate the real posterior $p(\theta^*|D)$. Furthermore, due to the high dimension of $\theta^*$ and the complicated landscape of the posterior, it is impossible to approximate $p(\theta^*|D)$ at every position in the $\theta^*$ space. We therefore approximate it locally around $\theta_c$, an optimum of interest for $F(\theta^*)$. We denote the local region as $\Theta_c$, a neighborhood around $\theta_c$, and the re-normalization constant as $C = \int_{\theta^*\in\Theta_c} p(\theta^*|D)\,d\theta^*$. Then the local posterior is a conditioned (re-scaled) version of $p(\theta^*|D)$: $p'(\theta^*|D) = p(\theta^*|D)/C$, $\theta^*\in\Theta_c$. In order to make $q(\theta^*|\phi) \approx p'(\theta^*|D)$, we calculate the KL divergence between the two:\n$$KL\big(q(\theta^*|\phi)\,\|\,p'(\theta^*|D)\big) = \int_{\theta^*\in\Theta_c} q(\theta^*|\phi)\log\frac{q(\theta^*|\phi)}{p'(\theta^*|D)}\,d\theta^* = \int_{\theta^*\in\Theta_c} q(\theta^*|\phi)\log\frac{q(\theta^*|\phi)}{p(\theta^*|D)/C}\,d\theta^* = \int_{\theta^*\in\Theta_c} q(\theta^*|\phi)\log\frac{q(\theta^*|\phi)}{\exp(-F(\theta^*))}\,d\theta^* + \int_{\theta^*\in\Theta_c} q(\theta^*|\phi)\log(ZC)\,d\theta^*, \quad (8)$$\nwhere $Z = \int \exp(-F(\theta^*))\,d\theta^*$ is the normalization constant. The second term in the above equation equals $\log(ZC)$, a constant w.r.t. $\phi$, and thus can be ignored during optimization.\nWe then propose our Bayesian loss as\n$$F_B(\phi) = \int_{\theta^*\in\Theta_c} q(\theta^*|\phi)\log q(\theta^*|\phi)\,d\theta^* + \int_{\theta^*\in\Theta_c} q(\theta^*|\phi)\,F(\theta^*)\,d\theta^* = -H(q(\theta^*|\phi)) + \mathbb{E}_{q(\theta^*|\phi)}[F(\theta^*)], \quad (9)$$\nwhere the first term of $F_B$ measures the negative entropy of our approximated posterior, and the second term is the expectation of the loss function over the approximated posterior.\nGaussian Posterior. For local approximation, we consider $\phi = (\mu, \Sigma)$ and $q(\theta^*|\phi) = \mathcal{N}(\mu, \Sigma)$, where $\mu$ is the mean vector and $\Sigma$ is the covariance matrix of a normal distribution. For simplicity, we consider $\Sigma$ to be a diagonal matrix: $\Sigma = \mathrm{diag}(\sigma_1^2, \sigma_2^2, \sigma_3^2, \ldots)$.
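For this diagonal Gaussian, the entropy term of $F_B$ has a standard closed form (a well-known fact, not stated explicitly in the text), so only the expectation term needs approximation:

```latex
% Differential entropy of q(theta*|phi) = N(mu, diag(sigma_1^2, ..., sigma_d^2)):
H(q) = \sum_{i=1}^{d} \tfrac{1}{2}\log\!\left(2\pi e\,\sigma_i^2\right),
\qquad\text{so}\qquad
F_B(\phi) \approx -\sum_{i=1}^{d} \tfrac{1}{2}\log\!\left(2\pi e\,\sigma_i^2\right)
          + \frac{1}{K}\sum_{k=1}^{K} F\!\left(\theta^{*(k)}\right),
\quad \theta^{*(k)} \sim q(\theta^*|\phi).
```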
The second term in Eq. (9) involves the integral over $F(\theta^*)$, which is intractable. Therefore, we use Monte Carlo sampling through $q(\theta^*|\phi)$ to replace the integral. However, directly sampling the posterior parameters makes optimization difficult, as the gradients w.r.t. $\mu$ and $\Sigma$ are then inaccessible. Moreover, the standard deviations $\sigma_1, \sigma_2, \ldots$ must be non-negative, making the optimization constrained.\nTo overcome these two challenges, we use the trick introduced in (Blundell et al., 2015) to shift sampling from $q(\theta^*|\phi)$ to sampling from a standard normal distribution $\mathcal{N}(0, I)$, and we reparameterize each standard deviation $\sigma_i$ through $\rho_i$ as $\sigma_i = \log(1 + \exp(\rho_i))$. Then for any $\epsilon$ sampled from $\mathcal{N}(0, I)$, we can calculate $\theta^*$ as $\theta^* = \mu + \log(1 + \exp(\rho)) \odot \epsilon$, where $\odot$ denotes the element-wise product and $\rho = (\rho_1, \rho_2, \ldots)$." }, { "heading": "3.5 BAYESIAN AVERAGING", "text": "We recall our goal of building the posterior over the global optimum, $p(w^*|D)$, through Eq. 4. We use Monte Carlo sampling to approximate the integral as\n$$p(w^*|D) = \int p(g|D)\,p(w^*|D, g)\,dg \approx \int_{\theta^*\in\Theta_c} q(\theta^*|\phi)\,p(w^*|g_{\theta^*}(\cdot), D)\,d\theta^* \approx \frac{1}{N}\sum_{i=1}^N p(w^*|g_{\theta^*_i}(\cdot), D), \quad (10)$$\nwhere $\theta^*_i$ is sampled from $q(\theta^*|\phi)$. Since $q(\theta^*|\phi)$ follows a multivariate Gaussian distribution whose individual dimensions are independent of each other, we estimate the summation above in each dimension using independent MC samplings. In practice, $N$ being 10,000, 100,000, and 500,000 led to negligible differences in the 1D estimations, and $N$ was thus fixed at 10,000." }, { "heading": "3.6 META-TRAINING SET", "text": "In order to boost the robustness and generalizability of our optimizer posterior $p(g^*|D)$, we consider using an ensemble of objective functions $\mathcal{F} = \{f_i\}_{i=1}^N$. Specifically, we replace the objective function in Eq. 3 with $\frac{1}{N}\sum_{i=1}^N \sum_{t=1}^T f_i(w^t_{\theta^*,i})$ and rewrite $F(\theta^*)$ as\n$$F(\theta^*) = \frac{1}{N}\sum_{i=1}^N \sum_{t=1}^T f_i(w^t_{\theta^*,i}) + \lambda\|\theta^*\|_2^2, \quad (11)$$\nwhere $w^t_{\theta^*,i}$ is the solution at the $t$th iteration for objective $f_i$ optimized by $g_{\theta^*}$. This replacement allows our posterior to generalize to novel objective functions. We regard the functional dataset $\mathcal{F}$ as the meta-training set. During the experiments, we create different meta-training sets for different problems, described in detail in Sec. 4. We note that Eq. 11 is also the objective, or part of the objective, in many meta-optimizers (Andrychowicz et al., 2016; Chen et al., 2017; Lv et al., 2017; Cao et al., 2019a). However, those methods focus on training a deterministic optimizer without uncertainty-awareness." }, { "heading": "3.7 TWO-STAGE TRAINING FOR EMPOWERING THE LOCAL POSTERIOR", "text": "As mentioned before, due to the extremely large optimizer space, we focus on modelling the posterior locally around $\theta_c$, an optimum of interest. If we directly trained our model through the Bayesian loss in Eq. 9, the posterior would simply be centered around the randomly initialized point. In order to obtain a real optimum of interest $\theta_c$, we first train our model in a non-Bayesian way by minimizing the loss in Eq. 11. We then use $\theta_c$ as the warm start for $\mu$ and start the second, Bayesian training stage through the loss in Eq. 9. Both training stages are critical for empowering our local posterior, as demonstrated by the ablation study in Appendix F." }, { "heading": "3.8 MODEL ARCHITECTURE, IMPLEMENTATION AND COMPUTATIONAL COMPLEXITY", "text": "The model is implemented in TensorFlow 1.13 (Abadi et al., 2016) and optimized by Adam (Kingma & Ba, 2014).
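Putting Secs. 3.4-3.7 together, the reparameterized sampling and the Bayesian loss estimate can be summarized in the following minimal NumPy sketch (illustrative only; the paper's implementation is in TensorFlow, and `meta_objective` stands in for the ensemble loss $F(\theta^*)$ of Eq. 11):

```python
import numpy as np

def sample_theta(mu, rho, rng):
    """Reparameterized sample: theta* = mu + softplus(rho) * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    sigma = np.log1p(np.exp(rho))  # softplus keeps sigma positive
    return mu + sigma * eps

def bayesian_loss(mu, rho, meta_objective, rng, n_samples=8):
    """Monte Carlo estimate of F_B(phi) = -H(q) + E_q[F(theta*)]."""
    sigma = np.log1p(np.exp(rho))
    neg_entropy = -np.sum(0.5 * np.log(2 * np.pi * np.e * sigma**2))
    expected_f = np.mean([meta_objective(sample_theta(mu, rho, rng))
                          for _ in range(n_samples)])
    return neg_entropy + expected_f

# Stage 1 (non-Bayesian): minimize meta_objective over theta to get theta_c.
# Stage 2 (Bayesian): warm-start mu = theta_c, then minimize bayesian_loss.
```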
For the optimizer architecture, we use the coordinate-wise LSTM from Andrychowicz et al. (2016). We also validate this design choice in Appendix F. Due to the coordinate-wise nature, our BL2O model only contains 10,282 free parameters. For all experiments, the length of the LSTM is set to 20. Both training stages include 5,000 training epochs.\nThe time complexity of BL2O is $O(KBN_e + KN_eH^2)$, where $K$ is the number of sampling trajectories, $B$ is the minibatch size, $N_e$ is the number of objective parameters, and $H$ is the hidden size of the LSTM ($H = 20$ in this study). As the batch size increases, the computational cost approaches that of traditional Bayesian neural networks trained through SGD. Due to the coordinate-wise LSTM, the space complexity (memory cost) of BL2O is only $O(H^2)$, which remains constant as the number of objective parameters varies. Both the time and the space complexity of BL2O are the same as those of DM LSTM (Andrychowicz et al., 2016), while those of Adam are $O(KBN_e)$ and $O(N_e)$, respectively." }, { "heading": "4 EXPERIMENTS", "text": "We test our BL2O model extensively on optimizing non-convex test functions, energy functions in protein-protein interactions, loss functions in image classification, and loss functions in data privacy attack. We compare BL2O to three non-Bayesian methods, Adam, Particle Swarm Optimization (PSO) (Kennedy & Eberhart, 1995) and DM LSTM (Andrychowicz et al., 2016), and a recently published Bayesian method, BAL (Cao & Shen, 2020). All algorithms are run 10,000 times with random initial points to obtain the empirical posterior distributions. During each run, the hyperparameters of Adam and PSO are sampled from Table 4 in Appendix A. Out of the 10,000 solutions, we choose the one with the lowest function value as the final solution ($\hat{w}$).\nGenerally, for optimization performance, we assess the distance between the final solution and the global optimum, $\|\hat{w} - w^*\|$; the lower the distance, the better the solution quality. For uncertainty quantification, we assess the upper bound $r_\sigma$ and the real confidence $\epsilon_\sigma$ at a fixed confidence level $\sigma$, where $\epsilon_\sigma$ is defined as the fraction of the 10,000 solutions that actually fall within the bounded region.
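These two UQ metrics can be computed directly from the empirical solutions; below is a minimal sketch (hypothetical helpers, not the paper's code, assuming the true optimum `w_star` is known, as it is for the benchmark test functions):

```python
import numpy as np

def predicted_radius(posterior_samples, w_hat, sigma=0.9):
    """A generic way to obtain r_sigma from a method's posterior over w*:
    the sigma-quantile of ||w_hat - w*|| under posterior samples of w*.
    (Assumed form; each method derives r_sigma from its own posterior.)"""
    dists = np.linalg.norm(np.asarray(posterior_samples) - w_hat, axis=1)
    return float(np.quantile(dists, sigma))

def realized_confidence(solutions, w_star, r_sigma):
    """eps_sigma: fraction of the per-run solutions whose distance to the
    true optimum w* falls within the predicted radius r_sigma; a
    well-calibrated method has eps_sigma close to sigma."""
    dists = np.linalg.norm(np.asarray(solutions) - w_star, axis=1)
    return float(np.mean(dists <= r_sigma))
```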
As a result, BL2O has shown the best performance in both optimization and UQ.\n6 12 18 24 30 Dimension\n2 4 6 8 10 12 14 ||ŵ − w ̂ | | 2\nRastrigin Adam PSO BAL DM_LSTM BL2O\n6 12 18 24 30 Dimension\n2 6\n10 14\n0.7 0.8 0.9 1.0 ε 0 .9\nr 0 .9\nRastrigin\n6 12 18 24 30 Dimension\n2 6\n10 14\n0.7 0.8 0.9 1.0 ε 0 .8\nr 0 .8\nRastrigin\n6 12 18 24 30 Dimension\n0\n2\n4\n6\n8\n10\n||ŵ − w\n̂ | | 2\nAckley\n6 12 18 24 30 Dimension\n2 6\n10\n0.7 0.8 0.9 1.0 ε 0 .9\nr 0 .9\nAckley\n6 12 18 24 30 Dimension\n2 6\n10\n0.7 0.8 0.9 1.0 ε 0 .8\nr 0 .8\nAckley\nGriewank\nGriewank\nGriewank\nComparison in optimizing energy functions for protein docking. We then apply BL2O to a bioinformatics application: predicting the 3D structures of protein-complexes (Smith & Sternberg, 2002), called protein docking. Ab initio protein docking can be recast as optimizing a noisy and expensive energy function in a high-dimensional conformational space (Cao & Shen, 2020): x∗ = arg minx f(x). While solving such optimization problems still remains difficult, quantifying the uncertainty of resulting optima (docking solutions) is even more challenging. In this section, we apply our BL2O to optimization and uncertainty quantification in protein docking and compare with a state-of-the-art method BAL (Cao & Shen, 2020).\nWe describe the detailed settings of BL2O on protein docking in Appendix C. From BL2O, we obtain a posterior distribution p(w∗|D) over the native structurew∗ and the lowest energy structure, ŵ. In protein docking, the quality of a predicted structure is based on the distance to the native structure (the global optimum): ||ŵ −w∗||. For UQ, we assess the two-sided confidence interval at σ = 0.9 as P (l0.9 6 ||ŵ −w∗|| 6 r0.9) = 0.9.\nIn Table 1, we assess ||ŵ−w∗||, r0.9− l0.9 and whether ||ŵ−w∗|| is within the confidence interval. For optimization, BL2O clearly outperforms BAL in two medium cases while performing slightly worse in the other cases. Yet for UQ, BL2O shows clearly superior performance over BAL in all cases, with accurate or/and tight confidence intervals. We also visulize the posterior distributions over ||ŵ − w∗|| for protein 1JMO 4. As shown in Fig 2, we can see compared to that of BAL, BL2O’s distribution has real ||ŵ −w∗|| within the 90% C.I. and smaller variance. More posterior distributions are shown in Appendix D.\nComparison in optimizing loss functions in image classification. We then test the performance of optimizing the loss function in image classification on the MNIST dataset. We apply a 2- layers MLP network as the classifier. The competing methods include Adam, DM LSTM and two Bayesian neural network methods: variational inference (VI) (Blundell et al., 2015) and Learnable Bernoulli Dropout (LBO) (Boluki et al., 2020). Moreover, for DM LSTM and BL2O, we apply a trick during the optimizer training called curriculum learning (CL) and introduce it in detail in Appendix E for training over long-term iterations. We call DM LSTM with CL as DM LSTM C and BL2O with CL as BL2O C.\nThe assessment of the optimization and UQ for this machine learning task is different from that for optimization before. In terms of optimization, we assess the classification accuracy on the test set. In terms of UQ, we measure two metrics that assess the robustness and trustworthiness of the classifier: the in-domain calibration error and the out-of-domain detection rate.\nWe first compare the accuracy on the testing set among different methods. 
As shown in Table 2, Adam, DM LSTM C and BL2O C have almost the same best performance. The significant improvement from DM LSTM to DM LSTM C, and from BL2O to BL2O C, shows the big advantage of curriculum learning in learning to optimize. In conclusion, BL2O C had accuracy on par with Adam and DM LSTM C on the MNIST dataset.\nHowever, classification models must not only be accurate, but also indicate when they are likely to be incorrect. Confidence calibration, i.e., whether the predicted probability estimates the true likelihood of each prediction, is thus also important for classification models. In the ideal case, the maximum output probability (MaxConfidence) for each test sample should be equal to the prediction accuracy for that sample. To assess the calibration of each method, we split the test set into 20 equal-sized bins and measure the calibration error as the average discrepancy between accuracy and MaxConfidence in each bin. As seen in Table 2, among all methods compared, BL2O C and BL2O had the least calibration error. The plot of Acc. vs. MaxConf. is also shown in Fig. 4 in Appendix E.\nWe also inspect the out-of-domain detection of BL2O, BL2O C and the competing methods. We train all models on the data belonging to the first 5 classes of the MNIST training dataset (the last layer of the optimizee is modified to have 5 rather than 10 neurons) and test them on the remaining samples from the other 5 classes. An ideal model would predict a uniform distribution over the 5 wrong classes. Therefore, we define the out-of-domain detection rate at threshold t, q_t, as the percentage of test samples with maximum class confidence below t. The larger q_t, the better the out-of-domain detection. As shown in Table 2, BL2O and BL2O C show superior performance over all competing methods. Notably, BL2O without curriculum learning had much better out-of-domain detection rates than BL2O with curriculum learning.\nComparison in optimizing loss functions for data privacy attack. We finally apply our model to an application that critically needs UQ. As many machine learning models are deployed publicly, it is important to avoid leaking private sensitive information, such as financial data, health data and so on. Data privacy attack (Nasr et al., 2018) studies this problem by playing the role of a hacker and attacking machine-learning models to quantify the risk of privacy leakage. Better attacks would help models to be better prepared for privacy defense.\nWe use the model and dataset in (Cao et al., 2019b), where each input has 9 features involving patient genetic information and the output p is the probability of the clinical significance (having cancer or not) for a patient. We study the following model inversion attack (Fredrikson et al., 2015): given 5 of the 9 features, w′ ∈ [0, 1]^5, and the label p of each patient, we want to recover the remaining 4 features w* ∈ [0, 1]^4 (potentially sensitive patient information). Therefore, for each patient, the objective is w* = arg min_{w ∈ [0,1]^4} (m(w′, w) − p)^2, where w* is the ground truth of w and m is the trained predictive model. The closeness between the predicted and the real input features quantifies the risk of information leakage and the quality of the attack. A minimal sketch of this attack objective is given below. We compare BL2O with Adam, PSO, BAL and DM LSTM on optimization and UQ on all test cases in (Cao et al., 2019b).
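As an illustration of the attack objective above, the following is a minimal sketch of model inversion via projected gradient descent. The predictive model m here is a hypothetical stand-in (the actual model of (Cao et al., 2019b) is not reproduced), and gradients are estimated by finite differences so the sketch also applies to black-box models.

```python
import numpy as np

def model_inversion_attack(m, w_known, p, steps=500, lr=0.1, seed=0):
    """Sketch: recover 4 hidden features w in [0,1]^4 from the 5 known
    features w_known and the label p by projected gradient descent on
    (m(w_known, w) - p)^2. Minimizes the attack loss; the recovered w
    matches w_true only up to the level set of m."""
    rng = np.random.default_rng(seed)
    w, eps = rng.uniform(size=4), 1e-4
    loss = lambda v: (m(w_known, v) - p) ** 2
    for _ in range(steps):
        # central finite-difference gradient estimate (black-box friendly)
        g = np.array([(loss(w + eps * e) - loss(w - eps * e)) / (2 * eps)
                      for e in np.eye(4)])
        w = np.clip(w - lr * g, 0.0, 1.0)  # project back onto [0,1]^4
    return w

# stand-in predictive model: logistic regression on the 9 concatenated features
beta = np.linspace(-1.0, 1.0, 9)
m = lambda wk, w: 1.0 / (1.0 + np.exp(-np.concatenate([wk, w]) @ beta))

w_known = np.array([0.2, 0.8, 0.5, 0.1, 0.9])
w_true = np.array([0.3, 0.6, 0.4, 0.7])
p = m(w_known, w_true)
print(model_inversion_attack(m, w_known, p))
```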
The meta-training objectives for BL2O and DM LSTM are taken from the training set in (Cao et al., 2019b).\nAs shown in Table 3, BL2O has shown the best performance in both optimization and UQ compared to all competing methods. It is noteworthy that the learned optimizers (DM LSTM and BL2O) had much better optimization performance than the pre-defined optimizers, and the Bayesian methods (BAL and BL2O) had significantly better UQ performance than the non-Bayesian methods. BL2O combines the advantages of both learned and Bayesian optimizers to achieve the best performance." }, { "heading": "5 CONCLUSION", "text": "Current optimization algorithms, even with uncertainty-awareness, do not address the uncertainty arising within the optimizer itself. To close this gap, we parameterize the update rule as a neural network and build a Boltzmann-shaped posterior over the algorithmic space. We apply our Bayesian Learning-to-Optimize (BL2O) framework to optimize test functions, energy functions in protein docking, loss functions in image classification and loss functions in data privacy attack. The empirical results demonstrate that BL2O outperforms state-of-the-art methods in both optimization and uncertainty quantification, as well as in calibration and out-of-domain detection for classification." }, { "heading": "A OPTIMIZER DISTRIBUTION SETTINGS FOR ADAM AND PSO", "text": "" }, { "heading": "B ANALYTIC FORMS AND META-TRAINING SETS OF TEST FUNCTIONS", "text": "" }, { "heading": "C SETTINGS FOR PROTEIN DOCKING EXPERIMENTS", "text": "We calculate the energy function (the objective function f(x)) in a CHARMM 19 force field as in (Moal & Bates, 2010). 25 protein-protein complexes are chosen from the protein docking benchmark set 4.0 (Hwang et al., 2010) as the training set, as shown in Table 6. For each target, we choose 5 starting points (the top-5 models from ZDOCK (Pierce et al., 2014)). In total, our training set includes 125 samples. Moreover, we parameterize the search space as R^12 as in BAL (Cao & Shen, 2020). The resulting f(x) is fully differentiable in the search space. We only consider 100 interface atoms due to computational cost. The number of iterations for one training epoch is 600 and in total we have 5,000 training epochs. Both BL2O and BAL run 600 iterations during the testing stage. For a fair comparison, after optimization, we rescore the BL2O samples for UQ using the scoring function (random forest) of BAL." }, { "heading": "D POSTERIOR DISTRIBUTIONS IN PROTEIN DOCKING", "text": "" }, { "heading": "E CURRICULUM LEARNING AND CALIBRATION FIGURE IN IMAGE CLASSIFICATION", "text": "A common issue in learning to optimize for neural network training is that optimizer training usually covers hundreds of iterations, while training a neural network usually costs thousands or tens of thousands of iterations. For instance, for the MNIST training dataset consisting of 50,000 images, training a neural network for 100 epochs corresponds to almost 20,000 iterations with a batch size of 128.\nHundreds of iterations suffice for the first few epochs of optimizer training, since initially we only focus on decreasing the loss within the first hundreds of iterations. However, as training goes on, we would like the later iterations to also decrease the training loss, and during this stage hundreds of iterations are clearly not enough.\nIn order to overcome this issue, we borrow the idea of curriculum learning from (Bengio et al., 2009).
Specifically, we set a list for the number of iterations, [100, 200, 500, 1000, 1500, 2000, 2500, 3000], and we gradually increase the number of iterations along this list every 100 epochs of optimizer training if the optimizee loss is decreasing. Once the number reaches 3000, it no longer changes until training ends. A minimal sketch of this schedule is given below." }, { "heading": "F ABLATION STUDY.", "text": "In order to validate various design choices, we perform the ablation study as follows:\n• B1: We use a coordinate-wise gated recurrent unit (GRU) network as the optimizer architecture. We train our model directly on the Bayesian loss (Eq. 9) without non-Bayesian training.\n• B2: We replace the GRU network with the LSTM network.\n• BL2O: We add the non-Bayesian training stage to find a local optimum of interest first before training on the Bayesian loss.\nWe test these three models on the Rastrigin test function. As shown in Fig. 5, B2 (LSTM) has slightly better performance in both optimization and UQ compared to the GRU. But BL2O has shown superior performance compared to both B1 and B2 in optimization and UQ. These results clearly demonstrate that the two training stages must be coupled together to empower the local posterior.
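As a concrete illustration of the curriculum schedule described in Appendix E, the following is a minimal sketch with a toy training loop; the way a decreasing optimizee loss is detected in the actual implementation is an assumption here.

```python
SCHEDULE = [100, 200, 500, 1000, 1500, 2000, 2500, 3000]

def next_num_iterations(stage, epoch, loss_decreasing):
    """Every 100 optimizer-training epochs, advance one step along the
    iteration schedule if the optimizee loss is still decreasing; once
    the last entry (3000) is reached, the number no longer changes."""
    if epoch > 0 and epoch % 100 == 0 and loss_decreasing:
        stage = min(stage + 1, len(SCHEDULE) - 1)
    return stage, SCHEDULE[stage]

# toy usage: the loss is assumed to keep decreasing
stage = 0
for epoch in range(1001):
    stage, n_iters = next_num_iterations(stage, epoch, loss_decreasing=True)
print(stage, n_iters)  # after 1000 epochs the stage is capped at the last entry: 3000
```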
2020
null
SP:18121c6a208ea58c09b24e3af951a17b9ed3cbc3
[ "This paper studies the asymptotic convergence properties of (population-level) policy gradient methods with two-layer neural networks, softmax parametrization, and entropic regularization, in the mean-field regime. By modelling the hidden layer as a probability distribution over the parameter space, the training dynamics of policy gradient methods can be written as a partial differential equation. Under certain regularity conditions, the paper shows that if the training dynamics converge to a stationary point, this limiting point is a globally optimal policy. The paper also presents results for finite-time convergence of the training dynamics for neural networks to the mean-field limit." ]
We study the problem of policy optimization for infinite-horizon discounted Markov Decision Processes with softmax policy and nonlinear function approximation trained with policy gradient algorithms. We concentrate on the training dynamics in the mean-field regime, modeling e.g., the behavior of wide single hidden layer neural networks, when exploration is encouraged through entropy regularization. The dynamics of these models is established as a Wasserstein gradient flow of distributions in parameter space. We further prove global optimality of the fixed points of this dynamics under mild conditions on their initialization.
[ { "affiliations": [], "name": "Andrea Agazzi" }, { "affiliations": [], "name": "Jianfeng Lu" } ]
[ { "authors": [ "Alekh Agarwal", "Sham M Kakade", "Jason D Lee", "Gaurav Mahajan" ], "title": "Optimality and approximation with policy gradient methods in markov decision processes", "venue": null, "year": 1908 }, { "authors": [ "Alekh Agarwal", "Mikael Henaff", "Sham Kakade", "Wen Sun" ], "title": "Pc-pg: Policy cover directed exploration for provable policy gradient learning", "venue": "arXiv preprint arXiv:2007.08459,", "year": 2020 }, { "authors": [ "Andrea Agazzi", "Jianfeng Lu" ], "title": "Temporal-difference learning for nonlinear value function approximation in the lazy training regime", "venue": "arXiv preprint arXiv:1905.10917,", "year": 2019 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Zhao Song" ], "title": "A convergence theory for deep learning via overparameterization", "venue": "arXiv preprint arXiv:1811.03962,", "year": 2018 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning and generalization in overparameterized neural networks, going beyond two layers", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Luigi Ambrosio", "Nicola Gigli", "Giuseppe Savaré" ], "title": "Gradient flows: in metric spaces and in the space of probability measures", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "A.R. Barron" ], "title": "Universal approximation bounds for superpositions of a sigmoidal function", "venue": "IEEE Transactions on Information Theory,", "year": 1993 }, { "authors": [ "Jalaj Bhandari", "Daniel Russo" ], "title": "Global optimality guarantees for policy gradient methods", "venue": "arXiv preprint arXiv:1906.01786,", "year": 2019 }, { "authors": [ "Vivek S Borkar" ], "title": "Stochastic approximation: a dynamical systems viewpoint, volume 48", "venue": null, "year": 2009 }, { "authors": [ "Qi Cai", "Zhuoran Yang", "Jason D Lee", "Zhaoran Wang" ], "title": "Neural temporal-difference learning converges to global optima", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Shicong Cen", "Chen Cheng", "Yuxin Chen", "Yuting Wei", "Yuejie Chi" ], "title": "Fast global convergence of natural policy gradient methods with entropy regularization", "venue": "arXiv preprint arXiv:2007.06558,", "year": 2020 }, { "authors": [ "Lenaic Chizat" ], "title": "Sparse optimization on measures with over-parameterized gradient descent", "venue": "arXiv preprint arXiv:1907.10300,", "year": 2019 }, { "authors": [ "Lénaı̈c Chizat", "Francis Bach" ], "title": "On the global convergence of gradient descent for over-parameterized models using optimal transport", "venue": "In Proceedings of the 32Nd International Conference on Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Lenaic Chizat", "Francis Bach" ], "title": "Implicit bias of gradient descent for wide two-layer neural networks trained with the logistic loss", "venue": "arXiv preprint arXiv:2002.04486,", "year": 2020 }, { "authors": [ "Lenaic Chizat", "Edouard Oyallon", "Francis Bach" ], "title": "On lazy training in differentiable programming", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "G. Cybenko" ], "title": "Approximation by superpositions of a sigmoidal function", "venue": "Mathematics of Control, Signals and Systems,", "year": 1989 }, { "authors": [ "S.S. Du", "X. Zhai", "B. Poczos", "A. 
Singh" ], "title": "Gradient descent provably optimizes over-parameterized neural networks, 2018", "venue": null, "year": 2054 }, { "authors": [ "Simon Du", "Jason Lee", "Haochuan Li", "Liwei Wang", "Xiyu Zhai" ], "title": "Gradient descent finds global minima of deep neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Weinan E", "Chao Ma", "Lei Wu" ], "title": "Barron spaces and the compositional function spaces for neural network models", "venue": null, "year": 1906 }, { "authors": [ "Behrooz Ghorbani", "Song Mei", "Theodor Misiakiewicz", "Andrea Montanari" ], "title": "Linearized two-layers neural networks in high dimension, 2019", "venue": "arXiv preprint arXiv:1904.12191", "year": 1904 }, { "authors": [ "Behrooz Ghorbani", "Song Mei", "Theodor Misiakiewicz", "Andrea Montanari" ], "title": "When do neural networks outperform kernel methods", "venue": "arXiv preprint arXiv:2006.13409,", "year": 2020 }, { "authors": [ "Tuomas Haarnoja", "Sehoon Ha", "Aurick Zhou", "Jie Tan", "George Tucker", "Sergey Levine" ], "title": "Learning to walk via deep reinforcement learning", "venue": "arXiv preprint arXiv:1812.11103,", "year": 2018 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Kurt Hornik" ], "title": "Approximation capabilities of multilayer feedforward networks", "venue": "Neural Networks,", "year": 1991 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clément Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Adel Javanmard", "Marco Mondelli", "Andrea Montanari" ], "title": "Analysis of a two-layer neural network via displacement convexity", "venue": "arXiv preprint arXiv:1901.01375,", "year": 2019 }, { "authors": [ "Sham Kakade", "John Langford" ], "title": "Approximately optimal approximate reinforcement learning", "venue": "In Proceedings of the Nineteenth International Conference on Machine Learning,", "year": 2002 }, { "authors": [ "Jaehoon Lee", "Lechao Xiao", "Samuel Schoenholz", "Yasaman Bahri", "Roman Novak", "Jascha Sohl-Dickstein", "Jeffrey Pennington" ], "title": "Wide neural networks of any depth evolve as linear models under gradient descent", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "Jincheng Mei", "Chenjun Xiao", "Csaba Szepesvari", "Dale Schuurmans" ], "title": "On the global convergence rates of softmax policy gradient methods", "venue": "arXiv preprint arXiv:2005.06392,", "year": 2020 }, { "authors": [ "Song Mei", "Andrea Montanari", "Phan-Minh Nguyen" ], "title": "A mean field view of the landscape of two-layer neural networks", "venue": "Proceedings of the National Academy of Sciences,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Alex Graves", "Ioannis Antonoglou", "Daan Wierstra", "Martin Riedmiller" ], "title": "Playing Atari with deep reinforcement learning", "venue": "In NIPS Deep Learning Workshop", "year": 2013 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A. Rusu", "Joel Veness", "Marc G. Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K. 
Fidjeland", "Georg Ostrovski", "Stig Petersen", "Charles Beattie", "Amir Sadik", "Ioannis Antonoglou", "Helen King", "Dharshan Kumaran", "Daan Wierstra", "Shane Legg", "Demis Hassabis" ], "title": "Human-level control through deep reinforcement learning", "venue": "Nature, (7540):529–533,", "year": 2015 }, { "authors": [ "Ofir Nachum", "Mohammad Norouzi", "Kelvin Xu", "Dale Schuurmans" ], "title": "Bridging the gap between value and policy based reinforcement learning", "venue": "Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Phan-Minh Nguyen", "Huy Tuan Pham" ], "title": "A rigorous framework for the mean field limit of multilayer neural networks", "venue": "arXiv preprint arXiv:2001.11443,", "year": 2020 }, { "authors": [ "Samet Oymak", "Mahdi Soltanolkotabi" ], "title": "Towards moderate overparameterization: global convergence guarantees for training shallow neural networks", "venue": "IEEE Journal on Selected Areas in Information Theory,", "year": 2020 }, { "authors": [ "Grant Rotskoff", "Eric Vanden-Eijnden" ], "title": "Parameters as interacting particles: long time convergence and asymptotic error scaling of neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Grant Rotskoff", "Samy Jelassi", "Joan Bruna", "Eric Vanden-Eijnden" ], "title": "Neuron birth-death dynamics accelerates gradient descent and converges asymptotically", "venue": "In International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "David Silver", "Aja Huang", "Christopher J. Maddison", "Arthur Guez", "Laurent Sifre", "George van den Driessche", "Julian Schrittwieser", "Ioannis Antonoglou", "Veda Panneershelvam", "Marc Lanctot", "Sander Dieleman", "Dominik Grewe", "John Nham", "Nal Kalchbrenner", "Ilya Sutskever", "Timothy Lillicrap", "Madeleine Leach", "Koray Kavukcuoglu", "Thore Graepel", "Demis Hassabis" ], "title": "Mastering the game of Go with deep neural networks and tree search. 2016", "venue": null, "year": 2016 }, { "authors": [ "David Silver", "Julian Schrittwieser", "Karen Simonyan", "Ioannis Antonoglou", "Aja Huang", "Arthur Guez", "Thomas Hubert", "Lucas Baker", "Matthew Lai", "Adrian Bolton", "Yutian Chen", "Timothy Lillicrap", "Fan Hui", "Laurent Sifre", "George van den Driessche", "Thore Graepel", "Demis Hassabis" ], "title": "Mastering the game of Go without human knowledge", "venue": null, "year": 2017 }, { "authors": [ "David Silver", "Thomas Hubert", "Julian Schrittwieser", "Ioannis Antonoglou", "Matthew Lai", "Arthur Guez", "Marc Lanctot", "Laurent Sifre", "Dharshan Kumaran", "Thore Graepel", "Timothy Lillicrap", "Karen Simonyan", "Demis Hassabis" ], "title": "A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play", "venue": null, "year": 2018 }, { "authors": [ "R.S. Sutton", "A.G. 
Barto" ], "title": "Reinforcement Learning: An Introduction", "venue": null, "year": 2018 }, { "authors": [ "Richard S Sutton", "David A McAllester", "Satinder P Singh", "Yishay Mansour" ], "title": "Policy gradient methods for reinforcement learning with function approximation", "venue": "In Advances in neural information processing systems,", "year": 2000 }, { "authors": [ "Alain-Sol Sznitman" ], "title": "Topics in propagation of chaos", "venue": "In Ecole d’été de probabilités de Saint-Flour XIX—1989,", "year": 1991 }, { "authors": [ "Oriol Vinyals", "Igor Babuschkin", "Wojciech M Czarnecki", "Michaël Mathieu", "Andrew Dudzik", "Junyoung Chung", "David H Choi", "Richard Powell", "Timo Ewalds", "Petko Georgiev" ], "title": "Grandmaster level in starcraft ii using multi-agent reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Lingxiao Wang", "Qi Cai", "Zhuoran Yang", "Zhaoran Wang" ], "title": "Neural policy gradient methods: Global optimality and rates of convergence", "venue": null, "year": 1909 }, { "authors": [ "C. Wei", "J.D. Lee", "Q. Liu", "T. Ma" ], "title": "On the margin theory of feedforward neural networks, 2018", "venue": null, "year": 2018 }, { "authors": [ "Ronald J Williams", "Jing Peng" ], "title": "Function optimization using connectionist reinforcement learning algorithms", "venue": "Connection Science,", "year": 1991 }, { "authors": [ "Stephan Wojtowytsch" ], "title": "On the convergence of gradient descent training for two-layer relu-networks in the mean field regime", "venue": "arXiv preprint arXiv:2005.13530,", "year": 2020 }, { "authors": [ "Kaiqing Zhang", "Zhuoran Yang", "Tamer Başar" ], "title": "Multi-agent reinforcement learning: A selective overview of theories and algorithms", "venue": "arXiv preprint arXiv:1911.10635,", "year": 2019 }, { "authors": [ "Yufeng Zhang", "Qi Cai", "Zhuoran Yang", "Yongxin Chen", "Zhaoran Wang" ], "title": "Can temporal-difference and q-learning learn representation? a mean-field theory", "venue": "arXiv preprint arXiv:2006.04761,", "year": 2020 }, { "authors": [ "D. Zou", "Y. Cao", "D. Zhou", "Q. Gu" ], "title": "Stochastic gradient descent optimizes over-parametererized deep ReLU networks, 2018", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "In recent years, deep reinforcement learning has revolutionized the world of Artificial Intelligence by outperforming humans in a multitude of highly complex tasks and achieving breakthroughs that were deemed unthinkable at least for the next decade. Spectacular examples of such revolutionary potential have appeared over the last few years, with reinforcement learning algorithms mastering games and tasks of increasing complexity, from learning to walk to the games of Go and Starcraft (Mnih et al., 2013; 2015; Silver et al., 2016; 2017; 2018; Haarnoja et al., 2018a; Vinyals et al., 2019). In most cases, the main workhorse allowing artificial intelligence to pass such unprecedented milestones was a variation of a fundamental method to train reinforcement learning models: policy gradient (PG) algorithms (Sutton et al., 2000). This algorithm has a disarmingly simple approach to the optimization problem at hand: given a parametrization of the policy, it updates the parameters in the direction of steepest ascent of the associated integrated value function. Impressive progress has been made recently in the understanding of the convergence and optimization properties of this class of algorithms in the tabular setting (Agarwal et al., 2019; Cen et al., 2020; Bhandari & Russo, 2019), in particular leveraging the natural tradeoff between exploration and exploitation offered for entropy-regularized rewards by softmax policies (Haarnoja et al., 2018b; Mei et al., 2020). However, this simple algorithm alone is not sufficient to explain the multitude of recent breakthroughs in this field: in application domains such as Starcraft, robotics or movement planning, the space of possible states and actions are exceedingly large – or even continuous – and can therefore not be represented efficiently by tabular policies (Haarnoja et al., 2018a). Consequently, the recent impressive successes of artificial intelligence would be impossible without the natural choice of neural networks to approximate value functions and / or policy functions in reinforcement learning algorithms (Mnih et al., 2015; Sutton et al., 2000). While neural networks, in particular deep neural networks, provide a powerful and versatile tool to approximate high dimensional functions on continuous spaces (Cybenko, 1989; Hornik, 1991; Barron, 1993), their intrinsic nonlinearity poses significant obstacles to the theoretical understanding of their training and optimization properties. For instance, it is known that the optimization landscape of these models is highly nonconvex, preventing the use of most theoretical tools from classical optimization theory. For this reason, the unprecedented success of neural networks in artificial intelligence stands in contrast with the poor understanding of these methods from a theoretical perspective. Indeed, even in the supervised setting, which can be viewed as a special case of reinforcement learning, deep neural networks are still far from being understood despite having been an important and fashionable research focus in recent years. 
Only recently, a theory of neural network learning has started to emerge, including recent works on mean-field point of view of training dynamics (Mei et al.,\n2018; Rotskoff & Vanden-Eijnden, 2018; Rotskoff et al., 2019; Wei et al., 2018; Chizat & Bach, 2018) and on linearized dynamics in the over-parametrized regime (Jacot et al., 2018; Allen-Zhu et al., 2018; Du et al., 2018; 2019; Zou et al., 2018; Allen-Zhu et al., 2019; Chizat et al., 2019; Oymak & Soltanolkotabi, 2020; Ghorbani et al., 2019; Lee et al., 2019). More specifically to the context of reinforcement learning, some works focusing on value-based learning (Agazzi & Lu, 2019; Cai et al., 2019; Zhang et al., 2020), and others exploring the dynamics of policy gradient algorithms (Zhang et al., 2019) have recently appeared. Despite this progress, the theoretical understanding of deep reinforcement learning still poses a significant challenge to the theoretical machine learning community, and it is of crucial importance to understand the convergence and optimization properties of such algorithms to bridge the gap between theory and practice." }, { "heading": "CONTRIBUTIONS.", "text": "The main goal of this work is to investigate entropy-regularized policy gradient dynamics for wide, single hidden layer neural networks. In particular, we give the following contributions:\n• We give a mean-field formulation of policy gradient dynamics in parameter space, describing the evolution of neural network parameters in the form of a transport partial differential equation (PDE). We prove convergence of the particle dynamics to their mean-field counterpart. We further explore the structure of this problem by showing that such PDE is a gradient flow in the Wasserstein space for the appropriate energy functional.\n• We investigate the convergence properties of the above dynamics in the space of measures. In particular, we prove that under some mild assumptions on the initialization of the neural network parameters and on the approximating power of the nonlinearity, all fixed points of the dynamics are global optima, i.e., the approximate policy learned by the neural network is optimal," }, { "heading": "RELATED WORKS.", "text": "Recent progress in the understanding of the parametric dynamics of simple neural networks trained with gradient descent in the supervised setting has been made in (Mei et al., 2018; Rotskoff & Vanden-Eijnden, 2018; Wei et al., 2018; Chizat, 2019; Chizat & Bach, 2020). These results have further been extended to the multilayer setting in (Nguyen & Pham, 2020). In particular, the paper (Chizat & Bach, 2018) proves optimality of fixed points for wide single layer neural networks leveraging a Wasserstein gradient flow structure and the strong convexity of the loss functional WRT the predictor. We extend these results to the reinforcement learning framework, where the convexity that is heavily leveraged in (Chizat & Bach, 2018) is lost. We bypass this issue by requiring a sufficient expressivity of the used nonlinear representation, allowing to characterize global minimizer as optimal approximators. The convergence and optimality of policy gradient algorithms (including in the entropy-regularized setting) is investigated in the recent papers (Bhandari & Russo, 2019; Mei et al., 2020; Cen et al., 2020; Agarwal et al., 2019). These references establish convergence estimates through gradient domination bounds. 
In (Mei et al., 2020; Cen et al., 2020) such results are limited to the tabular case, while (Agarwal et al., 2019; 2020) also discuss neural softmax policy classes, but under a different algorithmic update and under certain well-conditioning assumptions along training. Furthermore, all these results heavily leverage the finiteness of the action space. In contrast, this paper focuses on the continuous state and action setting with nonlinear function approximation. Further recent works discussing convergence properties of reinforcement learning algorithms with function approximation via neural networks include (Zhang et al., 2019; Cai et al., 2019). These results only hold for finite action spaces, and are obtained in the regime where the network behaves essentially like a linear model (known as the neural or lazy training regime), in contrast to the results of this paper, which considers training in a nonlinear regime. We also note the work (Wang et al., 2019) where the action space is continuous but the training is again in an approximately linear regime." }, { "heading": "2 MARKOV DECISION PROCESSES AND POLICY GRADIENTS", "text": "We denote a Markov Decision Process (MDP) by the 5-tuple (S, A, P, r, γ), where S is the state space, A is the action space, P = {P(s, a, s′)}_{s,s′∈S, a∈A} is a Markov transition kernel, r(s, a, s′) is the real-valued, bounded and continuous immediate reward function and γ ∈ (0, 1) is a discount factor. We will consider a probabilistic policy, mapping a state to a probability distribution on the action space, so that π : S → M1+(A), where M1+(A) denotes the space of probability measures on A, and denote for any s ∈ S the corresponding density π(s, ·) : A → R_+. The policy defines a state-to-state transition operator\nP_π(s, ds′) = ∫_A P(s, a, ds′) π(s, da),\nand we assume that P_π is Lipschitz continuous as an operator M1+(S) → M1+(S) WRT the policy. We further encourage exploration by defining (relative) entropy-regularized rewards (Williams & Peng, 1991)\nR_τ(s, a, s′) = r(s, a, s′) − τ D_KL(π(s, ·); π̄(·)),\nwhere D_KL denotes the relative entropy, π̄ is a reference measure and τ indicates the strength of regularization. Throughout, we choose π̄ to be the Lebesgue measure on A, which we assume, like S, to be a compact subset of the Euclidean space. This regularization encourages exploration and absolute continuity of the policy WRT the Lebesgue measure. Consequently, with some abuse of notation, we use throughout the same notation for a distribution and its density in phase space. Note that the original, unregularized MDP can be recovered in the limit τ → 0. In this context, given a policy π the associated value function V_π : S → R maps each state to the infinite-horizon expected discounted reward obtained by following the policy π and the Markov process defined by P:\nV_π(s) = E_π[ ∑_{t=0}^∞ γ^t R_τ(s_t, a_t, s_{t+1}) | s_0 = s ]  (1)\n= E_π[ ∑_{t=0}^∞ γ^t ( r(s_t, a_t, s_{t+1}) − τ D_KL(π(s_t, ·); π̄(·)) ) | s_0 = s ],\nwhere E_π[· | s_0 = s] denotes the expectation over the stochastic process s_t starting at s_0 = s and following the (stochastic) dynamics defined recursively by the transition operator P_π(s, ds′) = ∫ P(s, a, ds′) π(s, da). Correspondingly, we define the Q-function Q_π : S × A → R as\nQ_π(s, a) = E_π[ r(s_0, a_0, s_1) + ∑_{t=1}^∞ γ^t R_τ(s_t, a_t, s_{t+1}) | s_0 = s, a_0 = a ]\n= r̄(s, a) + γ E_π[ V_π(s_1) | s_0 = s, a_0 = a ],  (2)\nwhere r̄(s, a) = E[r(s, a, s′)] is the average reward from (s, a).
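As a concrete reading of eqs. (1)-(2), the following is a minimal sketch of soft (entropy-regularized) policy evaluation on a discretization of S = A = [0, 1]. The grid size, reward, transition kernel and policy below are illustrative placeholders, not the paper's setup.

```python
import numpy as np

n, gamma, tau = 50, 0.9, 0.2
grid = np.linspace(0.0, 1.0, n)
da = 1.0 / n                                  # Lebesgue cell width on A = [0, 1]

r_bar = np.cos(2 * np.pi * (grid[:, None] - grid[None, :]))  # toy r_bar(s, a)
P = np.full((n, n, n), 1.0 / n)               # P(s, a, s'): uniform transition probabilities

logits = np.zeros((n, n))                     # energies f(s, a) of a softmax policy
pi = np.exp(logits) / (np.exp(logits) * da).sum(axis=1, keepdims=True)  # density on A

V = np.zeros(n)
for _ in range(500):                          # fixed-point iteration of eqs. (1)-(2)
    Q = r_bar + gamma * np.einsum('sat,t->sa', P, V)   # Q = r_bar + gamma E[V(s')]
    kl = (pi * np.log(pi) * da).sum(axis=1)   # KL(pi(s, .) || Lebesgue)
    V = (pi * Q * da).sum(axis=1) - tau * kl  # V(s) = E_pi[Q] - tau * KL
print(V[:3])
```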
Conversely, from the definitions we have the identity V_π(s) = E_π[Q_π(s_0, a_0) | s_0 = s] − τ D_KL(π(s, ·); π̄(·)). We are interested in learning the optimal policy π* of a given MDP (S, A, P, r, γ), which satisfies for all s ∈ S\nV_{π*}(s) = max_{π : S → M1+(A)} V_π(s).  (3)\nMore specifically, we would like to estimate this function through a family of approximators π_w : S → M1+(A) parametrized by a vector w ∈ W := R^p. Note that since we consider entropy-regularized rewards, the optimal policy will be a probabilistic policy (given as a Boltzmann distribution) instead of a deterministic one. A popular algorithm to solve this problem is the policy gradient algorithm (Sutton & Barto, 2018). Starting from an initial condition w(0) ∈ W, this algorithm updates the parameters w of the predictor in the direction of steepest ascent of the average reward\nw(t+1) := w(t) + β_t ∇_w Ẽ_{s∼ρ_0} V_π(s),  (4)\nfor a fixed absolutely continuous distribution of initial states ρ_0 ∈ M1+(S) and a sequence of step sizes {β_t}_t. Here Ẽ[·] denotes an approximation of the expected value operator. This work investigates the regime of asymptotically small constant step sizes β_t → 0. In this adiabatic limit, the stochastic component of the dynamics is averaged out before the parameters of the model can change significantly. This allows us to consider the parametric update as a deterministic dynamical system emerging from the averaging of the underlying stochastic algorithm, corresponding to the limit of infinite sample sizes. This is known as the ODE method (Borkar, 2009) for analyzing stochastic approximation. We focus on the analysis of this deterministic system to highlight the core dynamical properties of policy gradients with nonlinear function approximation. The averaged, deterministic dynamics is given by the set of ODEs\n(d/dt) w(t) = E_{s∼ρ_0}[∇_w V_π(s)] = E_{s∼ρ_π, a∼π_w}[∇_w log π_w(s, a) (Q_π(s, a) − τ log(π_w(s, a)))],  (5)\nwhere in the second equality we have applied the policy gradient theorem (Sutton et al., 2000; Sutton & Barto, 2018), defining for a fixed ρ_0 ∈ M1+(S)\nρ_π(s_0, s) := ∑_{t=0}^∞ γ^t P^t_π(s_0, s),  ρ_π(s) = ∫_S ρ_π(s_0, s) ρ_0(ds_0),  (6)\nas the (improper) discounted empirical measure. For completeness, we include a derivation of (5) in Appendix A." }, { "heading": "SOFTMAX POLICIES IN THE MEAN-FIELD REGIME", "text": "We choose to represent our policy as a softmax policy:\nπ_w(s, a) = exp(f_w(s, a)) / ∫_A exp(f_w(s, a)) da\nand parametrize the energy f as a two-layer neural network in the mean-field regime, i.e.,\nf_w(s, a) = (1/N) ∑_{i=1}^N ψ(s, a; w_i)\nfor a fixed, usually nonlinear function ψ : S × A × Ω → R, where we have separated w ∈ W into N identical components w_i ∈ Ω, so that W = Ω^N. We can rewrite the above expression in terms of an empirical measure:\nf_{ν^(N)}(s, a) := ∫_Ω ψ(s, a; ω) ν^(N)(dω), where ν^(N)(dω) = (1/N) ∑_{i=1}^N δ_{w^(i)}(dω) ∈ M1+(Ω).  (7)\nThis empirical measure representation removes the symmetry of the approximating functions under permutations of the parameters w_i. It also facilitates the limit N → ∞, when ν^(N) → ν weakly, so that f_{ν^(N)} → f_ν. Then, for a general distribution ν ∈ M1+(Ω) the softmax mean-field policy reads:\nπ_ν(s, a) = exp( ∫_Ω ψ(s, a; ω) ν(dω) ) / ∫_A exp( ∫_Ω ψ(s, a; ω) ν(dω) ) da.
(8)\nNote that by our choice of softmax policy and mean-field parametrization (7) we have\n∇_{w_i} log π_{ν^(N)}(s, a) = ∇_{w_i} f_{ν^(N)}(s, a) − ∇_{w_i} log ∫_A exp f_{ν^(N)}(s, a) da\n= ∇_{w_i} (1/N) ∑_{i=1}^N ψ(s, a; w_i) − [ ∫_A ∇_{w_i} exp( (1/N) ∑_{i=1}^N ψ(s, a; w_i) ) da ] / [ ∫_A exp f_{ν^(N)}(s, a) da ]\n= (1/N) ( ∇_{w_i} ψ(s, a; w_i) − ∫_A ∇_{w_i} ψ(s, a; w_i) π_{ν^(N)}(s, da) ).\nThus the training dynamics (5), after an appropriate rescaling of time (t ↦ t/N, which is due to the mean-field parametrization of f_w), can be rewritten as\n(d/dt) w_i(t) = ∫_{S×A} ( ∇_{w_i} ψ(s, a; w_i) − E_{π_{ν^(N)}}[∇_{w_i} ψ(s, ·; w_i)] ) ( Q_{π_{ν^(N)}}(s, a) − τ log π_{ν^(N)}(s, a) ) π_{ν^(N)}(s, da) ρ_{π_{ν^(N)}}(ds).  (9)\nThe training dynamics can be more compactly represented by the evolution of the measure ν ∈ M1+(Ω) in parameter space, given by a mean-field transport partial differential equation of Vlasov type:\n(d/dt) ν_t(ω) = div( ν_t(ω) ∫_S C_{π_ν}[∇_ω ψ(s, ·; ω), Q_{π_ν} − τ log π_ν](s) ρ_{π_ν}(ds) ),  (10)\nwhere ω ∈ Ω and we have introduced the shorthand C_π[f, g](s) to denote the covariance operator WRT the probability measure π(s, da). Note that the above partial differential equation also captures the dynamics of the finite-width system, i.e., of the empirical measure ν^(N) where each w_i follows (9).\nWe further note that the dynamics introduced above has a gradient flow structure in the probability space M1+(Ω): defining the expected value function\nE[ν] = E_{s_0∼ρ_0}[V_{π_ν}(s_0)],  (11)\nthe dynamics (10) is a gradient flow for E in the Wasserstein space (see e.g., (Santambrogio, 2017) for an introduction), as we prove in the appendix: Proposition 2.1. For a fixed initial distribution ρ_0 ∈ M1+(S), the dynamics (10) is the Wasserstein gradient flow of the energy functional (11).\nAn analogous dynamics equation for the evolution of the parameter-space measure in the supervised learning case has been derived in (Mei et al., 2018; Rotskoff & Vanden-Eijnden, 2018; Chizat & Bach, 2018), and in the TD learning case in (Agazzi & Lu, 2019; Zhang et al., 2020). In particular, in the case of supervised learning, the resulting dynamics is a Wasserstein gradient flow, the structure of which is used to obtain the convergence of the particle system to the mean-field dynamics. In our case, however, the energy functional is not convex WRT the policy, and moreover the softmax parametrization destroys the convexity of the approximator of the policy with respect to ν_t. Thus showing convergence of the dynamics becomes much more challenging." }, { "heading": "3 SIMPLIFIED SETTING: THE BANDIT PROBLEM", "text": "We now introduce our results in the simple bandit setting, where the state space S is a single point (and will henceforth be suppressed in the notation) and, without loss of generality, the action space A is continuous. In this case, for a reward function r and a softmax policy\nπ_ν(a) = exp(f_ν(a)) / ∫_A exp(f_ν(a)) da,\nthe value function for the regularized problem reads (we denote V_ν = V_{π_ν} to simplify notation)\nV_ν = ∫ (r(a) − τ log π_ν(a)) π_ν(da),\nwhile the Q-function is simply Q(a) = r(a). We further note that the optimal policy in the regularized case reads:\nπ*(a) = Z^{-1} exp(τ^{-1} r(a)),  Z = ∫_A exp(τ^{-1} r(a)) da.\nRecalling the definition of the covariance operator C_π[f, g](s) from (10), the expression for the policy gradient vector field in this case simplifies to\n∂_t ω_t := F_t(ω_t; ν_t) = ∇_ω Dπ_ν DV_ν = C_{π_ν}[∇_ω ψ(a; ω), r − τ log(π_ν)]\n= ∫_A ( ∇_ω ψ(a; ω) − ∫ ∇_ω ψ(a′; ω) π_ν(da′) ) (r(a) − τ f_ν(a)) π_ν(da),  (12)\nwhere Dπ_ν, DV_ν denote the Fréchet derivatives of π_ν and V_ν WRT ν and π, respectively.
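To illustrate the dynamics (12), the following is a minimal particle simulation of the finite-N bandit policy gradient, with ψ(a; ω) = c · tanh(u·a + b) and a discretized action space. The reward, step size and all hyperparameters are toy choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
N, tau, lr, n_a = 200, 0.2, 0.1, 128
a = np.linspace(0.0, 1.0, n_a)
da = a[1] - a[0]
r = np.exp(-40 * (a - 0.3) ** 2)               # toy reward with a bump at a = 0.3

c = np.zeros(N)                                # output weights start at 0
u, b = rng.normal(0, 2, N), rng.normal(0, 2, N)

for _ in range(3000):
    h = np.tanh(np.outer(u, a) + b[:, None])   # (N, n_a) hidden activations
    f = (c @ h) / N                            # mean-field energy f_nu(a)
    pi = np.exp(f); pi /= (pi * da).sum()      # softmax density on A
    adv = r - tau * f                          # r(a) - tau * f_nu(a)
    sech2 = 1.0 - h ** 2                       # derivative of tanh
    # gradients of psi wrt (c, u, b), centered under pi, weighted by adv (eq. 12)
    for g, grad in ((c, h),
                    (u, (c[:, None] * a[None, :]) * sech2),
                    (b, c[:, None] * sech2)):
        centered = grad - (grad * pi * da).sum(axis=1, keepdims=True)
        g += lr * (centered * adv * pi * da).sum(axis=1)   # gradient ascent step

print(a[np.argmax(pi)])   # mode of the learned policy; should concentrate near 0.3
```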
Note that by the structure of the covariance operator C_π, adding a constant to the function f_ν(·) does not affect the dynamics. This reflects the fact that the softmax policy is normalized by definition." }, { "heading": "3.1 GLOBAL OPTIMALITY OF SOFTMAX POLICY GRADIENT", "text": "We now sketch the main steps in proving that the mean-field policy gradient dynamics converge, under appropriate assumptions, to global optimizers. The proof in this simpler setting is much more transparent than in the general case to be discussed in the next section, and will provide some intuition for the latter. The first part of the proof concerns the properties of fixed points of the dynamics (10), while the second part concerns the training dynamics." }, { "heading": "STATICS", "text": "We first informally prove global optimality of any fixed point ν* of the transport equation (d/dt)ν_t = −div(ν_t F_t), with F_t from (12), such that\na) ν* has full support in Ω,\nb) the nonlinearity is 1-homogeneous in the first component of its parameters, i.e., writing ω = (ω_0, ω̄) ∈ R × Θ one has ψ(a; ω) := ω_0 φ(a; ω̄) for a regular enough φ : A × Θ → R,\nc) the span of {φ(a; ω̄)}_{ω̄∈Θ} is dense in L²(A);\ni.e., we show that for such fixed points π_{ν*}(a) = π*(a) = Z^{-1} exp[τ^{-1} r(a)]. Weaker assumptions and the general statement are given in the next section, while the general proof appears in the appendix. First, we note that by assumption a), div(ν* F(·; ν*)) = 0 directly implies that for almost all ω ∈ Ω\nF(ω; ν*) = ∫_A ∇_ω ψ(a; ω) (r(a) − τ f_{ν*}(a) − V_{ν*}) π_{ν*}(da) = 0.\nIn particular, by homogeneity assumption b), the first component of the above vector field must vanish on Θ:\n∫_A φ(a; ω̄) (r(a) − τ f_{ν*}(a) − V_{ν*}) π_{ν*}(da) = 0.\nBy assumption c), that the span of φ is dense in L²(A), the above implies that\nr(a) − τ f_{ν*}(a) − V_{ν*} = 0  π_{ν*}-a.e. in A.  (13)\nFinally, recalling that by the softmax parametrization and by the boundedness of φ, π_{ν*}(a) > 0, we must have\nf_{ν*}(a) = τ^{-1} r(a) + C,\nwhich directly implies the optimality of the policy." }, { "heading": "DYNAMICS", "text": "While it is clear that assumptions b) and c) about the structure and approximating power of the nonlinearity ψ hold independently of t, we want to show that assumption a) also holds uniformly in time. In this sense, the continuity of the vector field (12) will preserve the full support properties of the measure ν_t for all t > 0, as we will prove in a more general framework in Lemma C.2. Consequently, any measure ν respecting assumption a) at initialization will do so for any finite positive time t > 0. However, the question remains whether this property still holds at t = ∞. This is the object of Lemma C.3, where we prove that whenever the gradient approaches a fixed point in parameter space, if this fixed point is not a global minimizer, it must be avoided by the dynamics; thus, the only possible fixed points of the dynamics are global minimizers." }, { "heading": "4 RESULTS IN THE GENERAL SETTING", "text": "We now come back to the general MDP framework introduced in Section 2." }, { "heading": "4.1 ASSUMPTIONS", "text": "To state the main result of this section, the optimality of fixed points of (10), we need the following assumption. Assumption 1. Assume that ω = (ω_0, ω̄) ∈ R × Θ for Θ = R^{m−1} and ψ(s, a; ω) = ω_0 φ(s, a; ω̄) with a) Regularity of φ: φ is bounded, differentiable and Dφ_ω is Lipschitz.
Also, for all f ∈ L²(S × A) the regular values of the map ω̄ ↦ g_f(ω̄) := ∫ f(s, a) φ(s, a; ω̄) ds da are dense in its range, and g_f(rω̄) converges in C¹({ω̄ ∈ Θ : ‖ω̄‖₂ = 1}) as r → ∞ to a map ḡ_f(ω̄) whose regular values are dense in its range.\nb) Universal approximation: the span of {φ(·, ω̄) : ω̄ ∈ Θ} is dense in L²(S × A);\nc) Support of the measure: there exists r > 0 s.t. the support of the initial condition ν_0 is contained in Q_r := [−r, r] × Θ and separates {−r} × Θ from {r} × Θ, i.e., any continuous path connecting {−r} × Θ to {r} × Θ intersects the support of ν_0.\nAssumption 1 a) is a common, technical regularity assumption ensuring that (10) is well behaved and controlling the growth, variation and regularity of φ. Alternative assumptions for the case Θ ≠ R^{m−1} are given in the appendix. Assumption 1 b) speaks to the approximating power of the nonlinearity, assumed to be expressive enough to approximate any function in L²(S × A). This condition replaces the convexity assumption from Chizat & Bach (2018), as the lack of convex structure in our setting prevents us from identifying the local and global minimization properties of a fixed point. Indeed, despite the one-point convexity of E_ρ[V_π(s)] as a functional of π (Kakade & Langford, 2002), which can be leveraged in the tabular case, this property will be lost, in general, when restricting to policies obtained through nonlinear function approximation. We bypass this issue by requiring sufficient expressivity of the approximating function class, guaranteeing that the optimal policy can be represented with arbitrary precision. A similar assumption on the approximability of the neural network representation was made in a recent analysis of the natural policy gradient algorithm (Agarwal et al., 2019). We note that this assumption is easily satisfied by widely used nonlinearities by the universal approximation theorem (Cybenko, 1989; Barron, 1993). Examples of activation functions satisfying Assumption 1 a)-b) include sigmoid, tanh and Gaussian radial function nonlinearities. The extension to analogous results in the ReLU case was discussed in Wojtowytsch (2020) for supervised learning. Finally, Assumption 1 c) guarantees that the initial condition is such that the expressivity from b) can actually be exploited. This condition is satisfied, for example, by the product of a uniform distribution on any bounded set A ⊂ R with the normal distribution on Θ or, if Θ is compact, with the uniform distribution on Θ." }, { "heading": "4.2 CONVERGENCE OF THE MANY-PARTICLE LIMIT", "text": "Before discussing the optimality properties of the dynamics (10), we show that this PDE accurately describes the policy gradient dynamics of a sufficiently wide, single hidden layer neural network. To this aim, we let P₂(Ω) be the space of probability distributions on Ω with finite second moment.\nTheorem 4.1. Let Assumption 1 hold and let w^(N)_t be a solution of (5) with initial condition w^(N)_0 ∈ W = Ω^N. If ν^(N)_0 converges to ν_0 ∈ P₂(Ω) in Wasserstein distance W₂, then ν^(N)_t converges, for every t > 0, to the unique solution ν_t of (10).\nWe note that by the law of large numbers for empirical distributions, the condition of convergence of ν^(N)_0 to ν_0 is satisfied, e.g., when the w^(i)_0 are drawn independently at random from ν_0. The proof of this result is largely standard under the given assumptions, and is provided in the appendix for completeness. The idea of the proof is a canonical propagation of chaos argument (Sznitman, 1991).
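As a quick illustration of the i.i.d. initialization discussed above, the following sketch shows the empirical energy f_{ν^(N)}(s, a) of (7) converging to its mean-field limit as N grows; the feature map ψ and the initialization ν_0 are toy choices.

```python
import numpy as np

rng = np.random.default_rng(2)
s, a = 0.4, 0.7
# toy feature map psi(s, a; w) with w = (omega_0, u, v, b)
psi = lambda w: w[:, 0] * np.tanh(w[:, 1] * s + w[:, 2] * a + w[:, 3])

w_big = rng.normal(size=(10**6, 4))            # near-exact mean-field reference
f_limit = psi(w_big).mean()

for N in (10, 100, 1000, 10_000, 100_000):
    f_N = psi(rng.normal(size=(N, 4))).mean()  # empirical f_{nu^(N)}(s, a)
    print(N, abs(f_N - f_limit))               # error decays roughly like N^(-1/2)
```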
In a nutshell, the first step of the argument establishes sufficient regularity of the gradient dynamics, allowing us to guarantee existence and uniqueness of the solution to (10). Then, one bounds the difference in differential updates between the particle system and the mean-field dynamics by comparing both with the evolution of the particle system according to a linear, time-inhomogeneous PDE using the drift term of the mean-field model. The proof is finally concluded by an application of Gronwall's inequality. The main difficulty WRT similar results in the literature is to establish the needed Lipschitz continuity of the vector field driving the transport PDE: while this is an immediate consequence of assumptions on the activation functions and on the risk functional in the supervised setting, proving this type of regularity requires more effort in the RL setting, given the involved dependence of the vector field on the measure ν_t." }, { "heading": "4.3 OPTIMALITY", "text": "After discussing the connection between particle dynamics and mean-field equations, we present the main convergence result of this paper:\nTheorem 4.2. Let Assumption 1 hold and let ν_t given by (10) converge to ν*; then π_{ν*} = π*, the optimal policy for (3).\nThus if the policy gradient dynamics (10) converges to a stationary point, that point must be a global minimizer. Again, we emphasize that in our regularized setting π* is given by a probability distribution, and can thus be represented as a softmax policy. We prove this result in three steps. First, we connect the optimality of a stationary point with the support of the underlying measure in parameter space. More specifically, we show in Lemma C.1 that, by the expressivity of φ, the transport vector field of suboptimal fixed points of the dynamics (10) cannot vanish everywhere in parameter space. This implies that a measure with sufficient support cannot correspond to a suboptimal fixed point. We then show in Lemma C.2 that such a sufficient notion of support (Assumption 1 c)) is preserved by the mean-field policy gradient dynamics (10) throughout training. For any finite time, this is true by topological arguments: the separation property of the measure cannot be altered by the action of a continuous vector field such as (10). We note in particular that we do not prove that assumption a) in Section 3 holds in this case. Finally, in Lemma C.3 we combine the above results and prove that spurious fixed points are avoided by the policy gradient dynamics (10) when initialized properly. To establish this we argue by contradiction: assuming that we are approaching such a spurious fixed point ν̃ at time t_0, we show in Lemma C.5 that the velocity field will change little for any t > t_0. In particular, it follows that in this regime the dynamics of (10) can be approximated by the gradient descent dynamics (in particle space) of an approximately fixed potential. On the other hand, by Assumption 1 c) and by the homogeneity of ψ, we are able to show via Lemma C.2 that a positive amount of the measure ν̃ falls in a forward-invariant region (which exists by Lemma C.1) where its ω_0 component grows linearly in t, thereby eventually contradicting the assumption that ν̃ is a fixed point of (10). There are two main conceptual differences between the proof outlined above and the one carried out in the supervised learning setting.
On one hand, a necessary step in our proof is to establish Lipschitz continuity of the vector field defining the transport equation (10), which is also needed for convergence of the particle dynamics as discussed above. On the other hand, the landscape of the objective function for policy gradients does not enjoy the convexity (WRT the predictor) typically assumed in the supervised case. To exclude the existence of local minima we assume sufficient expressivity of the activation functions, an assumption absent in the supervised analysis. This assumption is key to deduce optimality of fixed points of (10) in Theorem 4.2 in our less regular setting." }, { "heading": "5 NUMERICAL EXAMPLES", "text": "To test our theoretical results in a simple setting we train a wide, single hidden layer neural network with policy gradients to learn the optimal softmax policy (8) for entropy-regularized rewards with parametrization (7) and regularization parameter τ = 0.2. We do so in two separate settings:\n(a) S = {0}, A = [0, 1]. This setting corresponds to the bandit framework discussed in Section 3.\n(b) S × A is a grid of size 100 × 100 in the set [0, 1]². In this case, we have chosen a discount factor of γ = 0.7 and a transition process given by P(s, a, s′) = 0.9 δ(s′ − a) + 0.1/100 (i.e., an action a leads to the corresponding state s′ = a with probability 0.9 and is uniformly distributed with probability 0.1). At each iteration we have computed the exact distribution ρ_π by computing the resolvent of the (weighted) transition matrix.\nIn both cases, we defined the optimal Q-function as Q*(s, a) = τ f*_w(s, a), where f*_w(s, a) is given by a single hidden layer neural network of width n = 5, ReLU nonlinearities and weights w drawn independently and identically distributed from a centered normal distribution with variance σ² = 4, i.e.,\nQ*(s, a) = τ f*_w(s, a) for w_i ∼ N(0, 4).\nWe learn the optimal policy for the problem defined above using an N = 800-neuron wide single hidden layer neural network with ReLU nonlinearities in the mean-field regime (7), used as the energy of a softmax policy (8). The initialization of the student network is as follows: first-layer weights are drawn independently at random from a centered normal distribution with variance σ² = 4, while output weights are initialized at 0. The model is trained according to (4) with fixed step size β_t = 10⁻³. We report the results of this training procedure in Fig. 1, where we notice that all the paths monotonically decrease the error E(ν*) − E(ν_t), as predicted by our results. Note that the convergence rate of the model varies across experiments, consistently with the purely qualitative nature of the convergence result we proved." }, { "heading": "6 CONCLUSIONS AND FUTURE WORK", "text": "This work addresses the problem of optimality of policy gradient algorithms, a workhorse of deep reinforcement learning, when combined with mean-field models such as neural networks. More specifically, we provide a mean-field formulation of the parametric dynamics of policy gradient algorithms for entropy-regularized MDPs and prove that, under mild assumptions, all fixed points of such dynamics are optimal. This extends similar results obtained in the “neural” or “lazy” regime to the mean-field one, which is known to be much more expressive (E et al., 2019; Ghorbani et al., 2020), but also highly nonlinear. The latter feature prevents us, at present, from obtaining convergence results for these models, except in very specific settings (Chizat, 2019; Javanmard et al., 2019).
Interesting avenues for future research include relaxing the adiabaticity assumption, i.e., considering the stochastic approximation problem resulting from the finite number of samples and the finite gradient step size, as well as establishing quantitative bounds for models with a large, but finite, number of parameters. Probably the most important open question, however, concerns establishing quantitative convergence of mean-field dynamics of neural networks: even in the supervised setting, despite recent results in specific settings (Chizat, 2019; Javanmard et al., 2019), these guarantees remain mainly out of reach." }, { "heading": "ACKNOWLEDGMENTS.", "text": "AA acknowledges the support of the Swiss National Science Foundation through the grant P2GEP2-17501 and by the NSF grant DMS-1613337. The work of JL is in part supported by the US National Science Foundation via grants CCF-1910571 (Duke TRIPODS) and DMS-2012286." }, { "heading": "A DERIVATION OF SOFTMAX POLICY GRADIENT DYNAMICS", "text": "Lemma A.1. The gradient of the entropy-regularized value function can be written as\nE_{s∼ρ_0}[∇_w V_π(s)] = E_{s∼ρ_π, a∼π_w}[∇_w log π_w(s, a) (Q_π(s, a) − τ log π_w(s, a))],\nand thus the policy gradient dynamics (5) follows.\nProof. We choose throughout π̄ as the Lebesgue measure, and use that π_w is absolutely continuous WRT π̄. Taking the gradient of (1) using a parametric policy π_w we obtain\n∇_w V_{π_w}(s) = ∇_w E_{π_w}[ ∑_{t=0}^∞ γ^t ( r(s_t, a_t, s_{t+1}) − τ D_KL(π_w(s_t, ·); π̄(·)) ) | s_0 = s ]\n= ∇_w ∫_{S×A} ( r(s, a_1, s_1) − τ log( π_w(s, a_1)/π̄(a_1) ) + γ V_{π_w}(s_1) ) P(s, a_1, ds_1) π_w(s, da_1)\n= ∫_{S×A} ( r(s, a_1, s_1) − τ log( π_w(s, a_1)/π̄(a_1) ) + γ V_{π_w}(s_1) ) P(s, a_1, ds_1) ∇_w π_w(s, da_1)\n+ ∫_{S×A} ( −τ ∇_w log( π_w(s, a_1)/π̄(a_1) ) + γ ∇_w V_{π_w}(s_1) ) P(s, a_1, ds_1) π_w(s, da_1).  (A.1)\nNow, since ∫_S P(s, a, ds′) = 1 and ∫_A π_w(s, da′) = 1 for all s, a, w, we have for the first term in brackets in the last line\n∫_{S×A} ∇_w log( π_w(s, a_1)/π̄(a_1) ) P(s, a_1, ds_1) π_w(s, da_1) = ∫_A ∇_w π_w(s, a_1) da_1 = ∇_w ∫_A π_w(s, a_1) da_1 = 0.\nOn the other hand, we can rewrite the second term in brackets as E_{π_w}[γ ∇V_{π_w}(s_1) | s_0 = s], and recognize the LHS of (A.1) evaluated at the next state s_1 in the expectation. Therefore, we can sequentially repeat the same computation as above, and recalling the definitions of ρ_{π_w}(·) and Q_π(s, a) in (6) and (2) we obtain\n∇_w V_{π_w}(s) = ∫_A ∑_{t=0}^∞ γ^t ( r(s_t, a_t, s_{t+1}) − τ log( π_w(s_t, a_t)/π̄(a_t) ) + γ V_{π_w}(s_{t+1}) ) ∇_w π_w(s_t, da_t) |_{s_0=s}\n= E_{π_w}[ ∑_{t=0}^∞ γ^t ( Q_{π_w}(s_t, a_t) − τ log( π_w(s_t, a_t)/π̄(a_t) ) ) ∇_w log π_w(s_t, a_t) | s_0 = s ]\n= ∫_{S×A} ( Q_{π_w}(s, a) − τ log( π_w(s, a)/π̄(a) ) ) ∇_w log π_w(s, a) ρ_{π_w}(ds) π_w(s, da),\nwhere in the second line we have used that, if π_w > 0 on A, π_w(s_t, da_t) ∇_w log π_w(s_t, a_t) = ∇_w π_w(s_t, da_t).\nProposition 2.1. For a fixed initial distribution ρ_0, the dynamics (10) is the Wasserstein gradient flow of the energy functional\nE[ν] = E_{s_0∼ρ_0}[V_{π_ν}(s_0)].\nProof.
We find the potential of the gradient flow by functional differentiation of E:\n(δ/δν) E[ν](ω) = ∫_{S×A} (δE/δπ)(s, a) (δπ_ν/δν)(s, a; ω) ds da  (A.2)\nand consider the two terms in the integral separately, starting from the second:\n(δπ_ν/δν)(s, a; ω) = (δ/δν) [ e^{τ∫ψ(s,a;ω)ν(dω)} / ∫_A e^{τ∫ψ(s,a;ω)ν(dω)} da ]\n= [ 1 / ∫_A e^{τ∫ψ(s,a;ω)ν(dω)} da ] (δ/δν) e^{τ∫ψ(s,a;ω)ν(dω)} − [ e^{τ∫ψ(s,a;ω)ν(dω)} / ( ∫_A e^{τ∫ψ(s,a;ω)ν(dω)} da )² ] ∫_A (δ/δν) e^{τ∫ψ(s,a;ω)ν(dω)} da\n= τ ( ψ(s, a; ω) − ∫_A ψ(s, a′; ω) π_ν(s, da′) ) π_ν(s, a).  (A.3)\nFor the first term in the integrand of (A.2), we use π(s, a) as a density and obtain\n(δE/δπ)(s, a) = (δ/δπ) [ ∫_{S×A} ( r̄(s, a) − τ log π(s, a) ) ρ^π(ds) π(s, da) ] (s, a)\n= ∫_S [ ( r̄(s, a) − τ(log π(s, a) + 1) ) ρ^π(s′, s)  (A.4)\n+ ∫_{S×A} ( r̄(s″, a″) − τ log π(s″, a″) ) (δρ^π(s′, s″)/δπ)(s, a) π(s″, da″) ds″ ] ρ_0(ds′),  (A.5)\nwhere in the last line we have used that (δ/δπ)[π log π](s, a) = log π(s, a) + 1. We evaluate the variational derivative of ρ^π as\n(δρ^π(s′, s″)/δπ)(s, a) = ∑_{t=0}^∞ γ^t (δ/δπ) [ P^t_π(s′, s″) ] (s, a)\n= γ ∫_S (δP_π(s′, s‴)/δπ) ( ∑_{t=0}^∞ γ^t P^t_π(s‴, s″) ) + P_π(s′, s‴) (δ/δπ) ∑_{t=0}^∞ γ^t P^t_π(s‴, s″) ds‴\n= γ [ ∫_S δ(s, s′) P(s′, a, s‴) ρ^π(s‴, s″) + P_π(s′, s‴) (δρ^π(s‴, s″)/δπ)(s, a) ds‴ ].\nWe further recognize the same derivative on the RHS of the above expression, allowing us to write\n(δρ^π(s′, s″)/δπ)(s, a) = ∑_{t=0}^∞ γ^t P^t_π(s′, s) ∫_S P(s, a, s‴) ρ^π(s‴, s″) ds‴.\nFinally, we notice that the last term in (A.4) is constant in a and therefore vanishes when integrated against (A.3), so we only consider\n(δE/δπ)(s, a) − ρ^π(s) = ∫_{S²×A} ( r̄(s″, a″) − τ log π(s″, a″) ) ( δ_{s,s″} δ_{a,a″} ρ^π(s′, s) + π(s″, a″) (δρ^π(s′, s″)/δπ)(s, a) ) ρ_0(ds′) ds″ da″\n= ∫_{S²×A} ∑_{t=0}^∞ γ^t P^t_π(s′, s) ( r̄(s″, a″) − τ log π(s″, a″) ) ( δ_{s,s″} δ_{a,a″} + γ ∫_S P(s, a, s‴) ρ^π(s‴, s″) ds‴ ) ρ_0(ds′) ds″ da″\n= ∑_{t=0}^∞ γ^t ∫_S P^t_π(s′, s) ( Q_π(s, a) − τ log π(s, a) ) ρ_0(ds′)\n= ( Q_π(s, a) − τ log π(s, a) ) ρ^π(s).  (A.6)\nWe conclude by noting that combining (A.3) and (A.6) we obtain C_π[ψ(ω), Q_π − τ log π](s), and the Wasserstein gradient flow corresponding to this potential is (10)." }, { "heading": "B PROOFS OF THE MANY-PARTICLE LIMIT", "text": "Theorem 4.1. Let Assumption 1 hold and let w^(N)_t be a solution of (5) with initial condition w^(N)_0 ∈ W = Ω^N. If ν^(N)_0 converges to ν_0 ∈ P₂(Ω) in Wasserstein distance W₂, then ν^(N)_t converges, for every t > 0, to the unique solution ν_t of (10).\nProof. As anticipated in the main text, the proof is divided in two parts:\n1. We prove sufficient regularity of the dynamics (10), allowing us to establish existence and uniqueness of its solution.\n2. We leverage the regularity proven above to establish a propagation of chaos result, showing that the system of interacting particles behaves asymptotically as its mean-field limit.\nWhile carrying out this proof is needed in our context, since the dependence of E(ν) on ν is more involved than in e.g., Mei et al. (2018); Rotskoff & Vanden-Eijnden (2018); Chizat & Bach (2018), the steps of this derivation are mainly standard, see e.g., (Sznitman, 1991)." }, { "heading": "B.1 REGULARITY", "text": "We prove existence and uniqueness of the gradient flow dynamics (10) through standard arguments from the optimal transportation literature (see e.g., (Ambrosio et al., 2008)). More specifically, recalling that π = π_ν, we leverage the Lipschitz continuity of the vector field\nF_t(ω, ν) = ∫_S C_π[∇_ω ψ(s, ·; ω), Q_π(s, ·) − τ log π(s, ·)] ρ_π(ds)\nwith respect to ν.
" }, { "heading": "B PROOFS OF THE MANY-PARTICLE LIMIT", "text": "Theorem 4.1. Let Assumption 1 hold and let \(w^{(N)}_t\) be a solution of (5) with initial condition \(w^{(N)}_0\in\mathcal{W}=\Omega^N\). If \(\nu^{(N)}_0\) converges to \(\nu_0\in\mathcal{P}_2(\Omega)\) in Wasserstein distance \(W_2\), then \(\nu^{(N)}_t\) converges, for every \(t>0\), to the unique solution \(\nu_t\) of (10).

Proof. As anticipated in the main text, the proof is divided into two parts:

1. We prove sufficient regularity of the dynamics (10), allowing us to establish existence and uniqueness of its solution.

2. We leverage the regularity proven above to establish a propagation of chaos result, showing that the system of interacting particles behaves asymptotically as its mean-field limit.

While carrying out this proof is needed in our context since the dependence of \(E(\nu)\) on \(\nu\) is more involved than in, e.g., Mei et al. (2018); Rotskoff & Vanden-Eijnden (2018); Chizat & Bach (2018), the steps of this derivation are mainly standard, see e.g., (Sznitman, 1991)." }, { "heading": "B.1 REGULARITY", "text": "We prove existence and uniqueness of the gradient flow dynamics (10) through standard arguments from the optimal transportation literature (see e.g., (Ambrosio et al., 2008)). More specifically, recalling that \(\pi=\pi_\nu\), we leverage the Lipschitz continuity of the vector field
\[ F_t(\omega,\nu) = \int_S C_\pi\big[\nabla_\omega\psi(s,\cdot;\omega),\,Q_\pi(s,\cdot)-\tau\log\pi(s,\cdot)\big]\,\varrho^\pi(\mathrm{d}s) \]
with respect to \(\nu\). To prove this regularity result, we decompose
\[ E(\nu) = R\Big(\int\psi\,\nu\Big)\quad\text{for}\quad R(f) = S\circ\pi(f), \tag{B.1} \]
where \(S:(S\to\mathcal{M}^+_1(A))\to\mathbb{R}\) maps \(\mu\mapsto\mathbb{E}_{s\sim\varrho_0}[V_\mu(s)]\) and \(\pi:L^2(S\times A)\to(S\to\mathcal{M}^+_1(A))\) is the softmax policy parametrization (8) of its argument. Recalling the definition of \(Q_r\) from Assumption 1 and denoting \(F_r=\{\int\psi\,\nu:\mathrm{supp}\,\nu\subset Q_r\}\), we further define the norms and constants needed in the following proof as:
\[ \|D\psi\|_{r,\infty} = \sup_{\omega\in Q_r}\|D\psi_\omega\|,\qquad L_{D\psi} = \sup_{\omega,\omega'\in Q_r}\frac{\|D\psi_\omega-D\psi_{\omega'}\|}{\|\omega-\omega'\|_2}, \]
\[ \|DR\|_{r,\infty} = \sup_{\psi(\cdot;\omega)\,:\,\omega\in Q_r}\|DR_\psi\|,\qquad L_{DR} = \sup_{\psi,\psi'\in F_r}\frac{\|DR_\psi-DR_{\psi'}\|}{\|\psi-\psi'\|_2}, \]
where \(\|\cdot\|\) denotes the operator norm. While for any \(r>0\) the boundedness of \(\|D\psi\|_{r,\infty}\) and \(L_{D\psi}\) results directly from Assumption 1, more work is needed to prove that \(\|DR\|_{r,\infty}<\infty\) and \(L_{DR}<\infty\). We prove this in Lemma C.5 below, and proceed with the proof of convergence of the particle dynamics. For any \(r\) and corresponding \(Q_r\) from Assumption 1 we define the set of localized functionals
\[ E^{(r)}(\nu) = \begin{cases} E(\nu) & \text{if } \mathrm{supp}(\nu)\subset Q_r\\ \infty & \text{else.} \end{cases} \]
Furthermore, we say that a coupling \(\gamma\in\mathcal{M}^+_1(\Omega\times\Omega)\) is an admissible transport plan if both its marginals have support in \(Q_r\) and finite second moments. To every admissible transport plan, for \(p\ge1\) we associate a transportation cost \(C_p(\gamma)=\big(\int_{\Omega^2}|\omega-\omega'|^p\,\mathrm{d}\gamma(\omega,\omega')\big)^{1/p}\). We prove the following results: for every \(r>0\) we have

1. There exists \(\lambda_r>0\) such that for any admissible transport plan \(\gamma\), defining the interpolation map \(\nu^\gamma_t:=(t\Pi_0+(1-t)\Pi_1)_\#\gamma\), the function \(t\mapsto E(\nu^\gamma_t)\) is differentiable with Lipschitz continuous derivative with constant \(\lambda_r C^2_2(\gamma)\).

2. Let \(\nu_0\) have support in \(Q_r\). Then for any given transport plan \(\gamma\) with first marginal given by \(\nu_0\), a velocity field \(F\) satisfies
\[ E((\Pi_1)_\#\gamma) \ge E(\nu_0) + \int F(u)\cdot(u-u')\,\mathrm{d}\gamma(u,u') + o(C_2(\gamma)) \tag{B.2} \]
if and only if \(F(u)\) is in the subdifferential of \(DE_\nu(u):=C_\pi[\psi(\cdot;u),Q_\pi-\tau\log\pi]\) (projected into the interior of \(Q_r\) when \(u\in\partial Q_r\)) for \(\nu_0\)-almost every \(u\in\Omega\).

The proof of the two points above corresponds to (Chizat & Bach, 2018, Lemma B.2). We sketch the proof of these two points below, referring to the original reference for the details.

1. By the Lipschitz continuity of \(\psi:\Omega\to L^2(S\times A)\) and of \(R'(f)=DR_f=C_\pi[\,\cdot\,,Q_\pi-\tau\log\pi]\) in \(Q_r\), the energy \(E^{(r)}(\nu^\gamma_t)\) transported along an interpolating path \(\nu^\gamma_t\) is differentiable and we can write its derivative as
\[ \frac{\mathrm{d}}{\mathrm{d}t}E^{(r)}(\nu^\gamma_t) = \int R'\Big(\int\psi\,\nu^\gamma_t\Big)\int D\psi_{(1-t)\omega+t\omega'}(\omega'-\omega)\,\mathrm{d}\gamma(\omega,\omega'). \]
Then, again by the Lipschitz continuity of \(D\psi\) and \(DR\), we have for \(0\le t'<t''<1\)
\[ \Big|\frac{\mathrm{d}}{\mathrm{d}t}E^{(r)}(\nu^\gamma_{t'})-\frac{\mathrm{d}}{\mathrm{d}t}E^{(r)}(\nu^\gamma_{t''})\Big| \le \Big|\int\Big(R'\Big(\int\psi\,\nu^\gamma_{t'}\Big)-R'\Big(\int\psi\,\nu^\gamma_{t''}\Big)\Big)\int D\psi_{(1-t')\omega+t'\omega'}(\omega'-\omega)\,\mathrm{d}\gamma(\omega,\omega')\Big| \]
\[ \qquad+\Big|\int R'\Big(\int\psi\,\nu^\gamma_{t''}\Big)\int\big(D\psi_{(1-t')\omega+t'\omega'}-D\psi_{(1-t'')\omega+t''\omega'}\big)(\omega'-\omega)\,\mathrm{d}\gamma(\omega,\omega')\Big| \le \lambda_r C^2_2(\gamma)\,|t''-t'| \]
for \(\lambda_r\) large enough, where in the last inequality we have used the uniform bounds on \(DR\) in \(Q_r\), that \(|D\psi_{(1-t')\omega+t'\omega'}-D\psi_{(1-t'')\omega+t''\omega'}|\le|t'-t''|\,L_{D\psi}|\omega-\omega'|\), and we applied Hölder's inequality to bound \(C^2_1(\gamma)\le C^2_2(\gamma)\).

2. The proof of this result leverages an expansion of the functionals \(R,\psi\) to second order in their arguments:
\[ \psi(\omega') = \psi(\omega)+D\psi_\omega(\omega'-\omega)+\mathcal{R}_\psi(\omega,\omega'), \]
\[ R(g) = R(f)+DR_f(g-f)+\mathcal{R}_R(f,g). \]
Recalling the Lipschitz bounds on the remainders, \(\mathcal{R}_\psi(\omega,\omega')<\frac12 L_{D\psi}|\omega-\omega'|^2\) and \(\mathcal{R}_R(f,g)<\frac12 L_{DR}\|f-g\|^2_2\), and combining the two expansions above, we have, for a transport plan \(\gamma\) with marginals \(\nu_0=(\Pi_0)_\#\gamma\), \(\nu_1=(\Pi_1)_\#\gamma\),
\[ E^{(r)}(\nu_1) = E^{(r)}(\nu_0)+\int R'\Big(\int\psi\,\nu_0\Big)D\psi_u(u'-u)\,\mathrm{d}s\,\mathrm{d}a\,\mathrm{d}\gamma(u,u')+\mathcal{R} \]
for a remainder term \(\mathcal{R}\).
We can bound this remainder term, again by the Lipschitz regularity of \(D\psi\) and \(DR\) and by the boundedness of \(D\psi\), \(DR\) in \(Q_r\), by \(C_2(\gamma)^2\) and \(C_1(\gamma)^2\le C_2(\gamma)^2\), thereby obtaining that
\[ E^{(r)}(\nu_1) = E^{(r)}(\nu_0)+\int R'\Big(\int\psi\,\nu_0\Big)D\psi_u(u'-u)\,\mathrm{d}s\,\mathrm{d}a\,\mathrm{d}\gamma(u,u')+o(C_2(\gamma)). \]
Noting that the integrand against the coupling is the gradient flow vector field, the above uniquely characterizes the velocity field satisfying (B.2).

We note that point 1) above immediately implies that \(E(\cdot)\) is \(\lambda_r\)-semiconvex along geodesics, while by 2) \(E^{(r)}(\cdot)\) admits strong Wasserstein subdifferentials on its domain (Ambrosio et al., 2008, Definition 10.3.1). Combining the two results, one obtains existence and uniqueness of the solutions of the Wasserstein gradient flow through (Ambrosio et al., 2008, Theorem 11.2.1)." }, { "heading": "B.2 PROPAGATION OF CHAOS", "text": "By the Lipschitz continuity of the transport field in (10) in \(\nu_0\) with \(\mathrm{supp}\,\nu_0\subset Q_r\), there exists a time \(t_r>0\) such that \(\mathrm{supp}\,\nu^{(N)}_s\subset Q_r\) for all \(s\in[0,t_r]\), \(N\in\mathbb{N}\). Consider now two times \(0\le t_1<t_2\le t_r\). To prove the existence of the limiting curve \((\nu_t)_t\), we show that the curves \(\nu^{(N)}_t\) are equicontinuous in \(W_2\), uniformly in \(N\), and as such possess a converging subsequence by the Arzelà-Ascoli theorem. To show equicontinuity, we bound the \(W_2\) distance between distributions by coupling positions of the same particles at different times and using the Cauchy-Schwarz inequality:
\[ W_2(\nu^{(N)}_{t_1},\nu^{(N)}_{t_2})^2 \le \frac1N\sum_{i=1}^N\|w^{(i)}_{t_1}-w^{(i)}_{t_2}\|^2_2 \le \frac{t_2-t_1}{N}\sum_{i=1}^N\int_{t_1}^{t_2}\Big\|\frac{\mathrm{d}}{\mathrm{d}s}w^{(i)}_s\Big\|^2_2\,\mathrm{d}s. \]
Combining the above with the identity
\[ \frac{\mathrm{d}}{\mathrm{d}t}E(\nu^{(N)}_t) = \frac1N\sum_{i=1}^N\Big\langle\nabla_{w_i}E(\nu^{(N)}_t),\frac{\mathrm{d}}{\mathrm{d}t}w^{(i)}_t\Big\rangle = \frac1N\sum_{i=1}^N\Big\|\frac{\mathrm{d}}{\mathrm{d}t}w^{(i)}_t\Big\|^2_2, \]
we have
\[ W_2(\nu^{(N)}_{t_1},\nu^{(N)}_{t_2}) \le \sqrt{t_2-t_1}\,\sqrt{\int_{t_1}^{t_2}\frac{\mathrm{d}}{\mathrm{d}s}E(\nu^{(N)}_s)\,\mathrm{d}s} \le \sqrt{t_2-t_1}\Big(\sup_{\mathrm{supp}\,\nu\subset Q_r}E(\nu)-\inf_{\mathrm{supp}\,\nu\subset Q_r}E(\nu)\Big)^{1/2}, \]
where we recall that \(F_r=\{\nu\in\mathcal{M}^+_1(\Omega):\mathrm{supp}\,\nu\subset Q_r\}\). In particular, the above continuity bound is independent of \(N\), proving equicontinuity of \(\nu^{(N)}\) in \(W_2\). We now prove that the limiting point of the converging subsequence identified above must solve (10). To do so, we compare the differentials of both the mean-field and particle dynamics to that of a linear inhomogeneous PDE: for any bounded and continuous \(f:\mathbb{R}\times\mathbb{R}^m\to\mathbb{R}^m\), denoting by \(\mathcal{E}=\nu_t F_t\,\mathrm{d}t\) and \(\mathcal{E}_N=\nu^{(N)}_t F^{(N)}_t\,\mathrm{d}t\), where \(F_t,F^{(N)}_t\) are the vector fields of the mean-field and particle system respectively, we write
\[ \Big|\int f(\omega)\,\mathrm{d}(\mathcal{E}-\mathcal{E}_N)\Big|_2 \le \|f\|_\infty\int\big|F^{(N)}_t-F_t\big|_2\,\mathrm{d}\nu^{(N)}_t\,\mathrm{d}t + \Big|\int fF_t\,\mathrm{d}(\nu^{(N)}_t-\nu_t)\,\mathrm{d}t\Big|. \]
By boundedness of \(fF_t\) over \(Q_r\), the second term converges by our choice of subsequence. For the first term, denoting throughout by \(\|\cdot\|_{BL}\) the bounded Lipschitz norm, we leverage the Lipschitz continuity of the vector field \(F\) with respect to the underlying parametric measure:
\[ \|F^{(N)}_t-F_t\|_2 \le C_r\|\nu_t-\nu^{(N)}_t\|_{BL} \]
for \(C_r>0\) large enough, again obtaining convergence by our choice of subsequence. This proves convergence of the particle model to the mean-field equation (10) on \([0,t_r]\). To extend the time interval on which we prove convergence, we use that \(E(\nu)\) (and thus \(R\)) decays along trajectories of (10). Consequently, by the boundedness of the differential \(DR\) on sublevel sets of \(R\), the Lipschitz constant of \(E(\nu_t)\) is uniformly bounded. Using again the Lipschitz continuity of \(F\), we can show that \(\sup_{u\in Q_r}\|F\|<A+Br\), i.e., that particle velocities can grow at most linearly in \(r\), and an application of Grönwall's inequality allows us to find, for every \(T>0\), an \(r>0\) such that \(\mathrm{supp}\,\nu_t\subset Q_r\) for all \(t\in[0,T]\), and propagation of chaos follows."
}, { "heading": "C PROOFS OF OPTIMALITY", "text": "Theorem 4.2. Let Assumption 1 hold and νt given by (10) converge to ν∗, then πν∗ = π∗ S ×A-a.e. Before proceeding to prove Theorem 4.2, we state the alternative form of Assumption 1 a) in the case where Θ 6= Rm−1. Our proof of the theorem above can be easily generalized to the setting of Assumption 2. Assumption 2. Assume that ω = (ω0, ω̄) ∈ R × Θ for Θ ⊂ Rm−1 which is the closure of a bounded open convex set. Furthermore ψ(s, a;ω) = ω0φ(s, a; ω̄) where φ is bounded, differentiable and Dφω is Lipschitz. Also, for all f ∈ L2(S × A) the regular values of the map ω̄ 7→ gf (ω̄) := ∫ f(s, a)φ(s, a; ω̄)dads are dense in its range and gf (ω̄) satisfies Neumann boundary conditions (i.e., for all ω̄ ∈ ∂Θ we have dgf (ω̄)(nω̄) = 0 where nω̄ ∈ Rm−1 is the normal of ∂Θ at ω̄). We prove Theorem 4.2 as sketched in Section 4 by first connecting the optimality and the support of stationary measures in parametric space through Lemma C.1, and then investigating how the dynamics preserves full support property for any t > 0 in Lemma C.2 and avoids spurious minima in Lemma C.3. Before starting this program we introduce the equivalent of greedy policies in the entropy-regularized setting. For a given Q(s, a), the associated Boltzmann policy πB with respect to a reference measure π̄ is given by\nπB(s, a) := exp [(Q(s, a)− VQ(s))/τ ] for VQ(s) := τ logEa∼π̄ [exp [Q(s, a)/τ ]] , and satisfies πB(s, ·) = arg maxπ∈M1+(A) (Ea∼π [Q(s, a)]− τDKL(π; π̄)(s)) One can then define the Boltzmann backup or soft Bellman backup operator T τ that, for a given Q and the associated Boltzmann policy πB, gives the action-value function T τQ associated to πB:\nT τQ(s, a) = r̄(s, a) + γτEπ [logEa1∼π̄ [expQ(s1, a1)/τ ] |s0 = s] (C.1)\nIt is known (Haarnoja et al., 2018b, Theorem 1) that the fixed points of the above operator are optimal, i.e., they correspond to the optimal policy π∗B = π\n∗ of the entropy-regularized MDP. To state the first partial result towards the proof of Theorem 4.2, we observe that the ω0-component of the transport vector field in (10) can be written as(∫ S Cπν [∇ωψ(s, ·;ω), Qπν (s, ·)− τ log πν ] %πν (ds) ) 0 = ∫ S Cπν [φ(s, ·; ω̄), Qπν (s, ·)− τ log πν ] %πν (ds) , (C.2) where we recall that Cπ[f, g] is the covariance operator WRT the probability measure π(s,da) introduced below (10). We note in particular that the above expression only depends on ω̄. With the above information at hand we now proceed to prove Lemma C.1 relating the value of (C.2) and the optimality of fixed points of (10): Lemma C.1. Let Assumption 1 hold and let ν satisfy∫\nS×A φ(ω̄; s, a) (Qπν (s, a)− τ log πν(s, a)− Vπν (s))πν(s,da)%πν (ds) = 0 , (C.3)\nω̄-almost everywhere in Θ. Then we have that Qπν = Qπ∗ holds π̄%π̄-a.e. in S ×A.\nProof of Lemma C.1. Assuming that (C.3) holds Lebesgue-a.e. in Θ, by the assumed continuity of φ in ω̄ combined with the expressivity of φ Assumption 1 b) we must have that\nQπν (s, a)− τ log πν(s, a)− Vπν (s) = 0 πν%πν − a.e. .\nWe then rewrite the above condition in compact notation as the fixed point equation\nT τQπν (s, a) = Qπν (s, a)\nfor the soft Q learning or Boltzmann backup operator T τ defined in (C.1). Since all fixed points of T τ for γ < 1 are optimal (Nachum et al., 2017, Theorem 3), we must have that πν = πB[Q∗] = π∗ for πν%πν -almost every (s, a) ∈ S ×A. The result follows by equivalence of πν and π̄.\nConsequently, suboptimal fixed points of the dynamics (10) cannot satisfy (C.3) Lebesgue-a.e. in Θ." 
}, { "heading": "C.1 PROOF OF THEOREM 4.2", "text": "We prove below that spurious local minima that do not satisfy (C.3) Θ-a.e. are avoided by the dynamics. We do so by leveraging the approximate gradient structure of the policy gradient vector field when νt is close to one of such stationary points, as discussed in the main text. Combining this fact with the assumed convergence to ν∗ proves Theorem 4.2. We note that by the assumed homogeneity of ψ in its first component, if ν, ν′ are such that∫\nω0ν(dω0,dω̄) = ∫ ω0ν ′(dω0,dω̄) a.e. (C.4)\nthen fν( · ) = ∫ ω0φ( · ; ω̄)ν(dω0,dω̄) = ∫ ω0φ( · ; ω̄)ν′(dω0,dω̄) = fν′( · ) ,\nso that in turn we have πν = πν′ a.e.. In other words, the homogeneity of the chosen class of approximators results in a degeneracy of the map ψ : M1+(Ω) 7→ L2(S ×A). To remove this degeneracy in our analysis, we identify all the distributions ν, ν′ that are equivalent under (C.4) by defining the signed measure\nh1ν(dω̄) := ∫ ω0ν(dω0,dω̄) (C.5)\nLeveraging this definition, we prove the desired result Theorem 4.2 in two key steps: we show that\n1. the solution to (10) does not lose (projected) support for any finite time, thereby preserving the property from Assumption 1 c),\n2. stationary points ν̃ with Qπν̃ 6= Qπ∗ – which by Lemma C.1 cannot have full projected support in Θ – are avoided by the dynamics.\nThese facts are respectively summarized in the following lemmas: Lemma C.2. Let Assumption 1 a) hold and let ν0 satisfy Assumption 1 c), then for every t > 0, νt solving (10) with initial condition ν0 also satisfies Assumption 1 c).\nThroughout, we let ‖ · ‖BL denote the bounded Lipschitz norm. Lemma C.3. Let Assumption 1 hold and let ν̃ be a fixed point of (10) such that (C.3) does not hold a.e.. Then there exists ε > 0 such that if ‖h1ν̃−h1νt1 ‖BL < ε for a t1 > 0 there exists t2 > t1 such that ‖h 1 ν̃−h1νt2‖BL > ε.\nProof of Lemma C.2. Analogously to (Chizat & Bach, 2018, Lemma C.13), we aim to show that the separation property Assumption 1 c) is preserved by the evolution of ν0 along the characteristic curves X(t, u) solving\n∂tX(t, u) = Ft(X(t, u); νt), (C.6)\nwhere Ft is the transport field in (10). To reach this conclusion, the analogous result in Chizat & Bach (2018) only relies on the continuity of the map u 7→ X(t, u), established in (Chizat & Bach, 2018, Lemma B.4) under Assumption 1 a). Hence, it is sufficient for our purposes to establish continuity of the map X(t, ·) from (C.6). This property, however results immediately from the one-sided Lipschitz continuity of the vector field Ft on Qr = [−r, r]×Θ uniformly on compact time intervals, which is in turn guaranteed by the Lipschitz continuity and Lipschitz smoothness of ψ from Assumption 1 and boundedness of r.\nTo simplify the notation in the following proof, we denote throughout δ(ν) := Qπν − τ log πν − Vπν and 〈f, g〉π := ∫ S×A f(s, a)g(s, a)π(s,da)%π(ds) .\nProof of Lemma C.3. We first claim that by Lemma C.1, for any spurious fixed point (such that Qπν̃ 6= Qπ∗), there must exist a subset of Θ with positive Lebesgue measure where ν̃ loses support and such that 〈∇ψ, δ(ν̃)〉π 6= 0. This is easily proven by contradiction: if 〈∇ψ, δ(ν̃)〉π = 0 a.e. then by Lemma C.1 we have that Qπν̃ = Qπ∗ . This implies that the quantity\ngν̃(ω̄) := 〈∂ω0ψ(·;ω), δ(ν̃)〉πν = 〈ψ(·; (1, ω̄)), δ(ν̃)〉πν = 〈φ(·; ω̄), δ(ν̃)〉πν (C.7)\ncannot vanish a.e. on Θ. Then, by Assumption 1 on the regularity of g, there exists a nonzero regular value−η of gν̃(ω̄). 
Leveraging this definition, we prove the desired result, Theorem 4.2, in two key steps: we show that

1. the solution to (10) does not lose (projected) support for any finite time, thereby preserving the property from Assumption 1 c),

2. stationary points \(\tilde\nu\) with \(Q_{\pi_{\tilde\nu}}\neq Q_{\pi^*}\) – which by Lemma C.1 cannot have full projected support in \(\Theta\) – are avoided by the dynamics.

These facts are respectively summarized in the following lemmas:

Lemma C.2. Let Assumption 1 a) hold and let \(\nu_0\) satisfy Assumption 1 c); then for every \(t>0\), \(\nu_t\) solving (10) with initial condition \(\nu_0\) also satisfies Assumption 1 c).

Throughout, we let \(\|\cdot\|_{BL}\) denote the bounded Lipschitz norm.

Lemma C.3. Let Assumption 1 hold and let \(\tilde\nu\) be a fixed point of (10) such that (C.3) does not hold a.e. Then there exists \(\varepsilon>0\) such that if \(\|h^1_{\tilde\nu}-h^1_{\nu_{t_1}}\|_{BL}<\varepsilon\) for a \(t_1>0\), there exists \(t_2>t_1\) such that \(\|h^1_{\tilde\nu}-h^1_{\nu_{t_2}}\|_{BL}>\varepsilon\).

Proof of Lemma C.2. Analogously to (Chizat & Bach, 2018, Lemma C.13), we aim to show that the separation property of Assumption 1 c) is preserved by the evolution of \(\nu_0\) along the characteristic curves \(X(t,u)\) solving
\[ \partial_t X(t,u) = F_t(X(t,u);\nu_t), \tag{C.6} \]
where \(F_t\) is the transport field in (10). To reach this conclusion, the analogous result in Chizat & Bach (2018) only relies on the continuity of the map \(u\mapsto X(t,u)\), established in (Chizat & Bach, 2018, Lemma B.4) under Assumption 1 a). Hence, it is sufficient for our purposes to establish continuity of the map \(X(t,\cdot)\) from (C.6). This property, however, results immediately from the one-sided Lipschitz continuity of the vector field \(F_t\) on \(Q_r=[-r,r]\times\Theta\), uniformly on compact time intervals, which is in turn guaranteed by the Lipschitz continuity and Lipschitz smoothness of \(\psi\) from Assumption 1 and the boundedness of \(r\).

To simplify the notation in the following proof, we denote throughout
\[ \delta(\nu) := Q_{\pi_\nu}-\tau\log\pi_\nu-V_{\pi_\nu}\quad\text{and}\quad\langle f,g\rangle_\pi := \int_{S\times A}f(s,a)\,g(s,a)\,\pi(s,\mathrm{d}a)\,\varrho^\pi(\mathrm{d}s). \]

Proof of Lemma C.3. We first claim that, by Lemma C.1, for any spurious fixed point (such that \(Q_{\pi_{\tilde\nu}}\neq Q_{\pi^*}\)), there must exist a subset of \(\Theta\) with positive Lebesgue measure where \(\tilde\nu\) loses support and such that \(\langle\nabla\psi,\delta(\tilde\nu)\rangle_\pi\neq0\). This is easily proven by contradiction: if \(\langle\nabla\psi,\delta(\tilde\nu)\rangle_\pi=0\) a.e., then by Lemma C.1 we have that \(Q_{\pi_{\tilde\nu}}=Q_{\pi^*}\). This implies that the quantity
\[ g_{\tilde\nu}(\bar\omega) := \langle\partial_{\omega_0}\psi(\cdot;\omega),\delta(\tilde\nu)\rangle_{\pi_\nu} = \langle\psi(\cdot;(1,\bar\omega)),\delta(\tilde\nu)\rangle_{\pi_\nu} = \langle\phi(\cdot;\bar\omega),\delta(\tilde\nu)\rangle_{\pi_\nu} \tag{C.7} \]
cannot vanish a.e. on \(\Theta\). Then, by Assumption 1 on the regularity of \(g\), there exists a nonzero regular value \(-\eta\) of \(g_{\tilde\nu}(\bar\omega)\). Assuming without loss of generality that this regular value is negative, so that \(\eta>0\) (else invert the signs of \(\omega_0\) in the remainder of the proof), we define the nonempty sublevel set
\[ G := \{(\omega_0,\bar\omega)\in\Omega : g_{\tilde\nu}(\bar\omega)<-\eta\}\quad\text{and}\quad G_+ = \{(\omega_0,\bar\omega)\in G : \omega_0>0\}. \tag{C.8} \]
Further denoting by \(\bar G\subseteq\Theta\) the projection of \(G\) onto \(\Theta\), we have by definition that the gradient field of \(g_{\tilde\nu}(\bar\omega)\) is orthogonal to the level set \(\partial\bar G\), the latter being an orientable manifold of dimension \(m-2\). Denoting by \(n_{\bar\omega}\) the outward normal unit vector to \(\partial\bar G\), by continuity of \(\nabla g_{\tilde\nu}(\bar\omega)\) when \(\bar G\) is compact (if \(\bar G\) is not compact, we choose \(\eta\) to also be a regular value of the function on \(\{\bar\omega\in\mathbb{R}^{m-1}:\|\bar\omega\|_2=1\}\) to which \(g\) converges as \(\bar\omega\) goes to infinity) we can bound the scalar product between the two away from 0, i.e., there exists
\[ \beta := \min_{\bar\omega\in\partial\bar G}n_{\bar\omega}\cdot\nabla_{\bar\omega}g_{\tilde\nu}(\bar\omega) > 0. \]
We now prove that the stationarity assumption in an \(\varepsilon\)-neighborhood of a spurious fixed point,
\[ \|h^1_{\tilde\nu}-h^1_{\nu_t}\|_{BL}<\varepsilon\quad\text{for all }t>t_1, \tag{C.9} \]
leads to a contradiction for \(\varepsilon\) small enough. To do so, by Lemma C.5 we set \(\varepsilon(\alpha,\eta,\beta)\) small enough so that, for all \(\nu_t\) such that (C.9) holds, we have \(g_{\nu_t}(\bar\omega)<-\eta/2\) on \(\bar G\) and \(n_{\bar\omega}\cdot\nabla_{\bar\omega}g_{\nu_t}>\beta/2\) on \(\partial\bar G\). Then, the two inequalities above, combined with \(\partial_{\omega_0}\psi(\omega_0,\bar\omega)=\psi(1,\bar\omega)\), imply that the set \(G_+\) defined above is forward invariant and therefore that \(\partial_t\nu_t(G_+)\ge0\) as long as (C.9) holds. Furthermore, by similar arguments we notice that characteristic trajectories cannot enter the set \(G\setminus G_+\) after \(t_1\). Now, we consider two cases: either (i) a positive amount of mass is present at \(t_1\) in the forward invariant set \(G_+\) (\(\nu_{t_1}(G_+)>0\)), or (ii) \(\nu_{t_1}(G_+)=0\). We discuss these two cases separately, along the lines of (Chizat & Bach, 2018, Lemmas C.4, C.18), respectively.

(i) Assume that \(\nu_{t_1}(G_+)>0\). We note that under our assumptions the first component of the velocity field in \(G\) is lower bounded by \(\eta/2\), so that \(\omega_0(0)+t\eta/2\) bounds from below the \(\omega_0\)-component of the trajectory of a test mass with initial condition \(\omega(0)\in G\), as long as \(\bar\omega(t)\in\bar G\). Combining this bound with the forward invariance of \(G_+\), we see that if \(\omega(0)\in G_+\) then \(\omega_0(t)>t\eta/2\). Consequently, assuming that \(\mathrm{supp}(\nu_t)\subset(-M,M)\times\Theta\) for every \(t>t_1\), we have
\[ h^1_{\nu_t}(\bar G) \ge \frac{\eta}{2}(t-t_1)\,\nu_{t_1}(G_+)+\min\{0,(t-t_1)\eta/2-M\}\,\nu_{t_1}(G\setminus G_+). \]
This implies linear growth of \(h^1_{\nu_t}(\bar G)\) for \(t>t_1+2M/\eta\), contradicting the original assumption that \(\|h^1_{\tilde\nu}-h^1_{\nu_t}\|_{BL}<\varepsilon\) for all \(t>t_1\).

(ii) Consider now the complementary case \(\nu_{t_1}(G_+)=0\). We proceed to show that there exists \(t_2>t_1\) such that \(\nu_{t_2}(G_+)>0\), thereby reducing this case at time \(t_2\) to part (i). To do so, we consider \(\omega^*\in\mathrm{supp}(\nu_{t_1})\) such that \(\bar\omega^*\in\bar G\) is a local minimum of \(g_{\tilde\nu}\), i.e., for which \(\nabla g_{\tilde\nu}=0\) (which exists by the preservation of the support property, Assumption 1 c)). Then, choosing \(\tilde\varepsilon\) such that \(B_{\tilde\varepsilon}(\bar\omega^*)\subset\bar G\), and setting \(M\) large enough that \(\mathrm{supp}(\nu_{t_1})\subseteq[-M,M]\times\Theta\), we prove below in Lemma C.4 that there exists \(t_2>t_1\) for which the image at \(t_2\) of \(\omega(t_1):=\omega^*\) under the characteristic flow (C.6) is contained in \(G_+\). By continuity of the flow map \(X(\cdot,t)\), this conclusion extends to a neighborhood of \(\omega^*\), with positive mass under \(\nu_{t_1}\).

We denote throughout by \(\|\cdot\|_{C^1}\) the maximum of the supremum norm of a function and the supremum norm of its gradient, and recall the structure of the policy gradient vector field
\[ F_t(\omega,\nu_t) = -\nabla\langle\omega_0\phi(\bar\omega),\delta(\nu_t)\rangle_{\pi_\nu} = -\nabla\big(\omega_0\,g_{\nu_t}(\bar\omega)\big), \tag{C.10} \]
where \(g\) is defined in (C.7) and \(\nu_t\) solves (10).
With these definitions, we proceed to prove that case (ii) in the analysis above ultimately reduces to case (i) for \(t\) large enough.

Lemma C.4. Let \(\tilde\nu\in\mathcal{M}^+_1(\Omega)\) and \(\bar\omega^*\) satisfy \(|\nabla g_{\tilde\nu}(\bar\omega^*)|=0\), \(g_{\tilde\nu}(\bar\omega^*)<-\eta<0\) for some \(\eta>0\). Then for every \(\tilde\varepsilon,M>0\) there exist \(t_2,\varepsilon>0\) such that, if for all \(t\in(0,t_2)\) we have \(\|g_{\tilde\nu}-g_{\nu_t}\|_{C^1}<\varepsilon\) and \(\omega^*_0\in[-M,0]\), then the point \(\omega^*\) is mapped, under the flow of the policy gradient vector field (C.10) at time \(t_2\), to a subset of \(B_{\tilde\varepsilon}((1,\bar\omega^*))\).

Proof of Lemma C.4. By homogeneity of the approximator, we can bound the first component of the velocity of a particle \((\omega_0(t),\bar\omega(t))\) under (C.10) with initial condition \(\bar\omega(0)=\bar\omega^*\) as
\[ \frac{\mathrm{d}}{\mathrm{d}t}\omega_0(t) = -g_{\nu_t}(\bar\omega(t)) \ge -g_{\tilde\nu}(\bar\omega^*)-|g_{\tilde\nu}(\bar\omega(t))-g_{\tilde\nu}(\bar\omega^*)|-|g_{\nu_t}(\bar\omega(t))-g_{\tilde\nu}(\bar\omega(t))|. \]
In the other directions, defining \(q(t):=\|\bar\omega(t)-\bar\omega^*\|\), we have
\[ \frac{\mathrm{d}}{\mathrm{d}t}q(t) \le |\omega_0(t)|\,\|\nabla_{\bar\omega}g_{\nu_t}(\bar\omega(t))\| \le |\omega_0(t)|\,\big[\|\nabla_{\bar\omega}g_{\tilde\nu}(\bar\omega^*)\|+\|\nabla_{\bar\omega}g_{\tilde\nu}(\bar\omega(t))-\nabla_{\bar\omega}g_{\tilde\nu}(\bar\omega^*)\|+\|\nabla_{\bar\omega}g_{\nu_t}(\bar\omega(t))-\nabla_{\bar\omega}g_{\tilde\nu}(\bar\omega(t))\|\big] \]
for all \(t\in[0,\bar\tau]\), where \(\bar\tau:=\inf\{t:\omega_0(t)\notin[-M,1]\}\). Moreover, Lipschitz continuity of the potential \(g_{\tilde\nu}(\cdot)\) and its Lipschitz smoothness imply the existence of an \(L>0\) such that \(\max\{|g_{\tilde\nu}(\bar\omega)-g_{\tilde\nu}(\bar\omega^*)|,\|\nabla g_{\tilde\nu}(\bar\omega)-\nabla g_{\tilde\nu}(\bar\omega^*)\|\}\le L\|\bar\omega-\bar\omega^*\|\). Combining this with the assumed convergence of \(\nu_t\) to \(\tilde\nu\), which implies \(\|g_{\tilde\nu}-g_{\nu_t}\|_{C^1}<\varepsilon\), we can bound the evolution of \((\omega_0(t),q(t))\) for \(t\in[0,\bar\tau]\) in the perturbative regime of interest as follows:
\[ \frac{\mathrm{d}}{\mathrm{d}t}\omega_0(t) \ge \eta-\varepsilon-Lq(t), \tag{C.11} \]
\[ \frac{\mathrm{d}}{\mathrm{d}t}q(t) \le |\omega_0(t)|\,[\varepsilon+Lq(t)]. \tag{C.12} \]
We now show that, choosing both \(\varepsilon\) and a neighborhood around \(\omega^*=(\omega^*_0,\bar\omega^*)\) to be small enough, the forward dynamics of \(\omega^*\) reaches the set \(\{\omega_0>0\}\) before \(q(t)\) can increase too much. More precisely, by possibly increasing the value of \(L\) such that \(\eta/4L<\tilde\varepsilon\), and defining \(\tau_q=\inf\{t:q(t)>\eta/4L\}\), we prove that there exists \(\varepsilon\in(0,\eta/4)\) such that \(\tau_q>\bar\tau\), i.e., that the trajectory of \(\omega^*\) reaches \(G_+\) before \(q(t)>\tilde\varepsilon\). Note that as long as \(t\in[0,\tau_q]\) and \(\varepsilon\in(0,\eta/4)\) the negative terms on the RHS of (C.11) can be bounded from below, and we have
\[ \omega_0(t) \ge \omega_0(0)+\frac{\eta}{2}t, \]
so that \(\omega_0(t)>\omega_0(0)\ge-M\). Consequently, for all \(t\in[0,\bar\tau\wedge\tau_q]\) we bound the RHS of (C.12) as \(\frac{\mathrm{d}}{\mathrm{d}t}q(t)<M\varepsilon+LMq(t)\). Using that \(q(0)=0\) and Grönwall's inequality, we can bound the total excursion in the \(\bar\omega\) component as \(q(t)\le\varepsilon Mt\exp[LMt]\). Finally, setting \(\tau_0:=2(M+1)/\eta\ge-2(\omega_0(0)-1)/\eta>\bar\tau\) so that \(\omega_0(\tau_0)>1\), we are still free to set \(\varepsilon\) small enough such that \(\tau_q>\tau_0>\bar\tau\). Indeed, by monotonicity of the upper bound on \(q(t)\) we have
\[ q(\tau_0) \le 2\varepsilon(M+1)M/\eta\,\exp[2LM(M+1)/\eta] \le \eta/4L, \]
so that setting \(\varepsilon\in(0,\eta/4)\) small enough concludes the proof.

We now proceed to show the needed regularity of the potential \(g_\nu\) from (C.7) in terms of the signed measure \(h^1_\nu\) defined in (C.5). In doing so, we also prove Lipschitz smoothness of the operator \(R\) defined in (B.1):

Lemma C.5. For any \(r>0\), the operator \(R\) on \(F_r=\{\int\psi\,\nu:\mathrm{supp}\,\nu\subset Q_r\}\) is Lipschitz smooth, and \(DR_f\) is bounded in the supremum norm. Furthermore, for all \(C_0>0\) there exist \(\alpha>0\) and \(\varepsilon>0\) such that, for all \(\nu,\nu'\) satisfying \(\|h^1_\nu\|_{BL},\|h^1_{\nu'}\|_{BL}<C_0\) and \(\|h^1_\nu-h^1_{\nu'}\|_{BL}<\varepsilon\), one has
\[ \|g_\nu-g_{\nu'}\|_{C^1} \le \alpha\|h^1_\nu-h^1_{\nu'}\|_{BL}. \tag{C.13} \]

To prove the above lemma, we first bound some relevant quantities. Throughout, by slight abuse of notation, we denote for any function \(f:S\times A\to\mathbb{R}\)
\[ \|f\|_2 = \sup_{s\in S}\int_A f(s,a)^2\,\mathrm{d}a. \]

Lemma C.6. For all \(f,f'\in F_r\) there exist \(\varepsilon>0\) and \(C',C'',C'''>0\) such that, if \(\|f-f'\|_2<\varepsilon\), one has
\[ \|\pi_\nu-\pi_{\nu'}\|_2 \le C'\|f-f'\|_2, \tag{C.14} \]
\[ \|\varrho^\pi-\varrho^{\pi'}\|_1 \le C''\|f-f'\|_2, \tag{C.15} \]
\[ \|Q_{\pi_\nu}-Q_{\pi_{\nu'}}\|_2 \le C'''\|f-f'\|_2. \tag{C.16} \]

Proof of Lemma C.6.
Throughout this proof, for simplicity of notation, we will write \(\pi=\pi_\nu\) and \(\pi'=\pi_{\nu'}\). Furthermore, we use that for \(f\in F_r\) there exists \(C_0>0\) so that
\[ e^{-C_0\|\phi\|_{C^1}} \le \exp[f(s,a)] \le e^{C_0\|\phi\|_{C^1}}, \tag{C.17} \]
implying, together with the assumed absolute continuity of \(\varrho_0\), that \(\|Q_\pi\|_\infty,\|\pi\|_\infty,\|\varrho^\pi\|_\infty<\infty\). Setting throughout \(\tau=1\) to simplify the notation and combining the above with the pointwise upper bound \(e^x<1+K_r|x|\) for \(|x|<e^{C_0\|\phi\|_{C^1}}\), we obtain
\[ \|\pi-\pi'\|_2 \le \Big\|\frac{\exp[f(s,a)]}{\int\exp[f(s,a')]\,\mathrm{d}a'}-\frac{\exp[f'(s,a)]}{\int\exp[f'(s,a')]\,\mathrm{d}a'}\Big\|_2 \]
\[ \le \Big\|\frac{\exp[f(s,a)]}{\int\exp[f(s,a')]\,\mathrm{d}a'}-\frac{\exp[f'(s,a)]}{\int\exp[f(s,a')]\,\mathrm{d}a'}\Big\|_2+\Big\|\frac{\exp[f'(s,a)]}{\int\exp[f(s,a')]\,\mathrm{d}a'}-\frac{\exp[f'(s,a)]}{\int\exp[f'(s,a')]\,\mathrm{d}a'}\Big\|_2 \]
\[ \le \Big\|\frac{\exp[f'(s,a)]}{\int\exp[f(s,a')]\,\mathrm{d}a'}\Big\|_\infty\Big(\big\|1-\exp[f(s,a)-f'(s,a)]\big\|_2+\Big\|\frac{\int\exp[f(s,a')]\,\mathrm{d}a'}{\int\exp[f'(s,a')]\,\mathrm{d}a'}-1\Big\|_2\Big) \]
\[ \le \frac{e^{2C_0\|\phi\|_{C^1}}}{|A|}\Big(K_r\|f(s,a)-f'(s,a)\|_2+\frac{e^{2C_0\|\phi\|_{C^1}}}{|A|}\Big\|\int\big(\exp[f(s,a')-f'(s,a')]-1\big)\,\mathrm{d}a'\Big\|_2\Big) \]
\[ \le \frac{e^{2C_0\|\phi\|_{C^1}}}{|A|}K_r\big(1+e^{4C_0\|\phi\|_{C^1}}\big)\|f-f'\|_2 =: C'\|f-f'\|_2, \tag{C.18} \]
where we have denoted by \(|A|\) the Lebesgue measure of the action space \(A\). We now proceed to establish the second bound in the statement of the lemma. In this case, denoting the \(t\)-step transition probability as \(P^t_\pi(s,\mathrm{d}s_t)=\int_{S^{t-1}}P_\pi(s,\mathrm{d}s_1)P_\pi(s_1,\mathrm{d}s_2)\cdots P_\pi(s_{t-1},\mathrm{d}s_t)\), we have
\[ \|\varrho^\pi-\varrho^{\pi'}\|_1 = \Big\|\sum_{t=1}^\infty\gamma^t\int_S\varrho_0(\mathrm{d}s_0)\big(P^t_\pi(s_0,\mathrm{d}s)-P^t_{\pi'}(s_0,\mathrm{d}s)\big)\Big\|_1 \le \sum_{t=1}^\infty\gamma^t\sum_{j=0}^{t-1}\big\|\varrho_0 P^j_\pi(P_\pi-P_{\pi'})P^{t-j-1}_{\pi'}\big\|_1. \]
Observing that for any smooth \(\varrho\in\mathcal{M}^+_1(S)\), for the operator norm of the difference in the above sum we have
\[ \|\varrho(P_\pi-P_{\pi'})\|_1 = \Big\|\int_S\varrho(\mathrm{d}s)\int_A P(s,a,\mathrm{d}s')\big(\pi(s,\mathrm{d}a)-\pi'(s,\mathrm{d}a)\big)\Big\|_1 \le \|\varrho\|_1\|P\|_1\|\pi-\pi'\|_2 \le \frac{(1-\gamma)^2}{\gamma}C''\|f-f'\|_2 \]
for large enough \(C''\), where we used (C.18) in the last step and the Lipschitz continuity of \(P\) in its second argument, and \(\|P\|_1\) is the operator norm of the transition operator \(\int_A P(s,a,\mathrm{d}s')\,\pi''(s,\mathrm{d}a):\mathcal{M}^+_1(S)\to\mathcal{M}^+_1(S)\), which is equal to 1. From this we conclude
\[ \|\varrho^\pi-\varrho^{\pi'}\|_1 \le \frac{(1-\gamma)^2}{\gamma}\sum_{t=1}^\infty t\gamma^t\,C''\|f-f'\|_2 = C''\|f-f'\|_2. \]
Finally, defining for notational convenience \(R'_\tau:=\bar r-\tau D_{\mathrm{KL}}(\pi',\bar\pi)\) (and analogously \(R_\tau\) with \(\pi\) in place of \(\pi'\)), we write:
\[ \|Q_\pi-Q_{\pi'}\|_2 = \Big\|\gamma\int P(s,a,\mathrm{d}s')\big(V_\pi(s')-V_{\pi'}(s')\big)\Big\|_2 \]
\[ = \Big\|\gamma\int P(s,a,\mathrm{d}s')\Big(\int R_\tau(s'',a')\,\varrho^\pi(s',\mathrm{d}s'')\,\pi(\mathrm{d}a')-R'_\tau(s'',a')\,\varrho^{\pi'}(s',\mathrm{d}s'')\,\pi'(\mathrm{d}a')\Big)\Big\|_2 \]
\[ \le \gamma\Big\|\int P(s,a,\mathrm{d}s')\Big(R_\tau(s'',a')\,\varrho^\pi(s',\mathrm{d}s'')\,\pi(\mathrm{d}a')-R'_\tau(s'',a')\,\varrho^{\pi'}(s',\mathrm{d}s'')\,\pi'(\mathrm{d}a')\Big)\Big\|_2 \]
\[ \le \Big\|\int(R_\tau-R'_\tau)(s'',a')\,\varrho^\pi(s',\mathrm{d}s'')\,\pi(\mathrm{d}a')\Big\|_\infty+\Big\|\int R'_\tau(s'',a')\,(\varrho^\pi-\varrho^{\pi'})(s',\mathrm{d}s'')\,\pi(\mathrm{d}a')\Big\|_\infty+\Big\|\int R'_\tau(s'',a')\,\varrho^{\pi'}(s',\mathrm{d}s'')\,(\pi-\pi')(\mathrm{d}a')\Big\|_\infty \tag{C.19} \]
and bound each term separately, letting \(C'''_1,C'''_2,C'''_3>0\) be large enough constants. For the first, we have:
\[ \Big\|\int(R_\tau-R'_\tau)(s'',a')\,\varrho^\pi(s',\mathrm{d}s'')\,\pi(s'',\mathrm{d}a')\Big\|_\infty \le \frac{1}{1-\gamma}\|\pi\|_2\|\log\pi-\log\pi'\|_2 \]
\[ \le \frac{1}{1-\gamma}\|\pi\|_2\Big(\|f-f'\|_2+\Big\|\log\frac{\int e^{f(s,a)}\,\mathrm{d}a}{\int e^{f'(s,a)}\,\mathrm{d}a}\Big\|_2\Big) \le C'''_1\|f-f'\|_2, \tag{C.20} \]
where we have bounded the log term as follows:
\[ \Big\|\log\frac{\int e^{f(s,a)}\,\mathrm{d}a}{\int e^{f'(s,a)}\,\mathrm{d}a}\Big\|_2 = \Big\|\log\Big(\frac{\int e^{f(s,a)}-e^{f'(s,a)}\,\mathrm{d}a}{\int e^{f'(s,a)}\,\mathrm{d}a}+1\Big)\Big\|_2 \]
\[ \le \Big\|\log\Big(\frac{\int e^{f'(s,a)}\big(e^{|f(s,a)-f'(s,a)|}-1\big)\,\mathrm{d}a}{\int e^{f'(s,a)}\,\mathrm{d}a}+1\Big)\Big\|_2 \]
\[ \le \Big\|\log\Big(\frac{\|e^{-f'}\|_\infty\|e^{f'}\|_\infty}{|A|}\int\big(e^{|f(s,a)-f'(s,a)|}-1\big)\,\mathrm{d}a+1\Big)\Big\|_2 \]
\[ \le \Big\|\log\Big(\frac{\|e^{-f'}\|_\infty\|e^{f'}\|_\infty}{|A|}K_r\int|f(s,a)-f'(s,a)|\,\mathrm{d}a+1\Big)\Big\|_2 \]
\[ \le \big\|\log\big(\|e^{-f'}\|_\infty\|e^{f'}\|_\infty K_r\|f(s,a)-f'(s,a)\|_2+1\big)\big\|_2 \le \|e^{-f'}\|_\infty\|e^{f'}\|_\infty K_r\|f-f'\|_2. \tag{C.21} \]
For the second term in (C.19), using the boundedness \(\|R'_\tau\|_2<C_R\) and that \(\varrho^\pi-\varrho^{\pi'}\), started from \(s'\), can be bounded as in (C.15) with an initial distribution \(\varrho_0\) depending on \(s'\), we write
\[ \Big\|\int R'_\tau(s'',a')\,(\varrho^\pi-\varrho^{\pi'})(s',\mathrm{d}s'')\,\pi(s'',\mathrm{d}a')\Big\|_\infty \le \|\pi\|_2\|R'_\tau(s'',a')\|_2\|\varrho^\pi-\varrho^{\pi'}\|_1 \le C'''_2\|f-f'\|_2. \tag{C.22} \]
We finally bound the third term by writing
\[ \Big\|\int R'_\tau(s'',a')\,\varrho^{\pi'}(s',\mathrm{d}s'')\,(\pi-\pi')(s'',\mathrm{d}a')\Big\|_\infty \le \|\varrho^{\pi'}\|_1\|R'_\tau(s'',a')\|_2\|\pi-\pi'\|_2 \le C'''_3\|f-f'\|_2, \tag{C.23} \]
and obtain (C.16) by combining (C.20)-(C.23).

Proof of Lemma C.5. We first establish the desired properties of the functional \(R\). To do so, we differentiate (8) at \(f\in L^2(S\times A)\):
\[ \frac{\delta\pi_f}{\delta f}(s,a) = \frac{\delta}{\delta f}\,\frac{e^{\tau f(s,a)}}{\int_A e^{\tau f(s,a)}\,\mathrm{d}a} = \frac{1}{\int_A e^{\tau f}\,\mathrm{d}a}\,\frac{\delta}{\delta f}e^{\tau f}-\frac{e^{\tau f}}{\big(\int_A e^{\tau f}\,\mathrm{d}a\big)^2}\int_A\frac{\delta}{\delta f}e^{\tau f}\,\mathrm{d}a = \tau\Big(f-\int_A f\,\pi_f(s,\mathrm{d}a')\Big)\pi_f(s,a). \tag{C.24} \]
Then, combining the above with (A.6) and Hölder's inequality, we obtain
\[ \|DR_f\|_{\infty,r} = \sup_{f\in F_r}\int\big(DR_f(s,a)\big)^2\,\mathrm{d}s\,\mathrm{d}a < \infty, \]
where we have used that all the terms appearing in \(DR_f\) are bounded for every choice of \(r>0\). To establish Lipschitz smoothness of \(R\), letting \(f,f'\in F_r\) and denoting, to simplify notation, \(\pi=\pi_f\) and \(\pi'=\pi_{f'}\), we proceed to bound the operator norm by splitting the RHS as
\[ \|DR_f-DR_{f'}\| \le \sup_{\ell\in L^2(S\times A)\,:\,\|\ell\|=1}\big|C_\pi[\ell,Q_\pi-\tau\log\pi]-C_{\pi'}[\ell,Q_{\pi'}-\tau\log\pi']\big| \le \sup_{\ell\in L^2(S\times A)\,:\,\|\ell\|=1}\big[(\mathrm{I})+(\mathrm{II})+(\mathrm{III})\big] \]
and considering the resulting terms separately. First of all, defining throughout, by slight abuse of notation, \(\delta(\pi):=Q_\pi-\tau\log\pi\), we have
\[ (\mathrm{I}) := \Big|\Big\langle\int\ell(s,a)\,\pi'(s,\mathrm{d}a)-\int\ell(s,a)\,\pi(s,\mathrm{d}a),\,\delta(\pi')\Big\rangle_{\pi'}\Big| \]
\[ \le \int_S\Big(\int_A\ell(s,a)\big(\pi(s,a)-\pi'(s,a)\big)\,\mathrm{d}a\Big)\Big(\int\delta(\pi')\,\pi'(\mathrm{d}a)\Big)\varrho^{\pi'}(s)\,\mathrm{d}s \le \|\ell\|_2\|\pi-\pi'\|_2\|\delta(\pi')\pi'\|_2\|\varrho^{\pi'}\|_1 \le C\|\ell\|_2\|f-f'\|_2, \tag{C.25} \]
where the last step was obtained using (C.14) and boundedness from above and below of \(\pi\), \(Q_\pi\) in \(F_r\). We further write, for a \(K<\infty\) large enough,
\[ (\mathrm{II}) := \Big|\Big\langle\ell(s,a)-\int\ell(s,a')\,\pi(s,\mathrm{d}a'),\,\delta(\pi')\Big\rangle_{\pi'}-\Big\langle\ell(s,a)-\int\ell(s,a')\,\pi(s,\mathrm{d}a'),\,\delta(\pi')\Big\rangle_\pi\Big| \]
\[ \le \Big|\int\delta(\pi')\,\ell(s,a)\,\varrho^\pi(s)\,(\pi-\pi')(s,\mathrm{d}a)\,\mathrm{d}s\Big|+\Big|\int\ell(s,a)\,\delta(\pi')\,(\varrho^\pi-\varrho^{\pi'})(s)\,\pi'(s,\mathrm{d}a)\,\mathrm{d}s\Big| \]
\[ \le \|\delta(\pi')\|_2\|\varrho^{\pi'}\|_1\|\ell\|_2\big(\|\pi-\pi'\|_2+\|\varrho^\pi-\varrho^{\pi'}\|_1\big) \le K\|\ell\|_2\|f-f'\|_2, \tag{C.26} \]
where we have used the Cauchy-Schwarz inequality together with (C.14) and (C.15). Finally, using (C.16) and (C.21), we bound, for \(K<\infty\) possibly larger than above,
\[ (\mathrm{III}) := \Big|\Big\langle\ell(s,a)-\int\ell(s,a')\,\pi(s,\mathrm{d}a'),\,\delta(\pi)-\delta(\pi')\Big\rangle_\pi\Big| \le \|\varrho^\pi\pi\|_\infty\|\ell\|_2\|\delta(\pi)-\delta(\pi')\|_2 \]
\[ \le \|\varrho^\pi\|_1\|\pi\|_2\|\ell\|_2\Big(\|Q_\pi-Q_{\pi'}\|_2+\|V_\pi-V_{\pi'}\|_2+\tau\Big\|\log\frac{\pi}{\pi'}\Big\|_2\Big) \]
\[ \le \|\varrho^\pi\|_1\|\pi\|_2\|\ell\|_2\Big(\big(1+\|\pi'\|_\infty\big)\|Q_\pi-Q_{\pi'}\|_2+\|Q_\pi\|_2\|\pi-\pi'\|_2+\tau\Big\|\log\frac{\pi}{\pi'}\Big\|_2\Big) \le K\|\ell\|_2\|f-f'\|_2. \tag{C.27} \]
Combining (C.25), (C.26) and (C.27) we obtain that
\[ L_{DR} = \sup_{f,f'\in F_r}\frac{\|DR_f-DR_{f'}\|}{\|f-f'\|_2} < \infty, \]
proving the Lipschitz smoothness claim. Proceeding to the proof of (C.13), combining the Lipschitz smoothness of \(R\) on bounded sets, the identity \(\psi(s,a;(1,\bar\omega))=\phi(s,a;\bar\omega)\) and the boundedness of the set \(\{\int\psi\,\nu:\nu\in\mathcal{P}_2(\Omega),\,|h^1_\nu|<C_0\}\), we obtain
\[ \|f_\nu-f_{\nu'}\|_2 = \Big\|\int\psi(\cdot;\omega)\,(\nu-\nu')(\mathrm{d}\omega)\Big\|_2 = \Big\|\int\phi(\cdot;\bar\omega)\,(h^1_\nu-h^1_{\nu'})(\mathrm{d}\bar\omega)\Big\|_2 \tag{C.28} \]
\[ \le \sup_{\ell\in L^2(S\times A),\,\|\ell\|\le1}\int\!\!\int\ell(s,a)\,\phi(s,a;\bar\omega)\,\mathrm{d}s\,\mathrm{d}a\,(h^1_\nu-h^1_{\nu'})(\mathrm{d}\bar\omega) \le \|\phi\|_{C^1}\|h^1_\nu-h^1_{\nu'}\|_{BL}, \]
which, combined with the \(\|\phi\|_{C^1}\)-Lipschitz continuity of the map \(\bar\omega\mapsto\int\ell(s,a)\,\phi(s,a;\bar\omega)\,\mathrm{d}s\,\mathrm{d}a\) and with the Lipschitz smoothness of \(R\), concludes the proof." } ]
2021
GLOBAL OPTIMALITY OF SOFTMAX POLICY GRADIENT WITH SINGLE HIDDEN LAYER NEURAL NETWORKS IN THE MEAN-FIELD REGIME
SP:fbcb2bbd4ca1133e8ae2178e02d3a7393ec4e05d
[ "This paper presents an interesting idea of using neural-network-based RL to solve a type of vehicle routing problems, where the vehicles are tasked with visiting spatial locations to deliver items, and are subject to load capacity and delivery time constraints. In order to solve this problem, the authors propose an encoder-decoder architecture to decide for each robot, where to move next. The encoder is inspired from the Covariance Compositional Networks and the decoder utilizes an attention module, and the network is trained via REINFORCE. The results show that the proposed method outperforms the baseline in unseen test cases, in terms of task completion rate and the specified cost function.", "The paper proposes a graph learning approach for solving the multi-robot task allocation (MRTA) problem. It frames the problem as a Markov Decision Process (MDP) and trains a policy with a graph neural network architecture using REINFORCE. Results show that the proposed approach scales better compared to a non-learning baseline and is more accurate than a multi-headed attention (MHA) approach.", "This paper proposed a neural architecture for learning to solve multi-robot task allocation (MRTA) problems. The MRTA problem is modeled as a MDP, and then the so-called covariant attention-based neural architecture (CAM) is proposed. The main paper contribution is the CAM architecture. Case studies are presented in which the proposed CAM is compared with other learning based and non-learning based methods. In the reported experiments CAM obtained smaller errors (average cost) and smaller processing time. ", "This paper considers the multi-robot task allocation problems. To address the limitations of existing studies, such as real-world constraints, larger-sized problems and generalizations, this paper proposed a learning architecture, Covariant Attention-based Mechanism. They further conduct adequate evaluations and the results have shown great improvements over the state-of-the-art methods. ", "This paper proposes a new method, including neural network architecture, for solving time-constrained multi-robot task allocation (MRTA) problems. The proposed approach models the target problem as a Markov Decision Process (MDP) over graphs and use Reinforcement Learning (RL) methods to solve the problem. The proposed learning architecture is called Covariant Attention-based Mechanism (CAM). The architecture is shown to have better performance than an existing state-of-the-art encoder-decoder method regarding task completion, cost function, and scalability. Though the performance is still lower than non-learning-based baseline methods, i.e., BiG-MRTA, the computational cost is significantly smaller than the baselines. " ]
This paper demonstrates how time-constrained multi-robot task allocation (MRTA) problems can be modeled as a Markov Decision Process (MDP) over graphs, such that approximate solutions can be modeled as a policy using Reinforcement Learning (RL) methods. Inspired by emerging approaches for learning to solve related combinatorial optimization (CO) problems such as multi-traveling salesman (mTSP) problems, a graph neural architecture is conceived in this paper to model the MRTA policy. The generalizability and scalability needs of the complex CO problem presented by MRTA are addressed by innovatively using the concept of Covariant Compositional Networks (CCN) to learn the local structures of graphs. The resulting learning architecture is called Covariant Attention-based Mechanism or CAM, which comprises: 1) an encoder: CCN-based embedding model to represent the task space as learnable feature vectors, 2) a decoder: an attention-based model to facilitate sequential decision outputs, and 3) context: to represent the state of the mission and the robots. To learn the feature vectors, a policy-gradient method is used. The CAM architecture is found to generally outperform a state-of-the-art encoder-decoder method that is purely based on Multi-head Attention (MHA) mechanism in terms of task completion and cost function, when applied to a class of MRTA problems with time deadlines, robot ferry range constraints, and multi-tour allowance. CAM also demonstrated significantly better scalability in terms of cost function over unseen scenarios with larger task/robot spaces than those used for training. Lastly, evidence regarding the unique potential of learning-based approaches in delivering highly time-efficient solutions is provided for a benchmark vehicle routing problem – where solutions are achieved 100-1000 times faster compared to a non-learning baseline, and for a benchmark MRTA problem with time and capacity constraints – where solutions for larger problems are achieved 10 times faster compared to non-learning baselines.
[]
[ { "authors": [ "Claudia Archetti", "Dominique Feillet", "Michel Gendreau", "M. Grazia Speranza" ], "title": "Complexity of the VRP and SDVRP", "venue": "Transportation Research Part C: Emerging Technologies,", "year": 2011 }, { "authors": [ "Jean-Philippe Aurambout", "Konstantinos Gkoumas", "Biagio Ciuffo" ], "title": "Last mile delivery by drones: An estimation of viable market potential and access to citizens across european cities", "venue": "European Transport Research Review,", "year": 2019 }, { "authors": [ "Nabila Azi", "Michel Gendreau", "Jean-Yves Potvin" ], "title": "An exact algorithm for a vehicle routing problem with time windows and multiple use of vehicles", "venue": "European Journal of Operational Research,", "year": 2010 }, { "authors": [ "Thomas D Barrett", "William R Clements", "Jakob N Foerster", "Alex I Lvovsky" ], "title": "Exploratory combinatorial optimization with reinforcement learning", "venue": null, "year": 1909 }, { "authors": [ "Amir Behjat", "Hemanth Manjunatha", "Apoorva Kumar", "Jani", "Leighton Collins", "Payam Ghassemi", "Joseph Distefano", "David Doermann", "Karthik Dantu", "Ehsan Esfahani", "Souma Chowdhury" ], "title": "Learning robot swarm tactics over complex adversarial environments", "venue": "In International Symposium on Multi-Robot and Multi-Agent Systems (MRS’21),", "year": 2021 }, { "authors": [ "Tolga Bektas" ], "title": "The multiple traveling salesman problem: an overview of formulations and solution procedures", "venue": "Omega, 34(3):209–219,", "year": 2006 }, { "authors": [ "Kris Braekers", "Katrien Ramaekers", "Inneke Van Nieuwenhuyse" ], "title": "The vehicle routing problem: State of the art classification and review", "venue": "Computers & Industrial Engineering,", "year": 2016 }, { "authors": [ "Quentin Cappart", "Didier Chételat", "Elias B. Khalil", "Andrea Lodi", "Christopher Morris", "Petar Veličković" ], "title": "Combinatorial Optimization and Reasoning with Graph", "venue": "Neural Networks", "year": 2021 }, { "authors": [ "Diego Cattaruzza", "Nabil Absi", "Dominique Feillet" ], "title": "Vehicle routing problems with multiple trips", "venue": null, "year": 2016 }, { "authors": [ "Hanjun Dai", "Elias B. Khalil", "Yuyu Zhang", "Bistra Dilkina", "Le Song" ], "title": "Learning combinatorial optimization algorithms over graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "George B Dantzig", "John H Ramser" ], "title": "The truck dispatching problem", "venue": "Management science,", "year": 1959 }, { "authors": [ "M Bernardine Dias", "Robert Zlot", "Nidhi Kalra", "Anthony Stentz" ], "title": "Market-based multirobot coordination: A survey and analysis", "venue": "Proceedings of the IEEE,", "year": 2006 }, { "authors": [ "Maria Valera Espina", "Raphael Grech", "Deon De Jager", "Paolo Remagnino", "Luca Iocchi", "Luca Marchetti", "Daniele Nardi", "Dorothy Monekosso", "Mircea Nicolescu", "Christopher King" ], "title": "Multi-robot teams for environmental monitoring", "venue": "In Innovations in Defence Support", "year": 2011 }, { "authors": [ "Brian P Gerkey", "Maja J" ], "title": "Matarić. A formal analysis and taxonomy of task allocation in multi-robot systems", "venue": "The International Journal of Robotics Research,", "year": 2004 }, { "authors": [ "Payam Ghassemi", "Souma Chowdhury" ], "title": "Decentralized Task Allocation in Multi-Robot Systems via Bipartite Graph Matching Augmented With Fuzzy Clustering", "venue": "In Volume 2A: 44th Design Automation Conference, pp. 
V02AT03A014. American Society of Mechanical Engineers, aug 2018", "year": 2018 }, { "authors": [ "Payam Ghassemi", "Souma Chowdhury" ], "title": "Multi-robot task allocation in disaster response: Addressing dynamic tasks with deadlines and robots with range and payload constraints", "venue": "Robotics and Autonomous Systems,", "year": 2021 }, { "authors": [ "Payam Ghassemi", "David DePauw", "Souma Chowdhury" ], "title": "Decentralized Dynamic Task Allocation in Swarm Robotic Systems for Disaster Response: Extended Abstract", "venue": "IEEE. ISBN 978-1-7281-2876-4. doi: 10.1109/MRS.2019.8901062. URL https://ieeexplore.ieee.org/document/8901062/", "year": 2019 }, { "authors": [ "William L. Hamilton", "Rex Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "CoRR, abs/1706.02216,", "year": 2017 }, { "authors": [ "Keld Helsgaun" ], "title": "An Extension of the Lin-Kernighan-Helsgaun TSP Solver for Constrained Traveling Salesman and Vehicle Routing Problems: Technical report", "venue": "Roskilde Universitet,", "year": 2017 }, { "authors": [ "Truong Son Hy", "Shubhendu Trivedi", "Horace Pan", "Brandon M Anderson", "Risi Kondor" ], "title": "Predicting molecular properties with covariant compositional networks", "venue": "The Journal of chemical physics,", "year": 2018 }, { "authors": [ "Sarah Ismail", "Liang Sun" ], "title": "Decentralized hungarian-based approach for fast and scalable task allocation", "venue": "In 2017 International Conference on Unmanned Aircraft Systems (ICUAS),", "year": 2017 }, { "authors": [ "Jiechuan Jiang", "Chen Dun", "Tiejun Huang", "Zongqing Lu" ], "title": "Graph convolutional reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Kelin Jose", "Dilip Kumar Pratihar" ], "title": "Task allocation and collision-free path planning of centralized multi-robots system for industrial plant inspection using heuristic methods", "venue": "Robotics and Autonomous Systems,", "year": 2016 }, { "authors": [ "Yoav Kaempfer", "Lior Wolf" ], "title": "Learning the multiple traveling salesmen problem with permutation invariant pooling networks", "venue": "
ArXiv,", "year": 2018 }, { "authors": [ "Elias Khalil", "Hanjun Dai", "Yuyu Zhang", "Bistra Dilkina", "Le Song" ], "title": "Learning combinatorial optimization algorithms over graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Alaa Khamis", "Ahmed Hussein", "Ahmed Elmogy" ], "title": "Multi-robot task allocation: A review of the state-of-the-art", "venue": "In Cooperative Robots and Sensor Networks", "year": 2015 }, { "authors": [ "Wouter Kool", "Herke Van Hoof", "Max Welling" ], "title": "Attention, learn to solve routing problems", "venue": "In 7th International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Qingbiao Li", "Fernando Gama", "Alejandro Ribeiro", "Amanda Prorok" ], "title": "Graph neural networks for decentralized multi-robot path planning", "venue": "In IEEE International Conference on Intelligent Robots and Systems,", "year": 2020 }, { "authors": [ "Zhuwen Li", "Qifeng Chen", "Vladlen Koltun" ], "title": "Combinatorial optimization with graph convolutional networks and guided tree search", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Nina Mazyavkina", "Sergey Sviridov", "Sergei Ivanov", "Evgeny Burnaev" ], "title": "Reinforcement learning for combinatorial optimization: A survey, 2021", "venue": null, "year": 2021 }, { "authors": [ "Hakim Mitiche", "Dalila Boughaci", "Maria Gini" ], "title": "Iterated local search for time-extended multi-robot task allocation with spatio-temporal and capacity constraints", "venue": "Journal of Intelligent Systems,", "year": 2019 }, { "authors": [ "Akash Mittal", "Anuj Dhawan", "Sahil Manchanda", "Sourav Medya", "Sayan Ranu", "Ambuj Singh" ], "title": "Learning heuristics over large graphs via deep reinforcement learning", "venue": null, "year": 1903 }, { "authors": [ "R Nallusamy", "K Duraiswamy", "R Dhanalaksmi", "P Parthiban" ], "title": "Optimization of non-linear multiple traveling salesman problem using k-means clustering, shrink wrap algorithm and meta-heuristics", "venue": "International Journal of Nonlinear Science,", "year": 2009 }, { "authors": [ "Alex Nowak", "Soledad Villar", "Afonso S Bandeira", "Joan Bruna" ], "title": "A note on learning algorithms for quadratic assignment with graph neural networks", "venue": null, "year": 2017 }, { "authors": [ "Ernesto Nunes", "Marie Manner", "Hakim Mitiche", "Maria Gini" ], "title": "A taxonomy for task allocation problems with temporal and ordering constraints", "venue": "Robotics and Autonomous Systems,", "year": 2017 }, { "authors": [ "Edwin Olson", "Johannes Strom", "Ryan Morton", "Andrew Richardson", "Pradeep Ranganathan", "Robert Goeddel", "Mihai Bulic", "Jacob Crossman", "Bob Marinier" ], "title": "Progress toward multi-robot reconnaissance and the magic 2010 competition", "venue": "Journal of Field Robotics,", "year": 2012 }, { "authors": [ "Rohan Paleja", "Andrew Silva", "Letian Chen", "Matthew Gombolay" ], "title": "Interpretable and personalized apprenticeship scheduling: Learning interpretable scheduling policies from heterogeneous user demonstrations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Eric Schneider", "Elizabeth I Sklar", "Simon Parsons", "A Tuna Özgelen" ], "title": "Auction-based task allocation for multi-robot teams in dynamic environments", "venue": "In Conference Towards Autonomous Robotic Systems,", "year": 2015 }, { "authors": [ "Malcolm Strens", "Neil Windelinckx" ], 
"title": "Combining planning with reinforcement learning for multirobot task allocation", "venue": null, "year": 2005 }, { "authors": [ "Quinlan Sykora", "Mengye Ren", "Raquel Urtasun" ], "title": "Multi-agent routing value iteration network", "venue": "In 37th International Conference on Machine Learning,", "year": 2112 }, { "authors": [ "Ekaterina V. Tolstaya", "James Paulos", "Vijay R. Kumar", "Alejandro Ribeiro" ], "title": "Multi-robot coverage and exploration using spatial graph neural networks. ArXiv", "venue": null, "year": 2011 }, { "authors": [ "Paolo Toth", "Daniele Vigo" ], "title": "Vehicle routing: problems, methods, and applications", "venue": null, "year": 2014 }, { "authors": [ "Pieter Vansteenwegen", "Wouter Souffriau", "Greet Vanden Berghe", "Dirk Van Oudheusden" ], "title": "Iterated local search for the team orienteering problem with time windows", "venue": "Computers & Operations Research,", "year": 2009 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N. Gomez", "Lukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": "CoRR, abs/1706.03762,", "year": 2017 }, { "authors": [ "Di Wang", "Mengqi Hu", "Yang Gao" ], "title": "Multi-criteria mission planning for a solar-powered multi-robot system", "venue": "ASME", "year": 2018 }, { "authors": [ "Zheyuan Wang", "Matthew Gombolay" ], "title": "Learning scheduling policies for multi-robot coordination with graph attention networks", "venue": "IEEE Robotics and Automation Letters,", "year": 2020 }, { "authors": [ "Barrett" ], "title": "Under review as a conference paper at ICLR 2022 A LEARNING OVER GRAPHS Neural network based methods for learning CO can be broadly classified into: (i) Reinforcement Learning (RL) methods Kool et al", "venue": "Strens & Windelinckx", "year": 2005 }, { "authors": [ "Mittal" ], "title": "The supervised learning approaches typically address problem scenarios where samples are abundant (e.g., influence maximization in social networks Mittal et al. (2019)) or inexpensive to evaluate (e.g., TSP Kaempfer & Wolf (2018)), and are thus unlikely to be readily applicable to solve complex problems over real-world graphs. RL based techniques to learn on graphs include attention models with REINFORCE", "venue": null, "year": 2017 }, { "authors": [ "Mittal" ], "title": "2019) presented a new framework to solve a combinatorial optimization problem. In this framework, Graph Convolutional Network (GCN) performs the graph embedding and Q-Learning learns the policy. The results demonstrated that the proposed framework can learn to solve unseen test problems that have been drawn from the same distribution as that of the training data", "venue": null, "year": 2019 }, { "authors": [ "Kool" ], "title": "networks and attention mechanism) to encode and learn the combinatorial optimization problems in graph space Kaempfer & Wolf", "venue": null, "year": 2019 }, { "authors": [ "al. Wang" ], "title": "Gombolay (2020) showed how learning can lead to generate faster solutions than standard exact methods for multi-robot scheduling problems. 
However, the size of the problem that has been studied in that work and other similar studies", "venue": "Paleja et al", "year": 2020 }, { "authors": [ "Li" ], "title": "In this paper, a new neural architecture is proposed that combines the attention mechanism with an enhanced encoding network (embedding layers), where the latter is particularly designed to capture local structural features of the graph in an equivariant manner. The embedding layer", "venue": "CO problems,", "year": 2020 }, { "authors": [ "Hy" ], "title": "CCN was originally implemented for predicting molecular properties by learning local structural information of the molecules. This node-based embedding has been chosen since it: i) operates on an undirected graph; ii) uses receptive field and aggregation functions based on tensor product and contraction operations (which are equivariant), which leads to a permutation- and rotation-invariant embedding", "venue": "Covariant Compositional Networks (CCN),", "year": 2018 }, { "authors": [ "Vaswani" ], "title": "Here we are interested in the attention mechanisms since they involve simple matrix multiplications, which make them not only computationally inexpensive", "venue": null, "year": 2017 }, { "authors": [ "Vaswani" ], "title": "implement an attention-based decoder for CAM as proposed in Kool et al", "venue": null, "year": 2019 }, { "authors": [], "title": "..sN ] is the sequence of all the nodes that were visited. The minimum cost function that can be achieved using Eq. 1 is 0, which corresponds to the case where all the tasks are successfully completed. A detailed formulation of the exact ILP constraints that describe the MRTA-TAPTC problem", "venue": null, "year": 2019 }, { "authors": [ "Vansteenwegen" ], "title": "Enhanced Iterated Local Search (EILS): EILS is also an online meta heuristic iterated search method Mitiche et al. (2019), with an improved perturbation step as compared", "venue": null, "year": 2009 }, { "authors": [ "Ghassemi" ], "title": "Bi-Graph MRTA (BiG-MRTA): The BiG-MRTA algorithm Ghassemi", "venue": null, "year": 1980 } ]
[ { "heading": "1 INTRODUCTION", "text": "In multi-robot task allocation (MRTA) problems, we study how to coordinate tasks among a team of cooperative robotic systems such that the decisions are free of conflict and optimize a quantity of interest (Gerkey & Matarić, 2004). The potential real-world applications of MRTA are immense, considering that multi-robotics is one of the most important emerging directions of robotics research and development, and task allocation is fundamental to most multi-robotic or swarm-robotic operations. Example applications include disaster response (Ghassemi & Chowdhury, 2018), last-mile delivery (Aurambout et al., 2019), environment monitoring (Espina et al., 2011), reconnaissance (Olson et al., 2012) and combat Behjat et al. (2021). Although various approaches (e.g., graph-based methods (Ghassemi et al., 2019), integer-linear programming (ILP) approaches (Nallusamy et al., 2009; Toth & Vigo, 2014), and auction-based methods (Dias et al., 2006; Schneider et al., 2015)) have been proposed to solve the combinatorial optimization problem underlying MRTA operations, they usually do not scale well with number of robots and/or tasks, and do not readily adapt to complex problem characteristics without tedious hand-crafting of the underlying heuristics. In the recent years, a rich body of work has emerged on using learning-based techniques to model solutions or intelligent heuristics for combinatorial optimization (CO) problems over graphs. The existing methods are mostly limited to classical CO problems, such as multi-traveling salesman (mTSP), vehicle routing (VRP), and max-cut type of problems. We specifically focus on a class of MRTA problems that falls into the Single-task Robots, and Single-robot Tasks (SR-ST) class defined in (Gerkey & Matarić, 2004; Nunes et al., 2017). Based on iTax taxonomoy as defined in Gerkey & Matarić (2004), these problems fall into the In-schedule Dependencies (ID) category. Here, a feasible and conflict-free task\nallocation is defined as assigning any task to only one robot (Ghassemi et al., 2019). For solving these problems, we propose a new covariant attention-based model (aka CAM), a neural architecture for learning over graphs to construct the MRTA policies. This architecture builds upon the attention mechanism concept and innovatively integrates an equivariant embedding of the graph to capture graph structure while remaining agnostic to node ordering. We implement CAM on an original MRTA problem, a suite of benchmark MRTA problems and benchmark VRP problems to perform generalizability, scalability and comparative analyses of the new method. We also perform an analysis on the impact of the neighborhood size on the performance on the benchmark MRTA problem." }, { "heading": "1.1 MULTI-ROBOT TASK ALLOCATION", "text": "In recent years, learning approaches based on Graph Neural Networks or GNN are being increasingly used to solve planning problems with a CO formulation, e.g., TSP, VRP, Max-Cut, Min-Vertex, and MRTA Kool et al. (2019); Barrett et al. (2019); Khalil et al. (2017); Kaempfer & Wolf (2018); Mittal et al. (2019); Li et al. (2018); Nowak et al. (2017); Wang & Gombolay (2020); Tolstaya et al. (2020); Sykora et al. (2020); Dai et al. (2017). Further details on these related learning-based studies can be found in Appendix A. Some of the conventional ILP, MILP, and INLP based methods for MRTA have been discussed in Appendix D.1. 
GNNs provide the advantage of being able to capture both Euclidean features (e.g., task location) and non-Euclidean features such as task capacity, task deadline, and the local structure of task neighborhoods. The latter serve as higher-level meaningful features that assist in generalized decision-making. These existing studies are however limited in three key aspects: 1) They address simplified problems that often exclude common real-world factors such as resource and capacity constraints (Kool et al. (2019); Kaempfer & Wolf (2018); Khalil et al. (2017); Tolstaya et al. (2020)). 2) They are mostly focused on smaller-sized problems (≤ 100 tasks and 10 robots) (Paleja et al. (2020); Strens & Windelinckx (2005); Wang & Gombolay (2020); Sykora et al. (2020)), with their scalability remaining unclear. 3) They rarely provide evidence of generalizing to problem scenarios that are larger in size than those used for training. This capability is particularly critical since real-world MRTA problems often involve simulating episodes whose costs scale with the number of tasks and robots, making re-training efforts burdensome. To address these gaps, here we propose a new learning framework that can solve large-sized MRTA problems (SR-ST) with commonly considered constraints – involving up to 1000+ tasks and 200+ robots – and generalize across even larger problem scenarios without the need to re-train. For most practical scenarios with a large number of locations, a perfectly optimal solution is not always needed; rather, a good feasible solution obtained quickly is the priority, as pointed out by Cappart et al. (2021). Therefore, to enable scalable policies, we design a novel encoder based on the concept of Covariant Compositional Networks (CCN) Hy et al. (2018), which is hypothesized to effectively combine local structural information with permutation invariance. The encoder is followed by a decoder based on a Multi-head Attention mechanism Kool et al. (2019); Vaswani et al. (2017), which fuses the encoded information and problem/mission-specific information (Context) using simple matrix multiplications, in order to enable decentralized sequential decision-making." }, { "heading": "1.2 CONTRIBUTIONS OF THIS PAPER", "text": "The primary contributions of this paper can thus be stated as follows: 1) We formulate the general SR-ST class of MRTA problems as a Markov Decision Process (MDP) over graphs, with the multi-robot state information embedded as the context portion of the policy model, such that the (task allocation) policy can be learned using an RL approach. 2) We design the GNN that acts as the policy network as an encoder-decoder architecture, where the encoder is innovatively based on covariant compositional networks (CCN), whose embedding capabilities significantly improve generalizability and scalability to larger task graphs and multi-robot teams. 3) We implement an attention-based decoder (inspired by Kool et al. (2019)) to enable sequential decision-making, and specifically extend it to a multi-agent combinatorial optimization setting. The proposed CAM architecture is evaluated on a representative MRTA problem that involves coordinating a team of unmanned aerial vehicles (UAVs) to time-efficiently deliver flood relief. The results of this case study demonstrate how CAM clearly outperforms the state-of-the-art attention-based method AM (Kool et al., 2019) in terms of scalability and convergence, thereby emphasizing the effectiveness of the new encoder. 
Further case studies show that CAM continues to compare favorably to AM over benchmark MRTA problems (with time and capacity constraints) and CVRP problems. Comparisons to non-learning baselines for these benchmark problems demonstrate the significant online computation advantages of learnt policies, with the latter being 10-100 times faster. The remainder of the paper is organized as follows: Section 2 defines the MRTA problem and its formulation as an MDP over graphs. Section 3 presents our proposed new GNN architecture. Section 4 describes the simulation settings and the different case studies. Results are discussed in Section 5." }, { "heading": "2 MRTA: PROBLEM DEFINITION AND FORMULATIONS", "text": "The multi-robot task allocation (MRTA) problem is defined as the allocation of tasks and resources among several robots that act together without conflict in the same environment to accomplish a common mission. The optimum solution (decision) of the MRTA problem is a sequence of tasks for each robot (a conflict-free allocation) that maximizes the mission outcome (e.g., fraction of tasks completed) or minimizes the mission cost (e.g., total distance travelled), subject to the robots' range constraints. Here, the following assumptions are made: 1) All robots are identical and start/end at the same depot; 2) There are no environmental uncertainties; 3) The location \((x_i,y_i)\) of task-\(i\) and its time deadline \(\tau_i\) are known to all robots; 4) Each robot can share its state and its world view with other robots; and 5) There is a depot (Task-0), where each robot starts from and which it visits if no other tasks are feasible to undertake due to the lack of available range. Each tour is defined as departing from the depot, undertaking at least one task, and returning to the depot. 6) Motivated by the multi-UAV relief delivery problem, tasks are considered to be instantaneous, which means that reaching the waypoint associated with a task completes that task. This MRTA problem belongs to a class of combinatorial optimization problems that can be modeled in graph space. In order to learn policies that yield solutions to this CO problem, we express the MRTA problem as a Markov Decision Process (MDP) over a graph, described next. The optimization formulation of MRTA is then given in Section 2.2." }, { "heading": "2.1 MDP OVER A GRAPH", "text": "The MRTA problem involves a set of nodes/vertices \(V\) and a set of edges \(E\) that connect the vertices to each other, which can be represented as a complete graph \(G=(V,E)\). Each node represents a task, and each edge connects a pair of nodes. Let \(\Omega\) be a weight matrix where the weight of an edge (\(\omega_{ij}\in\Omega\)) represents the cost (e.g., distance) incurred by a robot to take task-\(j\) after achieving task-\(i\). For MRTA with \(N\) tasks, the number of vertices and the number of edges are \(N\) and \(N(N-1)/2\), respectively. Node \(i\) is assigned a 3-dimensional feature vector denoting the task location and time deadline, i.e., \(d_i=[x_i,y_i,\tau_i]\), where \(i\in[1,N]\). Here, \(\omega_{ij}\) can be computed as \(\omega_{ij}=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2}\), where \(i,j\in[1,N]\).

The MDP, defined in a decentralized manner for each individual robot (to capture its task selection process), can be expressed as a tuple \(\langle S,A,\mathcal{P}_a,R\rangle\). 
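The task-graph construction just described is straightforward to set up in code; the following is a minimal sketch, where the random task locations and deadlines are assumed purely for illustration:

```python
import numpy as np

# Construct the MRTA task graph of Section 2.1: node features d_i = [x_i, y_i, tau_i]
# and edge weights w_ij = Euclidean distance between task locations.
rng = np.random.default_rng(3)
N = 50                                   # number of tasks (node 0 will be the depot)
xy = rng.random((N + 1, 2))              # task/depot locations in the unit square
deadlines = 0.3 + rng.random(N + 1)      # time deadlines tau_i (depot gets a dummy value)
d = np.hstack([xy, deadlines[:, None]])  # node feature vectors d_i = [x_i, y_i, tau_i]

# Weight matrix Omega: w_ij = sqrt((x_i - x_j)^2 + (y_i - y_j)^2)
diff = xy[:, None, :] - xy[None, :, :]
Omega = np.sqrt((diff ** 2).sum(-1))     # symmetric with zero diagonal
```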
The components of the MDP can be defined as follows. State Space (S): At its decision-making instance, a robot uses a state s ∈ S that contains the following information: 1) the current mission time, 2) its current location, 3) its remaining ferry-range (battery state), 4) the planned (allocated) tasks of its peers, 5) the remaining ferry-range of its peers, and 6) the states of the tasks. The state of a task comprises its location, its time deadline, and its status – active, completed, or missed (i.e., the deadline has passed). Here we assume that each robot can broadcast its information to its peers without the need for a centralized communication system, in line with modern communication capabilities Sykora et al. (2020). Action Space (A): The set of actions is represented as A, where each action a is defined as the index of the selected task, {0, . . . , N}, with 0 being the index of the depot. Task 0 (the depot) can be selected by multiple robots, but every other task is allowed to be chosen only once, and only while it is active (i.e., neither completed nor missed). Transition Model (Pa(s′|s, a)): A robot taking action a in state s reaches the next state s′ in a deterministic manner (i.e., a deterministic transition model is assumed). Reward (R): The reward function is defined as −fcost and is evaluated once there are no more active tasks (all tasks have been visited once, irrespective of being completed or missed). Transition: Transitions are event-based triggers; an event is defined as the condition that a robot reaches its selected task or visits the depot location." }, { "heading": "2.2 MRTA AS OPTIMIZATION PROBLEM", "text": "This MRTA problem is adopted from (Ghassemi et al., 2019; Ghassemi & Chowdhury, 2021) with the following modification – payload constraints are not imposed on the robots. The exact solution of the MRTA problem can be obtained by formulating it as an integer nonlinear programming problem, which can be summarily expressed as:

$$\min f_{\text{cost}} = r - u(r)\,e^{-d_r} \qquad (1)$$

where $r \in [0, 1]$ and

$$u(r) = \begin{cases} 1 & \text{if } r = 0 \\ 0 & \text{otherwise} \end{cases} \qquad (2)$$

subject to

$$\tau^{f}_{i} < \tau_i \qquad (3)$$

$$\delta_{ij} \leq \Delta_k, \quad k \in [1, N_r], \; i, j \in [0, N] \qquad (4)$$

Here $\tau^f_i$ is the time at which task i was completed, $\Delta_k$ is the available range of robot k at any point in time, and $\delta_{ij}$ is the distance between nodes i and j. A detailed formulation of the exact ILP constraints that describe the MRTA problem with range restrictions, multiple tours per robot, and tasks with deadlines can be found in (Ghassemi & Chowdhury, 2021). Note that here we use a slightly different objective/cost function, to better reflect the generalized setting for the class of MRTA problems with ferry-range and task-deadline constraints. For compactness of representation, only the main constraints involved in the studied MRTA problem are shown in the above set of equations. We however consider all of the constraints, except the one related to payload capacity, as defined in the work by (Ghassemi & Chowdhury, 2021); for a detailed formulation of the MRTA problem, please refer to (Ghassemi & Chowdhury, 2021).

Here, we craft the objective function (Eq. 1) such that it emphasizes maximizing the completion rate (i.e., the number of completed tasks divided by the total number of tasks); if a perfect completion rate (100%) is feasible, then the travel cost is also considered. The term 1 − r is the task completion rate, i.e., the number of completed tasks ($N_{\text{success}}$) divided by the total number of tasks (N), so that $r = (N - N_{\text{success}})/N$.
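As a quick illustration of Eqs. 1–2 (a sketch rather than the authors' implementation; the names are ours), the cost can be evaluated as below. Here $d_r$ is the normalized travel distance defined in the paragraph that follows.

```python
import numpy as np

def mrta_cost(num_tasks: int, num_success: int, per_robot_dist: np.ndarray) -> float:
    """Cost of Eqs. 1-2: r - u(r) * exp(-d_r). Here d_r is the normalized total
    travel distance (defined in the paragraph that follows this sketch)."""
    r = (num_tasks - num_success) / num_tasks            # r = 1 - completion rate
    d_r = per_robot_dist.sum() / (np.sqrt(2) * num_tasks)
    u = 1.0 if r == 0 else 0.0                           # u(r) of Eq. 2
    return r - u * np.exp(-d_r)                          # in (-1, 1]; negative iff r = 0

print(mrta_cost(200, 200, np.array([3.5, 2.8, 4.1])))    # all tasks done -> negative cost
```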
Here, $d_r$ is a normalized value of the total distance travelled by all robots in the team, computed as $d_r = \sum_{i=1}^{N_r} d^{\text{total}}_i / (\sqrt{2}\,N)$, where the terms $N_r$ and $d^{\text{total}}_i$ represent, respectively, the number of robots and the total distance travelled by robot i during the entire mission. The above objective function (Eq. 1) yields a positive value if the completion rate is lower than 100% and a negative value otherwise, and is bounded in the range (−1, 1]." }, { "heading": "3 COVARIANT ATTENTION-BASED NEURAL ARCHITECTURE", "text": "For learning to work on the MDP defined over graphs in Section 2.1, we need to represent each node as a continuous vector, preserving its properties as well as the structural information of that node's neighborhood.

Figure 1: Deployment of an MRTA policy using the CAM architecture: a) Robot-1 at t0; b) Robot-2 at t1. For each robot, the task graph and state feed the encoder, whose output is fused with the context by the attention-based decoder (MHA, linear, and softmax layers) to produce the probability of selecting each task; the CAM output for a previously selected task (task 2 in (b)) is set to 0.

Before describing the technical components of our proposed Covariant Attention Mechanism, the so-called CAM neural architecture, we provide here an illustration and summary description of how this policy architecture is used by robots or agents during an SR-ST operation. The CAM model for task allocation is called by/for each robot right when it reaches its current destination (task location or depot), in order to decide its next task or destination. While robots are moving toward their selected task locations, they can check whether their decision conflicts with that of another robot based on recent information. If there is a conflict, the robot with the worst time can cancel its current task and select a new task. Since full observability is assumed across the multi-robot team and the policy-model execution time is almost negligible, the current setup is agnostic to whether the online CAM model is executed centrally off-board or on-board each robot. As an example, Figure 1 illustrates how robot-1 and robot-2 use the CAM policy model to choose a task at two different decision-making instances (t = t0 and t = t1). Here, the inputs to the CAM model include: 1) the task graph information, i.e., the properties of all the tasks/nodes di and the computed weight matrix Ω; 2) the current mission time; 3) the state of robot-r; and 4) the states of robot-r's peers. The CAM model then generates the probability of selecting each task as its output. A greedy strategy of choosing the task with the highest probability is used here, which thus provides the next destination to be visited by that robot. It should be noted that the probability values for completed tasks and missed tasks (i.e., with missed deadlines) are set to 0.

Figure 2 shows the detailed architecture of CAM. As shown in this figure, the CAM model consists of three key components, namely: Context, Encoder, and Decoder. The context includes the current mission time, the state of robot-r, and the states of robot-r's peers.
The state of a robot consists of its destination (x, y coordinates) and its available range ρ. The encoder and decoder components are described below." }, { "heading": "3.1 CCN-INSPIRED NODE ENCODER", "text": "For learning over graphs, the performance of the trained model depends largely on the ability of the Graph Neural Network (GNN) to transform all the required node information into a feature vector or tensor. For our case, apart from the node properties, the essential features include a node's local neighborhood information and permutation invariance. Using a node's local neighborhood information, which consists of its association with its local neighbors, during training is more beneficial for generalizing to unseen nodes than considering the association with the entire graph, as demonstrated by frameworks such as GraphSAGE (Hamilton et al., 2017); this enables the GNN to generalize to problems with larger numbers of nodes without the need to re-train. The encoder represents the properties of each graph node (preserving its structural information) as a continuous feature vector of dimension $d_{embed}$, which is fed to the decoder. Each node i has three properties: the x-coordinate, the y-coordinate, and the time deadline τ of the task. Note that our encoding mechanism can also be extended to a probabilistic scenario, for example where an estimated deadline τ follows a probability distribution (common in disaster-response-type operations). The encoding for each node should include its properties and its positional association with its neighboring nodes. We implement a variation of CCN (Hy et al., 2018). We determine the k nearest neighbors of a node ($Nb_i$) based on the positional coordinates (x and y). The first step is to compute a feature vector by linear transformation for each node i. To encode the node properties, we apply a linear transformation to $d_i$ to get a feature vector $F^d_i$ for all $i \in [1, N]$, i.e., $F^d_i = W^d d_i^T + b^d$, where $W^d \in \mathbb{R}^{d_{embed} \times 3}$, $b^d \in \mathbb{R}^{d_{embed} \times 1}$, and $d_i = [x_i, y_i, \tau_i]$. For effective decision-making, we also need to preserve the structural information. We therefore define a matrix $F^{Nbd}_i$ as in Eq. 5:

$$F^{Nbd}_i = \text{Concat}(F^d_j),\quad j \in Nb_i \qquad (5)$$

We then compute a matrix $F^{Nb}_i$ (Eq. 6), which we believe captures the association of a node with its local (one-hop) neighbors in terms of the node properties:

$$F^{Nb}_i = W^{Nb}(F^{Nbd}_i - F^d_i) + b^{Nb} \qquad (6)$$

where $W^{Nb} \in \mathbb{R}^{d_{embed} \times d_{embed}}$ and $b^{Nb} \in \mathbb{R}^{d_{embed} \times 1}$. $F^{Nb}_i$ captures how close the node properties of the neighbors of node i are to its own, which represents how important node i is to its neighbors. Further explanation regarding this design choice is given in Appendix B, and we strongly encourage the reader to go over that section.

We compute the final embedding for each node using Eq. 7:

$$F_i = \text{Aggregate}(W^f(\text{Concat}(F^d_i, F^{Nb}_i)) + b^f) \qquad (7)$$

Here, $W^f \in \mathbb{R}^{d_{embed} \times d_{embed}}$ and $b^f \in \mathbb{R}^{d_{embed} \times 1}$. Thus we finally obtain an embedding $F_i \in \mathbb{R}^{d_{embed} \times 1}$ for each node. $W^d$, $b^d$, $W^{Nb}$, $b^{Nb}$, $W^f$, and $b^f$ are learnable weights and biases. The Aggregate function is a summation across all the columns of a matrix. This summation, along with the relative difference in node properties in Eq. 6, preserves permutation invariance and the structural properties of the graph (e.g., cognizance of inter-node distances). Note that these operations make the encoded state of a given node insensitive to the order of its neighboring nodes, so the overall state space becomes independent of the indexing of tasks and of graph rotations. Equations 5, 6, and 7 represent a single layer of encoding. Multiple layers of encoding can be performed, with the output of the previous layer serving as the input to Eqs. 5 and 6 in the next layer." }, { "heading": "3.2 ATTENTION-BASED DECODER AND CONTEXT", "text": "The main objective of the decoder is to use the information from the encoder together with the current state as context or query, and thereby choose the best task by computing, for each (task) node, the probability of being selected. The first step is to feed the embedding of each node (from the encoder) as key-value pairs (K, V); the key K and value V for each node are computed by two separate linear transformations of the node embedding obtained from the encoder. The next step is to compute a vector representing the current state, also known as the context (shown at the bottom left of Fig. 2). The context for the MHA layer in this experiment consists of the following features: 1) the current time; 2) the available range of the robot taking the decision; 3) the current location of the robot taking the decision; 4) the current destinations of the other robots; and 5) the available ranges of the other robots; all concatenated into a single vector of length $h_q$, which then undergoes a linear transformation to yield a vector of length $d_{embed}$, called the query Q. Figure 2 illustrates the structure of the decoder.

The attention mechanism can then be described as mapping the query (Q) to a set of key-value (K, V) pairs. The query Q is a vector, while K and V are matrices of size $d_{embed} \times N$ (since there are N nodes). The output is a weighted sum of the values V, with the weight vector computed using the compatibility function:

$$\text{Attention}(K, V, Q) = \text{softmax}(Q^T K / \sqrt{d_{embed}})\, V^T \qquad (8)$$

Here, each key $k_i \in K$ has dimension $d_{embed}$. In this work, we implement a multi-head attention (MHA) layer in order to determine the compatibility of Q with K and V. The MHA implemented in this work is similar to the decoder implemented in Kool et al. (2019) and Vaswani et al. (2017). As shown in (Vaswani et al., 2017), the MHA layer can be defined as:

$$\text{MHA}(K, V, Q) = \text{Linear}(\text{Concat}(\text{head}_1, \ldots, \text{head}_{h_e})) \qquad (9)$$

where $\text{head}_i = \text{Attention}(K, V, Q)$ and $h_e$ (taken as 8 here) is the number of heads. A feed-forward layer is implemented to further process the mapping that results from the MHA layer and to transform it to a dimension coherent with the number of nodes in the task graph (N). The interjecting batch-normalization layers serve to bound the values of a specific batch using the mean and variance of the batch. The final softmax layer outputs the probability values for all the nodes. The next task to be done is then chosen based on a greedy approach, meaning that the node with the highest probability is selected. Nodes that have already been visited are masked (their probability is set to 0) so that they are not available for selection at future time steps of the simulation of the multi-robot operation."
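To make Sections 3.1 and 3.2 concrete, the following is a minimal PyTorch sketch of one CCN-inspired encoding layer (Eqs. 5–7) and the single-head attention of Eq. 8. It is an illustrative reconstruction from the equations, not the released implementation; the class and variable names, the k-nearest-neighbor routine, and the random usage inputs are our own assumptions.

```python
import torch
import torch.nn as nn

class CCNEncoderLayer(nn.Module):
    """One CCN-inspired encoding layer (Eqs. 5-7)."""
    def __init__(self, d_embed: int = 128, k: int = 9):
        super().__init__()
        self.k = k
        self.W_d = nn.Linear(3, d_embed)         # F_i^d = W^d d_i + b^d
        self.W_nb = nn.Linear(d_embed, d_embed)  # Eq. 6: relative neighbor features
        self.W_f = nn.Linear(d_embed, d_embed)   # Eq. 7: applied before sum-aggregation

    def forward(self, d):  # d: (N, 3) node properties [x, y, tau]
        # k nearest neighbors by positional coordinates (x, y), excluding the node itself
        dist = torch.cdist(d[:, :2], d[:, :2])
        nbr = dist.topk(self.k + 1, largest=False).indices[:, 1:]      # (N, k)
        F_d = self.W_d(d)                                              # (N, d_embed)
        F_nbd = F_d[nbr]                                               # Eq. 5: (N, k, d_embed)
        F_nb = self.W_nb(F_nbd - F_d.unsqueeze(1))                     # Eq. 6
        F_cat = torch.cat([F_d.unsqueeze(1), F_nb], dim=1)             # columns [F_i^d, F_i^Nb]
        return self.W_f(F_cat).sum(dim=1)  # Eq. 7: sum over columns -> permutation-invariant

def attention(K, V, Q):
    """Single-head attention of Eq. 8; K, V: (d_embed, N), Q: (d_embed,)."""
    d_embed = K.shape[0]
    weights = torch.softmax(Q @ K / d_embed ** 0.5, dim=-1)  # compatibility over the N nodes
    return weights @ V.T                                     # weighted sum of the values

# Usage: embed a 200-task graph and fuse it with a context query.
d = torch.rand(200, 3)
F = CCNEncoderLayer()(d)      # (200, 128) node embeddings
K = V = F.T                   # in CAM, K and V come from two separate linear maps of F
Q = torch.rand(128)           # context query (mission time, robot states, ...)
fused = attention(K, V, Q)    # further processed by feed-forward + softmax in the decoder
```

Note how the sum over the k+1 columns in the final encoder step realizes the permutation-invariant Aggregate of Eq. 7.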
}, { "heading": "4 CASE STUDIES", "text": "We design and execute a set of numerical experiments described in Appendix D.3, to investigate the performance of our proposed learning-based algorithm over graph space (CAM) and compare it with 1) an extended version of a state-of-the-art graph learning-based algorithm proposed by (Kool et al., 2019), so called attention-based mechanism (AM) approach; 2) a recent bipartite graph matching method BiG-MRTA (Ghassemi & Chowdhury, 2021); 3) a myopic baseline called Feasibilitypreserving Random-Walk (Feas-RND) that takes randomized but feasible actions (avoiding conflicts and satisfying other problem constraints). Since the BiG-MRTA method has been shown to provide near-optimal solutions in comparison to ILP (and competitive performance w.r.t. state-of-the-art online MRTA methods) Ghassemi & Chowdhury (2021), it is used here to comparatively gauge performance of AM and CAM. The Feas-RND method on the other hand provides a baseline that AM and CAM should clearly surpass in performance (cost function), in order to demonstrate that meaningful MRTA policies are being learnt as opposed to simply mapping random feasible actions." }, { "heading": "5 RESULTS AND DISCUSSION", "text": "" }, { "heading": "5.1 GENERALIZABILITY AND SCALABILITY ANALYSIS OF CAM", "text": "The CAM model has been trained on scenarios with 200 tasks and varying robot size (randomly a robot size between 10 and 50 has been selected). Then, 100 test scenarios have been generated per robot-task size from the same distribution of training scenarios. In this paper, generalizability refers to the performance of the trained model on unseen test scenarios that involve the same (or lower) number of tasks as in the scenarios used for training; and where the test and training scenarios are drawn from the same probability distribution over task locations and deadlines. In this work, generalizability was analysed on test scenarios with the number of tasks fixed at 50 and 200, drawn from the same distribution over a 2D space, and number of robots fixed at 5 and 40. In this paper, scalability refers to the performance of the trained model over test scenarios with higher (and increasing) numbers of tasks and robots than that encountered in the scenarios used for training. Here, we analyze scalability\nby evaluating the CAM model on test scenarios with the number of tasks fixed at 500 and 1000, and the number of robots fixed at 50 and 1000. The task-to-robot ratio is however kept the same across the generalizability and scalability analysis cases, in order not to introduce another control factor affecting the numerical experiments. To measure and compare performance, we use two metrics: 1) Average cost function (Eq. 1): This metric accounts for the completion rate of tasks and the total travelled distance, averaged over the set of test scenarios; and 2) Average computing time: This measures how long each method takes to compute the entire solution, averaged over the test set. The latter is particularly important to note in scalability analysis, since state-of-the-art non-learning based MRTA methods scale poorly in terms of computing efficiency as numbers of tasks and robots increases. Table 1: MRTA – Multi-UAV flood response: Generalizability: Task size\n100%, while a positive value indicates a task completion rate below 100%. As it can be seen from Table. 1 that the proposed CAM approach outperforms the AM approach in all the test cases by achieving better mean values of the cost function, respectively. 
The CAM approach performs significantly better than AM in terms of the cost function for the lower task-to-robot ratio (here, 5). Based on Table 1, the proposed CAM approach achieved a perfect completion rate for most scenarios with a task-to-robot ratio of 5. The local structure of the graph is not only important for effective decision-making, but is also expected to be shared across various problem settings drawn from the same distribution, thereby promoting generalizability of policies when adequately captured.

To investigate the scalability of the learnt model, we use a new set of unseen test scenarios with numbers of tasks and robots much larger than those of the scenarios used in training (the latter involved 200 tasks). Table 2 shows the performance of the trained CAM and AM models in terms of the cost function (lower is better) for four large case studies, involving 500-tasks-50-robots, 500-tasks-100-robots, 1000-tasks-100-robots, and 1000-tasks-200-robots; for each case, 100 randomly generated scenarios (i.e., with randomized task locations and deadlines) are used. As shown in Table 2, the proposed CAM method outperforms the AM method in all cases, with a significant difference in the cost function for the case studies with a task-to-robot ratio of 5. In the largest case (i.e., 1000-tasks-200-robots), the model learnt by CAM achieved a perfect completion rate for most of the scenarios (Appendix E). It can be argued that, for time-critical problems such as MRTA in disaster response, generating an optimal solution is less of a priority compared to generating a feasible near-optimal solution as quickly as possible; this capability of the learnt models is evident from the results in Table 7 in Appendix E. BiG-MRTA requires a significantly larger computing time for larger-sized problems (see Fig. 5 in Appendix E.1). Comparison with the BiG-MRTA solutions also indicates remaining scope for improvement for the learning methods in terms of distance travelled.

The comparison with Feas-RND shows that the learning method is markedly better than random feasible myopic decisions, thereby indicating that meaningful MRTA policies have been learnt here, as opposed to producing random feasible solutions by virtue of the masked policy network design. In the Feas-RND method, a robot takes a task-selection decision randomly from a feasible set of choices that abide by all the constraints of the problem, e.g., those related to inter-robot conflicts, task deadlines, and robot range. As shown in Table 7, CAM achieves a task completion rate of more than 92% in all the scenarios, which amounts to a 5%-30% gain in task completion rate over that achieved by Feas-RND across these scenarios. The performance of CAM in terms of task completion rate is comparable to that of BiG-MRTA (which is slightly better), with the largest difference being 6.9% for the 500-tasks-50-robots case. Note that CAM continues to be better than AM in all these cases as well. Later, in Section 5.2, we also compare the performance of CAM, AM, and BiG-MRTA on a benchmark MRTA problem where the objective is just to maximize the task completion rate (hence the cost function is not affected by distance travelled), where CAM proves to be highly competitive.

Ablation study: We performed two ablation studies (Appendix E.2) on CAM to understand the importance of the novel encoder and of the decoder (adopted from the attention mechanism).
In the first ablation study, the CCN-based encoding is replaced with simple feedforward layers (as explained in Appendix E.2), with the decoder remaining the same. It should be noted that the node embedding length ($d_{embed}$) is the same in all cases. In the second ablation study, the MHA-based decoding is replaced with simple feedforward layers and a softmax layer (as explained in Appendix E.2), with the encoder remaining the same. As shown in Table 8, in both cases (i.e., with the encoder and decoder respectively ablated), we observe a significant decrease in performance across all scenarios, with the maximum dip in completion rate being 19.8% and 13.4% for the first and second study, respectively. Compiling the results of the first ablation study with the comparison of CAM against AM, we posit that the CCN-based encoding, which is able to aggregate local node neighborhoods while remaining agnostic to node ordering, clearly aids in providing better policies. Similarly, from the second ablation study, we can conclude that the MHA-based decoding, which computes the compatibility of the current state information with the node information, aids in learning better policies." }, { "heading": "5.2 COMPARATIVE ANALYSIS ON BENCHMARK MRTA PROBLEMS AND CVRP PROBLEMS", "text": "MRTA - Task Allocation Problem with Time and Capacity (TAPTC): The CAM architecture is implemented and tested on a well-known class of (NP-hard) MRTA problems known as the Task Allocation Problem with Time and Capacity constraints, or TAPTC, as described in Mitiche et al. (2019). In TAPTC, each task i has a time deadline ($t_i$) and a workload ($w_i$), and each robot j has a work capacity ($c_j$). The time to finish task i by robot j is defined as $w_i/c_j$. We compare the results of CAM on TAPTC with those of AM (Kool et al., 2019) and of three non-learning baseline methods, namely: 1) Iterated Local Search (ILS) (Vansteenwegen et al., 2009), which uses a meta-heuristic approach; 2) Enhanced Iterated Local Search (EILS) (Vansteenwegen et al., 2009), which has controlled runtime and perturbations (compared to ILS); and 3) Bi-Graph MRTA (BiG-MRTA) (Ghassemi & Chowdhury, 2021), which uses a bigraph construction and maximum weighted matching approach. Further details of the TAPTC benchmark, the baselines used here, and the changes made to CAM and AM for this case study are discussed in Appendix F. For the results shown here, the CAM model is implemented such that k = 9 nearest neighbors are considered when computing the embedding of each node of the task graph. The testing is performed for different scenarios characterized by the number of robots and the percentage of tasks having slack time deadlines. Mitiche et al. (2019) categorized the TAPTC problems into 2 groups based on the value of the high deadline for the tasks. Table 3 here presents the results for group 2, with the group 1 results given in Table 10 in Appendix F.4. In these tables, the scenario nomenclature is defined as follows: R75A5 denotes that R = 75% of the tasks have a normally-distributed time deadline and a team of A = 5 robots is used.
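As a small illustration of the TAPTC timing model just described (a hypothetical sketch; the function name and arguments are ours, not part of the benchmark code):

```python
def taptc_task_outcome(arrival_time: float, w_i: float, c_j: float, t_i: float):
    """Robot j (work capacity c_j) reaching task i (workload w_i) at arrival_time
    finishes it after w_i / c_j time units; per the TAPTC objective (Eq. 17 in
    Appendix F.1), the task counts as completed only if it meets the deadline t_i."""
    finish_time = arrival_time + w_i / c_j
    return finish_time, finish_time <= t_i

print(taptc_task_outcome(arrival_time=40.0, w_i=20.0, c_j=2.0, t_i=60.0))  # (50.0, True)
```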
Results on the impact of neighborhood size (k) on CAM performance are also discussed in Appendix F.4.

Table 3 shows that for all scenarios with A = {5, 7}, CAM performs better than both AM and the non-learning baselines in terms of task completion rate; and CAM achieves top performance for 50% of the scenarios with A = {2, 3}.

A comparison of the computation time to generate the entire MRTA solution, shown in Table 4, demonstrates the advantage of learning-based methods over non-learning baselines as the problem size increases w.r.t. the numbers of robots and tasks. Capacitated VRP: To demonstrate the versatility of the proposed CAM architecture, we train and test the CAM and AM architectures (for comparison) on a benchmark variation of the Vehicle Routing Problem (VRP), known as the Capacitated VRP (CVRP). The CVRP benchmark consists of N task locations, where a vehicle is required to visit the locations and deliver packages in a manner that minimizes a cost function. We also use the Lin-Kernighan heuristic (LKH3) solver (Helsgaun, 2017) and the Simulated Annealing (SA) implementation provided by Google Operations Research (OR) tools as well-regarded non-learning baselines for comparing the results obtained by CAM and AM. Further details regarding the CVRP problem, the baseline methods, and the settings changes for CAM and AM are presented in Appendix G.1. Table 5 summarizes the results of all four approaches (i.e., LKH3, Google OR, AM, and CAM) on unseen scenarios with varying task sizes (# of locations) ranging from 50 to 1,000, in terms of the average computing time and the average cost function (both averaged across scenarios of a given size). LKH3 is a well-known state-of-the-art method for solving CVRP problems and also has the best performance (considering only cost, not run time). As shown in Table 5, the performance of all methods except LKH3 is comparable for small-sized problems (50, 100, and 200 locations or tasks). As expected, the main advantage of the proposed CAM approach is apparent for problems with larger numbers of tasks (i.e., 500 and 1000 tasks), where the average cost function of the solutions obtained by CAM is significantly better (less than half of AM's and one-third of Google OR's). In these larger-sized scenarios, the computing-time performance of CAM is slightly better than that of AM, and together AM and CAM are two orders of magnitude faster than SA (Google OR) and LKH3 in generating the entire solution (the sequence of tasks to be undertaken).

Table 5: CVRP: comparison of average cost function (and average time taken to generate the entire solution); lower is better.

# OF TASKS   LKH3           GOOGLE OR     AM             CAM
50           10.53 (46s)    11.3 (2s)     12.3 (0.04s)   12.2 (0.04s)
100          15.58 (60s)    17.6 (5s)     17.4 (0.09s)   17.9 (0.09s)
200          17.69 (86s)    21.3 (20s)    21.6 (0.18s)   21.8 (0.17s)
500          24.87 (123s)   54.5 (20s)    34.0 (0.53s)   29.1 (0.53s)
1000         28.67 (189s)   81.8 (200s)   64.1 (1.51s)   41.6 (1.49s)

6 CONCLUSION

In this paper, we proposed a new GNN architecture, called CAM, for a multi-robot task allocation problem with a set of complexities, including tasks with time deadlines and robots with constrained range. This new architecture incorporates an encoder based on covariant node-based embedding and a decoder based on an attention mechanism. A simple RL algorithm has been implemented for learning the parameters of the encoder and decoder.
In addition, to compare the performance of the proposed CAM method, an attention-based mechanism approach (aka AM) has been extended to handle a multi-agent CO setting, along with a recent state-of-the-art method, BiG-MRTA, and a myopic baseline method, Feas-RND. To evaluate the performance of the proposed CAM architecture and the extended version of AM, both are trained with the same settings. All the methods were tested on 100 unseen case studies. Performance was analyzed in terms of the cost value and the completion rate. Our primary proposition, CAM, outperformed AM and Feas-RND on test scenarios by achieving better cost function values, and was also able to achieve a high task completion rate (> 92%) for even larger-sized problems without the need to retrain, which is comparable to the near-optimal (but 10-100 times more expensively computed) solutions by BiG-MRTA, thereby demonstrating the favorable scalability of CAM. The computational cost analysis showed that the proposed CAM model takes a few milliseconds to compute a decision, thereby providing a clear advantage over non-learning approaches to MRTA in the context of online (time-sensitive) planning. Moreover, the advantage of using local neighborhood information for node encoding can be seen in the scalability analysis on the MRTA and CVRP problems, where CAM demonstrates superior performance when applied to graphs with larger numbers of tasks/nodes. The ablation studies in Appendix E.2 showed the importance of the CCN-based encoding and the MHA-based decoding used in CAM. A comparison over a different suite of benchmark MRTA problems showed that CAM was competitive w.r.t. standard non-learning baselines including BiG-MRTA, in terms of task completion rate, while providing substantially faster solutions compared to the latter. Lastly, a comparative analysis of CAM models with varying neighborhood size (used while encoding) was performed (Appendix F.6) to study the impact of the neighborhood size k in the encoder. Based on Appendix F.6 (for MRTA-TAPTC), k constitutes a trade-off between accuracy and computation time.

Ethics statement: The work in this paper does not have any direct negative societal consequences.

Reproducibility statement: The CAM architecture, which includes the encoder, decoder, and context parts, can be coded in any programming language by following the equations in Section 3 and Figure 2. The dataset for training the MRTA multi-UAV flood response problem can be generated using the information in Appendix D.3. The code for the AM method can be obtained from https://github.com/wouterkool/attention-learn-to-route. The AM method can be modified for solving the MRTA multi-UAV flood response problem using the information in Appendix D.5. Both the CAM and AM methods can be trained using the settings in Table 6. The code for the BiG-MRTA method and Feas-RND can be obtained from https://github.com/adamslab-ub/BiG-MRTA. The training data for MRTA-TAPTC can be generated using the information in Appendix F.5. Both CAM and AM can be modified using the information in Appendix F.2. Details on running BiG-MRTA for MRTA-TAPTC can be obtained from Ghassemi et al. (2019), with the corresponding code at https://github.com/adamslab-ub/BiG-MRTA. The EILS method can be coded using the information in Mitiche et al. (2019). The test dataset can be obtained from https://tinyurl.com/taptc15in. The CAM architecture can be modified for CVRP using the information in Appendices G.1.1 and G.1.2.
The code for the AM method and the implementation of LKH3 can be obtained from https://github.com/wouterkool/attention-learn-to-route. The Google OR tools implementation can be done using the example at https://developers.google.com/optimization/routing/vrp, with the dataset generated using the information in Appendix G.4." }, { "heading": "A LEARNING OVER GRAPHS", "text": "Neural network based methods for learning CO can be broadly classified into: (i) Reinforcement Learning (RL) methods Kool et al. (2019); Barrett et al. (2019); Khalil et al. (2017); Strens & Windelinckx (2005); and (ii) supervised learning (often combined with RL) methods Kaempfer & Wolf (2018); Mittal et al. (2019); Li et al. (2018); Nowak et al. (2017). The supervised learning approaches typically address problem scenarios where samples are abundant (e.g., influence maximization in social networks Mittal et al. (2019)) or inexpensive to evaluate (e.g., TSP Kaempfer & Wolf (2018)), and are thus unlikely to be readily applicable to solving complex problems over real-world graphs. RL-based techniques to learn on graphs include attention models with REINFORCE Kool et al. (2019) and deep Q-learning Khalil et al. (2017); Barrett et al. (2019), among others, with some extending solutions to multi-agent settings Jiang et al. (2020). In this work, we are interested in the first class of methods (i.e., RL methods over graph space).

Dai et al. (2017) showed that a combination of graph embedding and RL methods can be used to approximate optimal solutions for combinatorial optimization problems, as long as the training and test samples are drawn from the same distribution. Mittal et al. (2019) presented a new framework to solve a combinatorial optimization problem, in which a Graph Convolutional Network (GCN) performs the graph embedding and Q-learning learns the policy. The results demonstrated that the proposed framework can learn to solve unseen test problems drawn from the same distribution as the training data set. More importantly, it has been shown that using a learned network policy instead of tree search (both methods using the same embedding GCN) yields a speedup of 5.5 for a problem size of 20,000 in influence maximization. Similarly, the effectiveness of learning a network policy using Q-learning to solve the Max-Cut problem (a combinatorial problem) has been demonstrated by Barrett et al. (2019).

Recently, there has been growing interest in using sequence-to-sequence models (e.g., pointer networks and attention mechanisms) to encode and learn combinatorial optimization problems in graph space Kaempfer & Wolf (2018); Kool et al. (2019). Kool et al. (2019) implemented a framework using an encoder/decoder based on the attention mechanism and the REINFORCE algorithm for solving a wide variety of combinatorial optimization problems as graphs, with the main contribution being the flexibility of the approach across multiple problems with the same hyperparameters. Wang & Gombolay (2020) showed how learning can lead to faster solutions than standard exact methods for multi-robot scheduling problems. However, the size of the problems studied in that work and other similar studies Paleja et al. (2020); Sykora et al. (2020) is limited to 5 robots and 100 tasks, and only temporal constraints were considered.
In this paper, we study larger problems (up to 1000 tasks/200 robots) and include complexities such as time deadlines for tasks, robot ferry-range constraints, capacity constraints, multiple routes, etc. Graph learning has been implemented for a multi-robot coverage problem in Tolstaya et al. (2020), which demonstrates good scalability. However, that work addresses a multi-robot exploration problem (and not MRTA), and it is not clear how the proposed method could be applied to an MRTA problem with complexities such as range constraints, capacity constraints, time-extended tasks, multiple routes, etc. Apart from CO problems, graph learning can also be used to perform path planning, as demonstrated in Li et al. (2020).

In this paper, a new neural architecture is proposed that combines the attention mechanism with an enhanced encoding network (embedding layers), where the latter is particularly designed to capture local structural features of the graph in an equivariant manner. The embedding layer is a variation of Covariant Compositional Networks (CCN), introduced by Hy et al. (2018). CCN was originally implemented for predicting molecular properties by learning local structural information of molecules. This node-based embedding has been chosen since it: i) operates on an undirected graph; ii) uses receptive fields and aggregation functions based on tensor product and contraction operations (which are equivariant), leading to a permutation- and rotation-invariant embedding; and iii) provides an extendable representation (an n-th order tensor representation can be useful for extending the work to multi-level networks, e.g., involving multiple node properties). We found an exact implementation of the CCN to be computationally burdensome for learning policies in large MRTA problems, and hence a variation of the CCN is proposed in this work. Attention mechanisms have been successfully implemented for problems with sequential processes, e.g., Natural Language Processing (NLP) Vaswani et al. (2017). Here we are interested in attention mechanisms since they involve simple matrix multiplications, which makes them not only computationally inexpensive (by utilizing modern GPUs) but also programmatically easy to implement. In this work, we implement an attention-based decoder for CAM as proposed in Kool et al. (2019); Vaswani et al. (2017)." }, { "heading": "B EMBEDDING LOCAL STRUCTURAL INFORMATION", "text": "The local structural or neighborhood information of a graph node/task refers to the association of the node with its neighboring nodes. This information includes how the properties of the neighboring nodes differ from those of the node under consideration. It is this information that is encoded in the node embedding, along with the node properties di. The encoder should be able to use this local structural information in order to scale to larger-sized problems, as pointed out by (Cappart et al., 2021). Here, we explain how local structural information is encoded with the help of two graphs: one small, and the other larger but having some nodes with similar neighborhoods, as shown in Figure 3. Node A in graph G1 and nodes B and C in graph G2 have almost identical neighborhoods. Here, the local neighborhood information encodings of nodes A, B, and C (which are $F^{Nb}_A$, $F^{Nb}_B$, and $F^{Nb}_C$, respectively) will be almost the same, irrespective of the size of the graph.
Therefore, by Eq. 6, $(F^{Nbd}_A - F^d_A) \approx (F^{Nbd}_B - F^d_B) \approx (F^{Nbd}_C - F^d_C)$, which implies $F^{Nb}_A \approx F^{Nb}_B \approx F^{Nb}_C$. Even though the actual locations of nodes A, B, and C are different, their associations with their neighbors are almost the same, and this is precisely what Eq. 6 captures." }, { "heading": "C LEARNING FRAMEWORK", "text": "Both the CCN-inspired encoder and the attention-based decoder consist of learnable weight matrices, as explained in Sections 3.1 and 3.2. In order to learn these weight matrices, both supervised and unsupervised learning methods could be used. However, supervised learning methods are not tractable here because of the computational complexity of the exact I(N)LP solution process that would be required to generate labels: the complexity of the ILP formulation of the MRTA problem scales with $O(n^3 m^2 h^2)$, where n, m, and h represent the number of tasks, the number of robots, and the maximum number of tours per robot, respectively. Therefore, we use a reinforcement learning algorithm to conduct the learning. Learning Method: In this work we implement a simple policy gradient method (REINFORCE) as the learning algorithm, with a greedy rollout baseline, which also enables us to compare the effectiveness of our method with that of (Kool et al., 2019). For each epoch, two sets of data are used: a training set and a validation set. The training set is used to train the training model (θCAM), while the validation set is used to update the baseline model (θBLCAM). The sizes of the training and validation data used in this paper are given in Section D.3. Each sample in the training and validation data sets consists of a graph as defined in Section 2.1. The pseudo-code of the training algorithm for our architecture is shown in Alg. 1 below. It should be noted that the policy gradient method requires the evaluation of a cost function, which is defined to be the same as Eq. 1. Policy: We define the policy such that, if robot r does not satisfy the constraints in Eq. 3, it returns to the depot (i.e., a = 0); otherwise, robot r runs the learnt CAM network and chooses the output (task) based on a greedy approach (selecting the task with the highest probability value), as shown in Fig. 1.

Algorithm 1: Training Algorithm
Input: NE: number of epochs, B: batch size, Ntr: training data size, Nvl: validation data size.
1:  θCAM-RL: CAM-RL model
2:  θBLCAM-RL: baseline CAM-RL model
3:  for epoch = 1..NE do
4:    Dtr, Dvl ← GenerateScenarios(Ntr, Nvl)
5:    Nb ← ⌊Ntr/B⌋
6:    for step = 1..Nb do
7:      Dtr,b ← SampleRandom(Dtr, B)   {Dtr,b: batch training dataset}
8:      aBL, fBL_cost ← CalculateCost(θBLCAM-RL, Dtr,b)
9:      a, f_cost ← CalculateCost(θCAM-RL, Dtr,b)
10:     ∇L ← (1/B) Σ_{i=1..B} (f_cost,i − fBL_cost,i) log softmax(a_i)
11:     θCAM-RL ← ADAM(∇L, θCAM-RL)
12:   end for
13:   aBL_vl, fBL_cost,vl ← CalculateCost(θBLCAM-RL, Dvl)
14:   a_vl, f_cost,vl ← CalculateCost(θCAM-RL, Dvl)
15:   if (Σ_i fBL_cost,i > Σ_i f_cost,i) ∧ (T-Test(a_vl, aBL_vl) > ε) then
16:     θBLCAM-RL ← θCAM-RL
17:   end if
18: end for

CalculateCost(θ, D) procedure:
19: for i = 1..|D| do
20:   a_i, f_cost,i ← Simulation(θ, D_i)
21:   a ← a ∪ a_i
22:   f_cost ← f_cost ∪ f_cost,i
23: end for
24: return a, f_cost" }, { "heading": "C.1 SIMULATION AND FRAMEWORK SETTINGS", "text": "Python 3.7 and the 64-bit distribution of Anaconda 2020.02 are used to implement the MRTA approaches. The environment, the training algorithm, and the evaluation of the trained models are all implemented in PyTorch 1.5 for CAM and AM.
The training, based on PyTorch, is deployed on two GPUs (NVIDIA Tesla V100) with 16GB RAM." }, { "heading": "D MORE DETAILS ON MRTA", "text": "" }, { "heading": "D.1 CONVENTIONAL METHODS FOR MRTA", "text": "The MRTA problem can be formulated as an Integer Linear Programming (ILP), mixed ILP, or Integer Non-Linear Programming (INLP) problem, depending on the application. When tasks are defined in terms of location, the MRTA problem becomes analogous to the Multi-Traveling Salesmen Problem (mTSP) (Khamis et al., 2015) and its generalized version, the Vehicle Route Planning (VRP) problem (Dantzig & Ramser, 1959). Existing solutions to mTSP and VRP problems in the literature (Bektas, 2006; Braekers et al., 2016) have addressed problem characteristics analogous to those of interest in MRTA, albeit in a disparate manner; these characteristics include tasks with time deadlines and multiple tours per vehicle, with applications in the operations research and logistics communities (Azi et al., 2010; Wang et al., 2018). ILP-based mTSP-type formulations and solution methods have also been extended to task allocation problems in the multi-robot domain (Jose & Pratihar, 2016). Although ILP-based approaches can in theory provide optimal solutions, due to the NP-hard time complexity of SR-ST problems (Mazyavkina et al., 2021; Archetti et al., 2011), they are characterized by exploding computational effort as the number of robots and tasks increases (Toth & Vigo, 2014; Cattaruzza et al., 2016). For example, for the studied SR-ST problem, the cost of solving the exact integer programming formulation of the problem scales with $O(n^3 m^2 h^2)$, where n, m, and h represent the number of tasks, the number of robots, and the maximum number of tours per robot, respectively (Ghassemi et al., 2019). As a result, most practical online MRTA methods, e.g., auction-based methods (Dias et al., 2006) and bi-graph matching methods (Ghassemi & Chowdhury, 2018; Ismail & Sun, 2017), use some sort of heuristics, and often report the optimality gap at least for smaller test cases compared to the exact ILP or INLP solutions (where tractable)." }, { "heading": "D.2 MORE DETAILS ON ENCODING THE CONTEXT", "text": "As discussed in Section 3.2, the context used while a robot r makes a decision consists of: 1) the current time t; 2) the available range ρr of the robot taking the decision; 3) the current location (xr, yr) of the robot taking the decision; 4) the current destinations of the other robots (xp, yp, ∀ p ∈ Pr); and 5) the available ranges of the other robots (ρp, ∀ p ∈ Pr), where Pr represents the peers of robot r. Here, features 2 and 3 represent the current state of the robot taking the decision, while features 4 and 5 represent the states of the peer robots. The context feature vector can be computed as shown in Eq. 10 (a code sketch is given below, following the case-study setup paragraph):

$$Q = \text{Linear}(\text{Concat}(t, Q_r, Q_{P_r})) \qquad (10)$$

where

$$Q_r = \text{Linear}([x_r, y_r, \rho_r]) \qquad (11)$$

and

$$Q_{P_r} = \sum_{p \in P_r} \text{Linear}([x_p, y_p, \rho_p]) \qquad (12)$$

The dimensions of $Q_r$ and $Q_{P_r}$ are $d_{embed}$, and the length of the final feature vector Q is also $d_{embed}$. The summation aggregation operation in Eq. 12 makes the context vector agnostic to the number of robots." }, { "heading": "D.3 DESIGN OF EXPERIMENTS & LEARNING PROCEDURES", "text": "To evaluate the proposed CAM method, we define an MRTA case study with a varying number of UAVs and 200 task (flood victim) locations. A 2D environment with a 1 sq. km area is used for this purpose, with the time deadlines of tasks varying from 0.1 to 1 hour. The UAVs are assumed to have a range of 4 km and a nominal speed of 10 km/h.
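Referring back to Eqs. 10–12 of Appendix D.2 above, the following is a minimal PyTorch sketch of the context encoding; it is an illustrative reconstruction of the equations, with class and variable names being our own assumptions.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Context encoding of Eqs. 10-12: fuses the mission time, the deciding robot's
    state, and a robot-count-agnostic summary of the peers' states into the query Q."""
    def __init__(self, d_embed: int = 128):
        super().__init__()
        self.lin_r = nn.Linear(3, d_embed)                 # Eq. 11: [x_r, y_r, rho_r]
        self.lin_p = nn.Linear(3, d_embed)                 # Eq. 12: each peer [x_p, y_p, rho_p]
        self.lin_q = nn.Linear(1 + 2 * d_embed, d_embed)   # Eq. 10: Concat(t, Q_r, Q_Pr)

    def forward(self, t, robot, peers):
        # t: scalar tensor; robot: (3,); peers: (num_peers, 3)
        Q_r = self.lin_r(robot)
        Q_Pr = self.lin_p(peers).sum(dim=0)   # summation -> agnostic to the number of peers
        return self.lin_q(torch.cat([t.view(1), Q_r, Q_Pr]))

enc = ContextEncoder()
Q = enc(torch.tensor(0.25), torch.rand(3), torch.rand(9, 3))   # query Q of length d_embed
```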
We assume that instantaneous battery swaps are provided at the depot, used when a UAV returns to the depot because it is running low on battery. It is important to note that the flood victim application is used here merely for motivation; the CAM architecture is in no way restricted to this application, but can rather solve problems in the broad (and important) class of capacity/range-constrained and task-deadline-constrained SR-ST problems. Moreover, even the policies learnt here for the CAM demonstration on the described case settings can generalize to related SR-ST problems with up to 1,000 tasks, which represents a fairly large MRTA problem relative to the existing literature in the multi-robot domain.

To perform learning and testing of the learned model, we proceed as follows. Learning Phase: We use a policy gradient reinforcement learning algorithm (REINFORCE with rollout baseline in this case) to learn the optimal policy. The learnable parameters in this architecture include all the weights in the encoder and the decoder. The training is carried out for a total of 100 epochs. Each epoch consists of 10,000 random training samples, which are evaluated and trained in batches of 100 samples. Testing Phase: In order to provide a statistically insightful evaluation and comparison, the models are tested on cases with varying numbers of tasks and robots, with each case having 100 random test scenarios drawn from the training data distribution. Here, for each sample scenario, the task locations, the time deadlines, and the depot location are all generated from a uniform random distribution. More details on the learning framework are given in Appendix C. The training and testing settings and the modifications made to AM for MRTA are given in Appendix D." }, { "heading": "D.4 BASELINES", "text": "BiG-MRTA: The BiG-MRTA algorithm Ghassemi & Chowdhury (2021); Ghassemi et al. (2019) is an online method based on the construction and maximum weighted matching of a bipartite graph. BiG-MRTA (Ghassemi & Chowdhury, 2021) uses a novel combination of a bipartite graph construction, an incentive model to assign edge weights in the bigraph, and maximum weighted matching (based on the Karp algorithm (Karp, 1980)) over the bigraph to allocate tasks to robots. This method has been developed as an online solver for SR-ST type MRTA problems, where tasks have deadlines, new tasks could appear during the mission, and robots are subject to range and payload constraints.

AM: The number of attention heads for the encoder is 8, with 3 layers of encoding. The node embedding length is 128.

Feas-RND: In the Feas-RND approach, each robot randomly chooses among the available tasks that are feasible to be undertaken by the UAV, satisfying all the constraints in Section 2.2. The algorithm used for Feas-RND can be found in (Ghassemi & Chowdhury, 2021)." }, { "heading": "D.5 MORE DETAILS ON TRAINING FOR MRTA", "text": "Learning Curve: In order to compare the convergence of the proposed CAM method with that of the AM approach, we run both methods with similar settings and plot their learning curves (convergence histories), as shown in Fig. 4. As seen from this figure, the AM method took 3 epochs to reach its best cost value and then stagnated. In contrast, the CAM method took 20 epochs to reach the best cost value of AM and continued to improve up to ∼24 epochs, leading to a significantly better cost function value ($f^*_{cost,CAM} = -0.266$) compared to AM ($f^*_{cost,AM} = -0.009$).
The stagnation of AM could be attributed to its direct implementation of the transformer network (Vaswani et al., 2017), which was designed for machine translation and thus consists of multiple layers of multi-head attention. In contrast, our CAM model uses simple linear transformations of the node properties and their relative differences in local neighborhoods to capture structural information.

Figure 4: MRTA: learning curves of CAM and AM for 200 tasks.

Modifications to AM: The attention-based mechanism (AM) reported by (Kool et al., 2019) has been shown to solve a few different classes of single-agent/robot combinatorial optimization problems. To be able to implement the AM method for our problem (for comparison with our CAM method), the AM method is adapted to a multi-robot setting. For this purpose, we make the following three changes to the AM method: (i) the node properties defined in Section 2.1 are used in AM; (ii) the context for the attention mechanism is modified to be the same as that used for CAM; and (iii) the cost function used for training is changed to that in Eq. 1.

We want to compare the structural representational quality of CAM and AM for generalizability and scalability. Hence, for a fair comparison, both the CAM and AM models were trained with the same settings (Tables 6 and 14)." }, { "heading": "D.5.1 COMPUTING TIME (TRAINING AND EXECUTION)", "text": "Based on the epoch information in Section D.3, the average time to complete a training epoch was found to be 19.50 minutes (i.e., ∼11.7 seconds per batch of 100 samples) for CAM and AM. The average computing time (from Tables 1 and 2) taken by the learnt policies to produce the entire MRTA solution (the sequence of tasks assigned to each robot) was found to grow from 0.14 s to 20 s as the number of tasks grew from 50 to 1000, thus remaining unprecedentedly attractive for real-time decision-making even for large problems." }, { "heading": "E MRTA – MULTI-UAV FLOOD RESPONSE: FURTHER RESULTS", "text": "" }, { "heading": "E.1 TASK COMPLETION RATE AND COMPUTATION TIME", "text": "Table 7 shows the completion rates corresponding to the generalizability test cases (Table 1) and the scalability test cases (Table 2), respectively.

Figure 5 shows the comparison of the average computation time between CAM and BiG-MRTA. It can be seen that the computation time of BiG-MRTA increases exponentially compared to that of CAM. Even though the completion rate of BiG-MRTA is slightly greater than that of CAM, as can be seen from Table 7, this advantage comes at a very high computational cost. This shows the scalability of CAM to larger-sized problems. Comparing the cost functions in Tables 1 and 2, it can be seen that, relative to BiG-MRTA, the performance of CAM drops for scenarios with higher task-to-robot ratios (such as 200-tasks-20-robots and 500-tasks-50-robots). However, this dip in performance is marginal when comparing task completion rates (Table 7), with the maximum dip for CAM being 7.3% (between 100-tasks-10-robots and 100-tasks-20-robots). One of the reasons for this behavior is the cost function (Eq. 1), where distance minimization is only taken into account when the task completion rate is 100%.
Hence, we also compared the performance of CAM, AM, and BiG-MRTA on a benchmark MRTA problem in Section 5.2, where the objective is just to maximize the task completion rate; there, CAM demonstrates superior performance compared to the other methods." }, { "heading": "E.2 ABLATION STUDIES", "text": "For the encoder: In order to study the true impact of the graph-node encoding, we compared CAM against a variant with its covariant-based encoding removed, denoted $CAM_{EFF}$. In this case, the node encoding is performed using a simple feedforward network, following Eqs. 13–14; the decoder of $CAM_{EFF}$ is the same as that of CAM.

$$F^d_i = W^d d_i^T + b^d \qquad (13)$$

$$F_i = W^F F^d_i + b^F \qquad (14)$$

where $d_i = [x_i, y_i, \tau_i], \forall i \in V$. $W^d$, $W^F$, $b^d$, and $b^F$ are learnable weights and biases, with $W^d \in \mathbb{R}^{d_{embed} \times 3}$, $b^d \in \mathbb{R}^{d_{embed} \times 1}$, $W^F \in \mathbb{R}^{d_{embed} \times d_{embed}}$, and $b^F \in \mathbb{R}^{d_{embed} \times 1}$.

For the decoder: In order to study the impact of the MHA-based decoder, we compared CAM against a variant whose decoder is replaced by a simple feedforward network, denoted $CAM_{DFF}$, which takes in the node embeddings and the context information and computes the output probabilities using the following equations; the encoder of $CAM_{DFF}$ is the same as that of CAM.

$$P^{Act} = \text{softmax}([P^{Act}_1, \ldots, P^{Act}_N]) \qquad (15)$$

$$P^{Act}_i = \text{LeakyReLU}(W^{dec}\,\text{Concat}(F_i, Q)^T + b^{dec}), \quad i \in V \qquad (16)$$

Here $W^{dec}$ and $b^{dec}$ are learnable weights and biases, with $W^{dec} \in \mathbb{R}^{N \times 2d_{embed}}$ and $b^{dec} \in \mathbb{R}^{N \times 1}$. $F_i, \forall i \in V$, are the node embeddings from the encoder, and $Q \in \mathbb{R}^{d_{embed}}$ is the context vector. The value of $d_{embed}$ for both models is the same as that of CAM in Section 5. Both models were trained on the MRTA multi-UAV flood response problem of Section 4, using the parameters in Table 6, and tested on the same test scenarios as CAM, AM, and BiG-MRTA, for varying numbers of locations and robots (shown in Table 8). The third and fourth columns of Table 8 show the average cost function over 100 test scenarios and the corresponding % completion rates for the $CAM_{EFF}$ and $CAM_{DFF}$ ablation studies, respectively.

As can be seen from Table 8, the performance of both $CAM_{EFF}$ and $CAM_{DFF}$ is significantly poorer than that of CAM, in terms of both cost function and task completion rate. The performance drop is larger for scenarios with fewer robots, with the maximum drop in task completion rate being 19.8% for $CAM_{EFF}$ (50-tasks-5-robots) and 13.4% for $CAM_{DFF}$ (200-tasks-20-robots). It can also be observed that the performance drop is greater for $CAM_{EFF}$ than for $CAM_{DFF}$, indicating that the CCN-based encoder has a slightly greater influence on performance than the decoder." }, { "heading": "F MRTA - TASK ALLOCATION PROBLEM W/ TIME & CAPACITY (TAPTC)", "text": "" }, { "heading": "F.1 PROBLEM DESCRIPTION AND FORMULATION", "text": "In order to assess the effectiveness of the CAM architecture for MRTA problems with time-extended assignments, we test and validate CAM on the Task Allocation Problem with Time and Capacity (TAPTC) benchmark Mitiche et al. (2019), which falls into the ST-SR-TA-TW category (Single-Task robot, Single-Robot task, Time-extended Assignment, Time Windows). TAPTC involves task locations within a 100 × 100 grid map, with each task i having a time deadline $t_i$ and a workload $w_i$. Each robot j has a work capacity $c_j$. All robots move with a maximum speed of 1 unit per second. A task i is considered to be completed only if a robot j visits node i and spends a time of $w_i/c_j$.
Each scenario is generated based on a uniform distribution reported in Mitiche et al. (2019) and the training samples for our models are generated from the same distribution.\nFor evaluating the performance of the learned model and conducting a comparative analysis, we use the TAPTC dataset (accessible from http://tinyurl.com/taptc15in. This dataset consists of test cases with 100 tasks and varying number of robots (A = 2, 3, 5, 7 robots), and the speed of every robot is considered to be the same (1 unit per second). The test cases can be divided equally into two groups, based on tight deadlines (Group 1) and slack deadlines (Group 2). Each group can be further divided into 4 sub-categories based on the fraction of tasks (R = 25%, 50%, 75%, and 100%) that have normally distributed task deadlines. For example, 25% indicates that there are 25 tasks with deadlines normally distributed between the limits tlow and thigh, while the remaining 75 tasks have a deadline of thigh. For group 1 (tight deadline), the value of thigh is considered as half as that for group 2 (tight deadline). In both the groups, for all values of A and R, there will be 3 samples. Therefore the total number of test cases is given by number of groups × |A| × |R| × 3) = 96, where each group has 48 test cases. Further description of the test cases can be found in Mitiche et al. (2019).\nThe exact solution of the MRTA-TAPTC problem can be obtained by formulating it as the following integer non-linear programming (INLP) problem:\nmin fcost = N∑ i=1 ri,\n{ ri = τfi τi , if τfi > τi\n0, otherwise (17)\nsubject to\nsi ∈ S ∀ i ∈ [1..N ] (18) si 6= sj ∀ i 6= j (19)\nHere, N is the number of tasks/nodes, τfi is the time at which task i was completed, τi is the time deadline of task i, and S = [s1, s2, ...sN ] is the sequence of all the nodes that were visited. The minimum cost function that can be achieved using Eq. 1 is 0, which corresponds to the case where all the tasks are successfully completed. A detailed formulation of the exact ILP constraints that describe the MRTA-TAPTC problem can be found in Mitiche et al. (2019). Note that, in our paper we use a slightly different reward function as compared to the objective function in Mitiche et al. (2019), but the intention of both the functions are essentially the same, which is to maximize the number of successfully completed tasks.\nHere, we craft the objective function (Eq. equation 17) such that only missed tasks (τfi > τi) contribute to the cost function. It is important to note that the objective function can be tailored according to the priority of the problem. Since the main priority in Mitiche et al. (2019) is to maximize the number of successfully completed tasks, the objective function (Eq. 17) also prioritizes task completion for a fair comparison with the baseline methods. The constraints in Eq. 18 and 19 are such that each tasks must be visited exactly once by any robot." }, { "heading": "F.2 MODIFICATIONS TO CAM AND AM", "text": "1. Change in encoder: The encoder for MRTA-TAPTC problems consider additional node properties, namely the location of the tasks (xi, yi), the time deadline (τi), and the workload wi, i.e., di = [xi, yi, τi, wi].\n2. Change in context: The context for the MHA layer in the decoder consists of the following five features: 1) elapsed mission time; 2) Work capacity of the robot taking decision; 3) Current location of robot taking decision; 4) Current destination of peers; and 5) Work capacity of peer.\n3. Cost function: We use Eq. 
17 as the cost function.\nThe decoder needs no change in this case." }, { "heading": "F.3 BASELINE METHODS", "text": "We consider three non-learning methods, namely i) Iterated Local Search (ILS), ii) Enhanced Iterated Local Search (EILS), and iii) Bi-Graph MRTA (BiG-MRTA), which are briefly described below. The learning-based baseline method we implemented here is AM, with minor modifications, as also discussed below.\ni) Iterated Local Search (ILS): This is an online meta-heuristic iterated search algorithm Vansteenwegen et al. (2009), where the output of one iteration is partially used as the input to the next iteration. During each iteration, the best solution is improved by a perturbation step, followed by a local search.\nii) Enhanced Iterated Local Search (EILS): EILS is also an online meta-heuristic iterated search method Mitiche et al. (2019), with an improved perturbation step compared to Vansteenwegen et al. (2009).\niii) Bi-Graph MRTA (BiG-MRTA): The BiG-MRTA algorithm Ghassemi & Chowdhury (2021); Ghassemi et al. (2019) is an online method based on the construction and maximum weighted matching of a bipartite graph. BiG-MRTA (Ghassemi & Chowdhury, 2021) uses a novel combination of a bipartite graph construction, an incentive model to assign edge weights in the bigraph, and maximum weighted matching (based on the Karp algorithm (Karp, 1980)) over the bigraph to allocate tasks to robots. This method has been developed as an online solver for SR-ST type MRTA problems, where tasks have deadlines, new tasks could appear during the mission, and robots are subject to range and payload constraints.\niv) AM: The AM implementation for the MRTA-TAPTC problem is almost the same as that implemented for the multi-UAV flood response problem in section D.3. In addition, the changes implemented in CAM, as explained in Section F.2, are also applied here. The number of attention heads for the encoder is 8, with 3 layers of encoding. The node embedding length is 128.\nBoth CAM and AM are trained using REINFORCE as described in algorithm 1 using the settings given in Table 9." }, { "heading": "F.4 RESULTS AND DISCUSSION", "text": "Tables 3 and 10 summarize the performance of CAM alongside the baseline methods, in terms of the average completion rate, i.e., the ratio of the number of successfully completed tasks to the total number of tasks, averaged over 3 samples for all the scenarios (denoted by A and R). The CAM model here uses k = 9, where k represents the number of nearest neighbors considered for a node when computing its node embedding. For group 1, the CAM model was able to generate the best results for 8 out of the 16 different scenarios (Table 10). From Table 10, it can be inferred that CAM has a superior performance compared to the baselines for cases with a larger number of robots (best performance for all cases with A = 7, and for 3 out of 4 scenarios with A = 5), including a maximum margin of 19% for R75A7 compared to the next best solution (EILS). 
For cases with a smaller number of robots (A = {2, 3}), CAM achieved the best performance for one scenario (R25A3), with its worst performance trailing the best performer (EILS) by a margin of only 7% (scenario R75A3).\nFor group 2, CAM achieves top performance for all scenarios with A = {5, 7}, while achieving top performance for 50% of the scenarios with A = 2, 3, as discussed in Section 5.2 of the main text.\nTables 11 and 4 give the average computation time (in milliseconds) to generate the entire solution for all the methods. As can be seen from these tables, for the non-learning methods EILS and BiG-MRTA, the general trend is an increase in the computation time with an increasing number of robots, while for both the learning-based methods (CAM and AM) the computation time increases only marginally with an increasing number of robots." }, { "heading": "F.5 MORE DETAILS ON TRAINING DATASET FOR MRTA-TAPTC", "text": "Each training sample has 100 tasks, which are located randomly within a 100 × 100 grid map. Each task i has a time deadline 50 ≤ τ_i ≤ 600 and a workload 10 ≤ w_i ≤ 30. Each sample has n_r robots, where 2 ≤ n_r ≤ 7. The initial positions of the robots in a sample are also chosen randomly within the grid. Each robot j has a work capacity of c_j, where 1 ≤ c_j ≤ 3. All the robots move at a speed of 1 unit per second. A task i is considered to be completed only if a robot j visits node i and spends a time of w_i/c_j there. All the training samples are generated such that all the associated variables (mentioned above) follow a uniform distribution within their respective bounds.\nF.6 IMPACT OF NEIGHBORHOOD SIZE FOR CAM ENCODER\nTables 12 and 13 compare the performance of CAM models with varying neighborhood size (k) in the encoder, based on the average completion rate (average of the 3 samples) for all the scenarios (all values of A and R for the two groups). The impact of the neighborhood size k (during encoding) is more evident on the performance of the Group 1 test cases, which have more tasks with tight deadlines. As shown in Table 12, the completion rates of CAM with k = {6, 9, 12} are almost comparable, while the performance of CAM with k = 3 is significantly lower than that of the other models. The smaller the neighborhood size, the less local structural information is learned, which could result in a performance loss. However, increasing the neighborhood size beyond a point may not necessarily improve performance. The average epoch times for training the models with k = {3, 6, 9, 12} are 11, 13, 14.3, and 15 minutes, respectively. Figure 6 shows the learning curve for training all the CAM models and the AM model." }, { "heading": "G FURTHER DETAILS ON CAPACITATED VEHICLE ROUTING PROBLEM", "text": "" }, { "heading": "G.1 FORMULATION OF CAPACITATED VEHICLE ROUTING PROBLEM (CVRP)", "text": "The vehicle routing problem we consider here is the capacitated vehicle routing problem (CVRP), in which a vehicle is required to deliver packages to a number of locations N. Each task is designated an index from 1 to N. We also consider a depot with id 0. Each location has a demand c_i on the number of packages, where i ∈ [1, N], and the vehicle has a constraint on the maximum number of packages C it can carry, such that c_i < C. We assume that each package is of the same size. The vehicle is required to create multiple routes visiting different locations to deliver the packages. 
The vehicle starts from the depot, has a maximum capacity on the number of packages, and can have multiple routes to deliver all the packages satisfying the demands in every location. Here we assume that the vehicle can return to the depot for refilling to maximum capacity before starting a new route. In this experiment we do not consider split delivery, where the demand of a location is fulfilled partially during one route and then completed in another route. The ILP formulation for CVRP can be represented as:\nmin f_cost = Σ_{j=1}^{R} δ_j (20)\nsubject to\nC_{t+1} = max(0, C_t − c_i) if i ∉ V, and C_{t+1} = C if i = 0 (21)\nwhere R is the number of routes, δ_j is the total distance travelled in route j, C_t is the available capacity at time t, and V is the set of locations visited (a minimal code sketch of this bookkeeping is given after Appendix G.4). The node encoding and the context encoding are modified for CVRP (as explained in appendix G.1.1 and G.1.2) for both CAM and AM. Both CAM and AM are trained on the scenarios with 100 locations and tested on unseen scenarios with a varying number of locations ranging from 50 to 1,000. The experimental details of this comparative study are given in Appendix G.2." }, { "heading": "G.1.1 ENCODING FOR CVRP", "text": "Except for the node properties, all the other steps for computing the node embeddings are the same for CVRP as for MRTA, for both CAM and AM. The node properties associated with CVRP include the x coordinate, the y coordinate, and the demand c_i for each location. Therefore each node can be represented as d_i = [x_i, y_i, c_i], where x_i, y_i, and c_i are the x coordinate, y coordinate, and demand, respectively, for node i. For the encoding, d_i for CVRP is used in Eq. 22 for CAM.\nF^d_i = W^d d^T_i + b^d (22)" }, { "heading": "G.1.2 CONTEXT ENCODING", "text": "The context serves the same purpose as before, which is to represent the current state. For CVRP, the current location and the remaining capacity form the context, for both CAM and AM." }, { "heading": "G.2 TRAINING DETAILS FOR CVRP", "text": "The training procedure for CVRP follows algorithm 1 in appendix D, with the only change being in the calculation of the cost: Eq. 20 is used in algorithm 1 to compute the cost. Table 14 shows the different parametric settings for training CAM and AM for CVRP. The CAM and AM models were trained on 100 tasks and tested on CVRP with 5 different task sizes (50, 100, 200, 500, 1000). Figure 7 shows the learning curve for training CAM and AM." }, { "heading": "G.3 BASELINE DETAILS FOR CVRP", "text": "Lin-Kernighan heuristics (LKH3): We performed a single run with a maximum number of trials of 10,000.\nAM: Same as that of MRTA-Multi-UAV flood response and MRTA-TAPTC.\nGoogle OR tools: The first solution strategy used was PATH_CHEAPEST_ARC and the local search algorithm was Simulated Annealing." }, { "heading": "G.4 MORE DETAILS ON CVRP DATASET", "text": "The dataset used for training CVRP consists of scenarios with 100 locations and one depot. The x and y coordinates of the locations (including the depot) are randomly generated from a uniform distribution within the limits [0, 1]. The demand for each task location is a random integer drawn uniformly from [1, 9], with the depot assigned a demand of 0. The vehicle capacity (C) for a scenario with 100 locations is considered to be 50. The dataset used for testing (to analyze both generalizability and scalability) has the same limits as explained above. The assumed capacities of the vehicle for test scenarios with different numbers of locations are shown in Table 15."
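As a minimal illustration of the cost and capacity bookkeeping in Eq. 20 and Eq. 21 above, the sketch below accumulates the travelled distance of a visit sequence and updates the remaining capacity, refilling at the depot. It is our own illustrative code under the stated assumptions (unit coordinates, no split delivery), not the CAM or AM implementation.

```python
import math

def cvrp_cost_and_capacity(visit_sequence, coords, demands, C):
    """Hypothetical helper mirroring Eq. 20-21: total route distance (the
    cost f_cost) and the running capacity C_t, which refills at the depot."""
    total_dist, capacity = 0.0, C
    prev = 0  # the vehicle starts at the depot (id 0)
    for loc in visit_sequence:
        total_dist += math.dist(coords[prev], coords[loc])
        capacity = C if loc == 0 else max(0, capacity - demands[loc])
        prev = loc
    return total_dist, capacity

coords = {0: (0.5, 0.5), 1: (0.1, 0.9), 2: (0.8, 0.2)}  # depot plus 2 locations
demands = {0: 0, 1: 4, 2: 7}
# Two routes: serve location 1, refill at the depot, then serve location 2.
print(cvrp_cost_and_capacity([1, 0, 2, 0], coords, demands, C=10))
```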
}, { "heading": "H LIMITATIONS OF CAM", "text": "This work is implemented for a fixed number of nodes but can be easily extended for cases where nodes are determined dynamically. The impact of the learning algorithm parameters such as the learning rate, training frequency or training batch size, etc. is not analyzed. Parametric analysis of the learning algorithm, as well as the implementation of more recent state-of-the-art RL algorithms (e.g., PPO), can be considered as other directions of future work with CAM. The current learning framework for CAM has been implemented with a greedy approach for decision-making. The performance can be improved by adopting an epsilon greedy approach. In real-world settings, it is possible that two robots want to make decisions at the same time, which might cause them to visit the same location. This is very rare and there are various mitigations to address this issue. For example, while robots are moving toward their selected task locations, they can check if their decision is conflicting with another robot based on recent information. If there is conflict, the robot with the worst time can cancel its current task and select a new task." }, { "heading": "I ABBREVIATIONS:", "text": "" } ]
2021
null
SP:45941e6abff2f79dd106783302095a6674da5f4a
[ "This paper proposed a proximity-aware graph neural network while maintaining the permutation equivariance property. The proposed model, dubbed as stochastic message passing (SMP), arguments the existing GNNs with stochastic node representations. The author proved the proposed method can model proximity-aware representations based on random projection theory. The experimental results show that the SMP can be used for multiple graphs and tasks. " ]
Graph neural networks (GNNs) are emerging machine learning models on graphs. One key property behind the expressiveness of existing GNNs is that the learned node representations are permutation-equivariant. Though being a desirable property for certain tasks, however, permutation-equivariance prevents GNNs from being proximity-aware, i.e., preserving the walk-based proximities between pairs of nodes, which is another critical property for graph analytical tasks. On the other hand, some variants of GNNs are proposed to preserve node proximities, but they fail to maintain permutation-equivariance. How to empower GNNs to be proximity-aware while maintaining permutation-equivariance remains an open problem. In this paper, we propose Stochastic Message Passing (SMP), a general and simple GNN to maintain both proximity-awareness and permutation-equivariance properties. Specifically, we augment the existing GNNs with stochastic node representations learned to preserve node proximities. Though seemingly simple, we prove that such a mechanism can enable GNNs to preserve node proximities in theory while maintaining permutation-equivariance with certain parametrization. Extensive experimental results demonstrate the effectiveness and efficiency of SMP for tasks including node classification and link prediction.
[]
[ { "authors": [ "Stephen P Borgatti" ], "title": "Centrality and network flow", "venue": "Social networks,", "year": 2005 }, { "authors": [ "Karsten M Borgwardt", "Hans-Peter Kriegel" ], "title": "Shortest-path kernels on graphs", "venue": null, "year": 2021 }, { "authors": [ "Ryoma Sato", "Makoto Yamada", "Hisashi Kashima" ], "title": "Under review as a conference paper at ICLR", "venue": null, "year": 2021 }, { "authors": [ "Kakade Sham", "Shakhnarovich Greg" ], "title": "Random projections. CMSC", "venue": "(Spring", "year": 2009 }, { "authors": [ "• Cora" ], "title": "CiteSeer, PubMed11 (Yang et al., 2016): Three citation graphs where nodes", "venue": null, "year": 2016 } ]
[ { "heading": null, "text": "Graph neural networks (GNNs) are emerging machine learning models on graphs.1 One key property behind the expressiveness of existing GNNs is that the learned2 node representations are permutation-equivariant. Though being a desirable prop-3 erty for certain tasks, however, permutation-equivariance prevents GNNs from4 being proximity-aware, i.e., preserving the walk-based proximities between pairs5 of nodes, which is another critical property for graph analytical tasks. On6 the other hand, some variants of GNNs are proposed to preserve node prox-7 imities, but they fail to maintain permutation-equivariance. How to empower8 GNNs to be proximity-aware while maintaining permutation-equivariance re-9 mains an open problem. In this paper, we propose Stochastic Message Passing10 (SMP), a general and simple GNN to maintain both proximity-awareness and11 permutation-equivariance properties. Specifically, we augment the existing GNNs12 with stochastic node representations learned to preserve node proximities. Though13 seemingly simple, we prove that such a mechanism can enable GNNs to preserve14 node proximities in theory while maintaining permutation-equivariance with cer-15 tain parametrization. Extensive experimental results demonstrate the effectiveness16 and efficiency of SMP for tasks including node classification and link prediction.17\n1 INTRODUCTION18\nGraph neural networks (GNNs), as generalizations of neural networks in analyzing graphs, have19 attracted considerable research attention. GNNs have been widely applied to various applications20 such as social recommendation (Ma et al., 2019), physical simulation (Kipf et al., 2018), and protein21 interaction prediction (Zitnik & Leskovec, 2017).22\nOne key property of most existing GNNs is permutation-equivariance, i.e., if we randomly permu-23 tate the IDs of nodes while maintaining the graph structure, the representations of nodes in GNNs24 are permutated accordingly. Mathematically, permutation-equivariance reflects one basic symmet-25 ric group of graph structures. Although it is a desirable property for tasks such as node or graph26 classification (Keriven & Peyré, 2019; Maron et al., 2019b), permutation-equivariance also prevents27 GNNs from being proximity-aware, i.e., permutation-equivariant GNNs cannot preserve walk-based28 proximities between nodes such as the shortest distance or high-order proximities (see Theorem 1).29\nPairwise proximities between nodes are crucial for graph analytical tasks such as link predic-30 tion (Hu et al., 2020; You et al., 2019). To enable a proximity-aware GNN, Position-aware GNN31 (P-GNN) (You et al., 2019)1 proposes a sophisticated GNN architecture and shows better perfor-32 mance for proximity-aware tasks. But P-GNN needs to explicitly calculate the shortest distance be-33 tween nodes and its computational complexity is unaffordable for large graphs. Moreover, P-GNN34 completely ignores the permutation-equivariance property. Therefore, it cannot produce satisfactory35 results when permutation-equivariance is helpful.36\nIn real-world scenarios, both proximity-awareness and permutation-equivariance are indispensable37 properties for GNNs. Firstly, different tasks may require different properties. For example, recom-38 mendation applications usually require the model to be proximity-aware (Konstas et al., 2009) while39 permutation-equivariance is a basic assumption in centrality measurements (Borgatti, 2005). 
Even for the same task, different datasets may have different requirements on these two properties. (Footnote 1: In (You et al., 2019), the authors consider the special case of shortest distance between nodes and name such property as “position-aware”. In this paper, we consider a more general case of any walk-based proximity.) Taking link prediction as an example, we observe that permutation-equivariant GNNs such as GCN (Kipf & Welling, 2017) or GAT (Velickovic et al., 2018) show better results than P-GNN in coauthor graphs, but the opposite in biological graphs (please see Section 5.2 for details). Unfortunately, in the current GNN frameworks, these two properties are contradicting, as we show in Theorem 1. Whether there exists a general GNN to be proximity-aware while maintaining permutation-equivariance remains an open problem.\n\nIn this paper, we propose Stochastic Message Passing (SMP), a general and simple GNN to preserve both proximity-awareness and permutation-equivariance properties. Specifically, we augment the existing GNNs with stochastic node representations learned to preserve proximities. Though seemingly simple, we prove that our proposed SMP can enable GNNs to preserve walk-based proximities in theory (see Theorem 2 and Theorem 3). Meanwhile, SMP is equivalent to a permutation-equivariant GNN with certain parametrization and thus is at least as powerful as those GNNs in permutation-equivariant tasks (see Remark 1). Therefore, SMP is general and flexible in handling both proximity-aware and permutation-equivariant tasks, which is also demonstrated by our extensive experimental results. Besides, owing to the simple structure, SMP is computationally efficient, with a running time roughly the same as those of the most simple GNNs such as SGC (Wu et al., 2019), and is at least an order of magnitude faster than P-GNN on large graphs. Ablation studies further show that a linear instantiation of SMP is expressive enough, as adding extra non-linearities does not lift the performance of SMP on the majority of datasets. Our contributions are as follows.\n\n• We propose SMP, a simple and general GNN to handle both proximity-aware and permutation-equivariant graph analytical tasks.\n• We prove that SMP has theoretical guarantees in preserving walk-based proximities and is at least as powerful as the existing GNNs in permutation-equivariant tasks.\n• Extensive experimental results demonstrate the effectiveness and efficiency of SMP. We show that a linear instantiation of SMP is expressive enough on the majority of datasets.\n\n2 RELATED WORK\nWe briefly review GNNs and their permutation-equivariance and proximity-awareness properties.\nThe earliest GNNs adopt a recursive definition of node states (Scarselli et al., 2008; Gori et al., 2005) or a contextual realization (Micheli, 2009). GGS-NNs (Li et al., 2016) replace the recursive definition with recurrent neural networks (RNNs). Spectral GCNs (Bruna et al., 2014) defined graph convolutions using graph signal processing (Shuman et al., 2013; Ortega et al., 2018), with ChebNet (Defferrard et al., 2016) and GCN (Kipf & Welling, 2017) approximating the spectral filters using a K-order Chebyshev polynomial and the first-order polynomial, respectively. MPNNs (Gilmer et al., 2017), GraphSAGE (Hamilton et al., 2017), and MoNet (Monti et al., 2017) are proposed as general frameworks by characterizing GNNs with a message-passing function and an updating function. 
More advanced variants such as GAT (Velickovic et al., 2018), JK-Nets (Xu et al., 2018b), GIN (Xu et al., 2018a), and GraphNets (Battaglia et al., 2018) follow these frameworks.\n\nLi et al. (Li et al., 2018), Xu et al. (Xu et al., 2018a), Morris et al. (Morris et al., 2019), and Maron et al. (Maron et al., 2019a) show the connection between GNNs and the Weisfeiler-Lehman algorithm (Shervashidze et al., 2011) of graph isomorphism tests, in which permutation-equivariance is a key constraint. Maron et al. (Maron et al., 2019b) and Keriven et al. (Keriven & Peyré, 2019) analyze the permutation-equivariance property of GNNs more theoretically. To date, most of the existing GNNs are permutation-equivariant and thus are not proximity-aware. The only exception is P-GNN (You et al., 2019), which proposes to capture the positions of nodes using the relative distance between the target node and some randomly chosen anchor nodes. However, P-GNN cannot satisfy permutation-equivariance and is computationally expensive.\n\nVery recently, motivated by enhancing the expressive power of GNNs in graph isomorphism tests and the distributed computing literature (Angluin, 1980; Linial, 1992; Naor & Stockmeyer, 1995), some studies suggest assigning unique node identifiers for GNNs (Loukas, 2020), such as one-hot IDs (Murphy et al., 2019) or random numbers (Dasoulas et al., 2019; Sato et al., 2020; Corso et al., 2020). For example, Sato et al. (Sato et al., 2020) show that random numbers can enhance GNNs in tackling two important graph-based NP problems with a theoretical guarantee, namely the
The neighborhood111 of node vi is denoted as Ni and Ñi = Ni ∪ {vi}.112 The existing GNNs usually follow a message-passing framework (Gilmer et al., 2017), where the lth113 layer adopts a neighborhood aggregation function AGG(·) and an updating function UPDATE(·):114\nm (l) i = AGG({h (l) j ,∀j ∈ Ñi}),h (l+1) i = UPDATE([h (l) i ,m (l) i ]), (1)\nwhere h(l)i ∈ Rdl is the representation of node vi in the lth layer, dl is the dimensionality, and m (l) i115\nare the messages. We also denote H(l) = [h(l)1 , ...,h (l) N ] and [·, ·] is the concatenation operation. The116 node representations are initialized as node features, i.e., H(0) = F. We denote a GNN following117 Eq. (1) with L layers as a parameterized function as follows2:118\nH(L) = FGNN(A,F;W), (2) where H(L) are final node representations learned by the GNN and W denotes all the parameters.119\nOne key property of the existing GNNs is permutation-equivariance.120 Definition 1 (Permutation-equivariance). Consider a graph G = (V, E ,F) and any permutation121 P : V → V so that G′ = (V, E ′,F′) has an adjacency matrix A′ = PAPT and a feature matrix122 F′ = PF, where P ∈ {0, 1}N×N is the permutation matrix corresponding to P , i.e., Pi,j = 1 iff123 P(vi) = vj . A GNN satisfies permutation-equivariance if the node representations are equivariant124 with respect to P , i.e.,125\nPFGNN(A,F;W) = FGNN(PAPT ,PF;W). (3)\nIt is known that GNNs following Eq. (1) are permutation-equivariant (Maron et al., 2019b).126 Definition 2 (Automorphism). A graph G is said to have (non-trivial) automorphism if there exists127 a non-identity permutation matrix P 6= IN so that A = PAPT and F = PF. We denote the128 corresponding automorphic node pairs as CG = ⋃ P6=IN {(i, j)|Pi,j 6= 0, i 6= j}129 Corollary 1. Using Definition 1 and 2, if a graph has automorphism, a permutation-equivariant130 GNN will produce identical node representations for automorphic node pairs:131\nh (L) i = h (L) j ,∀(i, j) ∈ CG. (4)\nSince the node representations are used for downstream tasks, the corollary shows that permutation-132 equivariant GNNs cannot differentiate automorphic node pairs. A direct consequence of Corol-133 lary 1 is that permutation-equivariant GNNs cannot preserve walk-based proximities between pairs134 of nodes. The formal definitions are as follows.135\n2Since the final layer of GNNs is task-specific, e.g., a softmax layer for node classification or a readout layer for graph classification, we only consider the GNN architecture to its last hidden layer.\nDefinition 3 (Walk-based Proximities). For a given graph G = (V, E ,F), we use a matrix S ∈136 RN×N to denote walk-based proximities between pairs of nodes defined as:137\nSi,j = S ({vi vj}) , (5) where vi vj denotes walks from node vi to vj and S(·) is an arbitrary real-valued function. The138 length of a walk-based proximity is the maximum length of all the walks of the proximity.139\nTypical examples of walk-based proximities include the shortest distance (You et al., 2019), the high-140 order proximities (a sum of walks weighted by their lengths) (Zhang et al., 2018), and random walk141 probabilities (Klicpera et al., 2019). Next, we give a definition of preserving walk-based proximities.142 Definition 4. For a given walk-based proximity, a GNN is said to be able to preserve the proximity143 if there exists a decoder function Fde(·) satisfying that for any graph G = (V, E ,F), there exist144 parameters WG so that ∀ > 0:145 ∣∣∣Si,j −Fde (H(L)i,: ,H(L)j,: )∣∣∣ < , (6) where146\nH(L) = FGNN(A,F;WG). 
(7)\nNote that we do not constrain the GNN architecture as long as it follows Eq. (1), and the decoder147 function is also arbitrary (but notice that it cannot take the graph structure as inputs). In fact, both148 the GNN and the decoder function can be arbitrarily deep and with sufficient hidden units.149 Theorem 1. The existing permutation-equivariant GNNs cannot preserve any walk-based proximity150 except the trivial solution that all node pairs have the same proximity.3151\nThe formulation and proof of the theorem are given in Appendix A.1. Since walk-based proximities152 are rather general and widely adopted in graph analytical tasks such as link prediction, the theorem153 shows that the existing permutation-equivariant GNNs cannot handle these tasks well.154\n4 THE MODEL155\n4.1 A GNN FRAMEWORK USING STOCHASTIC MESSAGE PASSING156\nA major shortcoming of permutation-equivariant GNNs is that they cannot differentiate automorphic157 node pairs. To solve that problem, we need to introduce some mechanism as “symmetry breaking”,158 i.e., to enable GNNs to distinguish these nodes. To achieve this goal, we sample a stochastic matrix159 E ∈ RN×d where each element follows an i.i.d. normal distribution N (0, 1). The stochastic matrix160 can provide signals in distinguishing the nodes because they are randomly sampled without being161 affected by the graph automorphism. In fact, we can easily calculate that the Euclidean distance162 between two stochastic signals divided by a constant √ 2 follows a chi distribution χd:163\n1√ 2 |Ei,: −Ej,:| ∼ χd,∀i, j. (8)\nWhen d is reasonably large, e.g., d > 20, the probability of two signals being close is very low.164 Then, inspired by the message-passing framework, we apply a GNN on the stochastic matrix so that165 nodes can exchange information of the stochastic signals:166\nẼ = FGNN (A,E;W) . (9)\nWe call Ẽ the stochastic representation of nodes. Using the stochastic matrix and message-passing,167 Ẽ can be used to preserve node proximities (see Theorem 2 and Theorem 3). Then, to let our model168 still be able to utilize node features, we concatenate Ẽ with the node representations from another169 GNN with node features as inputs:170\nH = Foutput([Ẽ,H(L)]) Ẽ = FGNN (A,E;W) ,H(L) = FGNN′(A,F;W′),\n(10)\n3Proposition 1 in (You et al., 2019) can be regarded as a special case of Theorem 1 using the shortest distance proximity.\nwhere Foutput(·) is an aggregation function such as a linear function or simply the identity mapping.171 In a nutshell, our proposed method augments the existing GNNs with a stochastic representation172 learned by message-passings to differentiate different nodes and preserve node proximities.173\nThere is also a delicate choice worthy mentioning, i.e., whether the stochastic matrix E is fixed or174 resampled in each epoch. By fixing E, the model can learn to memorize the stochastic representation175 and distinguish different nodes, but with the cost of unable to handle nodes not seen during training.176 On the other hand, by resampling E in each epoch, the model can have a better generalization177 ability since the model cannot simply remember one specific stochastic matrix. However, the node178 representations are not fixed (but pairwise proximities are preserved; see Theorem 2). 
In these cases, Ẽ is more capable of handling pairwise tasks such as link prediction or pairwise node classification. In this paper, we use a fixed E for transductive datasets and resample E for inductive datasets.\n\nTime Complexity From Eq. (10), the time complexity of our framework mainly depends on the two GNNs in learning the stochastic and permutation-equivariant node representations. In this paper, we instantiate these two GNNs using simple message-passing GNNs such as GCN (Kipf & Welling, 2017) and SGC (Wu et al., 2019) (see Section 4.2 and Section 4.3). Thus, the time complexity of our method is the same as these models, which is O(M), i.e., linear with respect to the number of edges. We also empirically compare the running time of different models in Appendix 5.5. Besides, many acceleration schemes for GNNs such as sampling (Chen et al., 2018a;b; Huang et al., 2018) or partitioning the graph (Chiang et al., 2019) can be directly applied to our framework.\n\n4.2 A LINEAR INSTANTIATION\nBased on the general framework shown in Eq. (10), we attempt to explore its minimum model instantiation, i.e., a linear model. Specifically, inspired by Simplified Graph Convolution (SGC) (Wu et al., 2019), we adopt a linear message-passing for both GNNs, i.e.,\nH = F_output([Ẽ, H^(L)]) = F_output([Ã^K E, Ã^K F]), (11)\nwhere Ã = (D + I)^{−1/2} (A + I) (D + I)^{−1/2} is the normalized graph adjacency matrix with self-loops proposed in GCN (Kipf & Welling, 2017) and K is the number of propagation steps. We also set F_output(·) in Eq. (11) as a linear mapping or identity mapping.\nThough seemingly simple, we show that such an SMP instantiation possesses a theoretical guarantee in preserving the walk-based proximities.\nTheorem 2. An SMP in Eq. (11) with the message-passing matrix Ã and the number of propagation steps K can preserve the walk-based proximity Ã^K(Ã^K)^T with high probability if the dimensionality of the stochastic matrix d is sufficiently large, where the superscript T denotes matrix transpose. The theorem holds regardless of whether E is fixed or resampled.
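As a minimal illustration of the linear instantiation in Eq. (11), the NumPy sketch below propagates a Gaussian stochastic matrix E and the feature matrix F for K steps with the normalized adjacency Ã and concatenates the results. This is our own sketch with arbitrary toy dimensions, not the released SMP code.

```python
import numpy as np

def linear_smp(adj, features, d=32, K=2, seed=0):
    """Sketch of the linear SMP in Eq. (11): H = [A~^K E, A~^K F]."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    a_tilde = adj + np.eye(n)                          # A + I (self-loops)
    inv_sqrt_deg = 1.0 / np.sqrt(a_tilde.sum(axis=1))  # (D + I)^{-1/2}
    a_tilde = a_tilde * inv_sqrt_deg[:, None] * inv_sqrt_deg[None, :]
    prop_e = rng.standard_normal((n, d))               # stochastic matrix E
    prop_f = features
    for _ in range(K):                                 # K linear propagation steps
        prop_e, prop_f = a_tilde @ prop_e, a_tilde @ prop_f
    return np.concatenate([prop_e, prop_f], axis=1)    # identity F_output

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # a 3-node path
print(linear_smp(adj, np.eye(3)).shape)  # (3, 32 + 3)
```

A learnable linear F_output on top of this concatenation can either exploit the proximity-preserving block Ã^K E or zero it out, which is exactly the flexibility used in Remark 1 below.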
(11) is capable of han-215 dling both proximity-aware and permutation-equivariant tasks.216\n4Similar to previous works such as (Hamilton et al., 2017; Xu et al., 2018a), we only consider the minimum training loss because the optimization landscapes and generalization gaps are difficult to analyze analytically.\n4.3 NON-LINEAR EXTENSIONS217\nOne may question whether a more sophisticated variant of Eq. (10) can further improve the expres-218 siveness of SMP. There are three adjustable components in Eq. (10): two GNNs in propagating the219 stochastic matrix and node features, respectively, and an output function. In theory, adopting non-220 linear models as either component is able to enhance the expressiveness of SMP. Indeed, if we use221 a sufficiently expressive GNN in learning Ẽ instead of linear propagations, we can prove a more222 general version of Theorem 2 as follows.223 Theorem 3. An SMP variant following Eq.(10) with FGNN (A,E;W) containing L layers can224 preserve any length-L walk-based proximity if the message-passing and updating functions in the225 GNN are sufficiently expressive. In this theorem, we also assume the Gaussian random vectors E226 are rounded to machine precision so that E is drawn from a countable subspace of R.227\nThe proof of the theorem is given in Appendix A.3. Similarly, we can adopt more advanced methods228 for Foutput(·) such as gating or attention so that the two GNNs are more properly integrated.229 Although non-linear extensions of SMP can, in theory, increase the model expressiveness, they also230 take a higher risk of over-fitting due to model complexity, not to mention that the computational cost231 will also increase. In practice, we find in ablation studies that the linear SMP instantiation in Eq. (11)232 works reasonably well on most of the datasets (please refer to Section 5.4 for further details).233\n5 EXPERIMENTS234\n5.1 EXPERIMENTAL SETUPS235\nDatasets We conduct experiments on the following ten datasets: two simulation datasets, Grid236 and Communities (You et al., 2019), a communication dataset Email (You et al., 2019), two coau-237 thor networks, CS and Physics (Shchur et al., 2018), two protein interaction networks, PPI (Hamil-238 ton et al., 2017) and PPA (Hu et al., 2020), and three GNN benchmarks, Cora, CiteSeer, and239 PubMed (Yang et al., 2016). We only report the results of three benchmarks for the node classi-240 fication task and the results for other tasks are shown in Appendix B due to the page limit. More241 details of the datasets including their statistics are provided in Appendix C.1. These datasets cover242 a wide spectrum of domains, sizes, and with or without node features. Since Email and PPI contain243 more than one graph, we conduct experiments in an inductive setting on these two datasets, i.e., the244 training, validation, and testing set are split with respect to different graphs.245\nBaselines We adopt two sets of baselines. The first set is permutation-equivariant GNNs including246 GCN (Kipf & Welling, 2017), GAT (Velickovic et al., 2018), and SGC (Wu et al., 2019), which are247 widely adopted GNN architectures. The second set contains P-GNN (You et al., 2019), the only248 proximity-aware GNN to date. We use the P-GNN-F version.249\nIn comparing with the baselines, we mainly evaluate two variants of SMP with different Foutput(·):250 SMP-Identity, i.e., Foutput(·) as an identity mapping, and SMP-Linear, i.e., Foutput(·) as a linear251 mapping. 
Note that both variants adopt linear message-passing functions as SGC. We conduct more252 ablation studies with different SMP variants in Section 5.4.253\nFor fair comparisons, we adopt the same architecture and hyper-parameters for all the methods254 (please refer to Appendix C.2 for the details). For datasets without node features, we adopt a con-255 stant vector as the node features. We experiment on two tasks: link prediction and node classifica-256 tion. Additional experiments on graph reconstruction, pairwise node classification, and running time257 comparison are provided in Appendix B. We repeat the experiments 10 times for datasets except for258 PPA and 3 times for PPA, and report the average results.259\n5.2 LINK PREDICTION260\nLink prediction aims to predict missing links of a graph. Specifically, we split the edges into 80%-261 10%-10% and use them for training, validation, and testing, respectively. Besides adopting those real262 edges as positive samples, we obtain negative samples by randomly sampling an equal number of263 node pairs that do not have edges. For all the methods, we set a simple classifier: Sigmoid(HTi Hj),264 i.e., use the inner product to predict whether a node pair (vi, vj) forms a link, and use AUC (area265\nunder the curve) as the evaluation metric. One exception to the aforementioned setting is that on the266 PPA dataset, we follow the splits and evaluation metric (i.e., Hits@100) provided by the dataset (Hu267 et al., 2020). The results except PPA are shown in Table 2. We make the following observations.268\n• Our proposed SMP achieves the best results on five out of the six datasets and is highly compet-269 itive (the second-best result) on the other (Physics). The results demonstrate the effectiveness of270 our proposed method on link prediction tasks. We attribute the strong performance of SMP to its271 capability of maintaining both proximity-awareness and permutation-equivariance properties.272\n• On Grid, Communities, Email, and PPI, both SMP and P-GNN outperform the permutation-273 equivariant GNNs, proving the importance of preserving node proximities. Although SMP is274 simpler and more computationally efficient than P-GNN, SMP reports even better results.275\n• When node features are available (CS, Physics, and PPI), SGC can outperform GCN and GAT.276 The results re-validate the experiments in SGC (Wu et al., 2019) that the non-linearity in GNNs is277 not necessarily indispensable. Some plausible reasons include that the additional model complex-278 ity brought by non-linear operators makes the models tend to overfit and also difficult to train (see279 Appendix B.6). On those datasets, SMP retains comparable performance on two coauthor graphs280 and shows better performance on PPI, possibly because node features on protein graphs are less281 informative than node features on coauthor graphs for predicting links, and thus preserving graph282 structure is more beneficial on PPI.283\n• As Email and PPI are conducted in an inductive setting, i.e., using different graphs for train-284 ing/validation/testing, the results show that SMP can handle inductive tasks as well.285\nTable 1: The results of link prediction on the PPA dataset. 
The best result and the second-best result are in bold and underlined, respectively.\nModel Hits@100\nSGC 0.1187±0.0012 GCN 0.1867±0.0132 GraphSAGE 0.1655±0.0240 P-GNN Out of Memory\nNode2vec 0.2226±0.0083 Matrix Factorization 0.3229±0.0094 SMP-Identity 0.2018±0.0148 SMP-Linear 0.3582±0.0070\nThe results on PPA are shown in Table 1. SMP286 again outperforms all the baselines, showing that287 it can handle large-scale graphs with millions of288 nodes and edges. PPA is part of a recently re-289 leased Open Graph Benchmark (Hu et al., 2020).290 The superior performance on PPA further demon-291 strates the effectiveness of our proposed method292 in the link prediction task.293\n5.3 NODE CLASSIFICATION294\nNext, we conduct experiments of node classifica-295 tion, i.e., predicting the labels of nodes. Since296 we need ground-truths in the evaluation, we only297 adopt datasets with node labels. Specifically, for298 CS and Physics, following (Shchur et al., 2018), we adopt 20/30 labeled nodes per class for train-299 ing/validation and the rest for testing. For Communities, we adjust the number as 5/5/10 labeled300 nodes per class for training/validation/testing. For Cora, CiteSeer, and PubMed, we use the default301 splits that came with the datasets. We do not adopt Email because some graphs in the dataset are too302 small to show stable results and exclude PPI as it is a multi-label dataset.303\n5The results of PGNN are slightly different compared to the paper because we adopt a more practical and common setting that negative samples in the data are not known apriori but randomly sampled in each epoch.\nWe use a softmax layer on the learned node representations as the classifier and adopt accuracy,304 i.e., how many percentages of nodes are correctly classified, as the evaluation criteria. We omit the305 results of SMP-Identity for this task since the node representations in SMP-Identity have a fixed306 dimensionality that does not match the number of classes.307\nThe results are shown in Table 3. From the table, we observe that SMP reports nearly perfect results308 on Communities. Since the node labels are generated by graph structures on Communities and there309 are no node features, the model needs to be proximity-aware to handle it well. P-GNN, which shows310 promising results in the link prediction task, also fails miserably here.311\nOn the other five graphs, SMP reports highly competitive performance. These graphs are commonly-312 used benchmarks for GNNs. P-GNN, which completely ignores permutation-equivariance, performs313 poorly as expected. In contrast, SMP can manage to recover the permutation-equivariant GNNs314 and avoid being misled, as proven in Remark 1. In fact, SMP even shows better results than its315 counterpart, SGC, indicating that preserving proximities is also helpful for these datasets.316\n5.4 ABLATION STUDIES317\nWe conduct ablation studies by comparing different SMP variants, including SMP-Identity, SMP-318 Linear, and the additional three variants as follows:319\n• SMP-MLP: we set Foutput(·) as a fully-connected network with 1 hidden layer.320 • SMP-Linear-GCNfeat: we set FGNN′(A,F;W′) in Eq. (10) to be a GCN (Kipf & Welling,321 2017), i.e., induce non-linearity in message passing for features. Foutput(·) is still linear.322 • SMP-Linear-GCNboth: we set both FGNN (A,E;W) and FGNN′(A,F;W′) to be a GCN (Kipf323 & Welling, 2017), i.e., induce non-linearity in message passing for both features and stochastic324 representations. 
Foutput(·) is linear.325\nWe show the results for link prediction tasks in Table 4. The results for node classification and326 pairwise node classification, which imply similar conclusions, are provided in Table 10 and Table 11327 in Appendix B.5. We make the following observations.328\n• In general, SMP-Linear shows good-enough performance, achieving the best or second-best re-329 sults on six datasets and highly competitive on the other (Communities). SMP-Identity, which330 does not have parameters in the output function, performs slightly worse. The results demon-331 strate the importance of adopting a learnable linear layer in the output function, which is con-332 sistent with Remark 1. SMP-MLP does not lift the performance in general, showing that adding333 extra complexities in Foutput(·) brings no gain in those datasets.334 • SMP-Linear-GCNfeat reports the best results on Communities, PPI, and PPA, indicating that335 adding extra non-linearities in propagating node features are helpful for some graphs.336 • SMP-Linear-GCNboth reports the best results on Gird with a considerable margin. Recall that337 Grid has no node features. The results indicate that inducing non-linearities can help the stochas-338 tic representations capture more proximities, which is more helpful for featureless graphs.339\n5.5 EFFICIENCY COMPARISON340\nTo compare the efficiency of different methods quantitatively, we report the running time of different341 methods in Table 5. The results are averaged over 3,000 epochs on an NVIDIA TESLA M40 GPU342\nwith 12 GB of memory. The results show that SMP is computationally efficient, i.e., only marginally343 slower than SGC and comparable to GCN. P-GNN is at least an order of magnitude slower except344 for the extremely small graphs such as Grid, Communities, or Email with no more than a thousand345 nodes. In addition, the expensive memory cost makes P-GNN unable to work on large-scale graphs.346\n5.6 MORE EXPERIMENTAL RESULTS347\nBesides the aforementioned experiments, we also conduct experiments on the following tasks: graph348 reconstruction (Appendix B.1), pairwise node classification (Appendix B.2), and comparing with349 one-hot IDs (Appendix B.3). Please refer to the Appendix for experimental results and correspond-350 ing analyses.351\n6 CONCLUSION352\nIn this paper, we propose SMP, a general and simple GNN to maintain both proximity-awareness353 and permutation-equivariance properties. We propose to augment the existing GNNs with stochastic354 node representations learned to preserve node proximities. We prove that SMP can enable GNN to355 preserve node proximities in theory and is equivalent to a permutation-equivariant GNN with certain356 parametrization. Experimental results demonstrate the effectiveness and efficiency of SMP. Ablation357 studies show that a linear SMP instantiation works reasonably well on most of the datasets.358\nREFERENCES359\nDana Angluin. Local and global properties in networks of processors. In Proceedings of the twelfth360 annual ACM symposium on Theory of computing, pp. 82–93, 1980.361\nPeter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi,362 Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al.363 Relational inductive biases, deep learning, and graph networks. arXiv:1806.01261, 2018.364\nStephen P Borgatti. Centrality and network flow. Social networks, 27(1):55–71, 2005.365\nKarsten M Borgwardt and Hans-Peter Kriegel. Shortest-path kernels on graphs. 
In Fifth IEEE366 international conference on data mining (ICDM’05), pp. 8–pp. IEEE, 2005.367\nJoan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann Lecun. Spectral networks and locally368 connected networks on graphs. In International Conference on Learning Representations, 2014.369\nJianfei Chen, Jun Zhu, and Le Song. Stochastic training of graph convolutional networks with370 variance reduction. In International Conference on Machine Learning, pp. 942–950, 2018a.371\nJie Chen, Tengfei Ma, and Cao Xiao. Fastgcn: Fast learning with graph convolutional networks via372 importance sampling. In International Conference on Learning Representations, 2018b.373\nWei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. Cluster-gcn: An374 efficient algorithm for training deep and large graph convolutional networks. In Proceedings of375 the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp.376 257–266, 2019.377\nGabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Liò, and Petar Veličković. Principal378 neighbourhood aggregation for graph nets. arXiv preprint arXiv:2004.05718, 2020.379\nGeorge Dasoulas, Ludovic Dos Santos, Kevin Scaman, and Aladin Virmaux. Coloring graph neural380 networks for node disambiguation. arXiv preprint arXiv:1912.06058, 2019.381\nMichaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on382 graphs with fast localized spectral filtering. In Advances in neural information processing systems,383 pp. 3844–3852, 2016.384\nMatthias Fey, Jan E Lenssen, Christopher Morris, Jonathan Masci, and Nils M Kriege. Deep graph385 matching consensus. In International Conference on Learning Representations, 2020.386\nThomas Gärtner, Peter Flach, and Stefan Wrobel. On graph kernels: Hardness results and efficient387 alternatives. In Learning theory and kernel machines, pp. 129–143. Springer, 2003.388\nJustin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural389 message passing for quantum chemistry. In International Conference on Machine Learning, pp.390 1263–1272, 2017.391\nMarco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains.392 In Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005., volume 2,393 pp. 729–734. IEEE, 2005.394\nAditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings395 of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining,396 pp. 855–864, 2016.397\nWill Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs.398 In Advances in neural information processing systems, pp. 1024–1034, 2017.399\nXiangnan He, Kuan Deng, Xiang Wang, Yan Li, YongDong Zhang, and Meng Wang. Lightgcn:400 Simplifying and powering graph convolution network for recommendation. In Proceedings of401 the 43rd International ACM SIGIR Conference on Research and Development in Information402 Retrieval, pp. 639–648, 2020.403\nWeihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta,404 and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. arXiv405 preprint arXiv:2005.00687, 2020.406\nWenbing Huang, Tong Zhang, Yu Rong, and Junzhou Huang. Adaptive sampling towards fast graph407 representation learning. In Advances in neural information processing systems, 2018.408\nNicolas Keriven and Gabriel Peyré. 
Universal invariant and equivariant graph neural networks. In409 Advances in Neural Information Processing Systems, pp. 7090–7099, 2019.410\nThomas Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, and Richard Zemel. Neural relational411 inference for interacting systems. In International Conference on Machine Learning, pp. 2688–412 2697, 2018.413\nThomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional net-414 works. In Proceedings of the 6th International Conference on Learning Representations, 2017.415\nJohannes Klicpera, Aleksandar Bojchevski, and Stephan Günnemann. Predict then propagate:416 Graph neural networks meet personalized pagerank. In International Conference on Learning417 Representations, 2019.418\nIoannis Konstas, Vassilios Stathopoulos, and Joemon M Jose. On social networks and collaborative419 recommendation. In Proceedings of the 32nd international ACM SIGIR conference on Research420 and development in information retrieval, pp. 195–202, 2009.421\nQimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for422 semi-supervised learning. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.423\nYujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural424 networks. In International Conference on Learning Representations, 2016.425\nNathan Linial. Locality in distributed graph algorithms. SIAM Journal on computing, 21(1):193–426 201, 1992.427\nAndreas Loukas. What graph neural networks cannot learn: depth vs width. In International Con-428 ference on Learning Representations, 2020.429\nJianxin Ma, Chang Zhou, Peng Cui, Hongxia Yang, and Wenwu Zhu. Learning disentangled repre-430 sentations for recommendation. In Advances in Neural Information Processing Systems 32, pp.431 5712–5723. 2019.432\nHaggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph433 networks. In Advances in Neural Information Processing Systems, pp. 2156–2167, 2019a.434\nHaggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and equivariant graph435 networks. In International Conference on Learning Representations, 2019b.436\nAlessio Micheli. Neural network for graphs: A contextual constructive approach. IEEE Transactions437 on Neural Networks, 20(3):498–511, 2009.438\nFederico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M439 Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In440 Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5115–441 5124, 2017.442\nChristopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav443 Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks.444 In Proceedings of the AAAI Conference on Artificial Intelligence, 2019.445\nRyan Murphy, Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. Relational pooling446 for graph representations. In International Conference on Machine Learning, 2019.447\nMoni Naor and Larry Stockmeyer. What can be computed locally? SIAM Journal on Computing,448 24(6):1259–1277, 1995.449\nMark Newman. Networks. Oxford university press, 2018.450\nAntonio Ortega, Pascal Frossard, Jelena Kovačević, José MF Moura, and Pierre Vandergheynst.451 Graph signal processing: Overview, challenges, and applications. Proceedings of the IEEE, 106452 (5):808–828, 2018.453\nBryan Perozzi, Rami Al-Rfou, and Steven Skiena. 
Deepwalk: Online learning of social repre-454 sentations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge455 discovery and data mining, pp. 701–710, 2014.456\nRyoma Sato, Makoto Yamada, and Hisashi Kashima. Random features strengthen graph neural457 networks. arXiv preprint arXiv:2002.03155, 2020.458\nFranco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini.459 The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2008.460\nKakade Sham and Shakhnarovich Greg. Random projections. CMSC 35900 (Spring 2009) Large461 Scale Learning, 2020. URL https://ttic.uchicago.edu/˜gregory/courses/462 LargeScaleLearning/lectures/jl.pdf. [Online; accessed 4-September-2020].463\nOleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan Günnemann. Pitfalls464 of graph neural network evaluation. Relational Representation Learning Workshop, NeurIPS465 2018, 2018.466\nNino Shervashidze, Pascal Schweitzer, Erik Jan van Leeuwen, Kurt Mehlhorn, and Karsten M Borg-467 wardt. Weisfeiler-lehman graph kernels. Journal of Machine Learning Research, 12(Sep):2539–468 2561, 2011.469\nDavid I Shuman, Sunil K Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst. The470 emerging field of signal processing on graphs: Extending high-dimensional data analysis to net-471 works and other irregular domains. IEEE signal processing magazine, 30(3):83–98, 2013.472\nJian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale473 information network embedding. In Proceedings of the 24th international conference on world474 wide web, pp. 1067–1077, 2015.475\nPetar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua476 Bengio. Graph attention networks. In Proceedings of the 7th International Conference on Learn-477 ing Representations, 2018.478\nSantosh S Vempala. The random projection method, volume 65. American Mathematical Soc.,479 2005.480\nDaixin Wang, Peng Cui, and Wenwu Zhu. Structural deep network embedding. In Proceedings of481 the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pp.482 1225–1234, 2016.483\nDuncan J Watts. Networks, dynamics, and the small-world phenomenon. American Journal of484 sociology, 105(2):493–527, 1999.485\nFelix Wu, Amauri Souza, Tianyi Zhang, Christopher Fifty, Tao Yu, and Kilian Weinberger. Sim-486 plifying graph convolutional networks. In International Conference on Machine Learning, pp.487 6861–6871, 2019.488\nKeyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural489 networks? In International Conference on Learning Representations, 2018a.490\nKeyulu Xu, Chengtao Li, Yonglong Tian, Tomohiro Sonobe, Ken-ichi Kawarabayashi, and Stefanie491 Jegelka. Representation learning on graphs with jumping knowledge networks. In International492 Conference on Machine Learning, pp. 5453–5462, 2018b.493\nZhilin Yang, William W Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning494 with graph embeddings. In Proceedings of the 33rd International Conference on International495 Conference on Machine Learning-Volume 48, pp. 40–48, 2016.496\nJiaxuan You, Rex Ying, and Jure Leskovec. Position-aware graph neural networks. In International497 Conference on Machine Learning, pp. 7134–7143, 2019.498\nZiwei Zhang, Peng Cui, Xiao Wang, Jian Pei, Xuanrong Yao, and Wenwu Zhu. Arbitrary-order499 proximity preserved network embedding. 
In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2778–2786, 2018.\nMarinka Zitnik and Jure Leskovec. Predicting multicellular function through multi-layer tissue networks. Bioinformatics, 33(14):i190–i198, 2017.\nA THEOREMS AND PROOFS\nA.1 THEOREM 1\nHere we formulate and prove Theorem 1.\nTheorem 1. For any walk-based proximity function S(·), a permutation-equivariant GNN cannot preserve S(·), except for the trivial solution in which all node pairs have the same proximity, i.e., S_{i,j} = c, ∀i, j, where c is a constant.\nProof. We prove the theorem by contradiction. Assume there exists a non-trivial S(·) which a permutation-equivariant GNN can preserve. Consider any graph G = (V, E, F) and denote N = |V|. We can create G′ = (V′, E′, F′) with |V′| = 2N so that:\n$$\mathcal{E}'_{i,j} = \begin{cases} \mathcal{E}_{i,j} & \text{if } i \le N, j \le N \\ \mathcal{E}_{i-N,j-N} & \text{if } i > N, j > N \\ 0 & \text{else} \end{cases}, \qquad \mathbf{F}'_{i,:} = \begin{cases} \mathbf{F}_{i,:} & \text{if } i \le N \\ \mathbf{F}_{i-N,:} & \text{if } i > N \end{cases}. \quad (12)$$\nBasically, we generate two “copies” of the original graph, one indexing from 1 to N, and the other indexing from N+1 to 2N. By assumption, there exists a permutation-equivariant GNN which can preserve S(·) in G′, and we denote the node representations as H′^(L) = F_GNN(A′, F′; W_G′). It is easy to see that nodes v′_i and v′_{i+N} in G′ form an automorphic node pair. Using Corollary 1, their representations will be identical in any permutation-equivariant GNN, i.e.,\n$$\mathbf{H}'^{(L)}_{i,:} = \mathbf{H}'^{(L)}_{i+N,:}, \quad \forall i \le N. \quad (13)$$\nAlso, note that there exists no walk between the two copies, i.e., {v′_i ⇝ v′_j} = {v′_j ⇝ v′_i} = ∅, ∀i ≤ N, j > N. As a result, for ∀i ≤ N, j ≤ N, ∀ε > 0, we have:\n$$|S_{i,j} - S(\emptyset)| \le \left|S_{i,j} - \mathcal{F}_{de}\left(\mathbf{H}'^{(L)}_{i,:}, \mathbf{H}'^{(L)}_{j,:}\right)\right| + \left|S(\emptyset) - \mathcal{F}_{de}\left(\mathbf{H}'^{(L)}_{i,:}, \mathbf{H}'^{(L)}_{j,:}\right)\right| = \left|S_{i,j} - \mathcal{F}_{de}\left(\mathbf{H}'^{(L)}_{i,:}, \mathbf{H}'^{(L)}_{j,:}\right)\right| + \left|S_{i,j+N} - \mathcal{F}_{de}\left(\mathbf{H}'^{(L)}_{i,:}, \mathbf{H}'^{(L)}_{j+N,:}\right)\right| < 2\epsilon, \quad (14)$$\nwhere the last inequality holds because, by assumption, the GNN preserves S(·) within ε on every node pair. We can prove the same for ∀i > N, j > N. The equation naturally holds if i ≤ N, j > N or i > N, j ≤ N since {v′_i ⇝ v′_j} = ∅. Combining the results, we have ∀ε > 0, ∀i, j, |S_{i,j} − S(∅)| < 2ε. Since ε can be arbitrarily small, the equation shows that all node pairs have the same proximity c = S(∅), which leads to a contradiction and finishes our proof.\nNotice that in our proof, G′ can be constructed for any graph, so rather than designing one specific counter-example, we have shown that there always exists an infinite number of counter-examples by constructing automorphisms in the graph.
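The two-copy construction of Eq. (12) is easy to reproduce numerically. The following is a minimal sketch (our own illustration, not the authors' code): the toy "GNN" is a generic mean-aggregation network with random shared weights, and the random graph is hypothetical. It verifies that every automorphic pair (v′_i, v′_{i+N}) receives identical representations, which is exactly what makes walk-based proximities unrecoverable from H′.

```python
import numpy as np

def two_copy_graph(A, F):
    """Build G' of Theorem 1: two disjoint copies of G (Eq. 12)."""
    N = A.shape[0]
    A2 = np.block([[A, np.zeros((N, N))], [np.zeros((N, N)), A]])
    F2 = np.vstack([F, F])
    return A2, F2

def equivariant_gnn(A, F, weights):
    """A toy permutation-equivariant GNN: mean aggregation + shared linear update."""
    deg = A.sum(1, keepdims=True) + 1e-9
    H = F
    for W in weights:
        H = np.tanh(((A @ H) / deg + H) @ W)
    return H

rng = np.random.default_rng(0)
N, d = 6, 4
A = (rng.random((N, N)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T                      # symmetric adjacency
F = rng.normal(size=(N, d))
weights = [rng.normal(size=(d, d)) for _ in range(2)]

A2, F2 = two_copy_graph(A, F)
H2 = equivariant_gnn(A2, F2, weights)
# Automorphic pairs (v_i, v_{i+N}) receive identical representations, so no
# decoder applied to H2 can distinguish S_{i,j} from S_{i,j+N}.
print(np.allclose(H2[:N], H2[N:]))  # True
```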
Some may find that the counter-examples in the above proof lead to multiple connected components. Next, we give an alternative proof maintaining one connected component (assuming the original graph is connected) under the assumption that the walk-based proximity is of finite length.\nProof. Similar to the previous proof, we assume there exists a non-trivial S(·) which a permutation-equivariant GNN can preserve. Besides, we assume the length of S(·) is upper bounded by l_max, where l_max is any finite number, i.e., ∀i, j,\n$$S_{i,j} = S(\{v_i \rightsquigarrow v_j\}) = S(\{v_i \rightsquigarrow v_j \mid \text{len}(v_i \rightsquigarrow v_j) \le l_{max}\}). \quad (15)$$\nThen, for a connected graph G = (V, E, F), we create G′ = (V′, E′, F′) similar to Eq. (12). Specifically, denoting Ñ = N + l_max, we let G′ have 3Ñ nodes so that:\n$$\mathcal{E}'_{i,j} = \begin{cases} \mathcal{E}_{i,j} & \text{if } i, j \le N \\ 1 & \text{if } N \le i, j \le \tilde{N}+1, |j-i| = 1 \\ \mathcal{E}_{i-\tilde{N},j-\tilde{N}} & \text{if } \tilde{N} < i, j \le \tilde{N}+N \\ 1 & \text{if } \tilde{N}+N \le i, j \le 2\tilde{N}+1, |j-i| = 1 \\ \mathcal{E}_{i-2\tilde{N},j-2\tilde{N}} & \text{if } 2\tilde{N} < i, j \le 2\tilde{N}+N \\ 1 & \text{if } 2\tilde{N}+N \le i, j, |j-i| = 1 \\ 1 & \text{if } i = 3\tilde{N}, j = 1 \text{ or } j = 3\tilde{N}, i = 1 \\ 0 & \text{else} \end{cases}, \qquad \mathbf{F}'_{i,:} = \begin{cases} \mathbf{F}_{i,:} & \text{if } i \le N \\ 0 & \text{if } N < i \le \tilde{N} \\ \mathbf{F}_{i-\tilde{N},:} & \text{if } \tilde{N} < i \le \tilde{N}+N \\ 0 & \text{if } \tilde{N}+N < i \le 2\tilde{N} \\ \mathbf{F}_{i-2\tilde{N},:} & \text{if } 2\tilde{N} < i \le 2\tilde{N}+N \\ 0 & \text{if } 2\tilde{N}+N < i \end{cases}. \quad (16)$$\nIntuitively, we create three “copies” of G and three “bridges” to connect the copies and thus make G′ also connected. It is also easy to see that nodes v′_i, v′_{i+Ñ}, and v′_{i+2Ñ} all form automorphic node pairs, and thus we have:\n$$\mathbf{H}'^{(L)}_{i,:} = \mathbf{H}'^{(L)}_{i+\tilde{N},:} = \mathbf{H}'^{(L)}_{i+2\tilde{N},:}, \quad \forall i \le \tilde{N}. \quad (17)$$\nNext, we can see that the nodes in G′ are divided into six parts (three copies and three bridges), which we denote as V′_1 = {v_1, ..., v_N}, V′_2 = {v_{N+1}, ..., v_Ñ}, V′_3 = {v_{Ñ+1}, ..., v_{Ñ+N}}, V′_4 = {v_{Ñ+N+1}, ..., v_{2Ñ}}, V′_5 = {v_{2Ñ+1}, ..., v_{2Ñ+N}}, and V′_6 = {v_{2Ñ+N+1}, ..., v_{3Ñ}}. Since V′_2, V′_4, V′_6 are bridges of length l_max, any walk that crosses these bridges has a length larger than l_max. For example, let us focus on v_i ∈ V′_1, i.e., i ≤ N. If v_j is in V′_3, V′_4, or V′_5 (i.e., Ñ < j ≤ 2Ñ+N), any walk v_i ⇝ v_j will either pass the bridge V′_2 or V′_6 and thus has a length larger than l_max. As a result, we have:\n$$S_{i,j} = S(\{v_i \rightsquigarrow v_j\}) = S(\{v_i \rightsquigarrow v_j \mid \text{len}(v_i \rightsquigarrow v_j) \le l_{max}\}) = S(\emptyset). \quad (18)$$\nIf v_j ∈ V′_1 or v_j ∈ V′_2, i.e., j ≤ Ñ, we can use the fact that v_j and v_{j+Ñ} form an automorphic node pair, similar to Eq. (14), i.e., ∀ε > 0, we have\n$$|S_{i,j} - S(\emptyset)| \le \left|S_{i,j} - \mathcal{F}_{de}\left(\mathbf{H}'^{(L)}_{i,:}, \mathbf{H}'^{(L)}_{j,:}\right)\right| + \left|S(\emptyset) - \mathcal{F}_{de}\left(\mathbf{H}'^{(L)}_{i,:}, \mathbf{H}'^{(L)}_{j,:}\right)\right| = \left|S_{i,j} - \mathcal{F}_{de}\left(\mathbf{H}'^{(L)}_{i,:}, \mathbf{H}'^{(L)}_{j,:}\right)\right| + \left|S_{i,j+\tilde{N}} - \mathcal{F}_{de}\left(\mathbf{H}'^{(L)}_{i,:}, \mathbf{H}'^{(L)}_{j+\tilde{N},:}\right)\right| < 2\epsilon. \quad (19)$$\nSimilarly, if v_j ∈ V′_6, i.e., 2Ñ+N < j, we can use the fact that v_j and v_{j−Ñ} form an automorphic node pair to prove the same inequality. Thus, we prove that if i ≤ N, then ∀ε > 0, ∀j, |S_{i,j} − S(∅)| < 2ε. The same proof strategy can be applied to i > N. Since ε can be arbitrarily small, the results show that all node pairs have the same proximity S(∅), which leads to a contradiction and finishes our proof.\nA.2 THEOREM 2\nHere we formulate and prove Theorem 2. Note that some notations and definitions are introduced in Appendix A.1.\nTheorem 2. For the walk-based proximity S = Ã^K(Ã^K)^T, SMP can preserve the proximity with high probability if the dimensionality of the stochastic matrix is sufficiently large, i.e., ∀ε > 0, ∀δ > 0, there exists d_0 so that for any d > d_0:\n$$P(|S_{i,j} - \mathcal{F}_{de}(\mathbf{H}_{i,:}, \mathbf{H}_{j,:})| < \epsilon) > 1 - \delta, \quad (20)$$\nwhere H are the node representations obtained from SMP in Eq. (11). The result holds for any stochastic matrix and thus regardless of whether E is fixed or resampled during each epoch.\nProof. Our proof is mostly based on standard random projection theory. Firstly, since we have proven in Theorem 1 that permutation-equivariant representations cannot preserve any walk-based proximity, here we prove that we can preserve the proximity using only Ẽ, which can be easily achieved by ignoring H^(L) in F_output([Ẽ, H^(L)]); e.g., if we set F_output as a linear function, the model can learn to set the corresponding weights for H^(L) to all-zeros.\nWe set the decoder function as a normalized inner product:\n$$\mathcal{F}_{de}(\mathbf{H}_{i,:}, \mathbf{H}_{j,:}) = \frac{1}{d} \mathbf{H}_{i,:} \mathbf{H}_{j,:}^T. \quad (21)$$
Then, denoting a_i = Ã^K_{i,:} and recalling Ẽ = Ã^K E, we have:\n$$|S_{i,j} - \mathcal{F}_{de}(\mathbf{H}_{i,:}, \mathbf{H}_{j,:})| = \left|a_i a_j^T - \frac{1}{d} \tilde{\mathbf{E}}_{i,:} \tilde{\mathbf{E}}_{j,:}^T\right| = \left|a_i a_j^T - a_i \frac{1}{d} \mathbf{E}\mathbf{E}^T a_j^T\right|. \quad (22)$$\nSince E is a Gaussian random matrix, from the Johnson-Lindenstrauss lemma (Vempala, 2005) (in the inner-product preservation form, e.g., see Corollary 2.1 and its proof in (Sham & Greg, 2020)), ∀ 0 < ε′ < 1/2, we have:\n$$P\left(\left|a_i a_j^T - a_i \frac{1}{d} \mathbf{E}\mathbf{E}^T a_j^T\right| \le \frac{\epsilon'}{2}\left(\|a_i\| + \|a_j\|\right)\right) > 1 - 4e^{-\frac{(\epsilon'^2 - \epsilon'^3)d}{4}}. \quad (23)$$\nBy setting ε′ = ε / max_i ‖a_i‖, we have ε > (ε′/2)(‖a_i‖ + ‖a_j‖) and:\n$$P(|S_{i,j} - \mathcal{F}_{de}(\mathbf{H}_{i,:}, \mathbf{H}_{j,:})| < \epsilon) > 1 - 4e^{-\frac{\left(\left(\frac{\epsilon}{\max_i \|a_i\|}\right)^2 - \left(\frac{\epsilon}{\max_i \|a_i\|}\right)^3\right)d}{4}}, \quad (24)$$\nwhich leads to the theorem by solving for and setting d_0 as follows:\n$$4e^{-\frac{\left(\left(\frac{\epsilon}{\max_i \|a_i\|}\right)^2 - \left(\frac{\epsilon}{\max_i \|a_i\|}\right)^3\right)d_0}{4}} = \delta \;\Rightarrow\; d_0 = \frac{4\log\frac{4}{\delta}\,(\max_i \|a_i\|)^3}{\epsilon^2 \max_i \|a_i\| - \epsilon^3}. \quad (25)$$
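The random-projection step above is easy to check empirically. Below is a minimal sketch (our own illustration, not the authors' code); the matrix `A_rows` is a hypothetical random stand-in for Ã^K. It shows that the decoder of Eq. (21) recovers S = Ã^K(Ã^K)^T with an error that shrinks as the stochastic dimensionality d grows, as Eqs. (23)-(25) predict.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50
A_rows = rng.normal(size=(N, N))       # rows play the role of a_i = (A^K)_{i,:}
S = A_rows @ A_rows.T                  # target proximity S

for d in [64, 256, 1024, 4096]:
    E = rng.normal(size=(N, d))        # Gaussian stochastic matrix
    H = A_rows @ E                     # \tilde{E} = \tilde{A}^K E
    S_hat = (H @ H.T) / d              # decoder of Eq. (21)
    # The maximum absolute error decays roughly as O(1/sqrt(d)).
    print(d, np.abs(S_hat - S).max())
```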
A.3 THEOREM 3\nHere we formulate and prove Theorem 3. Note that some notations and definitions are introduced in Appendix A.1.\nTheorem 3. For any length-L walk-based proximity, i.e.,\n$$S_{i,j} = S(\{v_i \rightsquigarrow v_j\}) = S(\{v_i \rightsquigarrow v_j \mid \text{len}(v_i \rightsquigarrow v_j) \le L\}),$$\nwhere len(·) is the length of a walk, there exists an SMP variant in Eq. (10) with F_GNN(A, E; W) containing L layers (including the input layer) that preserves that proximity if the following conditions hold: (1) The stochastic matrix E contains unique signals for different nodes, i.e., E_{i,:} ≠ E_{j,:}, ∀i ≠ j. (2) The message-passing and updating functions in learning Ẽ are bijective. (3) The decoder function F_de(·) also takes E as input and is a universal approximator.\nProof. Similar to Theorem 2, we only utilize Ẽ during our proof. We use e^(l)_i, 0 ≤ l < L, to denote the node representations in the l-th layer of F_GNN(A, E; W), i.e., e^(0)_i = E_{i,:} and e^(L−1)_i = Ẽ_{i,:}. Our proof strategy is to show that the stochastic node representations can remember all the information about the walks.\nFirstly, as the message-passing and updating functions are bijective by assumption, we can recover from the node representations in each layer all their neighborhood representations in the previous layer. Specifically, there exist F^(l)(·), 1 ≤ l < L, such that:\n$$\mathcal{F}^{(l)}\left(e^{(l)}_i\right) = \left[e^{(l-1)}_i, \left\{e^{(l-1)}_j, j \in \mathcal{N}_i\right\}\right].^6 \quad (26)$$\nFor notational convenience, we split the function into two parts, one for the node itself and the other for its neighbors:\n$$\mathcal{F}^{(l)}_{self}\left(e^{(l)}_i\right) = e^{(l-1)}_i, \qquad \mathcal{F}^{(l)}_{neighbor}\left(e^{(l)}_i\right) = \left\{e^{(l-1)}_j, j \in \mathcal{N}_i\right\}. \quad (27)$$\n6 To let F^(l)(·) output a set of arbitrary length, we can adopt sequence-based models such as an LSTM.\nFor the first function, if we successively apply such functions from the l-th layer to the input layer, we can recover the input features of the GNN, i.e., E. Since the stochastic matrix E contains a unique signal for different nodes, we can decode the node ID from e^(0)_i, i.e., there exists F^(0)_self(e^(0)_i; E) = i. For brevity, we denote applying such l+1 functions to get the node ID as\n$$\mathcal{F}^{(0:l)}_{self}\left(e^{(l)}_i\right) = \mathcal{F}^{(0)}_{self}\left(\mathcal{F}^{(1)}_{self}\left(\ldots\left(\mathcal{F}^{(l)}_{self}\left(e^{(l)}_i\right)\right)\right); \mathbf{E}\right) = i. \quad (28)$$\nFor the second function, we can apply F^(l−1)_neighbor to the decoded vector set so that we can recover their neighborhood representations in the (l−2)-th layer, etc.\nNext, we show that for e^(l−1)_j, there exists a length-l walk v_i ⇝ v_j = (v_{a_1}, v_{a_2}, ..., v_{a_l}), where v_{a_1} = v_i, v_{a_l} = v_j, if and only if F^(0:l−1)_self(e^(l−1)_j) = a_l = j and there exist e^(l−2), ..., e^(0) such that:\n$$e^{(l-2)} \in \mathcal{F}^{(l-1)}_{neighbor}\left(e^{(l-1)}_j\right),\; \mathcal{F}^{(0:l-2)}_{self}\left(e^{(l-2)}\right) = a_{l-1}; \quad e^{(l-3)} \in \mathcal{F}^{(l-2)}_{neighbor}\left(e^{(l-2)}\right),\; \mathcal{F}^{(0:l-3)}_{self}\left(e^{(l-3)}\right) = a_{l-2}; \quad \ldots; \quad e^{(0)} \in \mathcal{F}^{(1)}_{neighbor}\left(e^{(1)}\right),\; \mathcal{F}^{(0:0)}_{self}\left(e^{(0)}\right) = a_1 = i. \quad (29)$$\nThis result is easily verified as:\n$$(v_{a_1}, v_{a_2}, \ldots, v_{a_l}) \text{ is a walk} \Leftrightarrow \mathcal{E}_{a_i, a_{i+1}} = \mathcal{E}_{a_{i+1}, a_i} = 1 \Leftrightarrow a_i \in \mathcal{N}_{a_{i+1}}, \forall 1 \le i < l \Leftrightarrow \exists e^{(i-1)} \in \mathcal{F}^{(i)}_{neighbor}\left(e^{(i)}\right),\; \mathcal{F}^{(0:i-1)}_{self}\left(e^{(i-1)}\right) = a_i, \forall 1 \le i < l. \quad (30)$$\nNote that all the information is encoded in Ẽ, i.e., we can decode {v_i ⇝ v_j | len(v_i ⇝ v_j) ≤ L} from e^(L−1)_j by successively applying F^(l)_self(·), F^(l)_neighbor(·). We can also apply F^(0:L−1)_self to e^(L−1)_i to get the start node ID i. Putting it together, we have:\n$$\mathcal{F}\left(e^{(L-1)}_j, e^{(L-1)}_i\right) = \{v_i \rightsquigarrow v_j \mid \text{len}(v_i \rightsquigarrow v_j) \le L\}, \quad (31)$$\nwhere F(·) is composed of F^(l)_self(·), 0 ≤ l < L, and F^(l)_neighbor(·), 1 ≤ l < L. Applying the proximity function S(·), we have:\n$$S\left(\mathcal{F}\left(e^{(L-1)}_j, e^{(L-1)}_i\right)\right) = S_{i,j}. \quad (32)$$\nWe finish the proof by setting the real decoder function F_de(·) to arbitrarily approximate this desired function S(F(·, ·)) under the universal approximation assumption.\nB ADDITIONAL EXPERIMENTAL RESULTS\nB.1 GRAPH RECONSTRUCTION\nTo verify that our proposed SMP can indeed preserve node proximities, we conduct graph reconstruction experiments (Wang et al., 2016), i.e., we use the node representations learned by GNNs to reconstruct the edges of the graph. Graph reconstruction corresponds to the first-order proximity between nodes, i.e., whether two nodes are directly connected, which is the most straightforward node proximity (Tang et al., 2015). Specifically, following Section 5.2, we adopt the inner product classifier Sigmoid(H_i^T H_j) and use AUC as the evaluation metric. To control the impact of node features (i.e., since many graphs exhibit assortative mixing, even models using only node features can reconstruct the edges to a certain extent), we do not use node features for any of the models.\nWe report the results in Table 6. The results show that SMP greatly outperforms permutation-equivariant GNNs such as GCN and GAT in graph reconstruction, clearly demonstrating that SMP can better preserve node proximities. PGNN shows results highly competitive with SMP. However, similar to other tasks, the intensive memory usage makes PGNN unable to handle medium-scale graphs such as Physics and PubMed.
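For concreteness, a sketch of the graph-reconstruction evaluation described above (our own rendering; it assumes precomputed node representations H and sampled positive/negative node pairs, and the function name is ours):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def reconstruction_auc(H, pos_pairs, neg_pairs):
    """Score candidate edges with the inner-product classifier
    sigmoid(H_i^T H_j) and report AUC, as in the experiments of B.1.
    H: (N, E) node representations; *_pairs: (P, 2) integer index arrays."""
    def scores(pairs):
        i, j = pairs[:, 0], pairs[:, 1]
        return 1.0 / (1.0 + np.exp(-(H[i] * H[j]).sum(1)))
    y_score = np.concatenate([scores(pos_pairs), scores(neg_pairs)])
    y_true = np.concatenate([np.ones(len(pos_pairs)), np.zeros(len(neg_pairs))])
    return roc_auc_score(y_true, y_score)
```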
B.2 PAIRWISE NODE CLASSIFICATION\nBesides the standard node classification experiments reported in Section 5.3, we follow (You et al., 2019) and experiment on pairwise node classification, i.e., predicting whether two nodes have the same label. Compared with standard node classification, pairwise node classification focuses more on the relations between nodes and thus requires the model to be proximity-aware to perform well.\nSimilar to link prediction, we split the positive samples (i.e., node pairs with the same label) into an 80%-10%-10% training-validation-testing set with an equal number of randomly sampled negative pairs. For large graphs, since the number of possible positive samples is intractable (i.e., O(N^2)), we use a random subset. Since we also need node labels as the ground-truth, we only conduct pairwise node classification on datasets where node labels are available. We also exclude the results on PPI since the dataset is multi-label and cannot be used in a pairwise setting (You et al., 2019). Similar to Section 5.2, we adopt a simple inner product classifier and use AUC as the evaluation metric.\nThe results are shown in Table 7. We observe results consistent with link prediction in Section 5.2, i.e., SMP reports the best results on four datasets and the second-best results on the other three datasets. These results again verify that SMP can effectively preserve and utilize node proximities when needed, while retaining comparable performance when the tasks are more permutation-equivariant-like, e.g., on CS and Physics.\nB.3 COMPARISON WITH USING IDS\nWe further compare SMP with augmenting GNNs using a one-hot encoding of node IDs, i.e., the identity matrix. Intuitively, since the IDs of nodes are unique, such a method does not suffer from the automorphism problem and should also enable GNNs to preserve node proximities. However, theoretically speaking, using such a one-hot encoding has two major problems. Firstly, the dimensionality of the identity matrix is N × N, and thus the number of parameters in the first message-passing layer is also on the order of O(N). Therefore, the method will inevitably be computationally expensive and may not be scalable to large-scale graphs. A large number of parameters will also more likely lead to overfitting. Secondly, the node IDs are not transferable across different graphs, i.e., the node v_1 in one graph and the node v_1 in another graph do not necessarily share a similar meaning. But as the parameters in the message-passing depend on the node IDs (since they are input features), such a mechanism cannot handle inductive tasks well.7\n7 One may question whether SMP is transferable across different graphs since the stochastic features are independently drawn. Empirically, we find that SMP reports reasonably good results on inductive datasets such as Email and PPI. One plausible reason is that, since the proximities of nodes are preserved even though the random features per se are different (see Theorem 2), all subsequent parameters based on proximities can be transferred.\nWe also empirically compare such a method with SMP and report the results in Table 8. The results show that SMP-Linear outperforms GCN-onehot in most cases. Besides, GCN-onehot fails to handle Physics, which is only a medium-scale graph, due to its heavy memory usage. One surprising result is that GCN-onehot outperforms SMP-Linear on Grid, the simulated graph where nodes are placed on a 20 × 20 grid. A plausible reason is that, since the edges in Grid follow a specific rule, a one-hot encoding gives GCN-onehot enough flexibility to learn and remember the rules, and the model does not overfit because the graph has a rather small scale.\nB.4 ADDITIONAL LINK PREDICTION RESULTS\nWe further report the results of link prediction on three GNN benchmarks: Cora, CiteSeer, and PubMed. The results are shown in Table 9. The results show similar trends as on the other datasets presented in Section 5.2, i.e., SMP reports results comparable to other permutation-equivariant GNNs while PGNN fails to handle the task well.\nB.5 ADDITIONAL ABLATION STUDIES\nWe report the ablation study results for the node classification task and the pairwise node classification task in Table 10 and Table 11, respectively.
The results again show that SMP-Linear generally achieves good-enough results on the majority of the datasets and that adding non-linearities does not necessarily lift the performance of SMP.\nWe also compare whether the stochastic signals E are fixed or resampled across training epochs for our proposed SMP. For brevity, we only report the results for the link prediction task in Table 12. The results show that fixing E usually leads to better results on transductive datasets (recall that all datasets except Email and PPI are transductive), while resampling E generally leads to better results on inductive datasets. The results are consistent with our analysis in Section 4.1.\nB.6 COMPARISON OF PERMUTATION-EQUIVARIANT GNNS FOR LINK PREDICTION\nTo investigate the performance of linear and non-linear variants of permutation-equivariant GNNs on the link prediction task, we additionally report both the training accuracies and the testing accuracies of SGC, GCN, and GAT in Table 13. Notice that, to ensure a fair comparison, we do not adopt the early stopping strategy here, so that different models have the same number of training epochs (otherwise, if a model tends to overfit, the early stopping strategy will terminate the training process when the number of training epochs is small and result in spurious underfitting phenomena).\nThe results show that non-linear variants of GNNs (GCN and GAT) are more likely to overfit, i.e., the margins between the training accuracies and the testing accuracies are usually larger than for the linear variant SGC. Besides, though possessing extra model expressiveness, non-linear GNNs are also difficult to train, i.e., the training accuracies of GCN and GAT are not necessarily higher than those of SGC. The results are consistent with the literature (Wu et al., 2019; He et al., 2020).\nC EXPERIMENTAL DETAILS FOR REPRODUCIBILITY\nC.1 DATASETS\n• Grid (You et al., 2019): A simulated 2D grid graph of size 20 × 20 with no node features.\n• Communities (You et al., 2019): A simulated caveman graph (Watts, 1999) composed of 20 communities, with each community containing 20 nodes. The graph is perturbed by randomly rewiring 1% of the edges. It has no node features, and the label of each node indicates the community the node belongs to.\n• Email (You et al., 2019): Seven real-world email communication graphs (https://github.com/JiaxuanYou/P-GNN/tree/master/data). Each graph has six communities, and each node has an integer label indicating the community the node belongs to.\n• Coauthor Networks (Shchur et al., 2018): Two networks from the Microsoft academic graph, in CS and Physics, with nodes representing authors and edges representing co-authorships between authors (https://github.com/shchur/gnn-benchmark/tree/master/data/npz/). The node features are embeddings of the paper keywords of the authors.\n• PPI (Hamilton et al., 2017): 24 protein-protein interaction networks. Each node has a 50-dimensional feature vector.\n• PPA (Hu et al., 2020): A network representing biological associations between proteins from 58 different species (https://snap.stanford.edu/ogb/data/linkproppred/ppassoc.zip).
The node features are one-hot vectors of the species that the proteins are taken from.\n• Cora, CiteSeer, PubMed (Yang et al., 2016): Three citation graphs where nodes correspond to papers and edges correspond to citations between papers (https://github.com/kimiyoung/planetoid/tree/master/data). The node features are bag-of-words vectors and the node labels are the ground-truth topics of the papers.\nWe summarize the statistics of the datasets in Table 14.\nC.2 HYPER-PARAMETERS\nWe use the following hyper-parameters:\n• All datasets except PPA: we uniformly set the number of layers for all the methods as 2, i.e., 2 message-passing steps, and set the dimensionality of the hidden layers as 32, i.e., H^(l) ∈ R^{N×32} for all 1 ≤ l ≤ L (for GAT, we use 4 heads with each head containing 8 units). We use the Adam optimizer with an initial learning rate of 0.01 and decay the learning rate by 0.1 at epoch 200. The weight decay is 5e-4. We train the model for 1,000 epochs and evaluate the model every 5 epochs. We adopt an early-stopping strategy by reporting the testing performance at the epoch which achieves the best validation performance. For SMP, the dimensionality of the stochastic matrix is d = 32. For P-GNN, we use the P-GNN-F version, which uses the truncated 2-hop shortest path distance instead of the exact shortest distance.\n• PPA: as suggested in the original paper (Hu et al., 2020), we set the number of GNN layers as 3, with each layer containing 256 hidden units, and add a three-layer MLP on the Hadamard product of pair-wise node embeddings as the predictor, i.e., MLP(H_i ⊙ H_j). We use the Adam optimizer with an initial learning rate of 0.01. We set the number of training epochs as 40, evaluate the results on the validation set every epoch, and report the testing results using the model with the best validation performance. We also found that the dataset had issues with exploding gradients and adopt a gradient clipping strategy by limiting the maximum ℓ2-norm of the gradients to 1.0. The dimensionality of the stochastic matrix in SMP is d = 64.\nC.3 HARDWARE AND SOFTWARE CONFIGURATIONS\nAll experiments are conducted on a server with the following configurations.\n• Operating System: Ubuntu 18.04.1 LTS\n• CPU: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz\n• GPU: NVIDIA TESLA M40 with 12 GB of memory\n• Software: Python 3.6.8, PyTorch 1.4.0, PyTorch Geometric 1.4.3, NumPy 1.18.1, CUDA 10.1
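For convenience, the hyper-parameters of C.2 can be consolidated into configuration dictionaries. The sketch below is our own restatement, not the authors' code: the key names are hypothetical, while the values are taken verbatim from the text.

```python
# Hypothetical consolidation of the settings in C.2; key names are ours.
DEFAULT_CONFIG = {
    "num_layers": 2, "hidden_dim": 32,           # GAT: 4 heads x 8 units = 32
    "optimizer": "Adam", "lr": 0.01,
    "lr_decay": 0.1, "lr_decay_epoch": 200, "weight_decay": 5e-4,
    "epochs": 1000, "eval_every": 5, "early_stopping": True,
    "smp_stochastic_dim": 32,                    # d for SMP
    "pgnn_variant": "P-GNN-F (truncated 2-hop shortest path distance)",
}
PPA_CONFIG = {
    "num_layers": 3, "hidden_dim": 256,
    "predictor": "3-layer MLP on the Hadamard product of node embeddings",
    "optimizer": "Adam", "lr": 0.01,
    "epochs": 40, "eval_every": 1, "grad_clip_max_l2_norm": 1.0,
    "smp_stochastic_dim": 64,
}
```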
2020
null
SP:f64ed00548580d2d06e5d8e529894144940ed630
[ "This paper proposes to tackle the problem of retrieving and clustering physiological signals by learning clinical prototypes via supervised contrastive learning. Three readily available patient attributes, disease, age, and sex, are used to assist the learning. A hard assignment of samples to prototype is proposed, and further relaxed to a soft one to utilize the samples that do not have exactly matched attribute set. A regularization term is also proposed to encourage intra-cluster distances. Two ECG datasets are used to evaluate the proposed model." ]
The ongoing digitization of health records within the healthcare industry results in large-scale datasets. Manually extracting clinically-useful insight from such datasets is non-trivial. However, doing so at scale while simultaneously leveraging patient-specific attributes such as sex and age can assist with clinical-trial enrollment, medical school educational endeavours, and the evaluation of the fairness of neural networks. To facilitate the reliable extraction of clinical information, we propose to learn embeddings, known as clinical prototypes (CPs), via supervised contrastive learning. We show that CPs can be efficiently used for large-scale retrieval and clustering of physiological signals based on multiple patient attributes. We also show that CPs capture attribute-specific semantic relationships.
[]
[ { "authors": [ "Yuki Asano", "Christian Rupprecht", "Andrea Vedaldi" ], "title": "Self-labelling via simultaneous clustering and representation learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Siddharth Biswal", "Cao Xiao", "Lucas M Glass", "Elizabeth Milkovits", "Jimeng Sun" ], "title": "Doctor2vec: Dynamic doctor representation learning for clinical trial recruitment", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2020 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Armand Joulin", "Matthijs Douze" ], "title": "Deep clustering for unsupervised learning of visual features", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Steve R Chamberlin", "Steven D Bedrick", "Aaron M Cohen", "Yanshan Wang", "Andrew Wen", "Sijia Liu", "Hongfang Liu", "William Hersh" ], "title": "Evaluation of patient-level retrieval from electronic health record data for a cohort discovery", "venue": "task. medRxiv,", "year": 2019 }, { "authors": [ "Deepak Roy Chittajallu", "Bo Dong", "Paul Tunison", "Roddy Collins", "Katerina Wells", "James Fleshman", "Ganesh Sankaranarayanan", "Steven Schwaitzberg", "Lora Cavuoto", "Andinet Enquobahrie" ], "title": "Xaicbir: Explainable ai system for content based retrieval of video frames from minimally invasive surgery", "venue": "IEEE 16th International Symposium on Biomedical Imaging (ISBI", "year": 2019 }, { "authors": [ "Sajad Darabi", "Mohammad Kachuee", "Shayan Fazeli", "Majid Sarrafzadeh" ], "title": "Taper: Time-aware patient ehr representation", "venue": "IEEE Journal of Biomedical and Health Informatics,", "year": 2020 }, { "authors": [ "Leonard W D’Avolio", "Thien M Nguyen", "Wildon R Farwell", "Yongming Chen", "Felicia Fitzmeyer", "Owen M Harris", "Louis D Fiore" ], "title": "Evaluation of a generalizable approach to clinical information retrieval using the automated retrieval console (arc)", "venue": "Journal of the American Medical Informatics Association,", "year": 2010 }, { "authors": [ "Alan H Gee", "Diego Garcia-Olano", "Joydeep Ghosh", "David Paydarfar" ], "title": "Explaining deep classification of time-series data with learned prototypes", "venue": null, "year": 1904 }, { "authors": [ "Travis R Goodwin", "Sanda M Harabagiu" ], "title": "Learning relevance models for patient cohort retrieval", "venue": "JAMIA open,", "year": 2018 }, { "authors": [ "Harsha Gurulingappa", "Luca Toldo", "Claudia Schepers", "Alexander Bauer", "Gerard Megaro" ], "title": "Semisupervised information retrieval system for clinical decision support", "venue": "In TREC,", "year": 2016 }, { "authors": [ "Kai Han", "Andrea Vedaldi", "Andrew Zisserman" ], "title": "Learning to discover novel visual categories via deep transfer clustering", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "William Hersh" ], "title": "Information retrieval: a health and biomedical perspective", "venue": "Springer Science & Business Media,", "year": 2008 }, { "authors": [ "William R Hersh", "Robert A Greenes" ], "title": "Information retrieval in medicine: state of the art", "venue": "MD Computing: Computers in Medical Practice,", "year": 1990 }, { "authors": [ "William R Hersh", "David H Hickam" ], "title": "How well do physicians use electronic information retrieval systems?: A framework for investigation and systematic review", "venue": "Jama, 280(15):1347–1352,", 
"year": 1998 }, { "authors": [ "Li Huang", "Andrew L Shea", "Huining Qian", "Aditya Masurkar", "Hao Deng", "Dianbo Liu" ], "title": "Patient clustering improves efficiency of federated machine learning to predict mortality and hospital stay time using distributed electronic medical records", "venue": "Journal of biomedical informatics,", "year": 2019 }, { "authors": [ "Xu Ji", "João F Henriques", "Andrea Vedaldi" ], "title": "Invariant information clustering for unsupervised image classification and segmentation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Alistair EW Johnson", "Tom J Pollard", "Lu Shen", "H Lehman Li-Wei", "Mengling Feng", "Mohammad Ghassemi", "Benjamin Moody", "Peter Szolovits", "Leo Anthony Celi", "Roger G Mark" ], "title": "Mimic-iii, a freely accessible critical care database", "venue": "Scientific Data,", "year": 2016 }, { "authors": [ "Dani Kiyasseh", "Tingting Zhu", "David A Clifton" ], "title": "Clocs: Contrastive learning of cardiac signals", "venue": "arXiv preprint arXiv:2005.13249,", "year": 2020 }, { "authors": [ "Matt J Kusner", "Joshua Loftus", "Chris Russell", "Ricardo Silva" ], "title": "Counterfactual fairness", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Isotta Landi", "Benjamin S Glicksberg", "Hao-Chih Lee", "Sarah Cherng", "Giulia Landi", "Matteo Danieletto", "Joel T Dudley", "Cesare Furlanello", "Riccardo Miotto" ], "title": "Deep representation learning of electronic health records to unlock patient stratification at scale", "venue": "arXiv preprint arXiv:2003.06516,", "year": 2020 }, { "authors": [ "Junnan Li", "Pan Zhou", "Caiming Xiong", "Richard Socher", "Steven CH Hoi" ], "title": "Prototypical contrastive learning of unsupervised representations", "venue": "arXiv preprint arXiv:2005.04966,", "year": 2020 }, { "authors": [ "Yue Li", "Pratheeksha Nair", "Xing Han Lu", "Zhi Wen", "Yuening Wang", "Amir Ardalan Kalantari Dehaghi", "Yan Miao", "Weiqi Liu", "Tamas Ordog", "Joanna M Biernacka" ], "title": "Inferring multimodal latent topics from electronic health records", "venue": "Nature Communications,", "year": 2020 }, { "authors": [ "Dianbo Liu", "Dmitriy Dligach", "Timothy Miller" ], "title": "Two-stage federated phenotyping and patient representation learning", "venue": "arXiv preprint arXiv:1908.05596,", "year": 2019 }, { "authors": [ "Christopher D Manning", "Hinrich Schütze", "Prabhakar Raghavan" ], "title": "Introduction to information retrieval", "venue": "Cambridge university press,", "year": 2008 }, { "authors": [ "Riccardo Miotto", "Li Li", "Brian A Kidd", "Joel T Dudley" ], "title": "Deep patient: an unsupervised representation to predict the future of patients from the electronic health records", "venue": "Scientific Reports,", "year": 2016 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Adam M. 
Rhine" ], "title": "Information Retrieval for Clinical Decision Support", "venue": "PhD thesis,", "year": 2017 }, { "authors": [ "R Rani Saritha", "Varghese Paul", "P Ganesh Kumar" ], "title": "Content based image retrieval using deep learning process", "venue": "Cluster Computing,", "year": 2019 }, { "authors": [ "Nils Strodthoff", "Patrick Wagner", "Tobias Schaeffter", "Wojciech Samek" ], "title": "Deep learning for ECG analysis: Benchmarks and insights from PTB-XL", "venue": "arXiv preprint arXiv:2004.13701,", "year": 2020 }, { "authors": [ "Arnaud Van Looveren", "Janis Klaise" ], "title": "Interpretable counterfactual explanations guided by prototypes", "venue": "arXiv preprint arXiv:1907.02584,", "year": 2019 }, { "authors": [ "Sahil Verma", "Julia Rubin" ], "title": "Fairness definitions explained", "venue": "IEEE/ACM International Workshop on Software Fairness (FairWare),", "year": 2018 }, { "authors": [ "Patrick Wagner", "Nils Strodthoff", "Ralf-Dieter Bousseljot", "Wojciech Samek", "Tobias Schaeffter" ], "title": "PTBXL, a large publicly available electrocardiography dataset, 2020", "venue": "URL https://physionet. org/content/ptb-xl/1.0.1/", "year": 2020 }, { "authors": [ "Byron C Wallace", "Joël Kuiper", "Aakash Sharma", "Mingxi Zhu", "Iain J Marshall" ], "title": "Extracting pico sentences from clinical trial reports using supervised distant supervision", "venue": "The Journal of Machine Learning Research,", "year": 2016 }, { "authors": [ "Haolin Wang", "Qingpeng Zhang", "Jiahu Yuan" ], "title": "Semantically enhanced medical information retrieval system: a tensor factorization based approach", "venue": "IEEE Access,", "year": 2017 }, { "authors": [ "Yanshan Wang", "Andrew Wen", "Sijia Liu", "William Hersh", "Steven Bedrick", "Hongfang Liu" ], "title": "Test collections for electronic health record-based clinical information retrieval", "venue": "JAMIA open,", "year": 2019 }, { "authors": [ "Jianwei Zheng", "Jianming Zhang", "Sidy Danioko", "Hai Yao", "Hangyuan Guo", "Cyril Rakovski" ], "title": "A 12-lead electrocardiogram database for arrhythmia research covering more than 10,000 patients", "venue": "Scientific Data,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Physiological data are being collected at a burgeoning rate. Such growth is driven by the digitization of previous patient records, the presence of novel health monitoring and recording systems, and the recent recommendation to facilitate the exchange of health records (European Commission, 2019). This engenders large-scale datasets from which the manual extraction of clinically-useful insight is non-trivial. Such insight can include, but is not limited to, medical diagnoses, prognoses, or treatment.\nIn the presence of large-scale datasets, retrieving instances based on some user-defined criteria has been a longstanding goal within the machine learning community (Manning et al., 2008). This information retrieval (IR) process typically consists of a query that is used to search through a large database and retrieve matched instances. Within healthcare, the importance of an IR system is threefold (Hersh & Hickam, 1998; Hersh, 2008). First, it provides researchers with greater control over which patients to choose for clinical trial recruitment. Second, IR systems can serve as an educational and diagnostic tool, allowing physicians to identify seemingly similar patients who exhibit different clinical parameters and vice versa. Lastly, if the query were to consist of sensitive attributes such as sex, age, and race, then such a system would allow researchers to more reliably evaluate the individual and counterfactual fairness of a particular model (Verma & Rubin, 2018). To illustrate this point, let us assume the presence of a query instance that corresponds to a patient with an abnormality of the heart, atrial fibrillation, who is male and under the age of 25. To reliably determine the sensitivity of a model with respect to sex, one would observe its response when exposed to a counterfactual instance, namely the exact same instance but with a different sex label (Kusner et al., 2017). At present, deep-learning based IR systems within the healthcare domain fail to incorporate such patient-specific attributes.\nExisting IR systems which retrieve instances from the electronic health records (Wang et al., 2019; Chamberlin et al., 2019) do not incorporate an attribute-specific search and do not trivially extend to physiological signals. In this paper, we propose to learn embeddings, referred to as clinical prototypes (CPs). CPs are efficient descriptors of a combination of patient-specific attributes, such as disease, sex, and age. We learn these embeddings via contrastive learning whereby representations of instances are encouraged to be similar to their corresponding clinical prototype and dissimilar to the others. To the best of our knowledge, we are the first to design a supervised contrastive learning based large-scale retrieval system for electrocardiogram (ECG) signals.\nContributions. Our contributions are the following:\n• Attribute-specific clinical prototypes - we propose a supervised contrastive learning framework to learn embeddings, referred to as clinical prototypes (CPs), that are efficient descriptors of a set of patient attributes, e.g., disease, sex, and age. • Deep retrieval and clustering - we exploit CPs to retrieve instances corresponding to a specific\npatient-attribute combination and assign instances to various clusters." }, { "heading": "2 RELATED WORK", "text": "Clinical representation learning involves meaningfully representing clinical data for solving tasks. 
Most research attempts to learn representations of electronic health records (EHRs) (Miotto et al., 2016; Gee et al., 2019; Liu et al., 2019; Li et al., 2020b; Biswal et al., 2020; Darabi et al., 2020) in a generative manner. For example, Landi et al. (2020) and Huang et al. (2019) implement an autoencoder to learn patient representations. These representations are then clustered either in a hierarchical manner or via K-means. Other methods involve learning prototypes. For example, Li et al. (2020a) propose to do so via the ProtoNCE loss. Our approach, unlike theirs, exploits readily-available patient-attribute data and is not dependent upon the K-means algorithm. Moreover, Van Looveren & Klaise (2019) learn to perturb prototypes to derive interpretable counterfactual instances. Most similar to our work is that of Kiyasseh et al. (2020), which learns patient-specific representations while pre-training via contrastive learning, and Garnot & Landrieu (2020) where the distance between class prototypes learned in an end-to-end manner is regularized based on a predefined tree hierarchy. In contrast, we learn attribute-specific prototypes via supervised contrastive learning and capture their semantic relationships via distance-based regularization.\nClinical information retrieval whereby instances similar to a query are retrieved was first introduced in 1990 (Hersh & Greenes, 1990). Most research in this domain revolves around text (Gurulingappa et al., 2016; Wang et al., 2017; Rhine, 2017; Wallace et al., 2016). For example, D’Avolio et al. (2010) map text to SNOMED concepts to retrieve clinical documents. More recently, IR has been performed with biomedical images, and is referred to as content-based image retrieval (Saritha et al., 2019; Chittajallu et al., 2019). Others have extended this concept to EHR data (Goodwin & Harabagiu, 2018; Wang et al., 2019; Chamberlin et al., 2019). For example, Chamberlin et al. (2019) implement rudimentary IR methods such as divergence from randomness on the UPMC and MIMIC III (Johnson et al., 2016) datasets with the aim of discovering patient cohorts. In contrast to such methods, we implement a deep-learning based clinical information retrieval system for physiological signals." }, { "heading": "3 METHODS", "text": "" }, { "heading": "3.1 ATTRIBUTE-SPECIFIC CLINICAL PROTOTYPES", "text": "Information retrieval systems typically necessitate a query of some sort that is exploited to search through a large database and retrieve instances that satisfy criteria outlined by the initial query. Such a query can take on a multitude of forms (e.g., text, image, audio, etc.) depending on the modality of instances in the database. As we are primarily interested in large databases comprising physiological signals, we design a query that is based on such signals. Moreover, the type and specificity of instances that are retrieved highly depend on the criteria outlined by a query. In our context, these criteria comprise patient attribute information such as disease class, sex, and age. As a result, our query should be capable of retrieving physiological instances that are associated with the aforementioned patient attributes. To achieve this, we propose to learn a set of query embeddings, P , analogous to word embeddings in natural language processing, where |P | = M , representing each of the M possible patient-attribute combinations within a dataset. 
Each embedding, pA ∈ P, is an efficient descriptor of a set of attributes A = {α1, α2, α3}, where α1 = disease class, α2 = sex, and α3 = age. Given this attribute-specific interpretation, we refer to such embeddings as attribute-specific clinical prototypes or CPs. We propose to learn such CPs, in an end-to-end manner, via contrastive learning, as explained next." }, { "heading": "3.2 LEARNING ATTRIBUTE-SPECIFIC CLINICAL PROTOTYPES", "text": "We assume the presence of a learner, fθ : x ∈ R^D → v ∈ R^E, parameterized by θ, that maps a D-dimensional input, x, to an E-dimensional representation, v. In information retrieval systems, a reliable query should accurately retrieve instances that satisfy certain criteria. When dealing with representations, such reliability can be designed for by ensuring that the query embedding is in a similar subspace, and thus in proximity, to the representations of instances that satisfy said criteria. To achieve this proximity, we can attract query embeddings to representations in the database which share patient attribute information and repel them from those which do not. More specifically, we attract each representation of an instance associated with a set of attributes, A, to the clinical prototype, pA, that describes that same set. This can be achieved in two ways.\nHard assignment. We encourage each representation, vi = fθ(xi), of an instance, xi, associated with a particular set of attributes, A, to be similar to the single clinical prototype, pA, that describes the exact same set of attributes, and dissimilar to the remaining clinical prototypes, pj, where j ≠ A. We quantify this similarity, s(vi, pA), using the cosine similarity scaled by a temperature parameter, τs. By mapping each representation to a single CP, we refer to this as a hard assignment. To encourage this behaviour, one would optimize the following objective function for a mini-batch of size B:\n$$\mathcal{L}_{contra\text{-}hard} = -\frac{1}{B}\sum_{i=1}^{B}\log\left(\frac{e^{s(v_i, p_A)}}{\sum_{j}^{M} e^{s(v_i, p_j)}}\right) \quad (1) \qquad s(v_i, p_A) = \frac{v_i \cdot p_A}{\|v_i\|\|p_A\|} \cdot \frac{1}{\tau_s} \quad (2)$$\nThe hard assignment approach ties each representation, vi, to exactly one clinical prototype, pA. This many-to-one mapping implies that CPs without a perfect attribute match (i.e., a near miss) do not leverage potentially useful information from representations that exhibit some, albeit imperfect, overlap in patient attributes.
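A minimal PyTorch sketch of the hard-assignment objective in Eqs. (1)-(2) follows (our own rendering, not the official DROPS code). Since Eq. (1) is a softmax cross-entropy over the similarities to all M prototypes, it reduces to a single `cross_entropy` call.

```python
import torch
import torch.nn.functional as F

def hard_assignment_loss(v, prototypes, proto_idx, tau_s=0.1):
    """Sketch of Eqs. (1)-(2): cross-entropy over temperature-scaled cosine
    similarities between each representation and all M clinical prototypes.
    v: (B, E) representations; prototypes: (M, E); proto_idx: (B,) target CP."""
    v = F.normalize(v, dim=1)
    p = F.normalize(prototypes, dim=1)
    logits = (v @ p.t()) / tau_s          # s(v_i, p_j) of Eq. (2)
    return F.cross_entropy(logits, proto_idx)
```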
Soft assignment. To overcome the limitations of a hard assignment, we propose a soft assignment approach whereby each representation, vi, is attracted to a subset of disease-class-specific clinical prototypes, L ⊂ P. We opted for this class-specific setup to avoid erroneously attracting representations to CPs from a different class, as such attraction would hinder retrievals based on disease class.\nRecall, though, that the CPs in L still describe different sets of non-class attributes. By attracting each representation to these CPs uniformly, the CPs within a class will collapse to a single point and thus be unable to distinguish between various patient attributes. We support this claim in Fig. 1a, where we illustrate the t-SNE projection of the CPs for five different disease classes. To avoid such collapse, we modulate the attraction between each representation, vi, associated with the attribute set, Ai, and each clinical prototype, pk, associated with the attribute set, Ak. More specifically, we introduce a weight, ωik, that is dependent on the discrepancy between the respective attribute sets, d(Ai, Ak). This formulation, characterized by greater attraction between CPs and representations with more similar attribute sets, results in the CPs shown in Fig. 1b. Formally, we optimize the following objective function.\n$$\mathcal{L}_{contra\text{-}soft} = -\frac{1}{B}\sum_{i=1}^{B}\left[\sum_{k=1}^{M} \omega_{ik} \log\left(\frac{e^{s(v_i, p_k)}}{\sum_{j}^{M} e^{s(v_i, p_j)}}\right)\right] \quad (3)$$\n$$\omega_{ik} = \begin{cases} \frac{e^{d(A_i, A_k)}}{\sum_{j}^{|L|} e^{d(A_i, A_j)}} & \text{if disease class}_i = \text{disease class}_k \\ 0 & \text{otherwise} \end{cases} \quad (4)$$\n$$d(A_i, A_k) = \left[\delta(\text{disease class}_i = \text{disease class}_k) + \delta(\text{sex}_i = \text{sex}_k) + \delta(\text{age}_i = \text{age}_k)\right] \cdot \frac{1}{\tau_\omega} \quad (5)$$\nwhere δ is the Kronecker delta function that evaluates to one if the argument is true and zero otherwise, and τω is a temperature parameter that determines how soft the assignment is. For example, as τω → 0, the loss term approaches the hard assignment formulation.
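Similarly, the soft-assignment objective of Eqs. (3)-(5) can be sketched as follows (ours, not the official code; `attrs` and `proto_attrs` are assumed to be integer-coded (class, sex, age-group) triples). Masking the non-class-matched prototypes with -inf before the softmax realizes the zero branch of Eq. (4).

```python
import torch
import torch.nn.functional as F

def soft_assignment_loss(v, prototypes, attrs, proto_attrs, tau_s=0.1, tau_w=1.0):
    """Sketch of Eqs. (3)-(5).
    v: (B, E) representations; prototypes: (M, E) clinical prototypes;
    attrs: (B, 3) and proto_attrs: (M, 3) integer-coded (class, sex, age group)."""
    v = F.normalize(v, dim=1)
    p = F.normalize(prototypes, dim=1)
    log_prob = F.log_softmax((v @ p.t()) / tau_s, dim=1)   # log-softmax of s(v_i, p_k)
    # d(A_i, A_k): number of matching attributes scaled by 1/tau_w (Eq. 5)
    d = (attrs[:, None, :] == proto_attrs[None, :, :]).sum(-1).float() / tau_w
    same_class = attrs[:, None, 0] == proto_attrs[None, :, 0]
    w = torch.softmax(d.masked_fill(~same_class, float("-inf")), dim=1)  # Eq. (4)
    return -(w * log_prob).sum(dim=1).mean()               # Eq. (3)
```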
}, { "heading": "4.2 DEPLOYING CLINICAL PROTOTYPES FOR RETRIEVAL", "text": "We aim to exploit clinical prototypes to retrieve physiological instances, that satisfy certain criteria, from a large database. One could argue that a simple text-based query, instead of CPs, in modern databases will allow for the retrieval of appropriate instances. However, this only holds if all instances in the database are labelled. Although we have used, as an exemplar, readily-available attributes such as sex and age, our method extends to other attributes which cannot be trivially searched in a database. In other words, the maximal utility of retrieval via CPs is realized when the troves of instances in the database are unlabelled. To reiterate, the clinical motivation for such retrieval is multi-fold. First, it allows researchers to discover patients that may be suitable for clinical trial recruitment. This, in turn, may allow pharmaceutical companies to trial their drugs on a more diverse patient cohort that is more representative of the population. Second, CPs can be deployed across datasets from different clinical institutions, for example, to retrieve patient sub-cohorts that satisfy a set of attributes. These patient sub-cohorts can now be further analysed to identify sub-cohort variability, which in turn can guide future clinical treatment. With that in mind, to reliably and quantitatively evaluate the retrieval capabilities of the CPs, we must have access to the ground-truth attributes associated with the instances in the held-out set.\nTo perform retrieval, we treat the CPs as the query set and retrieve the closest instances (↓ Euclidean distance) from a held-out dataset. In evaluating this task of retrieval, we deploy a commonly used metric known as Precision atK (P@K). This metric quantifies whether at least one of theK retrieved instances matches the query. In our context, however, a match can occur according to any of the three attributes: class, sex, and age. Therefore, we extract a single attribute, αm, from the set of attributes, A, for each of the M clinical prototypes, and the same attribute, αi, for each of the K retrieved instances, and define the following attribute-specific P@K.\nP@K(α) = 1\nM M∑ m=1 δ ( K∑ i=1 δ(αi = αm) ≥ 1 ) (8)" }, { "heading": "4.3 DEPLOYING CLINICAL PROTOTYPES FOR CLUSTERING", "text": "We also exploit clinical prototypes to cluster instances. In this setting, and in contrast to the retrieval setting, we treat each representation of an instance as a query and search across the CPs to assign each representation a set of attributes. From this perspective, CPs can be thought of as multiple centroids of a cluster which are used to label representations. To evaluate these clusters, we first calculate the pairwise Euclidean distance between each representation and CP before assigning each representation the specific attribute, αpredi , of the CP to which it is closest. Given the ground-truth attribute for each representation, αtruei , we can calculate the accuracy, Acc(α), of the assignments. We can also quantify the agreement between the attribute assignments of the ground-truth, ~αtrue = {α1true, α2true, . . . , αNtrue}, and those obtained via clustering, ~αpred = {α1pred, α2pred, . . . 
{ "heading": "4.3 DEPLOYING CLINICAL PROTOTYPES FOR CLUSTERING", "text": "We also exploit clinical prototypes to cluster instances. In this setting, and in contrast to the retrieval setting, we treat each representation of an instance as a query and search across the CPs to assign each representation a set of attributes. From this perspective, CPs can be thought of as multiple centroids of a cluster which are used to label representations. To evaluate these clusters, we first calculate the pairwise Euclidean distance between each representation and CP before assigning each representation the specific attribute, α_i^pred, of the CP to which it is closest. Given the ground-truth attribute for each representation, α_i^true, we can calculate the accuracy, Acc(α), of the assignments. We can also quantify the agreement between the attribute assignments of the ground-truth, α⃗_true = {α^1_true, α^2_true, ..., α^N_true}, and those obtained via clustering, α⃗_pred = {α^1_pred, α^2_pred, ..., α^N_pred}, by calculating the adjusted mutual information AMI(α) ∈ [0, 1].\n$$Acc(\alpha) = \frac{1}{N}\sum_{i=1}^{N}\delta(\alpha_i^{pred} = \alpha_i^{true}) \quad (9)$$\n$$AMI(\alpha) = \frac{MI(\vec{\alpha}_{true}, \vec{\alpha}_{pred}) - \mathbb{E}\left(MI(\vec{\alpha}_{true}, \vec{\alpha}_{pred})\right)}{\text{mean}\left(H(\vec{\alpha}_{true}), H(\vec{\alpha}_{pred})\right) - \mathbb{E}\left(MI(\vec{\alpha}_{true}, \vec{\alpha}_{pred})\right)} \quad (10)$$\nwhere MI(α⃗_true, α⃗_pred) represents the mutual information between the ground-truth and the predicted set of attributes, and H(α⃗) represents the entropy of the set of attributes." },
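This clustering protocol can be sketched in a few lines (our own illustration; scikit-learn's `adjusted_mutual_info_score` implements Eq. (10), and the function and variable names are ours):

```python
import numpy as np
from sklearn.metrics import adjusted_mutual_info_score

def cluster_by_prototypes(db_emb, db_attr, proto_emb, proto_attr, attr_col):
    """Section 4.3: assign each representation the attribute of its nearest CP,
    then score the assignment with accuracy (Eq. 9) and AMI (Eq. 10)."""
    # (N, M) pairwise Euclidean distances between representations and CPs
    dist = np.linalg.norm(db_emb[:, None, :] - proto_emb[None, :, :], axis=-1)
    pred = proto_attr[dist.argmin(1), attr_col]   # attribute of the closest CP
    true = db_attr[:, attr_col]
    acc = (pred == true).mean()
    ami = adjusted_mutual_info_score(true, pred)
    return acc, ami
```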
{ "heading": "4.4 BASELINES", "text": "Where appropriate, we compare our method to the following approaches:\n• K-Means implements Expectation Maximization to arrive at class-specific centroids. Instances are then assigned to the class of their nearest centroid. We perform K-Means a) on the input instances (K-Means Raw) and b) on representations of instances learned using Eq. 7 (K-Means Combined).\n• Mean Representations takes the mean of the representations, learned via Eq. 7, that belong to a particular attribute combination and treats such mean representations as the centroids of the clusters.\n• DeepCluster (DC) (Caron et al., 2018) is an iterative method that performs K-Means on representations, pseudo-labels instances according to their assigned cluster, and then exploits such labels for supervised training. The final set of instance labels is taken from the epoch with the lowest validation loss.\n• Invariant Information Clustering (IIC) (Ji et al., 2019) maximizes the mutual information between the posterior class probabilities assigned to an instance and its perturbed counterpart. We adapt IIC to the time-series domain by perturbing instances with additive Gaussian noise, ε ∼ N(0, σ). We also incorporate auxiliary over-clustering as it was shown to significantly improve performance. The final set of instance labels is chosen by taking the argmax of the output probabilities.\n• SeLA (Asano et al., 2020) pseudo-labels instances by implementing the Sinkhorn-Knopp algorithm to allow for supervised training. We pseudo-label instances after each epoch of training and use prior class information to determine the number of clusters, which should boost performance.\n• Deep Transfer Cluster (DTC) (Han et al., 2019) calculates the distance between representations and cluster prototypes to generate a probability distribution over classes. The KL divergence between this distribution and a target distribution is minimized." }, { "heading": "4.5 HYPERPARAMETERS", "text": "We chose the temperature parameters τs = 0.1, as per (Kiyasseh et al., 2020), and τω = 1. The regression regularization term (Eq. 6) requires the specification of β. We chose β = 0.2 due to our choice of distance metric (squared Euclidean distance) and the number of attribute groups. If β were too small in magnitude, the intra-cluster separability of clinical prototypes would be too small. If β were too large, then the clinical prototypes from different classes would begin to overlap with one another and class separability would diminish. For both datasets, sex ∈ {M, F}, and age is converted to quartiles. For the Chapman and PTB-XL datasets, |class| = 4 and 5, respectively, and thus M = |class| × |sex| × |age| = 32 and 40, respectively. We use a network with 1D convolutional modules to generate representations. Further network and implementation details can be found in Appendix B." }, { "heading": "5 EXPERIMENTAL RESULTS", "text": "" }, { "heading": "5.1 DEPLOYING CLINICAL PROTOTYPES FOR RETRIEVAL", "text": "We exploit CPs to retrieve attribute-specific physiological signals from a database. In this section, we task the CPs with retrieving the closest K = [1, 5, 10] instances in a held-out set and, in Table 1, illustrate the resulting P@K. We also stratify the results according to the number of matched attributes between the query (CPs) and retrieved instances. For example, # of matched attributes = 3 implies that a perfect match has occurred for all attributes: class, sex, and age.\nWe find that the Mean Representation approach is most likely to retrieve physiological instances that satisfy the desired patient attribute criteria. For example, on Chapman, Mean Representation achieves a P@(1) = 91.9 whereas DTC and DROPS Combined achieve P@(1) = 71.9 and 76.3, respectively, when evaluated based on at least one attribute match. This implies that 91.9% of the representations used as the query set were able to retrieve a physiological instance with at least one attribute match. In retrieval settings, however, we are typically interested in perfect attribute matches (i.e., # of matched attributes = 3). In our context, the superior performance of Mean Representation extends as the # of matched attributes goes from 1 to 3. For example, on Chapman, Mean Representation achieves a P@(1) = 10.6 whereas DTC and DROPS Combined achieve P@(1) = 3.1 and 2.5, respectively. Recall that with Mean Representation, we are exploiting the average of the representations, learned via our method, that belong to a particular attribute combination as the query set. These mean representations can be visualized in Fig. 4 (top row). With this in mind, the aforementioned findings illustrate the potential utility of our supervised contrastive learning setup in obtaining rich representations for attribute-specific retrieval. We hypothesize that the poorer performance of DROPS Combined relative to Mean Representation is due to the more extreme embedding characterized by the former's clinical prototypes. This can be seen in Fig. 4 (top row). Although such clinical prototypes, which are farther from the class decision boundary, may be beneficial for retrieving instances from the same class, they will perform worse on the other attributes.\nTo qualitatively evaluate clinical prototypes' retrieval capabilities, we choose CPs at random to form the query set, calculate their Euclidean distance to the representations of instances in a held-out dataset, and retrieve the 5 which are closest. In Fig. 3, we visualize the ECG signals that correspond to these closest representations and colour their borders green if their class label matches that of the query, and red otherwise. We can see that at least 60% of the retrieved instances match the class label." }, { "heading": "5.2 DEPLOYING CLINICAL PROTOTYPES FOR CLUSTERING", "text": "So far, we have illustrated the utility of exploiting clinical prototypes for retrieval purposes. In this section, we evaluate the utility of CPs for clustering. In Table 2, we illustrate the clustering performance of DROPS relative to that of state-of-the-art clustering methods.\nThere are several takeaways from Table 2. First, we find that our supervised contrastive learning paradigm generates rich representations. This can be seen by the superiority of K-Means Combined relative to K-Means Raw, where Acc(class) = 81.7 and 29.2, respectively. Note that although the former approach learns clinical prototypes, it does not yet exploit them directly for clustering. This, in turn, allows for the reliable evaluation of the representations alone.
We provide further evidence in support of the richness of these representations in Sec. 5.3. When we directly leverage clinical prototypes for clustering, as is done with DROPS, we find that this further improves clustering performance. For example, DROPS Combined achieves an Acc(class) = 90.3 and 76.0 on the Chapman and PTB-XL datasets, respectively, which outperforms not only its K-Means counterpart but also recent state-of-the-art methods such as DTC, DeepCluster, and SeLA.\nTo further illustrate the utility of clinical prototypes, we compare DROPS Combined to the Mean Representation associated with each of the attribute combinations. We find that the former improves Acc(class) by ≈ 10% on both Chapman and PTB-XL. We hypothesize that this discrepancy is partially due to the asymmetric definition of the contrastive objective function (Eq. 3), in which negative samples are only considered from the set of clinical prototypes and not from the representations. Nonetheless, the aforementioned findings underscore the vital role clinical prototypes play in clustering. We also note that our method can be trivially incorporated into DeepCluster, for example, to generate pseudo-labels. Lastly, in Table 2b, we show that DROPS is flexible enough to simultaneously cluster according to multiple attributes, a feature that does not trivially extend to other methods. Such flexibility can provide researchers with improved control over sensitive attributes." }, { "heading": "5.3 CAPTURING SEMANTIC RELATIONSHIPS BETWEEN CLINICAL PROTOTYPES", "text": "Up until now, we have shown that clinical prototypes can be successfully deployed for retrieval and clustering. We encouraged such prototypes to exhibit inter- and intra-cluster separability, with the aim of capturing semantic relationships between them. In this section, we look to confirm whether such semantic relationships are indeed captured. In Fig. 4 (top row), we illustrate the t-SNE projection of representations in the training set alongside the average class-specific CP. In Fig. 4 (bottom row), we illustrate the t-SNE projection of the CPs colour-coded and shaded according to sex and age groups, respectively. For clarity, we follow the same coding scheme presented in Fig. 2 (left).\nWe show that our training paradigm leads to clinical prototypes that satisfy the semantic relationships between the class, sex, and age attributes. This can be seen by the high separability between, and ordering of, the sex and age attributes in Fig. 4 (bottom row). This observation holds regardless of the class attribute or the dataset that is experimented with. We further note the high degree of similarity between these empirically-derived projections and the expected ones (see Fig. 2 left). We claim that the adoption of this formation by the CPs is driven primarily by our weighting mechanism (Eq. 3) and distance-based regularization term (Eq. 6)." }, { "heading": "6 DISCUSSION AND FUTURE WORK", "text": "In this paper, we propose a supervised contrastive learning framework, DROPS, for the retrieval and clustering of physiological signals. In the process, we learn representations, entitled clinical prototypes, that efficiently describe the state of a patient associated with a set of attributes, e.g., disease, sex, and age. We show that representations learned via DROPS can be exploited to retrieve instances that reflect desired criteria.
Moreover, we show that clinical prototypes, in addition to capturing semantic relationships between patient attributes, can simultaneously cluster physiological instances based on multiple attributes. We now elucidate several avenues worth exploring.\nLearning disentangled clinical prototypes. CPs are attribute-specific representations. Disentangling these representations into their constituent attributes might facilitate the interpretability of CPs and their application to fair ML. Preliminary exploratory work is performed in Appendix C.\nIncorporating continuous attributes. CPs are currently limited to discrete attributes. Scaling CPs to a continuous set and a large number of attributes will allow for more fine-grained retrieval and further increase their utility.
null
null
SP:8351823091c5320244dec89c71d3319a103cd6a4
[ "The paper presents a new way algorithm to compute the straight-through variant of the Gumbel Softmax gradient estimator. The method does not change the estimator's bias, but provably reduces its variance (with a small overhead, using Rao-blackwellization). The new estimator shows good performance on different tasks, and appears to lead to more efficient optimization for lower temperatures (lower bias)." ]
Gradient estimation in models with discrete latent variables is a challenging problem, because the simplest unbiased estimators tend to have high variance. To counteract this, modern estimators either introduce bias, rely on multiple function evaluations, or use learned, input-dependent baselines. Thus, there is a need for estimators that require minimal tuning, are computationally cheap, and have low mean squared error. In this paper, we show that the variance of the straight-through variant of the popular Gumbel-Softmax estimator can be reduced through Rao-Blackwellization without increasing the number of function evaluations. This provably reduces the mean squared error. We empirically demonstrate that this leads to variance reduction, faster convergence, and generally improved performance in two unsupervised latent variable models.
[ { "affiliations": [], "name": "Max B. Paulus" }, { "affiliations": [], "name": "Chris J. Maddison" } ]
[ { "authors": [ "Yoshua Bengio", "Nicholas Léonard", "Aaron Courville" ], "title": "Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation", "venue": "arXiv e-prints, art", "year": 2013 }, { "authors": [ "James Bergstra", "Yoshua Bengio" ], "title": "Random search for hyper-parameter optimization", "venue": "Journal of machine learning research,", "year": 2012 }, { "authors": [ "David Blackwell" ], "title": "Conditional expectation and unbiased sequential estimation", "venue": "Ann. Math. Statist., 18(1):105–110,", "year": 1947 }, { "authors": [ "Yuri Burda", "Roger Grosse", "Ruslan Salakhutdinov" ], "title": "Importance weighted autoencoders", "venue": "arXiv preprint arXiv:1509.00519,", "year": 2015 }, { "authors": [ "Jihun Choi", "Kang Min Yoo", "Sang-goo Lee" ], "title": "Unsupervised learning of task-specific tree structures with tree-lstms", "venue": "In CoRR,", "year": 2017 }, { "authors": [ "Junyoung Chung", "Sungjin Ahn", "Yoshua Bengio" ], "title": "Hierarchical multiscale recurrent neural networks", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Andrew Davis", "Itamar Arel" ], "title": "Low-rank approximations for conditional feedforward computation in deep neural networks", "venue": "arXiv preprint arXiv:1312.4461,", "year": 2013 }, { "authors": [ "Andreea Gane", "Tamir Hazan", "Tommi Jaakkola" ], "title": "Learning with maximum a-posteriori perturbation models", "venue": "In Artificial Intelligence and Statistics,", "year": 2014 }, { "authors": [ "Peter W Glynn" ], "title": "Likelihood ratio gradient estimation for stochastic systems", "venue": "Communications of the ACM,", "year": 1990 }, { "authors": [ "Will Grathwohl", "Dami Choi", "Yuhuai Wu", "Geoffrey Roeder", "David Duvenaud" ], "title": "Backpropagation through the void: Optimizing control variates for black-box gradient estimation", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Shixiang Gu", "Sergey Levine", "Ilya Sutskever", "Andriy Mnih" ], "title": "Muprop: Unbiased backpropagation for stochastic neural networks", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Serhii Havrylov", "Germán Kruszewski", "Armand Joulin" ], "title": "Cooperative learning of disjoint syntax and semantics", "venue": "arXiv preprint arXiv:1902.09393,", "year": 2019 }, { "authors": [ "Eric Jang", "Shixiang Gu", "Ben Poole" ], "title": "Categorical Reparametrization with Gumble-Softmax", "venue": "In International Conference on Learning Representations (ICLR", "year": 2017 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-Encoding Variational Bayes", "venue": "arXiv e-prints, art", "year": 2013 }, { "authors": [ "Wouter Kool", "Herke van Hoof", "Max Welling" ], "title": "Estimating gradients for discrete random variables by sampling without replacement", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Dirk P Kroese", "Thomas Taimre", "Zdravko I Botev" ], "title": "Handbook of monte carlo methods, volume 706", "venue": null, "year": 2013 }, { "authors": [ "Yann LeCun", "Corinna Cortes" ], "title": "MNIST handwritten digit database", "venue": "URL http://yann. lecun. 
com/exdb/mnist,", "year": 2010 }, { "authors": [ "Runjing Liu", "Jeffrey Regier", "Nilesh Tripuraneni", "Michael I Jordan", "Jon McAuliffe" ], "title": "Raoblackwellized stochastic gradients for discrete distributions", "venue": "arXiv preprint arXiv:1810.04777,", "year": 2018 }, { "authors": [ "Ryan Lowe", "Yi Wu", "Aviv Tamar", "Jean Harb", "Pieter Abbeel", "Igor Mordatch" ], "title": "Multi-agent actorcritic for mixed cooperative-competitive environments", "venue": "CoRR, abs/1706.02275,", "year": 2017 }, { "authors": [ "Chris J. Maddison" ], "title": "A Poisson process model for Monte Carlo", "venue": null, "year": 2016 }, { "authors": [ "Chris J. Maddison", "Andriy Mnih", "Yee Whye Teh" ], "title": "The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "André F.T. Martins", "Tsvetomila Mihaylova", "Nikita Nangia", "Vlad Niculae" ], "title": "Latent structure models for natural language processing", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts,", "year": 2019 }, { "authors": [ "Andriy Mnih", "Karol Gregor" ], "title": "Neural variational inference and learning in belief networks", "venue": "In Proceedings of the 31st International Conference on International Conference on Machine Learning-Volume", "year": 2014 }, { "authors": [ "Andriy Mnih", "Danilo J Rezende" ], "title": "Variational inference for monte carlo objectives", "venue": "In Proceedings of the 33rd International Conference on International Conference on Machine Learning-Volume", "year": 2016 }, { "authors": [ "Igor Mordatch", "Pieter Abbeel" ], "title": "Emergence of grounded compositional language in multi-agent populations", "venue": "CoRR, abs/1703.04908,", "year": 2017 }, { "authors": [ "Nikita Nangia", "Samuel R. Bowman" ], "title": "Listops: A diagnostic dataset for latent tree learning, 2018", "venue": null, "year": 2018 }, { "authors": [ "Max Paulus", "Dami Choi", "Daniel Tarlow", "Andreas Krause", "Chris J Maddison" ], "title": "Gradient estimation with stochastic softmax tricks", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Adeel Pervez", "Taco Cohen", "Efstratios Gavves" ], "title": "Low Bias Low Variance Gradient Estimates for Hierarchical Boolean Stochastic Networks", "venue": "In ICML,", "year": 2020 }, { "authors": [ "C. Radhakrishna Rao" ], "title": "Information and the accuracy attainable in the estimation of statistical parameters", "venue": "Breakthroughs in Statistics: Foundations and Basic Theory,", "year": 1992 }, { "authors": [ "Danilo Jimenez Rezende", "Shakir Mohamed", "Daan Wierstra" ], "title": "Stochastic backpropagation and approximate inference in deep generative models", "venue": "In International Conference on Machine Learning,", "year": 2014 }, { "authors": [ "Ruslan Salakhutdinov", "Iain Murray" ], "title": "On the quantitative analysis of deep belief networks", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2008 }, { "authors": [ "John Schulman", "Nicolas Heess", "Theophane Weber", "Pieter Abbeel" ], "title": "Gradient estimation using stochastic computation graphs", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "George Tucker", "Andriy Mnih", "Chris J. 
Maddison", "Jascha Sohl-Dickstein" ], "title": "REBAR : Low-variance, unbiased gradient estimates for discrete latent variable models", "venue": "In Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Tim Vieira" ], "title": "Estimating means in a finite universe, 2017", "venue": "URL https://timvieira. github. io/blog/post/2017/07/03/estimating-means-in-a-finite-universe,", "year": 2017 }, { "authors": [ "Théophane Weber", "Nicolas Heess", "Lars Buesing", "David Silver" ], "title": "Credit assignment techniques in stochastic computation graphs", "venue": "In The 22nd International Conference on Artificial Intelligence and Statistics,", "year": 2019 }, { "authors": [ "Ronald J Williams" ], "title": "Simple statistical gradient-following algorithms for connectionist reinforcement learning", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Zichao Yang", "Zhiting Hu", "Ruslan Salakhutdinov", "Taylor Berg-Kirkpatrick" ], "title": "Improved variational autoencoders for text modeling using dilated convolutions", "venue": "CoRR, abs/1702.08139,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Models with discrete latent variables are common in machine learning. Discrete random variables provide an effective way to parameterize multi-modal distributions, and some domains naturally have latent discrete structure (e.g, parse trees in NLP). Thus, discrete latent variable models can be found across a diverse set of tasks, including conditional density estimation, generative text modelling (Yang et al., 2017), multi-agent reinforcement learning (Mordatch & Abbeel, 2017; Lowe et al., 2017) or conditional computation (Bengio et al., 2013; Davis & Arel, 2013).\nThe majority of these models are trained to minimize an expected loss using gradient-based optimization, so the problem of gradient estimation for discrete latent variable models has received considerable attention over recent years. Existing estimation techniques can be broadly categorized into two groups, based on whether they require one loss evaluation (Glynn, 1990; Williams, 1992; Bengio et al., 2013; Mnih & Gregor, 2014; Chung et al., 2017; Maddison et al., 2017; Jang et al., 2017; Grathwohl et al., 2018) or multiple loss evaluations (Gu et al., 2016; Mnih & Rezende, 2016; Tucker et al., 2017) per estimate. These estimators reduce variance by introducing bias or increasing the computational cost with the overall goal being to reduce the total mean squared error.\nBecause loss evaluations are costly in the modern deep learning age, single evaluation estimators are particularly desirable. This family of estimators can be further categorized into those that relax the discrete randomness in the forward pass of the model (Maddison et al., 2017; Jang et al., 2017; Paulus et al., 2020) and those that leave the loss computation unmodified (Glynn, 1990; Williams, 1992; Bengio et al., 2013; Chung et al., 2017; Mnih & Gregor, 2014; Grathwohl et al., 2018). The ones that do not modify the loss computation are preferred, because they avoid the accumulation of errors in the forward direction and they allow the model to exploit the sparsity of discrete computation. Thus, there is a particular need for single evaluation estimators that do not modify the loss computation.\n∗Work done partly at the Institute for Advanced Study, Princeton, NJ.\nIn this paper we introduce such a method. In particular, we propose a Rao-Blackwellization scheme for the straight-through variant of the Gumbel-Softmax estimator (Jang et al., 2017; Maddison et al., 2017), which comes at a minimal cost, and does not increase the number of function evaluations. The straight-through Gumbel-Softmax estimator (ST-GS, Jang et al., 2017) is a lightweight stateof-the-art single-evaluation estimator based on the Gumbel-Max trick (see Maddison et al., 2014, and references therein). The ST-GS uses the argmax over Gumbel random variables to generate a discrete random outcome in the forward pass. It computes derivatives via backpropagation through a tempered softmax of the same Gumbel sample. Our Rao-Blackwellization scheme is based on the key insight that there are many configurations of Gumbels corresponding to the same discrete random outcome and that these can be marginalized over with Monte Carlo estimation. By design, there is no need to re-evaluate the loss and the additional cost of our estimator is linear only in the number of Gumbels needed for a single forward pass. As we show, the Rao-Blackwell theorem implies that our estimator has lower mean squared error than the vanilla ST-GS. 
We demonstrate the effectiveness of our estimator in unsupervised parsing on the ListOps dataset (Nangia & Bowman, 2018) and on a variational autoencoder loss (Kingma & Welling, 2013; Rezende et al., 2014). We find that in practice our estimator trains faster and achieves better test set performance. The magnitude of the improvement depends on several factors, but is particularly pronounced at small batch sizes and low temperatures." }, { "heading": "2 BACKGROUND", "text": "For clarity, we consider the following simplified scenario. Let D ∼ pθ be a discrete random variable D ∈ {0, 1}^n in a one-hot encoding, ∑_i Di = 1, with distribution given by pθ(D) ∝ exp(D^T θ) where θ ∈ R^n. Given a continuously differentiable f : R^{2n} → R, we wish to minimize\nmin_θ E[f(D, θ)], (1)\nwhere the expectation is taken over all of the randomness. In general θ may be computed with some neural network, so our aim is to derive estimators of the total derivative of the expectation with respect to θ for use in stochastic gradient descent. This framework covers most simple discrete latent variable models, including variational autoencoders (Kingma & Welling, 2013; Rezende et al., 2014).\nThe REINFORCE estimator (Glynn, 1990; Williams, 1992) is unbiased (under certain smoothness assumptions) and given by:\n∇REINF := f(D, θ) ∂ log pθ(D)/∂θ + ∂f(D, θ)/∂θ. (2)\nWithout careful use of control variates (Mnih & Gregor, 2014; Tucker et al., 2017; Grathwohl et al., 2018), the REINFORCE estimator tends to have prohibitively high variance. To simplify exposition we assume henceforth that f(D, θ) = f(D) does not depend on θ, because the dependence of f(D, θ) on θ is accounted for in the second term of (2), which is shared by most estimators and generally has low variance.\nOne strategy for reducing the variance is to introduce bias through a relaxation (Jang et al., 2017; Maddison et al., 2017). Define the tempered softmax softmaxτ : R^n → R^n by softmaxτ(x)_i = exp(x_i/τ) / ∑_{j=1}^{n} exp(x_j/τ). The relaxations are based on the observation that the sampling of D can be reparameterized using Gumbel random variables and the zero-temperature limit of the tempered softmax under the coupling:\nD = lim_{τ→0} Sτ; Sτ = softmaxτ(θ + G) (3)\nwhere G is a vector of i.i.d. Gi ∼ Gumbel random variables. At finite temperatures Sτ is known as a Gumbel-Softmax (GS) (Jang et al., 2017) or concrete (Maddison et al., 2017) random variable, and the relaxed loss E[f(Sτ, θ)] admits the following reparameterization gradient estimator for τ > 0 (footnote 1):\n∇GS := [∂f(Sτ)/∂Sτ] [d softmaxτ(θ + G)/dθ]. (4)\nFootnote 1: For a function f(x1, x2), ∂f(z1, z2)/∂x1 is the partial derivative (e.g., a gradient vector) of f in the first variable evaluated at z1, z2. For a function g(θ), dg/dθ is the total derivative of g in θ. For example, d softmaxτ(θ + G)/dθ is the Jacobian of the tempered softmax evaluated at the random variable θ + G.
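To ground (3)-(4): the relaxed sample Sτ can be drawn so that automatic differentiation yields one draw of ∇GS. A minimal sketch under these definitions (our illustration, assuming PyTorch; the toy loss is hypothetical):

```python
import torch

def gumbel_softmax_sample(theta, tau):
    """S_tau = softmax_tau(theta + G) as in (3); grads w.r.t. theta realize (4)."""
    u = torch.rand_like(theta)
    G = -torch.log(-torch.log(u))  # i.i.d. Gumbel(0, 1) noise
    return torch.softmax((theta + G) / tau, dim=-1)

theta = torch.zeros(5, requires_grad=True)
S = gumbel_softmax_sample(theta, tau=0.5)
S.pow(2).sum().backward()  # a toy differentiable f(S_tau); theta.grad is one draw of ∇GS
```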
In this family, the forward computation of f is unchanged, but backpropagation is computed “through” a surrogate. One popular variant takes as a surrogate the tempered probabilities of D, resulting in the slope-annealed straight-through estimator (ST):\n∇ST := ∂f(D)\n∂D\nd softmaxτ (θ)\ndθ . (5)\nFor binary D, a lower bias variant of this estimator (FouST) was proposed in Pervez et al. (2020).\nThe most popular straight-through estimator is known as the straight-through Gumbel-Softmax (STGS, Jang et al., 2017). The surrogate for ST-GS is Sτ , whose Gumbels are coupled to D through (3):\n∇STGS := ∂f(D)\n∂D\nd softmaxτ (θ +G)\ndθ . (6)\nThe straight-through family has the advantage that they tend to be low-variance and f need not be defined on the interior of the simplex (although f must be differentiable at the corners). This family has the disadvantage that they are not known to be unbiased estimators of any gradient. These estimators are quite popular in practice, because they preserve the forward computation of f , which prevents the forward propagation of errors and maintains sparsity (Choi et al., 2017; Chung et al., 2017; Bengio et al., 2013).\nAll of the estimators discussed in this paper can be computed by any of the standard automatic differentiation software packages using a single evaluation of f on a realization of D or some underlying randomness. We present implementation details for these and our Gumbel-Rao estimator in the Appendix, emphasizing the surrogate loss framework (Schulman et al., 2015; Weber et al., 2019) and considering the multiple stochastic layer case not covered by (1)." }, { "heading": "3 GUMBEL-RAO GRADIENT ESTIMATOR", "text": "" }, { "heading": "3.1 RAO-BLACKWELLIZATION OF ST-GUMBEL-SOFTMAX", "text": "We now derive our Rao-Blackwelization scheme for the ST-GS estimator. Our approach is based on the observation that there is a many-to-one relationship between realizations of θ + G and D in the coupling described by (3) and that the variance introduced by θ + G can be marginalized out. The resulting estimator, which we call the Gumbel-Rao (GR) estimator, is guaranteed by the Rao-Blackwell theorem to have lower variance than ST-GS. In the next subsection we turn to the practical question of carrying out this marginalization.\nIn the Gumbel-max trick (3), D is a one-hot indicator of the index of arg maxi {θi +Gi}. Because this argmax operation is non-invertible, there are many configurations of θ +G that correspond to a single D outcome. Consider an alternate factorization of the joint distribution of (θ +G,D): first sample D ∼ pθ, and then θ +G given D. In this view, the Gumbels are auxillary random variables, at which the Jacobian of the tempered softmax is evaluated and which locally increase the variance of the estimator. This local variance can be removed by marginalization. This is the key insight of our GR estimator, which is given by,\n∇GR := ∂f(D) ∂D E [ d softmaxτ (θ +G) dθ ∣∣∣∣D] . (7) It is not too difficult to see that∇GR = E [∇STGS|D]. By the tower rule of expectation, GR has the same expected value as ST-GS and is an instance of a Rao-Blackwell estimator (Blackwell, 1947; Rao, 1992). Thus, it has the same mean as ST-GS, but a lower variance. Taken together, these facts imply that GR enjoys a lower mean squared error (not a lower bias) than ST-GS.\nProposition 1. Let ∇STGS and ∇GR be the estimators defined in (6) and (7). Let ∇θ := dE[f(D)]/dθ be the true gradient that we are trying to estimate. We have\nE [ ‖∇GR −∇θ‖2 ] ≤ E [ ‖∇STGS −∇θ‖2 ] . 
Proof. The proposition follows from Jensen’s inequality and the linearity of expectations, see C.1.\nWhile GR is only guaranteed to reduce the variance of ST-GS, Proposition 1 guarantees that, as a function of τ, the MSE of GR is a pointwise lower bound on that of ST-GS. This means GR can be used for estimation at temperatures where ST-GS has low bias but prohibitively high variance. Thus, GR extends the region of suitable temperatures over which one can tune. This allows a practitioner to explore an expanded set when trading off bias and variance. Empirically, lower temperatures tend to reduce the bias of ST-GS, but we are not aware of any work that studies the convergence of the derivative in the temperature limit. In our experiments, we observe that our estimator facilitates training at lower temperatures to improve in both bias and variance over ST-GS. Thus, our estimator retains the favourable properties of ST-GS (single, unmodified evaluation of f) while improving its performance." }, { "heading": "3.2 MONTE CARLO APPROXIMATION", "text": "The GR estimator requires computing the expected value of the Jacobian of the tempered softmax over the distribution θ + G|D. Unfortunately, an analytical expression for this is only available in the simplest cases (footnote 2). In this section we provide a simple Monte Carlo (MC) estimator with sample size K for E[dSτ/dθ | D], which we call the Gumbel-Rao Monte Carlo Estimator (GR-MCK). This estimator can be computed locally at a cost that only scales like nK (the arity of D times K).\nThe key property exploited by GR-MCK is that θ + G|D can be reparameterized in the following closed form. Given a realization of D such that Di = 1, Z(θ) = ∑_{i=1}^{n} exp(θi), and Ej ∼ exponential i.i.d., we have the following equivalence in distribution (Maddison et al., 2014; Maddison, 2016; Tucker et al., 2017):\nθj + Gj | D =_d −log(Ej) + log Z(θ) if j = i, and −log(Ej/exp(θj) + Ei/Z(θ)) otherwise. (9)\nWith this in mind, we define the GR-MCK estimator:\n∇GRMCK := [∂f(D)/∂D] [(1/K) ∑_{k=1}^{K} d softmaxτ(G^k_θ)/dθ], (10)\nwhere G^k_θ ∼ θ + G|D i.i.d. using the reparameterization (9). For the case K = 1, our estimator reduces to the standard ST-GS estimator. The cost for drawing multiple samples G^k_θ ∼ θ + G|D scales only linearly in the arity of D and is usually negligible in modern applications, where the bulk of computation accrues from the computation of f. Moreover, drawing multiple samples of θ + G|D can easily be parallelized on modern workstations (GPUs, etc.). Our estimator remains a single-evaluation estimator under this scheme, because the loss function f is still only evaluated at D. Finally, as with GR, the GR-MCK is guaranteed to improve in MSE over ST-GS for any K ≥ 1, as confirmed in Proposition 2.\nProposition 2. Let ∇STGS and ∇GRMCK be the estimators defined in (6) and (10). Let ∇θ := dE[f(D)]/dθ be the true gradient that we are trying to estimate. For all K ≥ 1, we have\nE[‖∇GRMCK − ∇θ‖²] ≤ E[‖∇STGS − ∇θ‖²]. (11)\nProof. The proposition follows from Jensen’s inequality and the linearity of expectations, see C.2.\nFootnote 2: For example, in the case of n = 2 (binary) and τ = 1 an analytical expression for the GR estimator is available." }, { "heading": "3.3 VARIANCE REDUCTION IN MINIBATCHES", "text": "The variance of GR-MCK can be reduced by increasing K or by averaging B i.i.d. samples of the GR-MCK estimator. An average of i.i.d. samples ∇^b_GRMCK for b ∈ {1, . . . , B} is a generalization of minibatching by sampling data points with replacement.
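Before turning to minibatch effects, the sampler of Section 3.2 can be sketched end-to-end for a single categorical variable. This is a PyTorch sketch of (9)-(10) under the definitions above, not the authors' released implementation:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def conditional_gumbel_noise(theta, D, K):
    """K draws of G such that argmax(theta + G) matches the one-hot D, via Eq. (9)."""
    E = torch.distributions.Exponential(torch.ones_like(theta)).sample((K,))  # [K, n]
    Ei = (E * D).sum(dim=-1, keepdim=True)         # exponential at the sampled index
    Z = theta.exp().sum(dim=-1, keepdim=True)      # normalizing constant Z(theta)
    top = -torch.log(Ei) + torch.log(Z)            # theta_i + G_i at the argmax
    others = -torch.log(E / theta.exp() + Ei / Z)  # theta_j + G_j elsewhere
    return D * top + (1.0 - D) * others - theta    # subtract theta to isolate G

def gumbel_rao_sample(theta, K=10, tau=0.1):
    """Forward pass returns the hard one-hot D; the backward pass averages K
    tempered-softmax Jacobians as in Eq. (10)."""
    D = F.one_hot(torch.distributions.Categorical(logits=theta).sample(),
                  theta.shape[-1]).float()
    G = conditional_gumbel_noise(theta, D, K)      # gradients blocked through G
    surrogate = F.softmax((theta + G) / tau, dim=-1).mean(dim=0)
    return D + (surrogate - surrogate.detach())    # straight-through gradient
```

Because the conditional noise is sampled without gradients, backpropagating through the returned value differentiates only softmaxτ(θ + G) in θ, so the gradient is the K-sample average in (10) while the loss is still evaluated once, at D.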
In this setting, θ may depend on an additional source of randomness, i.e., θ = h(X) for X ∼ P. In this case, ∇STGS is a random variable that depends not only on D and G, but also on X. In this subsection, we consider the effect of increasing K and B separately. Expectations are taken over all the randomness.\nLet ∇^b_GRMCK be i.i.d. as ∇GRMCK for b ∈ {1, . . . , B} and define the following “minibatched” GR-MCK estimator:\n∇^{1:B}_GRMCK := (1/B) ∑_{b=1}^{B} ∇^b_GRMCK. (12)\nProposition 3 summarizes the scaling of the variance of (12), and is an elementary application of the law of total variance.\nProposition 3. Let ∇STGS, ∇GR and ∇^{1:B}_GRMCK be the estimators defined in (6), (7) and (12). We have\nvar[∇^{1:B}_GRMCK] = E[var[∇STGS | D, X]]/(BK) + var[∇GR]/B (13)\nwhere var is the trace of the covariance matrix.\nProof. The proposition follows directly from the law of total variance, see C.3.\nAs expected, the total variance of ∇^{1:B}_GRMCK decreases like 1/B. The key point of Proposition 3 is that the component of the variance that K reduces can also be reduced by increasing the batch size B. This suggests that the effect of GR-MCK will be most pronounced at small batch sizes. Proposition 3 also indicates that there are diminishing returns to increasing K for a fixed batch size B, such that the variance of GR-MCK will eventually be dominated by the right-hand term of (13). In our experimental section, we explore various K and study the effect on gradient estimation in more detail.\nFinally, we note that the choice of a Monte Carlo scheme to approximate E[dSτ/dθ | D] permits the use of additional well-known variance reduction methods to improve the estimation properties of our gradient estimator. For example, antithetic variates or importance sampling are sensible methods to explore in this setting (Kroese et al., 2013). For low-dimensional discrete random variables, Gaussian quadrature or other numerical methods could be employed. However, we found the simple Monte Carlo scheme described above effective in practice and report results based on this procedure in the experimental section." }, { "heading": "4 RELATED WORK", "text": "The idea of using Rao-Blackwellization to reduce the variance of gradient estimators for discrete latent variable models has been explored in machine learning. For example, Liu et al. (2018) describe a sum-and-sample style estimator that analytically computes part of the expectation to reduce the variance of the gradient estimates. The favorable properties of their estimator are due to the Rao-Blackwell theorem. Kool et al. (2020) describe a gradient estimator based on sampling without replacement. Their estimator emerges naturally as the Rao-Blackwell estimator of the importance-weighted estimator (Vieira, 2017) and the estimator described by Liu et al. (2018). Both of these estimators rely on multiple function evaluations to compute a gradient estimate. In contrast, our work is the first to consider Rao-Blackwellisation in the context of a single-evaluation estimator.\nRecently, Paulus et al. (2020) extend the Gumbel-Softmax gradient estimator to other discrete structures. Our approach can be used to reduce the variance of the corresponding straight-through variants, when an efficient reparameterization of the perturbation conditional on the discrete structure is available (Gane et al., 2014)." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 PROTOCOL", "text": "In this section, we study the effectiveness of our gradient estimator in practice.
In particular, we evaluate its performance with respect to the temperature τ, the number of MC samples K, and the batch size B. We measure the variance reduction and improvements in MSE our estimator achieves in practice, and assess whether its lower-variance gradient estimates accelerate the convergence on the objective or improve final test set performance. Our focus is on single-evaluation gradient estimation and we compare against other non-relaxing estimators (ST, FouST, ST-GS and REINFORCE with a running mean as a baseline) and relaxing estimators (GS), where permissible. Experimental details are given in Appendix D.\nFirst, we consider a toy example which allows us to explore and visualize the variance of our estimator and suggests that it is particularly effective at low temperatures. Next, we evaluate the effect of τ and K in a latent parse tree task which does not permit the use of relaxed gradient estimators. Here, our estimator facilitates training at low temperatures to improve overall performance and is effective even with few MC samples. Finally, we train variational auto-encoders with discrete latent variables (Kingma & Welling, 2013; Rezende et al., 2014). Our estimator yields improvements at small batch sizes and obtains competitive or better performance than the GS estimator at the largest arity." }, { "heading": "5.2 QUADRATIC PROGRAMMING ON THE SIMPLEX", "text": "As a toy problem, we consider the problem of minimizing a quadratic program (p − c)ᵀQ(p − c) over the probability simplex ∆^{n−1} = {p ∈ R^n : p_i ≥ 0, ∑_{i=1}^{n} p_i = 1} for Q ∈ R^{n×n} positive-definite and c ∈ R^n. This problem may be reframed as the following stochastic optimization problem,\nmin_{p∈∆^{n−1}} E[(D − c)ᵀA(p)(D − c)],\nwhere D ∼ Discrete(p), A_ii(p) = [(p_i − c_i)² / (p_i − 2p_i c_i + c_i²)] Q_ii, and A_ij(p) = [(p_i − c_i)(p_j − c_j) / (c_i c_j − p_i c_j − c_i p_j)] Q_ij for i ≠ j.\nWhile solving the above problem is simple using standard methods, it provides a useful testbed to evaluate the effectiveness of our variance reduction scheme. For this purpose, we consider Q_ij = exp(−2|i − j|) and c_i = 1/3 in three dimensions. Our estimator reduces the variance in the gradient estimation over the entire simplex and is particularly effective at low temperatures in this problem. In Figure 1, we compare the log10-trace of the covariance matrix of ST-GS and GR-MC1000 at three different temperatures and display their difference over the entire domain. The improvement is universal. The pattern is not always intuitive (oval bull’s eyes), despite the simplicity of the objective function. Compared with ST-GS, our estimator on this example appears more effective closer to the corners and edges, which is important for learning discrete distributions. At lower temperatures, the difference between the two estimators becomes particularly acute. This suggests that our estimator may train better at lower temperatures and be more responsive to optimizing over the temperature to successfully trade off bias and variance." }, { "heading": "5.3 UNSUPERVISED PARSING ON LISTOPS", "text": "Straight-through estimators feature prominently in NLP (Martins et al., 2019) where latent discrete structure arises naturally, but the use of relaxations is often infeasible. Therefore, we evaluate our estimator in a latent parse tree task on subsets of the ListOps dataset (Nangia & Bowman, 2018). This dataset contains sequences of prefix arithmetic expressions x (e.g., max[ 3 min[ 8 2 ]]) that evaluate to an integer y ∈ {0, 1, . . . , 9}. The arithmetic syntax induces a latent parse tree T.
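For concreteness, the label y of a ListOps-style expression can be reproduced with a tiny recursive evaluator. This sketch is ours and uses a whitespace-tokenized variant of the surface form above (the released dataset has its own tokenization and more operators):

```python
def evaluate(tokens):
    """Evaluate a tokenized ListOps-style prefix expression recursively."""
    ops = {"max[": max, "min[": min}
    tok = tokens.pop(0)
    if tok in ops:
        args = []
        while tokens[0] != "]":
            args.append(evaluate(tokens))
        tokens.pop(0)            # consume the closing bracket
        return ops[tok](args)
    return int(tok)

print(evaluate("max[ 3 min[ 8 2 ] ]".split()))  # max(3, min(8, 2)) -> 3
```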
We consider the model of Choi et al. (2017), which learns a distribution over plausible parse trees of a given sequence to maximize\nE_{qθ(T|x)}[log pφ(y|T, x)].\nBoth the conditional distribution over parse trees qθ(T|x) and the classifier pφ(y|T, x) are parameterized using neural networks. In this model, a parse tree T ∼ qθ(T|x) for a given sentence is sampled bottom-up by successively combining the embeddings of two tokens that appear in a given sequence until a single embedding for the entire sequence remains. This is then used for performing the subsequent classification. Because it is computationally infeasible to marginalize over all trees, Choi et al. (2017) rely on the ST-GS estimator for training. We compare this estimator against our estimator GR-MCK with K ∈ {10, 100, 1000}. We consider temperatures τ ∈ {0.01, 0.1, 1.0} and experiment with shallow and deeper trees by considering sequences of length L up to 10, 25 and 50. All models are trained with stochastic gradient descent with a batch size equal to the maximum L. Because we are interested in a controlled setting to investigate the effect of τ and K, our experimental set-up is significantly simpler than elsewhere (e.g., Havrylov et al., 2019). We give details and highlight important differences in Appendix D.1.\nOur estimator facilitates training at lower temperatures and achieves better final test set accuracy than ST-GS (Table 1). Increasing K improves the performance at low temperatures, where the differences between the estimators are most pronounced. Overall, across all temperatures this results in modest improvements, particularly for shallow trees and small batch sizes. We also find evidence for diminishing returns: the differences between ST-GS and GR-MC10 are larger than those between GR-MC100 and GR-MC1000, suggesting that our estimator is effective even with few MC samples." }, { "heading": "5.4 GENERATIVE MODELING WITH DISCRETE VARIATIONAL AUTO-ENCODERS", "text": "Finally, we train variational auto-encoders (Kingma & Welling, 2013; Rezende et al., 2014) with discrete latent random variables on the MNIST dataset of handwritten digits (LeCun & Cortes, 2010). We used the fixed binarization of Salakhutdinov & Murray (2008) and the standard split into train, validation and test sets. Our objective is to maximize the following variational lower bound on the log-likelihood,\nlog p(x) ≥ E_{qθ(D^i|x)}[log (1/M) ∑_{j=1}^{M} pφ(x, D^j)/qθ(D^j|x)]\nwhere x denotes the input image and D^i ∼ qθ(D^i|x) denotes a vector of discrete latent random variables. This objective takes the form of equation (1). For training, the bound is approximated using only a single sample (M = 1). For final validation and testing, we use 5000 samples (M = 5000). Both the generative model pφ(x, D) and the variational distributions qθ(D|x) were parameterized using neural networks. We experiment with different batch sizes and discrete random variables of arities in {2, 4, 8, 16} as in Maddison et al. (2017). To facilitate comparisons, we do not alter the total dimension of the latent space and train all models for 50,000 iterations using stochastic gradient descent with momentum. Hyperparameters are optimised for each estimator using random search (Bergstra & Bengio, 2012) over twenty independent runs. More details are given in Appendix D.2.\nOur estimator effectively reduces the variance over the entire training trajectory (Figure 2a). Even a small number of MC samples (K = 10) results in sizable variance reductions.
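Gradient-variance curves of this kind can be estimated by stacking independent gradient samples and summing per-coordinate variances, matching the convention after Eq. (13) that var denotes the trace of the covariance matrix. A minimal sketch (ours; grad_samples is a hypothetical stack of per-sample gradients):

```python
import torch

def variance_trace(grad_samples):
    # grad_samples: [S, P] tensor of S independent gradient estimates of P parameters;
    # the trace of their covariance is the sum of per-coordinate sample variances
    return grad_samples.var(dim=0, unbiased=True).sum()
```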
The variance reduction compares favorably to the magnitude of the minibatch variance (Appendix E). Empirically, we find that lower temperatures tend to reduce bias. Our estimator facilitates training at lower temperatures and thus features a lower MSE (Figure 2b). During training our estimator can trade off bias and variance to improve the gradient estimation. Empirically, we observed that on this task, the best models using ST-GS trained at an average temperature of 0.65, while the best models using GR-MC1000 trained at an average temperature of 0.35. This is interesting, because it indicates that our estimator may make the use of temperature annealing during training more effective. We find lower-variance gradient estimates improve convergence of the objective (Figure 2c). GR-MC1000 reaches various performance thresholds on the validation set with reliably fewer iterations than ST or ST-GS. This effect is observable at different arities and persistent over the entire training trajectory.\nFor final test set performance, our estimator outperforms REINFORCE and all other straight-through estimators (Table 2). The improvements over ST-GS extend up to two nats (for batch size 20, 16-ary) at small batch sizes and are more modest at large batch sizes, as expected (also see Appendix E). This confirms that our estimator might be particularly effective in settings where training at high batch sizes is prohibitively expensive. The improvements from increasing the number of MC samples tend to saturate at K = 100 on this task. Further, our results suggest that relaxed estimators may be preferred (if they can be used) for discrete random variables of smaller arity. For example, the GS estimator outperforms all straight-through estimators for binary variables for both batch sizes. For large arities however, we find that straight-through estimators can perform competitively: our estimator GR-MC1000 achieves the best performance overall and outperforms the GS estimator for 16-ary variables." }, { "heading": "6 CONCLUSION", "text": "We introduced the Gumbel-Rao estimator, a new single-evaluation non-relaxing gradient estimator for models with discrete random variables. Our estimator is a Rao-Blackwellization of the state-of-the-art straight-through Gumbel-Softmax estimator. It enjoys lower variance and can be implemented efficiently using Monte Carlo methods. In particular and in contrast to most other work, it does not require additional function evaluations. Empirically, our estimator improved final test set performance in an unsupervised parsing task and on a variational auto-encoder loss. It accelerated convergence on the objective and compared favorably to other standard gradient estimators. Even though the gains were sometimes modest, they were persistent and particularly pronounced when models must be trained at low temperatures or with small batch sizes. We expect that our estimator will be most effective in such settings and that further gains may be uncovered when combining our Rao-Blackwellisation scheme with an annealing schedule for the temperature. Finally, we hope that our work inspires further exploration of the use of Rao-Blackwellisation for gradient estimation." }, { "heading": "ACKNOWLEDGEMENTS", "text": "MBP gratefully acknowledges support from the Max Planck ETH Center for Learning Systems. CJM is grateful for the support of the James D. Wolfensohn Fund at the Institute of Advanced Studies in Princeton, NJ.
Resources used in preparing this research were provided, in part, by the Sustainable Chemical Processes through Catalysis (Suchcat) National Center of Competence in Research (NCCR), the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute." } ]
2021
RAO-BLACKWELLIZING THE STRAIGHT-THROUGH GUMBEL-SOFTMAX GRADIENT ESTIMATOR
SP:ba35b6f0d26b7a9d982a99116326047f496e5a03
[ "The authors propose a learning-based approach for image clustering. In particular, similarly to recent algorithms fro unsupervised representation learning, such a DeepCluster, they propose to iterate between clustering the images in the feature space of the network and updating the network weights to respect the clusters. Two main differences with respect to DeepCluster-like algorithms is that they target the task of clustering itself, and do not evaluate the generalizability of the leaned representations for other tasks, and that they propose to use cluster ensembles to improve training robustness. In particular, they generate 2 clusterings of the images at every iteration by applying different sets of data augmentations and feature transformations to the input images. The objective is then not only to respect these clusterings but also to enforce consistency between them over time, thus improving representation invariance to irrelevant image details. In an experimental evaluation on a set of standard image clustering benchmarks they outperform prior work in most scenarios. " ]
Recent advances in deep clustering and unsupervised representation learning are based on the idea that different views of an input image (generated through data augmentation techniques) must either be closer in the representation space, or have a similar cluster assignment. In this work, we leverage this idea together with ensemble learning to perform clustering and representation learning. Ensemble learning is widely used in the supervised learning setting but has not yet been practical in deep clustering. Previous works on ensemble learning for clustering neither work on the feature space nor learn features. We propose a novel ensemble learning algorithm dubbed Consensus Clustering with Unsupervised Representation Learning (ConCURL) which learns representations by creating a consensus on multiple clustering outputs. Specifically, we generate a cluster ensemble using random transformations on the embedding space, and define a consensus loss function that measures the disagreement among the constituents of the ensemble. Thus, diverse ensembles minimize this loss function in a synergistic way, which leads to better representations that work with all cluster ensemble constituents. Our proposed method ConCURL is easy to implement and integrate into any representation learning or deep clustering block. ConCURL outperforms all state-of-the-art methods on various computer vision datasets. Specifically, we beat the closest state-of-the-art method by 5.9 percent on the ImageNet-10 dataset, and by 18 percent on the ImageNet-Dogs dataset in terms of clustering accuracy. We further shed some light on the under-studied overfitting issue in clustering and show that our method does not overfit as much as existing methods, and thereby generalizes better to new data samples.
[]
[ { "authors": [ "Yuki Markus Asano", "Christian Rupprecht", "Andrea Vedaldi" ], "title": "Self-labelling via simultaneous clustering and representation learning", "venue": "arXiv preprint arXiv:1911.05371,", "year": 2019 }, { "authors": [ "Ella Bingham", "Heikki Mannila" ], "title": "Random projection in dimensionality reduction: applications to image and text data", "venue": "In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, pp", "year": 2001 }, { "authors": [ "Avrim Blum", "Tom Mitchell" ], "title": "Combining labeled and unlabeled data with co-training", "venue": "In Proceedings of the eleventh annual conference on Computational learning theory,", "year": 1998 }, { "authors": [ "Leo Breiman" ], "title": "Stacked regressions", "venue": "Machine learning,", "year": 1996 }, { "authors": [ "Sébastien Bubeck", "Ulrike Von Luxburg" ], "title": "Overfitting of clustering and how to avoid", "venue": null, "year": 2007 }, { "authors": [ "Mathilde Caron", "Piotr Bojanowski", "Armand Joulin", "Matthijs Douze" ], "title": "Deep clustering for unsupervised learning of visual features", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Mathilde Caron", "Ishan Misra", "Julien Mairal", "Priya Goyal", "Piotr Bojanowski", "Armand Joulin" ], "title": "Unsupervised learning of visual features by contrasting cluster assignments", "venue": "arXiv preprint arXiv:2006.09882,", "year": 2020 }, { "authors": [ "Jianlong Chang", "Lingfeng Wang", "Gaofeng Meng", "Shiming Xiang", "Chunhong Pan" ], "title": "Deep adaptive image clustering", "venue": "In The IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "Adam Coates", "Andrew Ng", "Honglak Lee" ], "title": "An analysis of single-layer networks in unsupervised feature learning", "venue": "In Proceedings of the fourteenth international conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Marco Cuturi" ], "title": "Sinkhorn distances: Lightspeed computation of optimal transport", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "IEEE conference on computer vision and pattern recognition,", "year": 2009 }, { "authors": [ "Aniket Anand Deshmukh" ], "title": "Kernel Methods for Learning with Limited Labeled Data", "venue": "PhD thesis,", "year": 2019 }, { "authors": [ "Thomas G Dietterich" ], "title": "Ensemble methods in machine learning", "venue": "In International workshop on multiple classifier systems,", "year": 2000 }, { "authors": [ "Carlotta Domeniconi", "Muna Al-Razgan" ], "title": "Weighted cluster ensembles: Methods and analysis", "venue": "ACM Transactions on Knowledge Discovery from Data (TKDD),", "year": 2009 }, { "authors": [ "Carlotta Domeniconi", "Dimitris Papadopoulos", "Dimitrios Gunopulos", "Sheng Ma" ], "title": "Subspace clustering of high dimensional data", "venue": "In Proceedings of the 2004 SIAM international conference on data mining,", "year": 2004 }, { "authors": [ "Martin Ester", "Hans-Peter Kriegel", "Jörg Sander", "Xiaowei 
Xu" ], "title": "A density-based algorithm for discovering clusters in large spatial databases with noise", "venue": "In Kdd,", "year": 1996 }, { "authors": [ "Li Fei-Fei", "Rob Fergus", "Pietro Perona" ], "title": "Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories", "venue": "In 2004 conference on computer vision and pattern recognition workshop,", "year": 2004 }, { "authors": [ "Xiaoli Z Fern", "Carla E Brodley" ], "title": "Random projection for high dimensional data clustering: A cluster ensemble approach", "venue": "In Proceedings of the 20th international conference on machine learning", "year": 2003 }, { "authors": [ "Ana LN Fred", "Anil K Jain" ], "title": "Combining multiple clusterings using evidence accumulation", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2005 }, { "authors": [ "Yoav Freund", "Robert E Schapire" ], "title": "Experiments with a new boosting algorithm", "venue": "Citeseer,", "year": 1996 }, { "authors": [ "Brendan J Frey", "Delbert Dueck" ], "title": "Clustering by passing messages between data", "venue": "points. science,", "year": 2007 }, { "authors": [ "Salvador Garcia", "Francisco Herrera" ], "title": "An extension on“statistical comparisons of classifiers over multiple data sets”for all pairwise comparisons", "venue": "Journal of machine learning research,", "year": 2008 }, { "authors": [ "Joydeep Ghosh", "Ayan Acharya" ], "title": "Cluster ensembles. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery", "venue": null, "year": 2011 }, { "authors": [ "Jean-Bastien Grill", "Florian Strub", "Florent Altché", "Corentin Tallec", "Pierre H Richemond", "Elena Buchatskaya", "Carl Doersch", "Bernardo Avila Pires", "Zhaohan Daniel Guo", "Mohammad Gheshlaghi Azar" ], "title": "Bootstrap your own latent: A new approach to self-supervised learning", "venue": "arXiv preprint arXiv:2006.07733,", "year": 2020 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Olivier J Hénaff", "Aravind Srinivas", "Jeffrey De Fauw", "Ali Razavi", "Carl Doersch", "SM Eslami", "Aaron van den Oord" ], "title": "Data-efficient image recognition with contrastive predictive coding", "venue": null, "year": 1905 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "arXiv preprint arXiv:1808.06670,", "year": 2018 }, { "authors": [ "Kyle Hsu", "Sergey Levine", "Chelsea Finn" ], "title": "Unsupervised learning via meta-learning", "venue": "arXiv preprint arXiv:1810.02334,", "year": 2018 }, { "authors": [ "Dong Huang", "Chang-Dong Wang", "Jian-Huang Lai" ], "title": "Locally weighted ensemble clustering", "venue": "IEEE transactions on cybernetics,", "year": 2017 }, { "authors": [ "Jiabo Huang", "Shaogang Gong", "Xiatian Zhu" ], "title": "Deep semantic clustering by partition confidence 
maximisation", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Lawrence Hubert", "Phipps Arabie" ], "title": "Comparing partitions", "venue": "Journal of classification,", "year": 1985 }, { "authors": [ "Anil K Jain", "M Narasimha Murty", "Patrick J Flynn" ], "title": "Data clustering: a review", "venue": "ACM computing surveys (CSUR),", "year": 1999 }, { "authors": [ "Xu Ji", "João F Henriques", "Andrea Vedaldi" ], "title": "Invariant information clustering for unsupervised image classification and segmentation", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "T Warren Liao" ], "title": "Clustering of time series data—a survey", "venue": "Pattern recognition,", "year": 2005 }, { "authors": [ "Francesco Masulli", "Andrea Schenone" ], "title": "A fuzzy clustering based segmentation system as support to diagnosis in medical imaging", "venue": "Artificial intelligence in medicine,", "year": 1999 }, { "authors": [ "Mike Mintz", "Steven Bills", "Rion Snow", "Dan Jurafsky" ], "title": "Distant supervision for relation extraction without labeled data", "venue": "In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pp. 1003–1011,", "year": 2009 }, { "authors": [ "Chuang Niu", "Jun Zhang", "Ge Wang", "Jimin Liang" ], "title": "Gatcluster: Self-supervised gaussian-attention network for image clustering", "venue": "arXiv preprint arXiv:2002.11863,", "year": 2020 }, { "authors": [ "Rebecca Nugent", "Marina Meila" ], "title": "An overview of clustering applied to molecular biology", "venue": "In Statistical methods in molecular biology,", "year": 2010 }, { "authors": [ "E. Riba", "D. Mishkin", "D. Ponsa", "E. Rublee", "G. Bradski" ], "title": "Kornia: an open source differentiable computer vision library for pytorch", "venue": "In Winter Conference on Applications of Computer Vision,", "year": 2020 }, { "authors": [ "Alex Rodriguez", "Alessandro Laio" ], "title": "Clustering by fast search and find of density", "venue": "peaks. 
Science,", "year": 2014 }, { "authors": [ "Thomas Schops", "Johannes L Schonberger", "Silvano Galliani", "Torsten Sattler", "Konrad Schindler", "Marc Pollefeys", "Andreas Geiger" ], "title": "A multi-view stereo benchmark with high-resolution images and multi-camera videos", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2017 }, { "authors": [ "Alexander Strehl", "Joydeep Ghosh" ], "title": "Cluster ensembles—a knowledge reuse framework for combining multiple partitions", "venue": "Journal of machine learning research,", "year": 2002 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "arXiv preprint arXiv:1906.05849,", "year": 2019 }, { "authors": [ "Alexander Topchy", "Anil K Jain", "William Punch" ], "title": "Clustering ensembles: Models of consensus and weak partitions", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2005 }, { "authors": [ "Wouter Van Gansbeke", "Simon Vandenhende", "Stamatios Georgoulis", "Marc Proesmans", "Luc Van Gool" ], "title": "Scan: Learning to classify images without labels", "venue": null, "year": 2020 }, { "authors": [ "Sandro Vega-Pons", "José Ruiz-Shulcloper" ], "title": "A survey of clustering ensemble algorithms", "venue": "International Journal of Pattern Recognition and Artificial Intelligence,", "year": 2011 }, { "authors": [ "C. Wah", "S. Branson", "P. Welinder", "P. Perona", "S. Belongie" ], "title": "The Caltech-UCSD Birds-200-2011 Dataset", "venue": "Technical Report CNS-TR-2011-001, California Institute of Technology,", "year": 2011 }, { "authors": [ "Jianlong Wu", "Keyu Long", "Fei Wang", "Chen Qian", "Cheng Li", "Zhouchen Lin", "Hongbin Zha" ], "title": "Deep comprehensive correlation mining for image clustering", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Zhirong Wu", "Yuanjun Xiong", "Stella X Yu", "Dahua Lin" ], "title": "Unsupervised feature learning via nonparametric instance discrimination", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Yongqin Xian", "Christoph H Lampert", "Bernt Schiele", "Zeynep Akata" ], "title": "Zero-shot learning—a comprehensive evaluation of the good, the bad and the ugly", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2018 }, { "authors": [ "Tong Xiao", "Tian Xia", "Yi Yang", "Chang Huang", "Xiaogang Wang" ], "title": "Learning from massive noisy labeled data for image classification", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Junyuan Xie", "Ross Girshick", "Ali Farhadi" ], "title": "Unsupervised deep embedding for clustering analysis", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Rui Xu", "Donald Wunsch" ], "title": "Survey of clustering algorithms", "venue": "IEEE Transactions on neural networks,", "year": 2005 }, { "authors": [ "Chengxu Zhuang", "Alex Lin Zhai", "Daniel Yamins" ], "title": "Local aggregation for unsupervised learning of visual embeddings", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Chang" ], "title": "A APPENDIX A.1 DATASET SUMMARY The dataset summary is given in Table 6. 
For ImageNet-10 and ImageNet-Dogs are subsets of Deng et al", "venue": null, "year": 2021 }, { "authors": [ "Caron" ], "title": "We explain the generation of multiple augmented views that are shown to be very effective in unsupervised learning (Chen et al., 2020). Note that these augmented views are different from the views of a Multi-view image data set such as Schops et al. (2017)", "venue": null, "year": 2021 }, { "authors": [ "Chen" ], "title": "We first crop a random patch of the image with scale ranging from 0.08 to 1.0, and resize the cropped patch to 224×224 (128×128 and 96×96 in the case of smaller resolution datasets such as Intel and STL10", "venue": null, "year": 2020 }, { "authors": [ "√ MI(U", "V U)MI(V" ], "title": "Skl) Suppose R is the groundtruth clustering and S is a partition, the RI of S is given as follows. Let a be the number of pairs of elements that are in the same set in R as well as in S; b be the number of pairs of elements that are in diferent sets in R", "venue": "Adjusted Rand Index (Hubert & Arabie,", "year": 1985 } ]
[ { "heading": "1 INTRODUCTION", "text": "Supervised learning algorithms have shown great progress recently, but generally require a lot of labeled data. However, in many domains (e.g., advertising, social platforms, etc.), most of the available data are not labeled and manually labeling it is a very labor, time, and cost intensive task (Xiao et al., 2015; Deshmukh, 2019; Mintz et al., 2009; Blum & Mitchell, 1998). On the other hand, clustering algorithms do not need labeled data to group similar data points into clusters. Some popular clustering algorithms include k-means, hierarchical clustering, DBSCAN (Ester et al., 1996), spectral clustering, etc., and the usefulness of each algorithm varies with the application. In this work, we deal with the clustering of images. Traditional clustering approaches focus on hand crafted features on which out of the box clustering algorithms are applied. However, hand crafted features may not be optimal, and are not scalable to large scale real word datasets (Wu et al., 2019). Advancements in deep learning techniques have enabled end-to-end learning of rich representations for supervised learning. On the other hand, simultaneously learning the feature spaces while clustering leads to degenerate solutions, which until recently limited end to end implementations of clustering with representation learning approaches (Caron et al., 2018). Recent deep clustering works take several approaches to address this issue such as alternating pseudo cluster assignments and pseudo supervised training, comparing the predictions with their own high confidence assignments (Caron et al., 2018; Asano et al., 2019; Xie et al., 2016; Wu et al., 2019), and maximizing mutual information between predictions of positive pairs (Ji et al., 2019). Although these methods show impressive performance on challenging datasets, we believe taking advantage of rich ideas from ensemble learning for clustering with representation learning will enhance the performance of deep clustering methods.\nEnsemble learning methods train a variety of learners and build a meta learner by combining the predictions of individual learners (Dietterich, 2000; Breiman, 1996; Freund et al., 1996). In practice they have been heavily used in supervised learning setting. Ensemble learning methods have also found their place in clustering i.e. knowledge reuse framework (Strehl & Ghosh, 2002) where a consensus algorithm is applied on constituent cluster partitions to generate an updated partition that clusters the data better than any component partition individually. However, the knowledge reuse framework and much of the consensus clustering literature that followed (Fern & Brodley, 2003; Fred & Jain, 2005; Topchy et al., 2005) do not make use of the underlying features used to generate the ensemble. We propose the use of consensus clustering as a way to extend ensemble methods to unsupervised representation learning. In particular, we define a ’disagreement’ measure among the constituents of the ensemble. The key motivation for this is that the diversity of the ensemble drives the minimization of the disagreement measure in a synergistic way, thereby leading to better representations. We propose Consensus Clustering with Unsupervised Representation Learning (ConCURL ) and following are our main contributions:\n1. A novel ensemble learning algorithm which learns representations by creating a consensus on multiple clustering outputs generated by applying random transformations on the embeddings.\n2. 
Our method outperforms the current state-of-the-art clustering algorithms on popular computer vision datasets based on clustering metrics (A.4).\n3. Even though no labeled data are available while learning representations, clustering may still be prone to overfitting the “training data.” As stated in Bubeck & Von Luxburg (2007), in clustering, we generally assume that the finite data set has been sampled from some underlying space, and the goal is to find the true approximate partition of the underlying space rather than the best partition of a given finite data set. Hence, to check the generalizability of the proposed method, we also evaluate our models on the “test data”, i.e., data that was not available during training/representation learning. Our method is more generalizable compared to state-of-the-art methods (i.e., it outperforms the other algorithms when evaluated on the test set)." }, { "heading": "2 RELATED WORK", "text": "Clustering is a ubiquitous task that has been actively used in many different scientific and practical pursuits, such as detecting genes from microarray data (Frey & Dueck, 2007), clustering faces (Rodriguez & Laio, 2014), and segmentation in medical imaging to support diagnosis (Masulli & Schenone, 1999). We refer interested readers to Jain et al. (1999); Liao (2005); Xu & Wunsch (2005); Nugent & Meila (2010) for excellent surveys of these uses.\nClustering with Deep Learning: In their influential work, Caron et al. (2018) show that it is possible to train deep convolutional neural networks with pseudo labels that are generated by a clustering algorithm (DeepCluster). More precisely, in DeepCluster, previous versions of the representations are used to assign pseudo labels to the data using an out-of-the-box clustering algorithm such as k-means. These pseudo labels are used to improve the learned representation of the data by minimizing a supervised loss. Along the same lines, several more methods have been proposed. For example, the Gaussian ATtention network for image clustering (GATCluster) (Niu et al., 2020) comprises four self-learning tasks with the constraints of transformation invariance, separability maximization, entropy analysis, and attention mapping. Training is performed in two distinct steps, similar to Caron et al. (2018), where the first step is to compute pseudo targets for a large batch of data and the second step is to train the model in a supervised way using the pseudo targets. Both DeepCluster and GATCluster use k-means to generate pseudo labels, which may not scale well. Wu et al. (2019) propose Deep Comprehensive Correlation Mining (DCCM), where discriminative features are learned by taking advantage of the correlations of the data using pseudo-label supervision and triplet mutual information among features. However, DCCM may be susceptible to trivial solutions (Niu et al., 2020). Invariant Information Clustering (IIC) (Ji et al., 2019) maximizes mutual information between the class assignments of two different views of the same image (paired samples) in order to learn representations that preserve what is common between the views while discarding instance-specific details. Ji et al. (2019) argue that the presence of an entropy term in mutual information plays an important role in avoiding degenerate solutions. However, a large batch size is needed to compute mutual information in IIC, which may not scale to the larger image sizes common in popular datasets (Ji et al., 2019; Niu et al., 2020).
Huang et al. (2020) extend the traditional maximal margin clustering idea to the deep learning paradigm by learning the most semantically plausible clustering through minimizing a proposed partition uncertainty index. Their algorithm, PICA, uses a stochastic version of the index, thereby facilitating mini-batch training. PICA fails to assign a sample to the correct cluster when that sample has high foreground or background similarity to samples in other clusters. SCAN (Van Gansbeke et al., 2020) uses the learned representations from a pretext task to find the semantically closest images to a given image using nearest neighbors. SCAN achieves state-of-the-art results on CIFAR-10, CIFAR-100, and STL-10; however, using such priors from existing pretext tasks deviates from our paper's main idea of end-to-end learning. We believe, however, that applying SCAN on top of the features learned by our algorithm could further improve clustering accuracy. Our proposed approach, ConCURL, scales to large datasets, does not suffer from trivial solutions, and shows superior performance on a challenging set of image datasets. As shown in the experimental results, our proposed method also generalizes well to data points that were not available during training, compared with the above approaches.\nSelf-supervised Representation Learning: Self-supervised learning is a sub-field of unsupervised learning in which the main goal is to learn general-purpose representations by exploiting user-defined sub-tasks, such as the relationship between different views of the same data. Although self-supervised learning methods show impressive performance on a variety of problems, it is not clear whether the learned representations are good for clustering. There are many different flavors of self-supervised learning, such as instance recognition tasks (Wu et al., 2018) and contrastive techniques (Chen et al., 2020). In instance recognition tasks, each image is considered its own category so that the learned embeddings are well separated. Zhuang et al. (2019) propose a novel Local Aggregation method based on non-parametric instance discrimination tasks. They use a robust clustering objective (using multiple runs of k-means) to move statistically similar data points closer in the representation space and dissimilar data points further away. However, instance recognition tasks usually require a large memory bank, which is memory-intensive. In contrastive learning (Tian et al., 2019; He et al., 2020; Hénaff et al., 2019; Hjelm et al., 2018; Chen et al., 2020), representations are learned by maximizing agreement between different augmented views of the same data example (known as positive pairs) and minimizing agreement between augmented views of different examples (known as negative pairs). SimCLR (Chen et al., 2020) achieves state-of-the-art results without specialized architectures or a memory bank of negative pairs (usually required by contrastive learning techniques). However, it still requires negative examples and, as it applies to instance classification, it has to compare every pair of images. These issues are addressed by Bootstrap Your Own Latent (BYOL) (Grill et al., 2020) and Swapping Assignments between multiple Views (SwAV) (Caron et al., 2020). BYOL does not require negative examples, and SwAV does not need to compare every pair of images.
For the main study in this paper, we use BYOL as a representation learning block and adapt the soft clustering loss used in SwAV to learn prototypes, thereby addressing both the need for negative samples and the need to compare every pair of images. Note that our proposed algorithm can use any representation learning block, such as SimCLR, BYOL, or SwAV, and can also use other soft clustering loss formulations.\nConsensus Clustering: Strehl & Ghosh (2002) propose a knowledge reuse framework to build an unsupervised ensemble from several distinct clusterings of the same data, assuming that the underlying features used to compute the clusterings are fixed and not available. Fern & Brodley (2003) build on the cluster ensemble framework based on random projections. In this framework, Fern & Brodley (2003) show that a single run of clustering (random projection + Expectation Maximization) is highly unstable. They perform multiple runs of clustering and compute an aggregated similarity matrix, which is used to cluster the data using an agglomerative clustering algorithm. Fred & Jain (2005) propose a voting approach to map the cluster assignments in the ensemble into a new similarity measure between clusterings. The co-association matrix thus formed can be used with any clustering algorithm to produce a new data partition. Weighted cluster ensembles (Domeniconi & Al-Razgan, 2009) use Locally Adaptive Clustering (LAC) (Domeniconi et al., 2004) as a base algorithm, which assigns weights to each feature of the sample according to the local variance of the data along each dimension. The authors use LAC with different hyperparameters and propose three different partitioning algorithms to generate consensus among LAC clusterings. Huang et al. (2017) argue that most ensemble-based methods treat all base clusterings equally, and the few that weight the base clusterings assign those weights globally. The authors propose an ensemble clustering approach based on cluster uncertainty (calculated by an entropic criterion) and a local weighting strategy. There are also survey papers with in-depth analyses of the various methods used in consensus clustering (Vega-Pons & Ruiz-Shulcloper, 2011; Ghosh & Acharya, 2011). It is not clear how any of these methods can be adapted when one needs to do representation learning along with clustering. It is also not clear whether one can come up with an end-to-end learning architecture using any of the above methods. In contrast, our proposed consensus clustering method, ConCURL, can easily be integrated into any deep learning architecture for clustering and trained end-to-end." }, { "heading": "3 PROPOSED METHOD", "text": "Given a set of observations X = {x_i}_{i=1}^{N}, the goal is to learn a representation f, cluster representations C (called prototypes henceforth), and partition the N observations into K disjoint clusters (the value of K is assumed to be known). To check the generalization of the algorithm, the final goal is to partition previously unseen N_T observations X_T = {x_i}_{i=N+1}^{N+N_T} into K disjoint clusters given the learned representation f and prototypes C." }, { "heading": "3.1 ALGORITHM", "text": "We propose a novel consensus loss to achieve this objective. Along with the novel consensus clustering loss, the loss function comprises two more loss terms: a representation learning loss and a soft clustering loss. The three loss terms share a common backbone (such as ResNet) and have different loss-specific layers after the representation layer of the backbone.
Given an input batch X_b ⊂ X of size B, we apply different data augmentations in the image space, such as random horizontal flips (more details in A.3), to generate two augmented views X_b^1, X_b^2 of the input batch. f_b^1, f_b^2 are the representations corresponding to the two views of the input batch. The superscript identifies the view of the input image; its usage will be made clear from context and ignored otherwise. Our algorithm is mini-batch based and end-to-end." }, { "heading": "3.1.1 UNSUPERVISED REPRESENTATION LEARNING", "text": "The first loss term is an unsupervised representation learning loss based on a pretext task such as BYOL (Grill et al., 2020) or SimCLR (Chen et al., 2020). In this work, we use BYOL, which tries to minimize the distance between the online and target network embeddings of the two augmented views X_b^1, X_b^2. We denote this loss by L_1. Optimizing for the loss L_1 alone generally results in good representations but may not be suitable for a clustering task." }, { "heading": "3.1.2 SOFT CLUSTERING", "text": "For the second loss term, we use a soft clustering objective. In this work, we follow the framework presented in Caron et al. (2020), which is a centroid-based technique that aims to maintain consistency between the clusterings of the augmented views X_b^1 and X_b^2. We store a set of randomly initialized prototypes C = {c_1, ..., c_K} ∈ R^{d×K}, where K is the number of clusters and d is the dimension of the prototypes.\nWe use a two-layer multi-layer perceptron (MLP) g to project the features f^1 and f^2 to a lower-dimensional space (of size d). This technique was shown to be useful in improving the representations of the layer before the MLP (Chen et al., 2020) and was thereafter used in Grill et al. (2020) and Caron et al. (2020). The output of this MLP (referred to as embeddings) is denoted by Z^1 = {z_1^1, ..., z_B^1} and Z^2 = {z_1^2, ..., z_B^2} for views 1 and 2, respectively. Soft clustering approaches based on centroids/prototypes often require computing a measure of similarity between the image embeddings and the prototypes (Xie et al., 2016; Caron et al., 2020). We compute the probability of assigning cluster j to image i using the normalized vectors z̄_i^1 = z_i^1 / ||z_i^1||, z̄_i^2 = z_i^2 / ||z_i^2||, and c̄_j = c_j / ||c_j|| as\n\np_{i,j}^1 = exp((1/τ)⟨z̄_i^1, c̄_j⟩) / Σ_{j'} exp((1/τ)⟨z̄_i^1, c̄_{j'}⟩),  p_{i,j}^2 = exp((1/τ)⟨z̄_i^2, c̄_j⟩) / Σ_{j'} exp((1/τ)⟨z̄_i^2, c̄_{j'}⟩). (1)\n\nWe concisely write p_i^1 = {p_{i,j}^1}_{j=1}^K and p_i^2 = {p_{i,j}^2}_{j=1}^K. Here, τ is a temperature parameter. Note that we use p_i to denote the predicted cluster-assignment probabilities for image i (when not referring to a particular view), and the shorthand p is used when i is clear from context.\nThe idea of predicting assignments p and then comparing them with high-confidence estimates q of the predictions (referred to as codes henceforth) is not new (Xie et al., 2016). While Xie et al. (2016) use pretrained features (from an autoencoder) to compute the predicted assignments and the codes, doing this in an end-to-end unsupervised manner might lead to degenerate solutions. Asano et al. (2019) avoid such degenerate solutions by enforcing an equi-partition constraint (the prototypes equally partition the data) during code computation. Caron et al. (2020) follow the same formulation but compute the codes in an online manner for each mini-batch.
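To make Eq. (1) concrete, the following is a minimal sketch of the soft-assignment computation in PyTorch. The function name soft_assignments, the shapes, and the default temperature are illustrative assumptions, not the authors' code.

import torch
import torch.nn.functional as F

def soft_assignments(z: torch.Tensor, c: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Eq. (1): z is (B, d) embeddings of one view; c is (K, d) prototypes.
    Returns p of shape (B, K), where p[i, j] is the probability of cluster j for image i."""
    z_bar = F.normalize(z, dim=1)      # z_i / ||z_i||
    c_bar = F.normalize(c, dim=1)      # c_j / ||c_j||
    logits = (z_bar @ c_bar.t()) / tau # (1/tau) * <z_bar_i, c_bar_j>
    return logits.softmax(dim=1)

# Example: B=4 images, d=256-dim embeddings, K=10 prototypes.
p1 = soft_assignments(torch.randn(4, 256), torch.randn(10, 256))
assert torch.allclose(p1.sum(dim=1), torch.ones(4))

The same function is applied to both views to obtain p^1 and p^2.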
The assignment codes are computed by solving the following optimization problem:\n\nQ^1 = argmax_{Q ∈ Q} Tr(Q^T C^T Z^1) + H(Q)  and  Q^2 = argmax_{Q ∈ Q} Tr(Q^T C^T Z^2) + H(Q), (2)\n\nwhere Q = {q_1, ..., q_B} ∈ R_+^{K×B} and Q is the transportation polytope defined by\n\nQ = { Q ∈ R_+^{K×B} | Q 1_B = (1/K) 1_K, Q^T 1_K = (1/B) 1_B },\n\n1_K is a vector of ones of dimension K, and H(Q) = −Σ_{i,j} Q_{i,j} log Q_{i,j}. The above optimization is solved using a fast version of the Sinkhorn-Knopp algorithm (Cuturi, 2013), as described in Caron et al. (2020).\nAfter computing the codes Q^1 and Q^2, we compute the loss using the probabilities p_{ij} and the assigned codes q_{ij} by comparing the probabilities of view 1 with the assigned codes of view 2 and vice versa, given as\n\nL_{2,1} = −(1/2B) Σ_{i=1}^{B} Σ_{j=1}^{K} q_{ij}^2 log p_{ij}^1,  L_{2,2} = −(1/2B) Σ_{i=1}^{B} Σ_{j=1}^{K} q_{ij}^1 log p_{ij}^2,  L_2 = L_{2,1} + L_{2,2}. (3)" }, { "heading": "3.1.3 CONSENSUS CLUSTERING", "text": "The third loss term is the proposed consensus clustering loss. We generate a cluster ensemble by first performing transformations on the embeddings Z^1, Z^2 and the prototypes C. At the beginning of the algorithm, we randomly initialize M such transformations and fix them throughout training. Suppose that, using a particular random transformation (a randomly generated matrix A), we get z̃ = Az and c̃ = Ac. We then compute the softmax probabilities p̃_{ij} using the normalized vectors z̃/||z̃|| and c̃/||c̃||. Repeating this with the M transformations results in M predicted cluster-assignment probabilities for each view. When the network is untrained, the embeddings z are random, and applying the random transformations followed by computing the predicted cluster assignments leads to a diverse set of soft cluster assignments.\nThere can be many clusterings for any given latent space (arising either from randomization introduced in the space or from different clustering algorithms). We propose to use consensus clustering so that we can find a latent space that maximizes performance over many different clusterings. To make the method effective, we need more diversity among the components of the ensemble at the start; ideally, the diversity should decrease with training while the clustering performance improves. Creating such a diverse ensemble leads to better representations and better clusters.\nTo compute the consensus loss, once the probabilities p̃_{ij} are computed, we compare the codes of view 1 generated using (2) with the p̃ of view 2 and vice versa, given as\n\nL_{31} = −(1/2BM) Σ_{i=1}^{B} Σ_{m=1}^{M} Σ_{j=1}^{K} q_{ij}^2 log p̃_{ij}^{(1,m)},  L_{32} = −(1/2BM) Σ_{i=1}^{B} Σ_{m=1}^{M} Σ_{j=1}^{K} q_{ij}^1 log p̃_{ij}^{(2,m)}, (4)\n\nL_3 = L_{31} + L_{32}. (5)\n\nThe architecture for computing the ensemble and the consensus loss is shown in Fig. 1. The final loss we seek to minimize is the combination of the losses L_1, L_2, L_3:\n\nL_total = αL_1 + βL_2 + γL_3, (6)\n\nwhere α, β, γ are non-negative constants. L_1 minimizes the distance between the embeddings of different views of the input batch, L_2 maintains consistency between the clusterings of the two augmented views, and L_3 maintains consistency between the clusterings of the randomly transformed embeddings. During inference, to compute a clustering of the input images, we use the computed assignments {q_i}_{i=1}^{N} and assign the cluster index c_i = argmax_k q_{ik} to the i-th data point. The summary of the algorithm is presented in A.2." }, { "heading": "3.2 CHOICE OF FEATURE TRANSFORMATIONS", "text": "Fred & Jain (2005) discuss different ways to generate an ensemble of clusterings, which are tabulated in Table 1; a code sketch of the code computation in Eq. (2) and the consensus loss in Eq. (4) is given below.
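The sketch below illustrates the two pieces referenced above: an online Sinkhorn-Knopp solver for Eq. (2), following the fast iterations of Cuturi (2013) as used in Caron et al. (2020), and the cross-view consensus loss of Eq. (4). The function names, the epsilon value, and the iteration count are illustrative assumptions.

import torch

@torch.no_grad()
def sinkhorn_codes(scores: torch.Tensor, eps: float = 0.05, n_iters: int = 3) -> torch.Tensor:
    """Approximate solution of Eq. (2). scores = C^T Z, shape (K, B).
    Alternates row/column normalization so rows satisfy the equi-partition
    constraint (mass 1/K each) and columns carry mass 1/B each; the final
    rescaling by B makes each returned row q_i a distribution over K clusters."""
    q = torch.exp(scores / eps)
    q /= q.sum()
    n_clusters, batch = q.shape
    for _ in range(n_iters):
        q /= q.sum(dim=1, keepdim=True)
        q /= n_clusters
        q /= q.sum(dim=0, keepdim=True)
        q /= batch
    return (q * batch).t()  # (B, K)

def consensus_loss(q_other_view: torch.Tensor, p_tilde_list) -> torch.Tensor:
    """One half of Eq. (4): cross-entropy between the codes of one view and the
    M transformed predictions of the other view; each entry of p_tilde_list is (B, K)."""
    losses = [-(q_other_view * p.clamp_min(1e-8).log()).sum(dim=1).mean()
              for p in p_tilde_list]
    return torch.stack(losses).mean()

Averaging the two directions (codes of view 1 against p̃ of view 2 and vice versa) gives L_3 as in Eq. (5).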
In our proposed algorithm, we focus on the choice of data representation to generate cluster ensembles.\nUsing different clustering methods (or multiple runs from different initializations) has shown success when the goal is to aggregate the cluster partitions of the ensemble into a final partition. However, since our goal is to learn representations simultaneously using back-propagation, it is natural to use a clustering algorithm whose parameters we can update via back-propagation. It is in fact possible to use clustering algorithms such as IIC (Ji et al., 2019) that enable parameter updates; however, if multiple such algorithms are used to generate the ensemble, the backward computation graph consumes an enormous amount of memory. For large datasets (and large architectures), such techniques may not be practical. By using feature transformations, however, this memory overhead stays small.\nBy fixing a stable clustering algorithm, we can generate arbitrarily large ensembles by applying different transformations to the embeddings. Random projections were successfully used in consensus clustering previously (Fern & Brodley, 2003). By generating ensembles using random projections, we control the amount of diversity induced into the framework by varying the dimension of the random projection. In addition to random projections, we also used diagonal transformations (Hsu et al., 2018), where different components of the representation vector are scaled differently. Hsu et al. (2018) illustrate that such scaling enables a diverse set of clusterings, which is helpful for their meta-learning task." }, { "heading": "4 EXPERIMENTS", "text": "We evaluated our algorithm and compared it against existing work on nine popular image datasets spanning high and medium resolutions: ImageNet-10, ImageNet-Dogs, STL10, CIFAR-10, CIFAR100-20, CUB, Caltech-101, AwA2, and Intel Image Classification. The dataset summary is given in Table 6 (see A.1). The resolution column shows the size to which we resized the images in our algorithm. To the best of our knowledge, some of these datasets (Intel, Caltech101, CUB, and AwA2) are being used for systematically evaluating clustering results for the first time (Section A.5). In our comparison, we considered recent state-of-the-art methods that directly solve for a clustering objective in an end-to-end fashion from random initialization and do not use any prior information (nearest neighbors, for example) derived through other pretext tasks. The metrics for evaluation are explained in A.4." }, { "heading": "4.1 RESULTS", "text": "In Table 2, we show the best accuracy we obtain for the proposed method ConCURL and compare it against the best accuracy achieved by state-of-the-art methods. For ImageNet-10 and ImageNet-Dogs, we trained using the train split and evaluated on the train split. For STL10, CIFAR10, and CIFAR-100, similar to earlier approaches (Huang et al., 2020; Niu et al., 2020), we used both the train and test splits for training and evaluation. Note that PICA also uses the unlabeled data split of 100k points in STL10. On all datasets, ConCURL outperforms all state-of-the-art algorithms. On the ImageNet-10 dataset, ConCURL is better by 5.9% and 21% than PICA and GATCluster, respectively, in terms of clustering accuracy. ConCURL is better than PICA by more than 9% in terms of NMI and by more than 11% in terms of ARI on the ImageNet-10 dataset. We beat the next-best DCCM method by 18% on ImageNet-Dogs.
We used the popular Iman and Davenport test to calculate the p-value for the statistical significance test (Garcia & Herrera, 2008). Our proposed method ConCURL was consistently ranked at the top for all three metrics ACC, ARI, and NMI, with p-values of 1.34×10^{-4}, 4.57×10^{-5}, and 1.45×10^{-3}, respectively. This shows that the results presented in the paper are statistically significant.\nWe performed additional experiments to compare ConCURL to PICA when trained only on the “train” split for STL-10, CIFAR-10, and CIFAR-100. From Table 3, we observe that ConCURL achieves better clustering on all metrics. From Tables 2 and 3, we can observe that increasing the training data improves the performance of the algorithm. In order to understand the generalization of the algorithm, in Table 4 we show results for the case when the models were trained only on the “train” split and evaluated on “test data” that was not used during training. We can observe that we perform better when evaluated on the test data.\nWe also perform an ablation study on the effect of the losses L_1, L_2, L_3 (see A.6) and observe that using the consensus loss L_3 almost always improves accuracy. This shows the importance of the consensus loss (L_3) and how ensemble learning through the proposed consensus helps achieve better clusters. Even though the proposed method outperforms all state-of-the-art methods, we make some assumptions that are the same as those of all the compared state-of-the-art methods. The first assumption is that the number of clusters K is known, and the second is that the data samples are equally distributed among the K clusters. These assumptions are important to make the methodology work because the final layer in our case needs to know the number of clusters so that we can evaluate it later for a fair comparison with other methods. Also, the fast Sinkhorn-Knopp algorithm used to solve Eq. (2) may not be optimal if the data samples are not equally distributed among the K clusters." }, { "heading": "4.2 EFFECT OF NUMBER OF RANDOM TRANSFORMATIONS AND EMBEDDING SIZE", "text": "In order to study the sensitivity of the algorithm to the random transformations, we performed an ablation study for STL10 trained on the train split (Table 5). Recall that M is the number of transformations used in Algorithm 1, and M = 30, 50 yield good results for the two types of transformations (random projections and diagonal transformations). We can also observe that there is not a large difference between the results obtained using either of the transformations. The results fluctuate within a margin of ±0.06 and still outperform the other baselines in almost all cases. In the first column of Table 5, we used diagonal transformations and varied the number of transformations. The second column of Table 5 contains results with a fixed dimension of the random projection (=64) and varies the number of transformations. The third column of Table 5 contains results with a fixed number of transformations (=100) and varies the dimension of the projection (the original embedding size is 256)." }, { "heading": "4.3 CONSENSUS AMONG EACH COMPONENT OF THE ENSEMBLE", "text": "We also measure the diversity of the ensemble at every epoch to observe the effect of consensus as training progresses. For each component of the ensemble, we use the softmax probability estimates p and compute cluster assignments by taking an argmax over p for each image. If there are M components in the ensemble, we get M such cluster assignments; the sketch below illustrates this step.
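This per-component assignment extraction can be sketched as follows. The function name and the transformation shapes are illustrative assumptions; each A is one of the M fixed random transformations from Section 3.1.3.

import torch
import torch.nn.functional as F

def ensemble_assignments(z, c, transforms, tau=0.1):
    """For each of the M fixed transformations A, compute p-tilde as in Section 3.1.3
    and take the argmax to get one hard assignment per ensemble component.
    z: (B, d) embeddings; c: (K, d) prototypes; transforms: list of (d, d') matrices.
    Returns a list of M tensors of shape (B,) with cluster indices."""
    assignments = []
    for A in transforms:
        z_t = F.normalize(z @ A, dim=1)
        c_t = F.normalize(c @ A, dim=1)
        p_tilde = ((z_t @ c_t.t()) / tau).softmax(dim=1)
        assignments.append(p_tilde.argmax(dim=1))
    return assignments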
We then compute a pairwise NMI (Normalized Mutual Information) between every two components of the ensemble (a similar analysis to Fern & Brodley (2003)) and compute the average and standard deviation of the pairwise NMI across the M(M−1)/2 pairs. We observe from Figure 2a that the pairwise NMI increases as training progresses and becomes closer to 1. At the beginning of training, the ensemble is very diverse (small NMI score with a larger standard deviation); as training progresses, the diversity is reduced (large NMI score with a smaller standard deviation)." }, { "heading": "5 CONCLUSION", "text": "In this work, we leverage ideas from unsupervised representation learning along with ensemble learning to perform clustering. We propose a novel ensemble learning algorithm which learns a representation by creating a consensus on multiple clustering outputs. Our proposed method outperforms all state-of-the-art methods on various computer vision datasets. We also present the issue of overfitting in clustering and show that our method generalizes well to new data samples that were not available during training. This work is one of the first successful applications of ensemble learning in the deep clustering domain. This idea could easily be extended to use other general-purpose representation and soft clustering algorithms than the ones used in this paper. A possible extension is to leverage the knowledge reuse framework of Strehl & Ghosh (2002) and use the clusterings output by the ensemble to compute a better-quality partition of the input data. We believe that ensemble learning algorithms could also be effective in increasing robustness in clustering, and we are planning to investigate this point further." }, { "heading": "A APPENDIX", "text": "A.1 DATASET SUMMARY\nThe dataset summary is given in Table 6. ImageNet-10 and ImageNet-Dogs are subsets of Deng et al. (2009). We used the same classes as Chang et al. (2017) for evaluating on these two datasets.\nA.2 ALGORITHM - PSEUDO CODE\nThe pseudo code for our algorithm is given in Algorithm 1.\nAlgorithm 1: Consensus Clustering algorithm (ConCURL)\nInput: Dataset X = {x_i}_{i=1}^{N}, K, B, α, β, γ, M, d\nOutput: Cluster label c_i of x_i ∈ X\n1 Randomly initialize the network parameters w, K prototypes c_{1:K}, M random projection matrices R_{1:M} to dimension d, and set e = 0;\n2 while e < total epoch number do\n3   for b ∈ {1, 2, ..., ⌊N/B⌋} do\n4     Select B samples as X_b from X;\n5     Make a forward pass on two views of the input batch (X_b^1, X_b^2) and obtain the features z_{1:B}^1, z_{1:B}^2;\n6     Compute loss L_1, which is the representation loss;\n7     Compute the probability of the i-th sample belonging to the j-th cluster, p_{i,j}, for both views separately, using normalized z, c, Eq. (1);\n8     Compute the codes q of the current batch using Eq. (2);\n9     Compute loss L_2 using Eq. (3);\n10    for m ∈ {1, 2, ..., M} do\n11      z̃, c̃ ← Compute random transformations of z, c;\n12      Compute the probability of the i-th sample belonging to the j-th cluster, (p̃_{i,j}^{(1,m)}, p̃_{i,j}^{(2,m)}), using normalized z̃, c̃, Eq. (1);\n13    end\n14    Compute loss L_3 using Eq. (5);\n15    Compute the total loss using Eq. (6).
Update the parameters and prototypes using gradients;\n16  end\n17  e := e + 1\n18 end\n19 Make a forward pass on all the data and store the features;\n20 foreach x_i ∈ X do\n21   Compute the probability of the i-th sample belonging to the j-th cluster, p_{i,j}, using normalized z_i, c_j, Eq. (1);\n22   Compute the codes q using Eq. (2);\n23   c_i := argmax_k q_{ik};\n24 end\nA.3 IMPLEMENTATION DETAILS\nWe use a residual network (He et al., 2016) with 34 layers (the current state-of-the-art clustering results of Huang et al. (2020) also use the same architecture). The MLP projection head consists of a hidden layer of size 2048, followed by batch normalization and ReLU layers, and an output layer of size 256. We use the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 0.0005. We implemented our algorithm using the PyTorch framework and trained it on a V100 GPU; training ImageNet-10 (13,000 training images) with a batch size of 128 for 500 epochs takes 8 hours on 1 GPU.\nWe now explain the generation of multiple augmented views, which have been shown to be very effective in unsupervised learning (Chen et al., 2020). Note that these augmented views are different from the views of a multi-view image dataset such as Schops et al. (2017). It is possible to use more than two augmented views, but we limit ourselves to two for simplicity. Caron et al. (2020) propose an augmentation technique (Multi-Crop) to use more than two views. In this work, we use the augmentations used in Chen et al. (2020); Grill et al. (2020). We first crop a random patch of the image with scale ranging from 0.08 to 1.0 and resize the cropped patch to 224×224 (128×128 and 96×96 for smaller-resolution datasets such as Intel and STL10, respectively). The resulting image is then flipped horizontally with a probability of 0.5. We then apply color transformations, starting by randomly changing the brightness, contrast, saturation, and hue with a probability of 0.8. The image is then changed to gray-scale with a probability of 0.2. Then we apply a Gaussian blur with kernel size 23×23 and a sigma chosen uniformly at random between 0.1 and 2.0. The probability of applying the Gaussian blur is 1.0 for view 1 and 0.5 for view 2. During evaluation, we resize the image such that the smaller edge of the image is of size 256 (not required for STL, Intel, CIFAR10, CIFAR100-20), and a center crop is performed with the resolution mentioned in Table 6. The color transformations were computed using Kornia (Riba et al., 2020), which is a differentiable computer vision library for PyTorch.\nTo compute the random transformations on the embeddings z, we followed two techniques. We used random projections (Bingham & Mannila, 2001) with output dimension d and transformed the embeddings z to the new space of dimension d. We also used diagonal transformations (Hsu et al., 2018), where we multiply z by a randomly generated diagonal matrix of the same dimension as z. We initialized M random transformations at the beginning, and they remain fixed throughout training.\nWe performed model selection on the hyperparameters of the random transformations on the embedding space, such as the number of random transformations M (ranging from 10 to 100) and the dimensionality of the output space when using a random projection (we used [32, 64, 128, 256, 512]). We evaluated the models based on the metrics mentioned in Section A.4 on the data used for training the representations. Note that we fixed the number of prototypes to be equal to the number of ground-truth classes. A minimal sketch of how the random transformations are generated is given below.
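The sketch below illustrates the two transformation families described above; the function name, the Gaussian entries for the random projection, and the uniform scaling for the diagonal transformation are illustrative assumptions about the exact distributions used.

import torch

def make_transforms(m: int, dim: int, proj_dim: int = 64, kind: str = "projection"):
    """Generate M fixed transformation matrices used to diversify the ensemble.
    'projection': random projections to proj_dim (Bingham & Mannila, 2001);
    'diagonal': per-dimension random scaling (Hsu et al., 2018).
    These are sampled once before training and kept fixed thereafter."""
    if kind == "projection":
        # Scaling by 1/sqrt(proj_dim) roughly preserves vector norms on average.
        return [torch.randn(dim, proj_dim) / proj_dim ** 0.5 for _ in range(m)]
    return [torch.diag(torch.rand(dim)) for _ in range(m)]

# Example: 50 diagonal transformations for 256-dimensional embeddings.
transforms = make_transforms(m=50, dim=256, kind="diagonal")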
It was shown, however, that over-clustering leads to better representations (Caron et al., 2020; Ji et al., 2019; Asano et al., 2019), and we can extend our model to include an over-clustering block with a larger set of prototypes (Ji et al., 2019) and alternate the training procedure between the blocks.\nA.4 EVALUATION METRICS\nWe evaluate our algorithm by computing traditional clustering metrics (Cluster Accuracy, Normalized Mutual Information, and Adjusted Rand Index); a minimal code sketch of these metrics is given below. Note that, for measuring clustering metrics, the usual approach taken in the literature is to evaluate the metrics on the train data. Here, we report results on both the train data and the test data separately.\nCluster Accuracy. The clustering accuracy is computed by first computing a cluster partition of the input data. Once the partitions are computed and cluster indices are assigned to each input data point, the linear assignment map is computed using the Kuhn-Munkres (Hungarian) algorithm, which reassigns the cluster indices to the true labels of the data. Clustering accuracy is then given by\n\nACC = (1/N) Σ_{i=1}^{N} 1{y_true(x_i) = c(x_i)},\n\nwhere y_true(x_i) is the true label of x_i and c(x_i) is the cluster assignment produced by the algorithm (after the Hungarian mapping).\nNormalized Mutual Information. For two clusterings U, V, containing |U| and |V| clusters respectively, and letting |U_i| be the number of samples in cluster U_i of clustering U (similarly for V), the Mutual Information (MI) is given by\n\nMI(U, V) = Σ_{i=1}^{|U|} Σ_{j=1}^{|V|} (|U_i ∩ V_j| / N) log( N |U_i ∩ V_j| / (|U_i| |V_j|) ),\n\nwhere N is the number of data points under consideration. Normalized Mutual Information is defined as\n\nNMI(U, V) = MI(U, V) / sqrt( MI(U, U) MI(V, V) ).\n\nAdjusted Rand Index (Hubert & Arabie, 1985). Suppose R is the ground-truth clustering and S is a partition; the RI of S is given as follows. Let a be the number of pairs of elements that are in the same set in R as well as in S, and let b be the number of pairs of elements that are in different sets in R and in different sets in S. Then\n\nRI = (a + b) / C(n, 2),  ARI = (RI − E[RI]) / (max(RI) − E[RI]),\n\nwhere C(n, 2) = n(n−1)/2 is the number of pairs of elements.\nA.5 MORE DATASETS RESULTS\nIn Table 7, we show evaluation metrics for our method on the other four datasets, on both the train and test splits.\nA.6 ABLATION STUDY ON THE LOSSES\nIn this subsection, we study the effect of the weights α, β, and γ on the final metrics. Results for the weight configuration α = 1, β = 1, γ = 1 are what is shown in the main paper. For the case of (α = 1, β = 0, γ = 0), we computed the cluster accuracy, NMI, and ARI by computing the embeddings of all the data output by the representation learning algorithm used for L_1 (here Grill et al. (2020)). We then computed a k-means clustering on the embeddings (the target projection layer embeddings in this case) to obtain a partition of the data and followed the same procedure mentioned in A.4. For all the experiments in this section, we trained only on the train split of the datasets.\nA.7 T-SNE PLOTS\nIn Figure 3, we show the t-SNE plot of the ImageNet-10 embeddings obtained from the ConCURL-trained model. One can clearly see the separation between the various clusters, with the exception of the airliner and airship clusters, which are mixed together in the leftmost and rightmost parts of the t-SNE plot." } ]
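As a supplement to the metrics defined in A.4 above, the following is a minimal sketch of their computation. It leans on SciPy's Hungarian solver and scikit-learn's NMI/ARI implementations; the function name and the toy example are illustrative assumptions, not the authors' evaluation code.

import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def cluster_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """ACC from A.4: build the contingency matrix and find the cluster-to-label
    mapping that maximizes matches via the Kuhn-Munkres (Hungarian) algorithm."""
    k = int(max(y_true.max(), y_pred.max())) + 1
    counts = np.zeros((k, k), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        counts[p, t] += 1
    rows, cols = linear_sum_assignment(-counts)  # negate to maximize matches
    return counts[rows, cols].sum() / y_true.size

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([1, 1, 0, 0, 2, 2])  # a relabeled but perfect clustering
assert cluster_accuracy(y_true, y_pred) == 1.0
print(normalized_mutual_info_score(y_true, y_pred), adjusted_rand_score(y_true, y_pred))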
2020
null
SP:150de466f594e7fede616959fe98956e300dfefe
[ "This paper proposes an alternative data augmentation method for robust image classification. Training on augmented data has been shown to improve robustness. The proposed method mixes discretized images with real images and uses a consistency loss to enforce smoothness among those predictions. Experimental results include classification, detection and segmentation. Section 5 includes ablation studies on the choice of consistency loss and choice of discretization. " ]
Convolutional Neural Networks (CNNs) are vulnerable to unseen noise on input images at test time, so improving their robustness is crucial. In this paper, we propose DJMix, a data augmentation method that improves robustness by mixing each training image with its discretized counterpart. Discretization is done in an unsupervised manner by an autoencoder, and the mixed images are nearly impossible to distinguish from the original images. Therefore, DJMix can easily be adapted to various image recognition tasks. We verify the effectiveness of our method on classification, semantic segmentation, and detection with clean and noisy test images.
[]
[ { "authors": [ "Alexander A. Alemi", "Ian Fischer", "Joshua V. Dillon", "Kevin Murphy" ], "title": "Deep variational information bottleneck", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Florian Schroff", "Hartwig Adam" ], "title": "Rethinking Atrous Convolution for Semantic Image Segmentation", "venue": null, "year": 2017 }, { "authors": [ "Ekin D. Cubuk", "Barret Zoph", "Dandelion Mane", "Vijay Vasudevan", "Quoc V. Le" ], "title": "Autoaugment: Learning augmentation strategies from data", "venue": null, "year": 2019 }, { "authors": [ "M Everingham", "S M A Eslami", "L VanGool", "C K I Williams", "J Winn", "A Zisserman" ], "title": "The Pascal Visual Object Classes Challenge: A Retrospective", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Ian Fisher", "Alexander A. Alemi" ], "title": "Ceb improves model robustness", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Robert Geirhos", "Heiko H. Schütt", "Carlos R. Medina Temme", "Matthias Bethge", "Jonas Rauber", "Felix A. Wichmann" ], "title": "Generalisation in humans and deep neural networks", "venue": "NeurIPS,", "year": 2018 }, { "authors": [ "Robert Geirhos", "Claudio Michaelis", "Felix A. Wichmann", "Patricia Rubisch", "Matthias Bethge", "Wieland Brendel" ], "title": "Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Keren Gu", "Brandon Yang", "Jiquan Ngiam", "Quoc Le", "Jonathon Shlens" ], "title": "Using Videos to Evaluate Image Model Robustness", "venue": null, "year": 2019 }, { "authors": [ "B Hariharan", "P Arbeláez", "L Bourdev", "S Maji", "J Malik" ], "title": "Semantic contours from inverse detectors", "venue": "In ICCV,", "year": 2011 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep Residual Learning for Image Recognition", "venue": null, "year": 2016 }, { "authors": [ "Dan Hendrycks", "Thomas Dietterich" ], "title": "Benchmarking neural network robustness to common corruptions and perturbations", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Dan Hendrycks", "Norman Mu", "Ekin D. Cubuk", "Barret Zoph", "Justin Gilmer", "Balaji Lakshminarayanan" ], "title": "AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Kilian Q. 
Weinberger", "Laurens van der Maaten" ], "title": "Densely Connected Convolutional Networks", "venue": null, "year": 2017 }, { "authors": [ "Andrew Ilyas", "Shibani Santurkar", "Dimitris Tsipras", "Logan Engstrom", "Brandon Tran", "Aleksander Madry" ], "title": "Adversarial examples are not bugs, they are features", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Jason Jo", "Yoshua Bengio" ], "title": "Measuring the tendency of cnns to learn surface statistical regularities", "venue": null, "year": 2017 }, { "authors": [ "Jeff Johnson", "Matthijs Douze", "Hervé Jégou" ], "title": "Billion-scale similarity search with GPUs", "venue": null, "year": 2017 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning Multiple Layers of Features from Tiny Images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "ImageNet Classification with Deep Convolutional Neural Networks", "venue": "In NIPS,", "year": 2012 }, { "authors": [ "Ilya Loshchilov", "Frank Hutter" ], "title": "SGDR: Stochastic Gradient Descent with Warm Restarts", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards Deep Learning Models Resistant to Adversarial Attacks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Claudio Michaelis", "Benjamin Mitzkus", "Robert Geirhos", "Evgenia Rusak", "Oliver Bringmann", "Alexander S. Ecker", "Matthias Bethge", "Wieland Brendel" ], "title": "Benchmarking robustness in object detection: Autonomous driving when winter is coming", "venue": null, "year": 2019 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga", "Alban Desmaison", "Andreas Kopf", "Edward Yang", "Zachary DeVito", "Martin Raison", "Alykhan Tejani", "Sasank Chilamkurthy", "Benoit Steiner", "Lu Fang", "Junjie Bai", "Soumith Chintala" ], "title": "PyTorch: An Imperative Style, High-Performance Deep Learning Library", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Ali Razavi", "Aäron van den Oord", "Oriol Vinyals" ], "title": "Generating diverse high-fidelity images with VQ-VAE-2", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", "venue": "In NIPS,", "year": 2015 }, { "authors": [ "Christian Szegedy", "Wojciech Zaremba", "Ilya Sutskever", "Joan Bruna", "Dumitru Erhan", "Ian Goodfellow", "Rob Fergus" ], "title": "Intriguing properties of neural networks", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Naftali Tishby", "Fernando C Pereira", "William Bialek" ], "title": "The information bottleneck method", "venue": "In The 37th Annual Allerton Conference on Communication, Control, and Computing,", "year": 2000 }, { "authors": [ "Yuji Tokozume", "Yoshitaka Ushiku", "Tatsuya Harada" ], "title": "Between-class Learning for Image Classification", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Aaron van den Oord", "Oriol Vinyals", "Koray Kavukcuoglu" ], "title": "Neural Discrete Representation Learning", "venue": "In NIPS,", "year": 2017 }, { "authors": [ "Igor Vasiljevic", "Ayan Chakrabarti", "Gregory Shakhnarovich" ], "title": "Examining the Impact of Blur on Recognition by Convolutional Networks. 
arXiv, 2016", "venue": null, "year": 2016 }, { "authors": [ "Haohan Wang", "Xindi Wu", "Zeyi Huang", "Eric P. Xing" ], "title": "High-frequency component helps explain the generalization of convolutional neural networks", "venue": null, "year": 2020 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Doll" ], "title": "Aggregated Residual Transformations for Deep Neural Networks", "venue": null, "year": 2017 }, { "authors": [ "Dong Yin", "Raphael Gontijo Lopes", "Jon Shlens", "Ekin Dogus Cubuk", "Justin Gilmer" ], "title": "A fourier perspective on model robustness in computer vision", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide Residual Networks", "venue": "In BMVC,", "year": 2016 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N. Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond Empirical Risk Minimization", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Richard Zhang" ], "title": "Making convolutional networks shift-invariant again", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Zhun Zhong", "Liang Zheng", "Guoliang Kang", "Shaozi Li", "Yi Yang" ], "title": "Random erasing data augmentation", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Chen" ], "title": "2017) for 30 epochs with a batch size of 32 and a learning rate of 1.0×10−3. We used SGD with a momentum of 0.9 and set its initial learning rate to 0.02. The learning rate is multiplied by a factor", "venue": null, "year": 2017 }, { "authors": [ "Chen" ], "title": "See https:// github.com/pytorch/vision/tree/master/references/segmentation for further details", "venue": null, "year": 2017 } ]
[ { "heading": null, "text": "1 INTRODUCTION\nCNNs are the de facto standard components of image recognition tasks and achieve excellent performance. However, CNNs are vulnerable to unseen noise on input images. Such harmful noise includes not only adversarially generated noise (Szegedy et al., 2014; Goodfellow et al., 2014), but also naturally possible noise such as blur by defocusing and artifacts generated by JPEG compression (Vasiljevic et al., 2016; Hendrycks & Dietterich, 2019). Natural noise on input images is inevitable in the real world; therefore, making CNNs robust to natural noise is crucial for practitioners.\nA simple approach to solving this problem is adding noise to training images, but this does not make models generalize to unseen corruptions and perturbations (Vasiljevic et al., 2016; Geirhos et al., 2018; Gu et al., 2019). For example, even if Gaussian noise of a certain variance is added during training, models fail to generalize to Gaussian noise of other variances. Nonetheless, some data augmentation methods are effective for improving robustness. For example, Yin et al. reported that extensive augmentation, such as AutoAugment (Cubuk et al., 2019), improves the robustness. Similarly, Hendrycks et al. proposed to mix differently augmented images during training to circumvent the vulnerability. We will further review previous approaches in Section 2.\nDespite the effectiveness of these data augmentation and mixing approaches, these methods require handcrafted image transformations, such as rotation and solarizing. Particularly when geometrical transformations are used, the mixed images cannot have trivial targets in non classification tasks, for instance, semantic segmentation and detection. This lack of applicability to other tasks motivates us to introduce robust data augmentation without such transformations.\nIn this paper, we propose Discretizing and Joint Mixing (DJMix) which mixes original and discretized training images to improve the robustness. The difference between the original and obtained images is nearly imperceptible, as shown in Figure 1, which enables the use of DJMix in various image recognition tasks. In Section 3, we will introduce DJMix and analyze it empirically and theoretically. We show that DJMix reduces mutual information between inputs and internal representations to ignore harmful features and improve CNNs’ resilience to test-time noise.\nTo benchmark the robustness of CNNs to unseen noise, Hendrycks & Dietterich (2019) introduced ImageNet-C as a corrupted counterpart of the ImageNet validation set (Russakovsky et al., 2015). CNN models are evaluated using this dataset on the noisy validation set, whereas they are trained without any prior information on the corruptions on the original training set. Similarly, Geirhos et al. created noisy ImageNet and compared different behaviors between humans and CNN models with image noise. In addition to these datasets designed for classification, we cre-\nated Segmentation-C and Detection-C datasets, which are corrupted counterparts of the PASCAL-VOC validation sets (Everingham et al., 2015).\nWe demonstrate the robustness of CNN models trained with DJMix on various tasks using these benchmark datasets in Section 4. Additionally, we perform experimental analyses, including ablation studies, to verify DJMix in Section 5. In summary, our contributions are summarized as follows:\n1. We introduce DJMix, a simple task-agnostic data augmentation method for improving robustness. 
DJMix mixes the original and discretized training images and can be straightforwardly adapted to various image recognition tasks. We empirically demonstrate the effectiveness of this approach.\n2. We analyze DJMix theoretically from the Information Bottleneck perspective, which could help analyze other robust methods. We also investigate DJMix from the Fourier sensitivity perspective.\n3. We create two datasets, Segmentation-C and Detection-C, to benchmark the robustness of CNN models on semantic segmentation and detection tasks.\n2 RELATED WORK\nSmall corruptions or perturbations on images can drastically change the predictions of CNN models. While adversarially generated noise, i.e., adversarial examples (Szegedy et al., 2014; Goodfellow et al., 2014), can be thought of as the worst case, natural noise also harms the performance of CNNs. Such natural noise includes blurs and artifacts generated by JPEG compression (Vasiljevic et al., 2016; Hendrycks & Dietterich, 2019). Because of this vulnerability, CNN models sometimes predict inconsistently across adjacent video frames (Gu et al., 2019; Hendrycks & Dietterich, 2019). For the real-world application of CNNs, this vulnerability needs to be overcome. A strong defense against adversarial examples is adversarial training, where CNN models are trained with adversarial examples (Goodfellow et al., 2014; Madry et al., 2018). Unfortunately, this approach fails for natural noise, because CNNs trained on a specific type of noise do not generalize to other types of noise (Geirhos et al., 2018; Vasiljevic et al., 2016). Instead, we need robust methods that are agnostic to test-time noise a priori (Hendrycks & Dietterich, 2019).\nData augmentation is a practical approach to improving generalization on clean data, e.g., by randomly flipping and cropping (Krizhevsky et al., 2012; He et al., 2016), mixing different images (Zhang et al., 2018; Tokozume et al., 2018), or erasing random regions (DeVries & Taylor; Zhong et al., 2020). Some types of data augmentation are also reported to improve robustness. For example, strong data augmentation, namely AutoAugment (Cubuk et al., 2019), can improve robustness (Yin et al., 2019). Similarly, AugMix is a data augmentation method that alleviates the problem by mixing images that are differently augmented from each input image. CNNs exploit the texture or higher-frequency components of images (Jo & Bengio, 2017; Ilyas et al., 2019; Wang et al., 2020), and thus CNNs trained on ImageNet images detextured by style transfer show robustness to noise on input images (Geirhos et al., 2019).\nOrthogonal to manipulating input images, enhancing CNN architectures or components is also a possible direction. Larger CNN models with the feature aggregation used in DenseNet (Huang et al., 2017) and ResNeXt (Xie et al., 2017) show better robustness to natural noise (Gu et al., 2019; Hendrycks & Dietterich, 2019). MaxBlur has been proposed to improve the shift invariance of the subsampling operations used in pooling, e.g., MaxPooling, and thereby enhance robustness (Zhang, 2019).\nOur approach, DJMix, belongs to the first category, which uses data augmentation to enhance robustness. Unlike previous methods, DJMix applies imperceptible and task-agnostic augmentation to images.
This property allows us to use DJMix for various image recognition tasks.\n3 DJMIX FOR ROBUST DATA AUGMENTATION\nA CNN model f_θ: R^D → R^{D'} is usually trained to minimize the task loss ℓ(f_θ(x), y), where x ∈ R^D is an input image and y ∈ R^{D'} is its target. When the task is a D'-category classification task, y is a one-hot vector and ℓ is cross-entropy.\nDJMix uses a pair of loss functions, the task loss ℓ(f_θ(x̂), y) and the consistency loss d(f_θ(x̂), f_θ(x)). CNN models are then trained to minimize\n\nℓ(f_θ(x̂), y) + γ d(f_θ(x̂), f_θ(x)), (1)\n\nwhere γ is a positive coefficient, x̂ ∈ R^D is a discretized image of the input image x, which we describe in Section 3.1, and d is a divergence, such as the Jensen–Shannon divergence (Section 3.2). We discuss why DJMix improves robustness in Sections 3.3 and 3.4, both theoretically and empirically.\n3.1 DISCRETIZATION OF IMAGES\nDJMix discretizes each input image x into g(x), where g: R^D → R^D is a discretizing autoencoder (DAE) whose bottleneck is discretized. Specifically, we used the Vector-Quantized Variational AutoEncoder used by van den Oord et al. (2017) and Razavi et al. (2019). This DAE g has a bottleneck of dimension C and discretizes the features by vector quantization with a codebook size of 2^K. The DAE is pretrained on training data to minimize E_x ||g(x) − x||_2^2 in an unsupervised manner.\nAs we will show in Section 5, mixing each input image with its discretized version improves robustness. More precisely, instead of using x̂ = g(x), we use\n\nx̂ = βx + (1 − β)g(x), (2)\n\nwhere β ∈ [0, 1] is sampled from a random distribution. Following Zhang et al. (2018), we adopt a Beta distribution. Although this mixing strategy is similar to that of AugMix, some differences exist. A major difference is the discrepancy between x and x̂: because AugMix applies geometric and color-enhancing operations to obtain x̂, its appearance differs from x, whereas DJMix yields an x̂ nearly identical to x. A minor difference is the task loss: DJMix uses ℓ(f_θ(x̂), y), whereas AugMix uses ℓ(f_θ(x), y). We will analyze this difference in Section 5.\n3.2 CONSISTENCY LOSS\nThe consistency loss d(f_θ(x̂), f_θ(x)) forces a CNN model to map x and x̂ closely and makes these representations indistinguishable. Following Hendrycks et al. (2020), we use the Jensen–Shannon
However, without the constraint, z is likely to contain unnecessary details of the input, and the model then learns a vulnerable representation (Alemi et al., 2017; Fisher & Alemi, 2020). Importantly, DJMix introduces this constraint to improve robustness by ignoring task-irrelevant details. For the following theorem, we assume that β in Equation (2) is 0, and that fθ and g have enough capacity to drive the training losses to 0.\nTheorem 1. Let z be fθ(x). After convergence of the model fθ trained with DJMix, the mutual information is constrained by the logarithm of the codebook size, i.e.,\nI(x, z; θ) ≤ K. (3)\nProof. After convergence, d(fθ(x̂), fθ(x)) becomes 0, or equivalently, fθ(x̂) = fθ(x) by the assumption. x is quantized into a codeword x̂ ∈ {1, 2, . . . , 2K} in the DAE g. Therefore, we obtain\nK = H(Uniform{1, 2, . . . , 2K}) (H: entropy)\n= H(x̂)\n≥ H(x̂) − H(x̂ | fθ(x̂))\n= I(fθ(x̂), x̂)\n≥ I(fθ(x), x) (by the Data Processing Inequality)\n= I(x, z).\n3.4 FROM THE FOURIER SENSITIVITY PERSPECTIVE\nFigure 2 presents the sensitivity of CNN models trained with and without DJMix to additive noise of Fourier-basis vectors (Yin et al., 2019). Here, we used a WideResNet trained on CIFAR10. As can be seen, DJMix improves robustness to a wide range of frequencies: from lower frequencies, depicted in the center area, to higher frequencies, which appear at the edges. These results suggest why CNN models trained with DJMix are robust to noisy input images. The experiments discussed in Section 4 further demonstrate the empirical robustness of DJMix.\n4 EXPERIMENTS AND RESULTS\nIn this section, we present experimental results. We first introduce the experimental settings and the new datasets, Segmentation-C and Detection-C, whose input images are artificially corrupted to measure robustness. Then, we present empirical results and comparisons with other methods in Section 4.1 for classification, Section 4.2 for semantic segmentation, and Section 4.3 for detection. We conducted each experiment three times with different random seeds and report the averaged values, except for the ImageNet experiments. We describe additional details of the experiments in Appendix B.\nIMPLEMENTATION\nWe implemented DJMix as well as the CNN models using PyTorch (Paszke et al., 2019) and used FAISS (Johnson et al., 2017) to speed up the nearest neighbor search in the DAE. We used the classification models of Hendrycks et al. (2020)1. For the segmentation and detection tasks, we used DeepLab-v3 and Faster-RCNN from torchvision2, whose backbone networks are ResNet-50 (He et al., 2016) pretrained on ImageNet (Russakovsky et al., 2015).\nThe DAE is pretrained on each dataset for the classification task and on ImageNet for the other tasks. We set the dictionary size to 512, i.e., K = 9, following Razavi et al. (2019). We set the parameters of the Beta distribution (β0, β1) for mixing in Equation (2) to (1.0, 0.5), and the coefficient γ for the consistency loss to 1.0.\nDJMix is a task-agnostic method and can improve robustness by itself. Additionally, DJMix can be combined with task-specific data augmentation. We introduce a DJMix variant that applies random data augmentation (DJMix+RA), consisting of AugMix’s augmentation operations. We describe more details of RA in Appendix B.5.\nDATASETS\nFor classification, we used three datasets, CIFAR10, CIFAR100 (Krizhevsky, 2009), and ImageNet (Russakovsky et al., 2015), consisting of 10, 100, and 1,000 categories, respectively.
We trained CNN models on clean training sets and evaluated them on the accompanying clean test sets. We also evaluated the models on the corrupted test sets (CIFAR10-C, CIFAR100-C, and ImageNet-C) proposed by Hendrycks & Dietterich (2019), which were created to measure the behavior of CNN models under 15 common corruptions.\nTo benchmark the robustness in segmentation and detection tasks, we created the Segmentation-C and Detection-C datasets from PASCAL VOC-2012 (Everingham et al., 2015). Namely, we corrupted the images of the PASCAL VOC-2012 test set for segmentation and the PASCAL VOC-2012 validation set for detection using 10 corruption operations from ImageNet-C: gaussian_noise, shot_noise, impulse_noise, snow, frost, fog, brightness, contrast, pixelate, and jpeg_compression. We omitted the five blur operations, namely defocus_blur, glass_blur, motion_blur, zoom_blur, and gaussian_blur, because the expected outputs for segmentation and detection under such corruptions are not well defined. Examples of Segmentation-C are presented in Figure 3. Following convention, we trained models on the augmented dataset of PASCAL-VOC (Hariharan et al., 2011) for segmentation and on the union of the train and validation sets of VOC-2007 and VOC-2012 for detection. Similar to Detection-C, Michaelis et al. (2019) also created corrupted test images of detection datasets for autonomous driving.\n1https://github.com/google-research/augmix\n2https://github.com/pytorch/vision/tree/v0.7.0\nMETRICS\nFor the classification task, we use error rates as the metric on the original test sets. On the corrupted data, we measure the corrupted error rate E_{c,s}, where c is the corruption type, e.g., Gaussian noise, and s is the severity level, and report its statistics. Precisely, we use the average of E_{c,s} over all corruption types and severities for CIFAR10-C and CIFAR100-C, and the Corruption Error CE_c = E_s[E_{c,s}] / E_s[E_{c,s}^{AlexNet}] for ImageNet-C, following Hendrycks & Dietterich (2019), where E_{c,s}^{AlexNet} is the error rate of AlexNet (Krizhevsky et al., 2012). For the segmentation task, we report the mean intersection over union (mIoU) on the clean data. On Segmentation-C, we use the corrupted mIoU I_{c,s} and report the severity-averaged mIoU E_s[I_{c,s}]. Similarly, for the detection task, we report the mean average precision (mAP) on the clean data and the severity-averaged corrupted mAP E_s[A_{c,s}], where A_{c,s} is the corrupted mAP, on Detection-C. A sketch of these aggregations is shown below.\n4.1 CLASSIFICATION\nWe trained models on CIFAR10, CIFAR100, and ImageNet. For CIFAR10 and CIFAR100, we used DenseNet-BC (k = 12, d = 100) (Huang et al., 2017), WideResNet-28-2 (Zagoruyko & Komodakis, 2016), and ResNeXt-29 (Xie et al., 2017). For ImageNet, we used ResNet-50 (He et al., 2016).\nThe results on CIFAR10 and CIFAR100 are presented in Table 1, with comparisons to the baseline, mixup (Zhang et al., 2018), AugMix, and AugMix without geometric transformations (GTs). Table 1 shows that AugMix without GTs degrades performance on both clean and corrupted data relative to AugMix, indicating that the robustness of AugMix heavily depends on GTs. DJMix shows balanced performance between clean and corrupted data without such GTs. DJMix with RA further decreases error rates on corrupted datasets, as well as on clean datasets. We present the results on ImageNet in Table 2. Again, DJMix decreases Corruption Errors, particularly when strong data augmentation is introduced (DJMix+RA).\n4.2 SEMANTIC SEGMENTATION\nWe trained DeepLab-v3 (Chen et al., 2017) on PASCAL-VOC.
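To make the aggregation above concrete, the following NumPy sketch computes the averaged corrupted error and the mean Corruption Error from a hypothetical table of per-corruption, per-severity error rates; the array names and values are illustrative, not results from these experiments.

```python
import numpy as np

# Hypothetical per-corruption, per-severity error rates E_{c,s};
# shape (num_corruptions, num_severities), values are illustrative.
err = np.random.uniform(0.2, 0.6, size=(15, 5))          # 15 corruptions, 5 severities
alexnet_err = np.random.uniform(0.5, 0.9, size=(15, 5))  # AlexNet baseline (ImageNet-C)

# CIFAR10-C / CIFAR100-C: average the corrupted error over all (c, s).
avg_err = err.mean()

# ImageNet-C: Corruption Error CE_c = E_s[E_{c,s}] / E_s[E_{c,s}^AlexNet],
# typically summarized as the mean over corruption types (mCE).
ce = err.mean(axis=1) / alexnet_err.mean(axis=1)
print(f"avg corrupted error: {avg_err:.3f}, mCE: {ce.mean():.3f}")
```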
The logits before upsampling are used for the consistency loss.\nTable 3 shows the mIoU comparison of the baseline, AugMix without GTs (AugMix∗), DJMix, and DJMix with random augmentation without GTs (DJMix+RA∗). We omitted GTs because defining the targets for mixed images to which different GTs have been applied is not trivial for segmentation and detection tasks. AugMix w/o GT uses pairs of original and augmented images, i.e., the width is set to 2. As can be seen, DJMix improves the robustness, especially when combined with extra data augmentation. In some cases, such as Gaussian noise, shot noise, and impulse noise, DJMix+RA markedly enhances the performance over DJMix and AugMix w/o GT, which implies the importance of combining task-specific and task-agnostic augmentation in practice.\n4.3 DETECTION\nWe trained Faster-RCNN (Ren et al., 2015) on PASCAL-VOC. The consistency loss between the output logits of the backbone network is used for training. Table 4 shows that DJMix yields better performance on almost all corruption types. As in semantic segmentation, we compare the baseline, AugMix∗, DJMix, and DJMix+RA∗. Similarly to semantic segmentation, DJMix markedly improves the robustness in the detection task.\n5 ANALYSIS\n5.1 ABLATION STUDIES\nDESIGN OF TASK LOSS: The task loss of DJMix presented in Equation (1) is ℓ(fθ(x̂), y), but ℓ(fθ(x), y), as in AugMix, is also a possible choice. We compare these choices in Table 5 (a, b). ℓ(fθ(x̂), y) improves the robustness compared with ℓ(fθ(x), y).\nCHOICE OF CONSISTENCY LOSS: DJMix uses the JS divergence as the consistency loss, but other divergences can also be used as the loss function. Here, we compare the performance when the JS divergence is replaced with the KL divergence or the L2 distance. As can be seen from Table 5 (a, c), JS and KL show similar performance, whereas L2 degrades performance on corrupted data.\nEFFECT OF DISCRETIZATION: We verify the effect of the discretization in DJMix by substituting a standard autoencoder for the DAE. Namely, we removed the vector quantization modules of the DAE and pretrained the AE on the training data to minimize the reconstruction error, as for the DAE. Table 5 (a, d) shows that discretization improves CNNs’ robustness, as expected from the Information Bottleneck perspective presented in Section 3.3.\nEFFECT OF MIXING: Table 5 (e) shows test error rates of DJMix without mixing, i.e., β = 0 in Equation (2), where only discretized images are used. The results show that mixing is indispensable to retain the performance on clean data. We present further experiments on the Beta parameters in Appendix A.\n5.2 COMPUTATIONAL OVERHEAD OF DJMIX\nWe find that the computational overhead introduced by the DAE is negligible. However, the number of forward passes affects the training time. For instance, standard training on CIFAR10 using WideResNet for 200 epochs requires approximately 1 hour in our environment. DJMix, with two forward passes per update, takes about 2 hours, and AugMix, with three forward passes per update, takes about 3 hours. Importantly, like AugMix, DJMix does not modify the components of CNNs; therefore, these methods do not affect test-time speed, which is preferable for real-world applications.\n6 CONCLUSION\nIn this paper, we have proposed DJMix, a novel task-agnostic approach to make CNN models robust to test-time corruption. To achieve task-agnostic robustness, we have used an autoencoder with a discretization bottleneck.
Unlike previous approaches, the image modification of DJMix does not affect the appearance of images, which is useful for non classification tasks. Experiments have shown that DJMix improves the robustness of CNN models to input noise in semantic segmentation and detection, in addition to classification. We have found that combining task-specific and taskagnostic augmentation methods further improves performance on noisy images. We hope that data augmentation for robustness, including DJMix, bridges research and the real-world practice of deep learning.\nREFERENCES Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. Deep variational information\nbottleneck. In ICLR, 2017.\nLiang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv, 2017.\nEkin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V. Le. Autoaugment: Learning augmentation strategies from data. In CVPR, 2019.\nTerrance DeVries and Graham W. Taylor. Improved Regularization of Convolutional Neural Networks with Cutout. arXiv.\nM Everingham, S M A Eslami, L VanGool, C K I Williams, J Winn, and A Zisserman. The Pascal Visual Object Classes Challenge: A Retrospective. In ICCV, 2015.\nIan Fisher and Alexander A. Alemi. Ceb improves model robustness. In ICLR, 2020.\nRobert Geirhos, Heiko H. Schütt, Carlos R. Medina Temme, Matthias Bethge, Jonas Rauber, and Felix A. Wichmann. Generalisation in humans and deep neural networks. In NeurIPS, 2018.\nRobert Geirhos, Claudio Michaelis, Felix A. Wichmann, Patricia Rubisch, Matthias Bethge, and Wieland Brendel. Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In ICLR, 2019.\nIan J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and Harnessing Adversarial Examples. arXiv, 2014.\nKeren Gu, Brandon Yang, Jiquan Ngiam, Quoc Le, and Jonathon Shlens. Using Videos to Evaluate Image Model Robustness. arXiv, 2019.\nB Hariharan, P Arbeláez, L Bourdev, S Maji, and J Malik. Semantic contours from inverse detectors. In ICCV, 2011.\nKaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016.\nDan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In ICLR, 2019.\nDan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty. In ICLR, 2020.\nGao Huang, Zhuang Liu, Kilian Q. Weinberger, and Laurens van der Maaten. Densely Connected Convolutional Networks. In CVPR, 2017.\nAndrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. In NeurIPS, 2019.\nJason Jo and Yoshua Bengio. Measuring the tendency of cnns to learn surface statistical regularities. arXiv, 2017.\nJeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with GPUs. arXiv, 2017.\nAlex Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Technical report, 2009.\nAlex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In NIPS, 2012.\nIlya Loshchilov and Frank Hutter. SGDR: Stochastic Gradient Descent with Warm Restarts. In ICLR, 2016.\nAleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 
Towards Deep Learning Models Resistant to Adversarial Attacks. In ICLR, 2018.\nClaudio Michaelis, Benjamin Mitzkus, Robert Geirhos, Evgenia Rusak, Oliver Bringmann, Alexander S. Ecker, Matthias Bethge, and Wieland Brendel. Benchmarking robustness in object detection: Autonomous driving when winter is coming. arXiv, 2019.\nAdam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In NeurIPS, 2019.\nAli Razavi, Aäron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with VQ-VAE-2. In NeurIPS, 2019.\nShaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In NIPS, 2015.\nOlga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.\nChristian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In ICLR, 2014.\nNaftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. In The 37th Annual Allerton Conference on Communication, Control, and Computing, 2000.\nYuji Tokozume, Yoshitaka Ushiku, and Tatsuya Harada. Between-class Learning for Image Classification. In ICLR, 2018.\nAaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural Discrete Representation Learning. In NIPS, 2017.\nIgor Vasiljevic, Ayan Chakrabarti, and Gregory Shakhnarovich. Examining the Impact of Blur on Recognition by Convolutional Networks. arXiv, 2016. URL http://arxiv.org/abs/ 1611.05760.\nHaohan Wang, Xindi Wu, Zeyi Huang, and Eric P. Xing. High-frequency component helps explain the generalization of convolutional neural networks. In CVPR, June 2020.\nSaining Xie, Ross Girshick, and Piotr Doll. Aggregated Residual Transformations for Deep Neural Networks . In CVPR, 2017.\nDong Yin, Raphael Gontijo Lopes, Jon Shlens, Ekin Dogus Cubuk, and Justin Gilmer. A fourier perspective on model robustness in computer vision. In NeurIPS, 2019.\nSergey Zagoruyko and Nikos Komodakis. Wide Residual Networks. In BMVC, 2016.\nHongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond Empirical Risk Minimization. In ICLR, 2018.\nRichard Zhang. Making convolutional networks shift-invariant again. In ICML, 2019.\nZhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data augmentation. In AAAI, 2020.\nA ADDITIONAL ABLATION STUDIES\nA.1 THE EFFECT OF BETA DISTRIBUTION PARAMETERS\nMain experiments used (β0,β1) = (1.0, 0.5) of Beta distribution for mixing x̂ = βx + (1 − β)g(x), where β ∼ Beta(β0,β1). Figure 5 shows test error rates on different combinations of the parameters using WideResNet. Larger β0 and smaller β1 yield x̂ close to x, and vice versa, which is reflected in the results on CIFAR10, i.e., clean data. 
We used (β0, β1) = (1.0, 0.5), which balances performance on clean and corrupted data.\nB EXPERIMENTAL SETTINGS DETAILS\nThis section describes additional experimental settings.\nB.1 DISCRETIZING AUTOENCODER\nWe trained the autoencoder for 200 epochs to minimize the reconstruction error; its codebook is updated by an exponential moving average. The hyperparameters are identical to those of Razavi et al. (2019).\nB.2 CLASSIFICATION\nWe trained models on CIFAR10, CIFAR100, and ImageNet. For CIFAR10 and CIFAR100, we used DenseNet-BC (k = 12, d = 100) (Huang et al., 2017), WideResNet-28-2 (Zagoruyko & Komodakis, 2016), and ResNeXt-29 (Xie et al., 2017). We trained these networks for 200 epochs using stochastic gradient descent with a momentum of 0.9, setting the initial learning rate to 0.1 and decaying it by cosine annealing with warm restarts (Loshchilov & Hutter, 2016). We set the weight decay to 1 × 10−4 and the batch size to 128. We used data augmentation of random horizontal flipping, random cropping, and random erasing (Zhong et al., 2020) by default. For ImageNet, we used ResNet-50 (He et al., 2016) and trained it for 100 epochs using SGD with a momentum of 0.9 and a weight decay of 1 × 10−4. We set the batch size to 1,024 and the initial learning rate to 0.4, decayed at the 30th, 60th, and 90th epochs. We used random cropping and horizontal flipping as the base data augmentation.\nWhen training ResNet-50 on ImageNet, we used automatic mixed precision (AMP) implemented in PyTorch v1.6 to save GPU memory. We also used AMP for the semantic segmentation and detection tasks.\nB.3 SEMANTIC SEGMENTATION\nWe trained DeepLab-v3 (Chen et al., 2017) for 30 epochs with a batch size of 32 and a learning rate of 1.0 × 10−3. We used SGD with a momentum of 0.9 and set its initial learning rate to 0.02. The learning rate is multiplied by a factor of (1 − iteration / total iterations)^0.9, as in Chen et al. (2017). See https://github.com/pytorch/vision/tree/master/references/segmentation for further details.\nB.4 DETECTION\nWe trained Faster-RCNN (Ren et al., 2015) for 26 epochs with a batch size of 32 and a learning rate of 1.0 × 10−3. The learning rate is divided by 10 at the 16th and 22nd epochs, and the first 1,000 iterations are a warmup period. See https://github.com/pytorch/vision/tree/master/references/detection for further details.\nB.5 RANDOM AUGMENTATION\nWe used random augmentation (RA) as task-specific data augmentation, which is orthogonal to DJMix. We basically followed the data augmentation module of AugMix; the only difference is the width. Whereas AugMix sets the width to 3, DJMix uses a width of 1, i.e., only a single stream of operations is applied to each input image. Each image augmented by RA is used as the input x to DJMix.
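Putting Equations (1) and (2) and the RA pipeline together, the following PyTorch sketch illustrates one DJMix training step. The pretrained DAE `dae`, the classifier `model`, and the batch `(x, y)` are hypothetical placeholders, and β is sampled once per batch for simplicity, so this is an illustration of the method rather than a released implementation.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def js_divergence(p, q):
    # Two-distribution Jensen-Shannon divergence between categorical outputs.
    m = 0.5 * (p + q)
    return 0.5 * (F.kl_div(m.log(), p, reduction="batchmean")
                  + F.kl_div(m.log(), q, reduction="batchmean"))

def djmix_step(model, dae, x, y, gamma=1.0, beta_params=(1.0, 0.5)):
    # Eq. (2): mix the input with its discretized reconstruction g(x).
    with torch.no_grad():
        x_dae = dae(x)                         # discretized image g(x)
    beta = Beta(*beta_params).sample()         # beta in [0, 1], per batch here
    x_hat = beta * x + (1.0 - beta) * x_dae

    p_hat = F.softmax(model(x_hat), dim=1)
    p = F.softmax(model(x), dim=1)

    # Eq. (1): task loss on the mixed image + gamma * JS consistency loss.
    loss = F.cross_entropy(torch.log(p_hat), y) + gamma * js_divergence(p_hat, p)
    return loss
```

Swapping `js_divergence` for a KL divergence or an L2 distance between the two outputs reproduces the consistency-loss ablation of Section 5.1.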
2020
DJMIX: UNSUPERVISED TASK-AGNOSTIC AUGMEN-
SP:470fdc4e61564e09d538f7ecd1225494e08416f2
[ "This paper proposes LLBoost that enables adjusting the last linear layer without impacting the training accuracy under the assumption that the last linear layer is in an over-parametrized situation. When the last layer is not over-parametrized, LLBoost first applies the low rank approximation to the training feature matrix through the SVD decomposition, which may affect the original training accuracy. The reason why LLBoost does not change the training accuracy is explained as follows: In an over-parametrized noiseless linear regression, a solution of a linear system $y = wX$ obtained by the gradient descent with an initial value of $w^{(0)}$ is given in a closed form of $\\hat{w} = w^{(0)} (I - X(X^\\top X)^\\dagger X^\\top) + yX^\\dagger$. Therefore, we can compute a solution of $y = wX$ by simply generating $w^{(0)}$ randomly and applying this formula. It is also experimentally verified that LLBoost can adjust the last linear layer without impacting the training accuracy (after appriximated with SVD when necessary). The authors also present theoretical results that sampling $w^{(0)}$ uniformly on the hyper-shpere of appropriate radius leads to a solution that is better than the minimum norm solution ($yX^\\dagger$) with constant probability." ]
While deep networks have produced state-of-the-art results in several domains from image classification to machine translation, hyper-parameter selection remains a significant computational bottleneck. In order to produce the best possible model, practitioners often search across random seeds or use ensemble methods. As models get larger, any method to improve neural network performance that involves re-training becomes intractable. For example, computing the training accuracy of FixResNext-101 (829 million parameters) on ImageNet takes roughly 1 day when using 1 GPU. In this work, we present LLBoost, a theoretically-grounded, computationally-efficient method to boost the validation accuracy of pre-trained over-parameterized models without impacting the original training accuracy. LLBoost adjusts the last layer of a neural network by adding a term that is orthogonal to the training feature matrix, which is constructed by applying all layers but the last to the training data. This allows us to move along the learned latent dimensions to control specific properties of the generated data with great precision. We provide an efficient implementation of LLBoost on the GPU and demonstrate that LLBoost, run using only 1 GPU, improves the test/validation accuracy of pre-trained models on CIFAR10, ImageNet32, and ImageNet. In the over-parameterized linear regression setting, we prove that LLBoost reduces the generalization error of any interpolating solution with high probability without affecting training error.
[]
[ { "authors": [ "Peter L. Bartlett", "Philip M. Long", "Gábor Lugosi", "Alexander Tsigler" ], "title": "Benign overfitting in linear regression", "venue": "Proceedings of the National Academy of Sciences,", "year": 2020 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Siyuan Ma", "Soumik Mandal" ], "title": "Reconciling modern machinelearning practice and the classical bias–variance trade-off", "venue": "Proceedings of the National Academy of Sciences,", "year": 2019 }, { "authors": [ "Mikhail Belkin", "Daniel Hsu", "Ji Xu" ], "title": "Two models of double descent for weak features", "venue": null, "year": 1903 }, { "authors": [ "Yoshua Bengio" ], "title": "Practical recommendations for gradient-based training of deep architectures", "venue": "CoRR, abs/1206.5533,", "year": 2012 }, { "authors": [ "K Bibas", "Y. Fogel", "M. Feder" ], "title": "A new look at an old problem: A universal learning approach to linear regression", "venue": null, "year": 1905 }, { "authors": [ "Leo Breiman", "David Freedman" ], "title": "How many variables should be entered in a regression equation", "venue": "Journal of the American Statistical Association,", "year": 1983 }, { "authors": [ "Heinz Werner Engl", "Martin Hanke", "Andreas Neubauer" ], "title": "Regularization of inverse problems, volume 375", "venue": "Springer Science & Business Media,", "year": 1996 }, { "authors": [ "Dumitru Erhan", "Yoshua Bengio", "Aaron Courville", "Pierre-Antoine Manzagol", "Pascal Vincent", "Samy Bengio" ], "title": "Why Does Unsupervised Pre-training Help Deep Learning", "venue": "Journal of Machine Learning Research (JMLR),", "year": 2010 }, { "authors": [ "Yoav Freund", "Robert Schapire" ], "title": "A short introduction to boosting", "venue": "Journal of Japanese Society for Artificial Intelligence,", "year": 1999 }, { "authors": [ "Walter Gautschi" ], "title": "Some Elementary Inequalities Relating to the Gamma and Incomplete Gamma Function", "venue": "Journal of Mathematics and Physics,", "year": 1959 }, { "authors": [ "Seyyed Hossein Hasanpour", "Mohammad Rouhani", "Mohsen Fayyaz", "Mohammad Sabokrou" ], "title": "Lets keep it simple, using simple architectures to outperform deeper and more complex architectures", "venue": "arXiv preprint arXiv:1608.06037,", "year": 2016 }, { "authors": [ "Trevor Hastie", "Andrea Montanari", "Saharon Rosset", "Ryan J Tibshirani" ], "title": "Surprises in highdimensional ridgeless least squares interpolation", "venue": "arXiv preprint arXiv:1903.08560,", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Gao Huang", "Yixuan Li", "Geoff Pleiss", "Zhuang Liu", "John E. Hopcroft", "Kilian Q. 
Weinberger" ], "title": "Snapshot ensembles: Train 1, get m for free", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Like Hui", "Mikhail Belkin" ], "title": "Evaluation of Neural Architectures Trained with Square Loss vs Cross-Entropy in Classification Tasks", "venue": "arXiv preprint arXiv:2006.07322,", "year": 2020 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Master’s thesis, University of Toronto,", "year": 2009 }, { "authors": [ "Yann LeCun", "Léon Bottou", "Yoshua Bengio", "Patrick Haffner" ], "title": "Gradient-based learning applied to document recognition", "venue": "Proceedings of the IEEE,", "year": 1998 }, { "authors": [ "Shengquiao Li" ], "title": "Concise formulas for the area and volume of a hyperspherical cap", "venue": "Asian Journal of Mathematics and Statistics,", "year": 2011 }, { "authors": [ "Marco Loog", "Alexander Mey", "Jesse H. Krijthe", "David M.J. Tax" ], "title": "A brief prehistory of double descent", "venue": "Proceedings of the National Academy of Sciences,", "year": 2020 }, { "authors": [ "Partha P. Mitra" ], "title": "Understanding overfitting peaks in generalization error: Analytical risk curves for l2 and l1 penalized interpolation", "venue": null, "year": 1906 }, { "authors": [ "Vidya Muthukumar", "Kailas Vodrahalli", "Vignesh Subramanian", "Anant Sahai" ], "title": "Harmless interpolation of noisy data in regression", "venue": "IEEE Journal on Selected Areas in Information Theory,", "year": 2020 }, { "authors": [ "Travis E Oliphant" ], "title": "A guide to NumPy, volume 1", "venue": "Trelgol Publishing USA,", "year": 2006 }, { "authors": [ "Aaron van den Oord", "Nal Kalchbrenner", "Koray Kavukcuglu" ], "title": "Pixel recurrent neural networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga", "Alban Desmaison", "Andreas Kopf", "Edward Yang", "Zachary DeVito", "Martin Raison", "Alykhan Tejani", "Sasank Chilamkurthy", "Benoit Steiner", "Lu Fang", "Junjie Bai", "Soumith Chintala" ], "title": "Pytorch: An imperative style, highperformance deep learning library", "venue": "Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein", "Alexander C. 
Berg", "Fei-Fei Li" ], "title": "ImageNet large scale visual recognition challenge", "venue": "International Journal of Computer Vision,", "year": 2015 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Hugo Touvron", "Andrea Vedaldi", "Matthijs Douze", "Hervé Jégou" ], "title": "Fixing the train-test resolution discrepancy", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2019 }, { "authors": [ "Stefan Van Der Walt", "S Chris Colbert", "Gael Varoquaux" ], "title": "The numpy array: a structure for efficient numerical computation", "venue": "Computing in Science & Engineering,", "year": 2011 } ]
[ { "heading": null, "text": "In this work, we present LLBoost, a theoretically-grounded, computationallyefficient method to boost the validation accuracy of pre-trained overparameterized models without impacting the original training accuracy. LLBoost adjusts the last layer of a neural network by adding a term that is orthogonal to the training feature matrix, which is constructed by applying all layers but the last to the training data. We provide an efficient implementation of LLBoost on the GPU and demonstrate that LLBoost, run using only 1 GPU, improves the test/validation accuracy of pre-trained models on CIFAR10, ImageNet32, and ImageNet. In the over-parameterized linear regression setting, we prove that LLBoost reduces the generalization error of any interpolating solution with high probability without affecting training error." }, { "heading": "1 INTRODUCTION", "text": "Over the past decade, deep networks have produced a number of state-of-the-art results including surpassing human performance on the ImageNet classification task (26; 14). However, tuning hyperparameters to produce the best possible neural network in a given application remains a computationally expensive task. State-of-the-art results often involve selecting the best model using multiple random seeds (14; 27; 12; 28) or ensembling (15), and training even a single model can take several days even when using multiple GPUs. Hence, it is critical to identify computationally efficient approaches to improve the performance of pre-trained models without the need of re-training them.\nIn this work, we present LLBoost, a theoretically-grounded, computationally-efficient method to boost the validation accuracy of pre-trained, over-parameterized models without impacting the training accuracy. Figure 1 provides an overview of our method as well as the main results. As shown in Figure 1A, our method adjusts the last fully-connected layer of a neural network by selecting the best performing perturbation out of several orthogonal to the training feature matrix, which is constructed by applying all but the last layer to the training data. In Figure 1B, we provide an example showing how our method applied to a model trained under a poorly chosen random seed can boost validation accuracy comparable to that of a model trained under a better seed. Lastly, Figure 1C shows that our method can significantly improve the validation accuracy of pre-trained neural networks on large datasets using a fraction of the training time.\nThe intuition for our method is based on characterizing the benefit of random initialization in overparameterized linear regression. In particular, consider a dataset (X, y) ⊂ Rd×n×R1×n with n < d for which there exists w∗ ∈ R1×d such that y = w∗X . In order to estimate w∗ from the data, we use gradient descent with learning rate η and initialization w(0) to minimize the squared loss, i.e. to solve:\narg min w∈Rd\n1 2 ‖y − wX‖2.\nIt is well-known (8) that gradient descent converges to the following closed-form solution:\nwr = w (0)(I −X(XTX)†XT ) + yX†,\nwhere A† is the pseudo-inverse of A (see Appendix A for a proof). In this work, we prove that when n < d, sampling w(0) on a hyper-sphere leads to wr with lower generalization error than the minimum norm solution yX† with constant probability. 
Since the last layer of modern deep networks is usually linear, we apply this result from regression to the last layer of networks to arrive at our method, LLBoost.\nWe end the introduction with a summary of our main contributions:\n1. We present LLBoost, a computationally efficient method for boosting the validation accuracy of pre-trained neural networks without affecting training accuracy. Additionally, we provide an efficient implementation of LLBoost on the GPU.\n2. We provide a wealth of empirical evidence that LLBoost improves the validation accuracy of pre-trained neural networks and prove that it does not affect training accuracy for over-parameterized models.\n3. We provide evidence that LLBoost yields a computational advantage over random seed search for training neural networks.\n4. In the over-parameterized linear regression setting, we prove that LLBoost reduces the test error of any interpolating solution with high probability without affecting training error." }, { "heading": "2 RELATED WORK", "text": "Understanding the benefits of over-parameterization is a recent topic of major interest in machine learning. (2) introduced the double descent curve showing that when increasing model capacity past the interpolation threshold, test error can drop a second time. (20) noted that this phenomenon had been noticed empirically several decades prior to the work of (2). A line of recent work has provided a theoretical analysis of double descent in linear regression (13; 3; 1; 22; 5; 21). In particular, (13; 3) analyzed the generalization error of the minimum norm solution in over-parameterized linear models. These works proved that the minimum norm solution can yield lower generalization error in the over-parameterized regime than in the under-parameterized regime. Our theoretical analysis primarily builds on that of (3), but we analyze the generalization error of random-initialization interpolating solutions instead of the minimum-norm solution.\nVarious methods, including random seed search and ensembling, are standardly used to improve the performance of machine learning models through training. (4) recommends using 5-10 random seeds if computational power is available, and this is typically done in state-of-the-art models; for example, (14) considered 5 random seeds for ResNet-110 and (12) considered at least 2 random seeds on CIFAR10 (17). (9) rigorously analyzed the impact of random seeds on model performance by considering between 50 and 400 random seeds for models of varying depth on MNIST (18). Figure 1 from their work demonstrates that random seeds can affect the validation accuracy of models by up to a few percentage points on MNIST. Since this comes at a significant computational price, it is critical to identify methods that can obtain such boosts without having to perform random seed search, which is the topic of this paper.\nAnother popular approach to improving neural network performance is ensembling, for example via bagging (6) or boosting (10). Recently, (15) presented an approach to ensembling, which involved training a single network and saving the model parameters each time training stopped at a local minimum. (15) built upon (30), which introduced horizontal and vertical voting to ensemble a model across training epochs.\nLLBoost can be used in combination with all the above approaches to further boost their performance, since it only adjusts the last layer of a pre-trained model without requiring any training."
}, { "heading": "3 PRELIMINARIES AND METHOD", "text": "We now present the preliminaries relevant for formally introducing LLBoost. Since our method is derived from over-parameterized linear regression, we first describe this problem setting.\nNoiseless Regression. Let (X, y) ⊂ Rd×n×R1×n denote the training dataset where d is the number of dimensions and n is the number of samples. In noiseless linear regression there exists w∗ ∈ R1×d such that y = w∗X . In order to estimate w∗ from the data, we use gradient descent to minimize the squared loss. Formally, we solve:\narg min w∈Rd\n1 2 ‖y − wX‖2 (1)\nusing gradient descent with learning rate η, which proceeds according to the updates:\nw(t+1) = w(t) + η(y − w(t)X)XT .\nThe following well-known theorem (8) gives the solution, w(∞), for the objective (1) given by gradient descent.\nTheorem 1. Let (X, y) ⊂ Rd×n × R1×n with X of rank r < d, let λmax denote the largest eigenvalue ofXXT , and letA† denote the pseudo-inverse of a matrixA. Given initializationw(0) ∈ Rd, gradient descent used to solve:\narg min w∈Rd\n1 2 ‖y − wX‖2\nwith learning rate η < λ−1max converges to w (∞) = w(0)(I −X(XTX)†XT ) + yX†.\nWhen w(0) = 0 (zero initialization), w(∞) is the minimum `2-norm solution for solving the linear system wX = y. The following property of w(∞) is used to demonstrate that LLBoost does not impact training error.\nLemma 1. Let w(∞) = w(0)(I−X(XTX)†XT ) + yX†. Then, w(0)(I−X(XTX)†XT ) ⊥ yX†.\nThe proof follows directly by substituting inX = UΣV T given by the singular value decomposition (SVD). Lemma 1 implies that LLBoost does not affect training predictions since it only ever adds a component orthogonal to the span of the training feature matrix. We primarily focus on the noiseless regression setting, but we note that our theory directly extends to the Gaussian model from (3) and (7).\nWe now present a result from random matrix theory, which is at the basis of our results. The proof of the following lemma relies on the rotation invariance of the multivariate Gaussian distribution and is presented in Appendix B.\nLemma 2. If X ∈ Rd×n with columns x(i) ∼ N (0, Id×d), then the singular vectors of X are uniformly distributed on the unit sphere Sd−1. In particular, if x ∼ N (0, Id×d), then x‖x‖ is uniformly distributed on the unit sphere Sd−1. Method Overview. Given the above preliminaries, we present LLBoost in Algorithm 1. LLBoost takes in the following inputs: (1) the training feature matrix X; (2) the validation feature matrix Xt; (3) the validation labels yt; (4) the last layer weights w; (5) the number of samples to consider T ; and (6) the radius of the hyper-sphere to sample from γ. LLBoost begins by constructing the projection matrix P = I −X(XTX)†XT . Then, LLBoost samples z ∼ N (0, I), normalizes z and adds the perturbation γzP to w. LLBoost repeats this process and returns the perturbation that most improves the validation accuracy.\nRemarks. As shown in Figure 1A, X and Xt are constructed by applying all but the last layer of the neural network to the training/test data. If X is not full rank (as is the case for neural networks that are not over-parameterized), we can create a low rank X by truncating the lower singular values of X . This correction only mildly affects training error as is shown in Section 4. For classification settings, yt should be converted to 1-hot vectors. When the last layer has a bias, one can simply append a 1 to w and append an extra row of 1’s to X and Xt. 
In most of our experiments, we consider up to T = 100,000 samples. When selecting γ, one can use a binary search, starting with γ = 1/√d and either doubling or halving γ depending on whether the validation accuracy increased or decreased, respectively. However, as a simple heuristic one can start with γ = ‖w‖/√d (as recommended by Theorem 3). Lastly, when the network has multiple outputs (as in multi-class problems), we let the entries of the weight matrix be i.i.d. standard normal variables and then normalize by the Frobenius norm.\nLLBoost does not impact training error/accuracy. Since the perturbations constructed by LLBoost are always projected into the space orthogonal to the training features X, Lemma 1 implies that LLBoost does not alter the training predictions. This is an important aspect of our algorithm, since otherwise LLBoost could simply overfit the test data.\nAlgorithm 1 LLBoost (X, Xt, yt, w, T, γ)\nInput: X := rank r training feature matrix; Xt := val. feature matrix; yt := val. labels; w := last layer weights; T := number of samples; γ := radius of hyper-sphere\nOutput: wbest := adjusted last layer weights\n1: wbest = None\n2: accbest = 0; (1, d) = w.shape; U, Σ, V T ← SVD(X)\n3: P ← Id×d − U Ir×r UT\n4: for t ← 1 to T do\n5: z ← N(0, Id×d)\n6: z ← z/‖z‖\n7: wr ← γzP + w\n8: acc = GetValAcc(Xt, yt, wr)\n9: if acc > accbest then\n10: wbest = wr; accbest = acc\n11: end if\n12: end for\n13: return wbest\nGPU Implementation. We can easily remove the loop in Algorithm 1 by using the GPU, since the dominating computations are vector-matrix products. A full implementation is provided using PyTorch (25) and NumPy (23; 29) in the footnote below1. In practice, computing the validation accuracy for each sample of wr is the limiting factor in LLBoost. For large datasets, the functions used to compute the SVD and the projection matrix P need to be selected carefully. Note that with respect to the SVD, the reduced SVD is sufficient for our algorithm. While for the ImageNet training features from FixResNext-101 the full SVD would require constructing a matrix with roughly 10^12 entries, the reduced SVD only requires 10^8 entries.\n4 EMPIRICAL RESULTS\nIn Figure 2, we provide empirical evidence that LLBoost consistently improves the validation accuracy of state-of-the-art models on CIFAR10, ImageNet32 (24), and ImageNet. For models fine-tuned2 on ImageNet32 or CIFAR10, we provide all training details in Appendix C. Importantly, as proven by Lemma 1, we note that the training accuracy remains unchanged after applying LLBoost.\nHandling under-parameterized models. If n denotes the number of training samples, the size of the training feature matrix is 512 × n for ResNets-18 and 34, while it is 2048 × n for ResNet-50 and FixResNext-101 (28). Hence ResNet-18 and 34 are over-parameterized on the training set when n < 512, while ResNet-50 and FixResNext-101 are over-parameterized for n < 2048. When the training feature matrix, X, has full rank, we truncate the lower singular values of X in order to be able to search the space orthogonal to X. Formally, if X = UΣV T, we construct X̂ = U Σ̂ V T, where we zero out the bottom entries of Σ. When X̂ has rank k, we refer to it as the rank-k approximation of X.\nOur experiments are summarized as follows:\n1. On ImageNet32, we fine-tune pre-trained ResNets of varying depth on subsets of varying size of two classes (Kit-Fox vs. English-Setter). There are a total of 2600 examples (1300 of each class).
We then apply LLBoost to these pre-trained models and use a rank-12 approximation for the feature matrix when necessary. This set of experiments demonstrates that LLBoost improves transfer-learned models on small datasets.\n2. On CIFAR10, we fine-tune a pre-trained ResNet-18. We then apply LLBoost to the pre-trained network. In this setting, there are 50,000 training examples and so the feature matrix is full rank. To make the model over-parameterized, we instead use the rank-212 approximation. Remarkably, this approximation does not impact the original training accuracy.\n3. On ImageNet, we apply LLBoost to pre-trained FixResNext-101. ImageNet contains 1,281,167 training examples and the feature matrix is again full rank. In this case, we use the rank-1000 approximation, and the original training accuracy decreases from 95.193% to 94.924% when applying the original weights to the rank-1000 approximation.\n1https://anonymous.4open.science/r/b11fa900-efbf-45ae-aee8-a264d954cb51/\n2By fine-tuning, we mean training a pre-trained model on a new dataset. This is discussed further in the transfer learning tutorials provided by PyTorch (25).\nIn all settings, LLBoost improves validation accuracy. In addition, as proven by Lemma 1, there is no impact on training accuracy in the over-parameterized setting. In under-parameterized settings, such as FixResNext-101 on ImageNet, our experiments indicate only a minor decrease in training accuracy when using a low-rank approximation of X. In Appendix D, we additionally present: (1) the difference in `2/Frobenius norm between the LLBoost weights and the original weights; (2) the run-time of LLBoost; (3) the choice of γ used in the experiments. In Appendix F, we discuss how the rank was chosen when approximating full-rank feature matrices, and provide an analysis of how low-rank approximations affect train/validation accuracy.\nRemarks. In the last column of Figure 2, we also provide the performance of perturbing the last layer of a neural network with a standard distribution such as the standard normal. Note that such a standard normal perturbation significantly impacts the original training accuracy; in fact, for larger datasets such as ImageNet, the training and validation accuracies drop below 1%. Hence, without the projection operator, perturbations to the last layer can be devastating to network performance. In Appendix E, we also demonstrate that using standard normal initialization (instead of uniform on the hyper-sphere, as in LLBoost) can similarly reduce validation accuracy even when the projection operator is included. While we have thus far concentrated on validation accuracy, in Appendix G we show that in the setting of training, validation, and test splits, LLBoost not only improves validation accuracy, but also improves test accuracy without impacting training accuracy.\nImprovement over Random Seed Search. We demonstrate in Figure 3 that LLBoost provides a significant computational advantage over random seed search. In particular, in Figure 3 we compare the performance of fine-tuning pre-trained models on ImageNet32 and CIFAR10 using random seeds. Naturally, LLBoost can be applied to boost the performance of all these models. Moreover, Figure 3 illustrates that LLBoost can boost the performance of a model trained using a poor random seed to that of a model trained using a well-selected seed.
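As a companion to Algorithm 1 and the GPU implementation described in Section 3, the following batched PyTorch sketch removes the sampling loop; the tensor shapes and the simple thresholded binary-accuracy computation are illustrative assumptions, not the paper's released code, and X is assumed to have full column rank so the reduced SVD spans its column space.

```python
import torch

def llboost_batched(X, Xt, yt, w, T=10_000, gamma=0.01):
    # X: (d, n) train features; Xt: (d, m) val features; yt: (m,) 0/1 labels
    # for a binary head; w: (1, d) last-layer weights.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    X, Xt, w = X.to(device), Xt.to(device), w.to(device)
    yt = yt.to(device).bool()

    U, _, _ = torch.linalg.svd(X, full_matrices=False)  # reduced SVD suffices
    P = torch.eye(X.shape[0], device=device) - U @ U.T  # projector onto span(X)^perp

    Z = torch.randn(T, X.shape[0], device=device)
    Z = Z / Z.norm(dim=1, keepdim=True)                 # uniform on the unit sphere
    W = gamma * (Z @ P) + w                             # (T, d) candidate weights

    accs = (((W @ Xt) > 0.5) == yt).float().mean(dim=1) # (T,) validation accuracies
    best = accs.argmax()
    return W[best:best + 1], accs[best].item()
```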
Since LLBoost only takes a fraction of the training time, these experiments identify LLBoost as a computationally efficient alternative to random seed search." }, { "heading": "5 THEORETICAL RESULTS", "text": "We now present our derivation and analysis of LLBoost for over-parameterized noiseless linear regression. In the noiseless regression setting, we assume that the labels y ∈ R1×n are generated by the product of w∗ ∈ R1×d and the data X ∈ Rd×n. Let ŵr = w(0)(I − X(XTX)†XT) + yX† and ŵ = yX† denote the interpolating solutions of the linear system wX = y given by gradient descent from initialization w(0) and from 0, respectively. In this section we establish the following:\n1. Generalization bounds on ŵr.\n2. There exist infinitely many w(0) such that ‖ŵr − w∗‖ < ‖ŵ − w∗‖, i.e., there exist infinitely many random initializations that out-perform the minimum norm solution. In particular, we show that even if ‖w(0) − w∗‖ is large, ‖ŵr − w∗‖ can be arbitrarily close to 0.\n3. Sampling w(0) uniformly on a hyper-sphere of appropriate radius leads to ŵr out-performing the minimum norm solution with constant probability.\nThe following proposition (with proof in Appendix H) compares the generalization error of the random-initialization solution, ŵr, and the minimum norm solution, ŵ.\nProposition 1. Let X = UΣV T denote the SVD of X and let Σ⊥ = I − ΣΣ†. Then, (a) ‖ŵ − w∗‖ = ‖w∗UΣ⊥UT‖, (b) ‖ŵr − w∗‖ = ‖(w(0) − w∗)UΣ⊥UT‖ ≤ ‖w(0) − w∗‖.\nBy using Proposition 1 and extending the proof technique of (3), we establish the following generalization bounds for the solution starting from initialization w(0).\nTheorem 2. Assuming data x, x(i) ∼ N(0, Id×d) for i ∈ [n], then\n(1) Ex,X[(y − ŵx)2] = EX[‖ŵ − w∗‖2] = ‖w∗‖2 (1 − n/d),\n(2) Ex,X[(y − ŵrx)2] = EX[‖ŵr − w∗‖2] = ‖w(0) − w∗‖2 (1 − n/d).\nThe proof is presented in Appendix I. From Theorem 2, the out-of-sample performance of the solution initialized at w(0) is governed by the distance between w(0) and w∗, which matches intuition. While this result is in expectation over data drawn from an isotropic Gaussian, we now prove the remarkable fact that even if ‖w(0) − w∗‖ = c1 is large, ‖ŵr − w∗‖ can be any value between 0 and c1.\nProposition 2. Let r denote the rank of the data matrix X and let c1 ≥ c2 ≥ 0. Then there exists w(0) such that ‖w(0) − w∗‖ = c1 and ‖ŵr − w∗‖ = c2.\nThe full proof is presented in Appendix J. Importantly, Proposition 2 provides the following intuition for the benefit of random initialization in over-parameterized models. When X has rank r for r small, the space orthogonal to X is large, with dimension d − r. There is thus a priori no reason for the true w∗ to lie in the span of the training data. Hence, by sampling w(0) to add a term orthogonal to X, we can expect to obtain a boost in performance over the minimum norm solution. The following proposition and theorem present a sampling method for w(0) that provably provides a boost over the minimum norm solution with constant probability.\nProposition 3. Let Ud represent the uniform distribution on the unit sphere. Assume that the data x, x(i) ∼ N(0, Id×d) for i ∈ [n] and that w(0) ∼ Ud. Then,\nPw(0)(Ex,X[(y − ŵrx)2] ≤ Ex,X[(y − ŵx)2]) = [Γ(d/2) / (√π Γ((d−1)/2))] ∫_0^φ sin^{d−2} θ dθ, (2)\nwhere φ = cos−1(1/(2‖w∗‖)) and Γ(x) denotes the Gamma function, which satisfies Γ(x) = (x − 1)Γ(x − 1) with Γ(1) = 1 and Γ(1/2) = √π.\nThe proof is given in Appendix K. The benefit of Proposition 3 is that we can lower bound the right-hand side of Equation (2) by a constant greater than 0.
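A quick Monte Carlo check makes this constant concrete; the sketch below uses the ‖w∗‖ = 1, γ = 1/√d setting analyzed in Theorem 3 that follows, and estimates the improvement probability at roughly 0.31 (all names are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
d, trials = 1000, 20_000
w_star = np.zeros(d); w_star[0] = 1.0          # unit-norm w*; direction is irrelevant
gamma = 1.0 / np.sqrt(d)

Z = rng.standard_normal((trials, d))
U = Z / np.linalg.norm(Z, axis=1, keepdims=True)   # uniform on the unit sphere
w0 = gamma * U                                     # radius-gamma initializations

# Improvement event: ||w0 - w*|| < ||w*||  <=>  <w0, w*> > ||w0||^2 / 2.
improved = (w0 @ w_star) > gamma**2 / 2
print(improved.mean())   # approx 0.31 for large d
```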
This is computed explicitly in Theorem 3.\nTheorem 3. Let Zw(0) = Ex,X[(y − ŵx)2] − Ex,X[(y − ŵrx)2] denote the difference in expected generalization error between the minimum `2-norm solution and the random-initialization solution. If w(0) ∼ Ud(γ), ε ≥ 0, and κ = εγ + γ/(2‖w∗‖), then\n(1/π) cos−1(κ) − [√(d(d−1)) (√(2(d−4)) + 1) κ √(1 − κ2)] / [2√π (d−2)] ≤ Pw(0)(Zw(0) ≥ ε(1 − n/d)) ≤ 1/2.\nIn particular, if ‖w∗‖ = 1, γ = 1/√d, and ε = 1/d, then κ = 1/(2√d) and\n1/2 − 1/(2√(2π)) ≤ lim_{d→∞} Pw(0)(Zw(0) ≥ (1/d)(1 − n/d)) ≤ 1/2. (3)\nRemarks. Theorem 3 is at the core of LLBoost; it implies that the probability of ŵr having lower expected generalization error than ŵ is constant. Note that based on Equation (3) this constant is lower bounded by roughly .3 (meaning we get a 30% chance of improvement). Importantly, Theorem 3 can be trivially extended to demonstrate that LLBoost can reduce the generalization error of any interpolating solution with constant probability (provided that the generalization error is not already 0). We also remark that the lower bound in Equation (3) is nearly tight: in practice, this constant is roughly .31. Lastly, note that all of the above theory also holds for the regression setting with multiple outputs by replacing the usual `2 norm with the Frobenius norm.\nWhile the full proof of Theorem 3 is presented in Appendix L, we here provide a sketch. The proof provides an approximation for the integral on the right-hand side of Equation (2) by using Gautschi’s Inequality (11) and integration by parts. Note that while it may seem at first that φ should be uniformly distributed when w(0) is uniformly distributed on the sphere (and thus that the probability should go to 1/2 as d goes to ∞), φ is in fact not uniformly distributed when w(0) is uniformly distributed, which is why a more precise argument is required." }, { "heading": "6 DISCUSSION", "text": "In this work, we presented LLBoost, a theoretically-grounded, computationally-efficient method to improve the validation accuracy of pre-trained neural networks without impacting training accuracy. Through a variety of experiments on ImageNet32, CIFAR10, and ImageNet, we demonstrated that our method is practically relevant and can be used to boost the performance of state-of-the-art pre-trained models. A standard method to improve the performance of neural networks is random seed search. We showed that LLBoost provides a significant computational advantage over random seed search and can even boost the performance of a model trained on a poorly selected seed to that of a model trained on a well-selected seed. Thus, LLBoost is a computationally efficient alternative to random seed search. Lastly, we provided theoretical footing for our method by showing that LLBoost provably reduces the test error of any interpolating solution in over-parameterized linear regression. An interesting direction of future work is to identify alternate sampling schemes for LLBoost (other than uniform on the hyper-sphere) that provably yield a greater increase in validation accuracy without decreasing training accuracy." }, { "heading": "A GRADIENT DESCENT IN LINEAR REGRESSION", "text": "Theorem 4. Let (X, y) ⊂ Rd×n × Rn with X of rank r and X = UΣV T its singular value decomposition (SVD). Given an initialization w(0) = 0, gradient descent used to solve:\narg min_{w∈Rd} (1/2)‖y − wX‖2\nwith learning rate η < 1/λmax(XXT) converges to:\nw(∞) = yV Σ†UT, where Σ† is the block matrix with diag(1/σ1, . . . , 1/σr) in the top-left r×r block and zero blocks 0r×(d−r), 0(d−r)×r, and 0(d−r)×(d−r) elsewhere.\nProof.
Let S = XXT and S′ = yXT. Then, w(t+1) = w(t)(I − ηS) + ηS′. Now we directly solve the recurrence relation; namely,\nw(t) = ηS′((I − ηS)t−1 + (I − ηS)t−2 + · · · + (I − ηS) + I).\nLet X = UΣV T denote the singular value decomposition of X, where {σ1, . . . , σr} are the nonzero entries of Σ and r is the rank of X. Then, S = UΣ2UT and S′ = yV ΣUT. Thus, we can simplify the recurrence relation:\nw(t) = ηS′U((I − ηΣ2)t−1 + (I − ηΣ2)t−2 + · · · + (I − ηΣ2) + I)UT.\nSince (I − ηΣ2)t−1 + · · · + (I − ηΣ2) + I is a geometric series, for η < 1/σ1^2 we have:\nw(t) = ηS′UΣ+UT, where Σ+ is diagonal with entries (1 − (1 − ησi^2)^t)/(ησi^2) for i = 1, . . . , r, and t on the remaining d − r diagonal entries.\nNow substituting in S′ = yV ΣUT gives us:\nw(t) = yV Σ†UT, where Σ† is diagonal with entries (1 − (1 − ησi^2)^t)/σi for i = 1, . . . , r, and 0 elsewhere.\nLastly, we can take the limit as t → ∞ to conclude that\nw(∞) = lim_{t→∞} w(t) = yV Σ†UT, where Σ† is diagonal with entries 1/σ1, . . . , 1/σr and 0 elsewhere.\nNote that the proof above can be easily extended to the setting of a random initialization w(0)." }, { "heading": "B DISTRIBUTION OF SINGULAR VECTORS OF A RANDOM MATRIX", "text": "Proof. We use the rotational invariance of the multivariate isotropic Gaussian. If A is an orthonormal matrix, then we have:\nxT I−1x = xTAT I−1Ax = (Ax)T I−1(Ax).\nNow, suppose A and B are both orthonormal matrices. Then the vectorization of AXBT satisfies:\n(AXBT)v = (A ⊗ B)Xv,\nwhere Xv ∈ Rdn is the row-major vectorization of X and ⊗ is the Kronecker product. Since A and B are orthonormal, A ⊗ B is orthonormal. Hence, AXBT must have the same distribution as X, and thus the singular vectors of AXBT must have the same distribution as those of X. Since singular vectors lie on Sd−1 and the distribution is rotation invariant, we conclude that the singular vectors are uniformly distributed on Sd−1." }, { "heading": "C TRAINING DETAILS", "text": "We now describe the training methodology we used to train pre-trained models on ImageNet32 and CIFAR10. The optimizer, initialization, learning rate, and seeds used to train the ResNets in Figures 2 and 3 are presented in Figure 4. Note that all of our models were trained with mean squared error, as discussed in (16). We trained models on ImageNet32 for 150 epochs and on CIFAR10 for 50 epochs. We then saved the model with the highest validation accuracy.\nFor all experiments, we used the PyTorch deep learning library (25). We trained our models on a shared server with 1 Titan Xp and 2 GeForce GTX 1080 Ti’s. We only used 1 GPU at a time for training neural networks and applying LLBoost." }, { "heading": "D ADDITIONAL EXPERIMENTAL DETAILS", "text": "In this section, we provide the following additional details regarding the experiments in Figures 2 and 3:\n1. The number of components used in the low-rank approximations for a full-rank training feature matrix (Figure 5).\n2. The size of the perturbation produced by LLBoost and the values of γ used for the models in Figure 2 (Figure 6).\n3. A comparison between training time and the time taken for LLBoost to improve the models in Figure 2 (Figure 7)." }, { "heading": "E PERFORMANCE OF PROJECTED STANDARD NORMAL PERTURBATIONS", "text": "In Figure 2, we demonstrated that perturbing the last layer without projecting to the space orthogonal to the feature matrix caused a drastic decrease in the training and validation accuracy.
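The contrast between projected and unprojected perturbations is easy to reproduce in the linear setting. The following NumPy sketch (with illustrative sizes) confirms that only the projected perturbation leaves the training predictions unchanged, as guaranteed by Lemma 1.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 32
X = rng.standard_normal((d, n))          # training feature matrix
w = rng.standard_normal((1, d))          # pre-trained last-layer weights
P = np.eye(d) - X @ np.linalg.pinv(X.T @ X) @ X.T   # projector orthogonal to X

z = rng.standard_normal((1, d))
w_raw = w + z                            # unprojected standard-normal perturbation
w_proj = w + z @ P                       # projected perturbation (as in LLBoost)

print(np.allclose(w_proj @ X, w @ X))    # True: training predictions unchanged
print(np.allclose(w_raw @ X, w @ X))     # False: training predictions change
```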
In Figure 8, we illustrate the impact of using a perturbation that is randomly sampled from a standard normal and then projected to the space orthogonal to the feature matrix. Again, we see that the validation accuracies can drop significantly for larger datasets in this case. Note that including the projection operator preserves the training accuracy in all cases, as is guaranteed by Lemma 1." }, { "heading": "F LOW RANK APPROXIMATIONS FOR FEATURE MATRICES", "text": "As discussed in Section 4, when the feature matrix $X$ is full rank, we needed to use a low-rank approximation such that the space orthogonal to $X$ is non-empty. In this section, we discuss our method of choosing the number of components of the SVD to keep when producing the low-rank approximation for $X$. We then present how the number of components selected affects the performance of LLBoost.
In Figure 9, we visualize the normalized singular values of the feature matrix for models from Figure 2. In Figure 9A, we do not use a low-rank approximation, as the size of the dataset is already smaller than the number of features. In Figure 9B, the feature matrices are full rank, and so we use a low-rank approximation for the feature matrix, with the number of components selected shown in red. In particular, we chose a number of components well past the elbow in the curve so that there was not a significant drop in training accuracy.
In Figure 10, we demonstrate how the number of components selected for the low-rank approximation affects the validation accuracy of LLBoost. In particular, we observe that using a lower rank approximation generally increases the improvement provided by LLBoost. This matches the intuition provided by Proposition 2: when the space orthogonal to the training feature matrix $X$ is large, there is no reason to believe that the best linear solution lies in the span of $X$. Hence, sampling the space orthogonal to $X$ yields an improvement. We note that since only a few singular values of $X$ are large, there is no impact on the training accuracy when using a low-rank approximation for $X$ (shown in the second column of the tables in Figure 10)." }, { "heading": "G LLBOOST APPLIED TO TRAIN, VALIDATION, TEST SPLITS", "text": "In Figures 2 and 3, we demonstrated that LLBoost improves the validation accuracy of pre-trained models without impacting the training accuracy. To ensure that LLBoost is not overfitting the validation set, we additionally split the validation data into validation and test data and check that LLBoost improves validation and test accuracy without impacting training accuracy.³
³For ImageNet32, the validation set size is only 100 examples, and so we split the training set and re-train.
In Figures 11 and 12, we present examples of how LLBoost (which selects the perturbation that improves validation accuracy) improves both validation and test accuracy without impacting training accuracy." }, { "heading": "H PROOF OF PROPOSITION 1", "text": "Proof. We first consider $\hat{w} - w^*$:
$$\begin{aligned} \hat{w} - w^* &= yV\Sigma^{\dagger}U^T - w^* \\ &= w^* X V\Sigma^{\dagger}U^T - w^* \quad (\text{since } y = w^*X) \\ &= w^* U\Sigma\Sigma^{\dagger}U^T - w^*(U\Sigma_{\perp}U^T + U\Sigma_{\perp\perp}U^T) \\ &= w^* U\Sigma_{\perp\perp}U^T - w^*(U\Sigma_{\perp}U^T + U\Sigma_{\perp\perp}U^T) \\ &= -w^* U\Sigma_{\perp}U^T. \end{aligned}$$
Thus, we have shown (1). Now for (2), we have:
$$\hat{w}_r - w^* = w^{(0)}U\Sigma_{\perp}U^T + \hat{w} - w^* = w^{(0)}U\Sigma_{\perp}U^T - w^*U\Sigma_{\perp}U^T = (w^{(0)} - w^*)U\Sigma_{\perp}U^T.$$
Hence, (2) follows from (1)." }, { "heading": "I PROOF OF THEOREM 2", "text": "Proof. The proof follows from Lemma 1. Since the columns of $X$ are drawn from $\mathcal{N}(0, I_{d \times d})$, Lemma 2 implies that the columns of $U$ are drawn from the uniform distribution on the sphere in $\mathbb{R}^d$.
Hence we have that
$$\mathbb{E}_X[U\Sigma_{\perp}U^T] = \mathbb{E}_X\left[\sum_{i=n+1}^{d} u_i u_i^T\right] = \sum_{i=n+1}^{d} \mathbb{E}_X[u_i u_i^T] = \left(1 - \frac{n}{d}\right) I.$$
This implies (1), since
$$\mathbb{E}_X[\|\hat{w} - w^*\|^2] = w^*\,\mathbb{E}_X[U\Sigma_{\perp}U^T]\,w^{*T} = \|w^*\|^2\left(1 - \frac{n}{d}\right).$$
Similarly, we get (2), which completes the proof." }, { "heading": "J PROOF OF PROPOSITION 2", "text": "Proof. Let $a^T = w^{(0)} - w^*$. We need to find $a$ such that:
$$(1)\;\; \|a^T U\Sigma_{\perp}U^T\|^2 = \sum_{i=r+1}^{d} |\langle a, u_i\rangle|^2 = c_2^2, \qquad (2)\;\; a^T a = c_1^2.$$
To do this, we instead first let $a = c_1 a'$ and show that there exists a solution to:
$$(1)\;\; \|a'^T U\Sigma_{\perp}U^T\|^2 = \sum_{i=r+1}^{d} |\langle a', u_i\rangle|^2 = \frac{c_2^2}{c_1^2}, \qquad (2)\;\; a'^T a' = 1.$$
We will show that there is a solution to the above system by using the intermediate value theorem. First, note that the unit sphere is path connected in $\mathbb{R}^d$. Now for $a' = u_{r+1}$, we have $\|a'\| = 1$ and $\|a'^T U\Sigma_{\perp}U^T\|^2 = 1$. Next, note that for $a' = u_1$, $\|a'\| = 1$ and $\|a'^T U\Sigma_{\perp}U^T\|^2 = 0$. Thus, by the intermediate value theorem we conclude that there exists some $a'$ on the unit sphere such that $\|a'^T U\Sigma_{\perp}U^T\|^2 = \frac{c_2^2}{c_1^2}$, which completes the proof." }, { "heading": "K PROOF OF PROPOSITION 3", "text": "Proof. Note that we have:
$$P_{w^{(0)}}\!\left(\mathbb{E}_{x,X}[(y - \hat{w}_r x)^2] \leq \mathbb{E}_{x,X}[(y - \hat{w}x)^2]\right) \iff P_{w^{(0)}}\!\left(\|w^{(0)} - w^*\|^2\left(1 - \tfrac{n}{d}\right) \leq \|w^*\|^2\left(1 - \tfrac{n}{d}\right)\right) \iff P_{w^{(0)}}\!\left(\left\langle w^{(0)}, \tfrac{w^*}{\|w^*\|}\right\rangle \geq \tfrac{1}{2\|w^*\|}\right).$$
Since $w^{(0)}$ and $\frac{w^*}{\|w^*\|}$ are unit vectors on $S^{d-1}$, the desired probability equals the ratio of the area of the spherical cap (19) defined by the co-latitude angle $\phi = \cos^{-1}\left(\frac{1}{2\|w^*\|}\right)$ to the surface area of $S^{d-1}$, which completes the proof." }, { "heading": "L PROOF OF THEOREM 3", "text": "Proof. We here present the proof for the case that $\gamma = 1$; however, the proof is easily extendable to the case of arbitrary $\gamma$. The proof relies on the following inequalities, which are commonly used in analysis.
Proposition 4 (Reduction Formula).
$$\int \sin^d\theta\, d\theta = -\frac{1}{d}\cos\theta\,(\sin\theta)^{d-1} + \frac{d-1}{d}\int \sin^{d-2}\theta\, d\theta$$
Proposition 5 (Gautschi's Inequality).
$$x^{1-s} < \frac{\Gamma(x+1)}{\Gamma(x+s)} < (x+1)^{1-s}; \quad s \in (0,1); \quad x > 0$$
Corollary 1. For $s \in (0,1)$ and $x > 0$:
$$(1)\;\; \sqrt{x} < \frac{\Gamma(x+1)}{\Gamma(x+\frac{1}{2})} < \sqrt{x+1}, \qquad (2)\;\; \frac{1}{\sqrt{x+1}} < \frac{\Gamma(x+\frac{1}{2})}{\Gamma(x+1)} < \frac{1}{\sqrt{x}}.$$
Proposition 6.
$$\sum_{i=1}^{k} \frac{1}{\sqrt{i}} \leq \int_0^k \frac{1}{\sqrt{x}}\, dx = 2\sqrt{k}$$
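As an aside before the main computation, the constant that this proof ultimately controls can be checked by simulation. The following Monte Carlo sketch (our own illustration) estimates the spherical-cap probability from Proposition 3 in the special case of Theorem 3, where the cap threshold is $\kappa \approx \frac{1}{2\sqrt{d}}$, and recovers the constant of roughly .31 noted in the remarks after Equation (3):

import numpy as np

rng = np.random.default_rng(2)
d, trials = 1000, 20000
g = rng.standard_normal((trials, d))
v = g / np.linalg.norm(g, axis=1, keepdims=True)  # uniform samples on S^{d-1}
kappa = 1 / (2 * np.sqrt(d))
print((v[:, 0] >= kappa).mean())  # ~0.31, matching the "roughly .31" remark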
Let $K = \int_0^{\phi} (\sin\theta)^{d-2}\, d\theta$. We will lower bound this integral. For convenience of notation, we will skip writing the limits of integration. By using the reduction formula for the powers of $\int (\sin\theta)^n d\theta$, and assuming $d$ is even for convenience, we have:
$$K = -\frac{1}{d-2}\cos\phi\,(\sin\phi)^{d-3} - \frac{1}{d-2}\cdot\frac{d-3}{d-4}\cos\phi\,(\sin\phi)^{d-5} - \cdots - \frac{(d-3)!!}{(d-2)!!}\cos\phi\sin\phi + \frac{\Gamma(\frac{d-1}{2})}{\sqrt{\pi}\,\Gamma(\frac{d}{2})}\phi$$
$$= -\frac{1}{d-2}\cos\phi\sin\phi\,\frac{\Gamma(\frac{d-1}{2})}{\sqrt{\pi}\,\Gamma(\frac{d-2}{2})}\left[\frac{\sqrt{\pi}\,\Gamma(\frac{d-2}{2})}{\Gamma(\frac{d-1}{2})}(\sin\phi)^{d-4} + \frac{\sqrt{\pi}\,\Gamma(\frac{d-4}{2})}{\Gamma(\frac{d-3}{2})}(\sin\phi)^{d-6} + \cdots + \frac{\sqrt{\pi}\,\Gamma(\frac{2}{2})}{\Gamma(\frac{3}{2})}\right] + \frac{\Gamma(\frac{d-1}{2})}{\sqrt{\pi}\,\Gamma(\frac{d}{2})}\phi$$
$$\geq -\frac{1}{d-2}\cos\phi\sin\phi\,\frac{\Gamma(\frac{d-1}{2})}{\Gamma(\frac{d-2}{2})}\left[\sum_{i=1}^{\frac{d-4}{2}} \frac{(\sin^2\phi)^i}{\sqrt{\frac{2i+1}{2}}} + 1\right] + \frac{\Gamma(\frac{d-1}{2})}{\sqrt{\pi}\,\Gamma(\frac{d}{2})}\phi \quad \text{(by Gautschi's Inequality)}$$
$$\geq -\frac{1}{d-2}\cos\phi\sin\phi\,\frac{\Gamma(\frac{d-1}{2})}{\Gamma(\frac{d-2}{2})}\left[\sum_{i=1}^{\frac{d-4}{2}} \frac{(\sin^2\phi)^i}{\sqrt{\frac{2i}{2}}} + 1\right] + \frac{\Gamma(\frac{d-1}{2})}{\sqrt{\pi}\,\Gamma(\frac{d}{2})}\phi$$
$$\geq -\frac{1}{d-2}\cos\phi\sin\phi\,\frac{\Gamma(\frac{d-1}{2})}{\Gamma(\frac{d-2}{2})}\left[\sum_{i=1}^{\frac{d-4}{2}} \frac{1}{\sqrt{i}} + 1\right] + \frac{\Gamma(\frac{d-1}{2})}{\sqrt{\pi}\,\Gamma(\frac{d}{2})}\phi \quad (\text{using } \sin^2\phi \leq 1)$$
$$\geq -\frac{1}{d-2}\cos\phi\sin\phi\,\frac{\Gamma(\frac{d-1}{2})}{\Gamma(\frac{d-2}{2})}\left[2\sqrt{\frac{d-4}{2}} + 1\right] + \frac{\Gamma(\frac{d-1}{2})}{\sqrt{\pi}\,\Gamma(\frac{d}{2})}\phi \quad \text{(by Proposition 6)}.$$
Since $\phi = \cos^{-1}\left(\frac{1+\epsilon}{2\|w^*\|}\right)$, then
$$K \geq -\frac{1}{d-2}\cdot\frac{1+\epsilon}{2\|w^*\|}\sqrt{1 - \frac{(1+\epsilon)^2}{4\|w^*\|^2}}\,\frac{\Gamma(\frac{d-1}{2})}{\Gamma(\frac{d-2}{2})}\left[2\sqrt{\frac{d-4}{2}} + 1\right] + \frac{\Gamma(\frac{d-1}{2})}{\sqrt{\pi}\,\Gamma(\frac{d}{2})}\cos^{-1}\left(\frac{1+\epsilon}{2\|w^*\|}\right).$$
Again by Gautschi's Inequality we obtain
$$\frac{\Gamma(\frac{d-1}{2})}{\Gamma(\frac{d-2}{2})} < \sqrt{\frac{d-1}{2}},$$
and hence
$$K > -\frac{1}{d-2}\cdot\frac{1+\epsilon}{2\|w^*\|}\sqrt{1 - \frac{(1+\epsilon)^2}{4\|w^*\|^2}}\sqrt{\frac{d-1}{2}}\left[2\sqrt{\frac{d-4}{2}} + 1\right] + \frac{\Gamma(\frac{d-1}{2})}{\sqrt{\pi}\,\Gamma(\frac{d}{2})}\cos^{-1}\left(\frac{1+\epsilon}{2\|w^*\|}\right).$$
Thus, we have that
$$\frac{\Gamma(\frac{d}{2})}{\sqrt{\pi}\,\Gamma(\frac{d-1}{2})}\int_0^{\phi}\sin^{d-2}\theta\,d\theta > -\sqrt{\frac{d}{2\pi}}\cdot\frac{1}{d-2}\cdot\frac{1+\epsilon}{2\|w^*\|}\sqrt{1 - \frac{(1+\epsilon)^2}{4\|w^*\|^2}}\sqrt{\frac{d-1}{2}}\left[2\sqrt{\frac{d-4}{2}} + 1\right] + \frac{1}{\pi}\cos^{-1}\left(\frac{1+\epsilon}{2\|w^*\|}\right).$$
Hence, assuming $\|w^*\| = \frac{\sqrt{d}}{c}$, we obtain
$$\lim_{d\to\infty}\frac{\Gamma(\frac{d}{2})}{\sqrt{\pi}\,\Gamma(\frac{d-1}{2})}\int_0^{\phi}\sin^{d-2}\theta\,d\theta \geq -\frac{c(1+\epsilon)}{2\sqrt{2\pi}} + \frac{1}{\pi}\cdot\frac{\pi}{2} = \frac{1}{2} - \frac{c(1+\epsilon)}{2\sqrt{2\pi}}.$$
Note that we have
$$P_{w^{(0)}}\!\left(\left\langle w^{(0)}, \frac{w^*}{\|w^*\|}\right\rangle \geq \frac{1+\epsilon}{2\|w^*\|}\right) \leq P_{w^{(0)}}\!\left(\left\langle w^{(0)}, \frac{w^*}{\|w^*\|}\right\rangle \geq 0\right) = \frac{1}{2},$$
and hence we conclude that
$$\frac{1}{2} - \frac{c(1+\epsilon)}{2\sqrt{2\pi}} \leq \lim_{d\to\infty}\frac{\Gamma(\frac{d}{2})}{\sqrt{\pi}\,\Gamma(\frac{d-1}{2})}\int_0^{\phi}\sin^{d-2}\theta\,d\theta \leq \frac{1}{2}.$$" } ]
2020
null
SP:a044379a14bccab23cf617ef66896ecd78edb6ea
[ "This paper proposes to compress nerf models with entropy loss, where instead of directly training nerf model parameters, it trains a new function F which takes some compressed information and decodes to the nerf models. Then it did the same things as nerf, which render scenes in novel views. The authors show that the function F could largely compress the original nerf models while keeping similar PSNR." ]
Some forms of novel visual media enable the viewer to explore a 3D scene from essentially arbitrary viewpoints, by interpolating between a discrete set of original views. Compared to 2D imagery, these types of applications require much larger amounts of storage space, which we seek to reduce. Existing approaches for compressing 3D scenes are based on a separation of compression and rendering: each of the original views is compressed using traditional 2D image formats; the receiver decompresses the views and then performs the rendering. We unify these steps by directly compressing an implicit representation of the scene, a function that maps spatial coordinates to a radiance vector field, which can then be queried to render arbitrary viewpoints. The function is implemented as a neural network and jointly trained for reconstruction as well as compressibility, in an end-to-end manner, with the use of an entropy penalty on the parameters. Our method significantly outperforms a state-of-the-art conventional approach for scene compression, achieving simultaneously higher quality reconstructions and lower bitrates. Furthermore, we show that the performance at lower bitrates can be improved by jointly representing multiple scenes using a soft form of parameter sharing.
[]
[ { "authors": [ "T. Akenine-Möller", "E. Haines", "N. Hoffman" ], "title": "Real-time rendering", "venue": null, "year": 2019 }, { "authors": [ "P. Astola", "I. Tabus" ], "title": "Wasp: Hierarchical warping, merging, and sparse prediction for light field image compression", "venue": "In 2018 7th European Workshop on Visual Information Processing (EUVIP),", "year": 2018 }, { "authors": [ "N. Bakir", "W. Hamidouche", "O. Déforges", "K. Samrouth", "M. Khalil" ], "title": "Light field image compression based on convolutional neural networks and linear approximation", "venue": "In 2018 25th IEEE International Conference on Image Processing (ICIP),", "year": 2018 }, { "authors": [ "J. Ballé", "V. Laparra", "E.P. Simoncelli" ], "title": "End-to-end optimized image compression", "venue": "In 5th Int. Conf. on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "D. Barina", "T. Chlubna", "M. Solony", "D. Dlabaja", "P. Zemcik" ], "title": "Evaluation of 4d light field compression methods", "venue": "In arXiv:1905.07432,", "year": 2019 }, { "authors": [ "Y. Bengio", "N. Léonard", "A. Courville" ], "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "venue": "In arXiv:1308.3432,", "year": 2013 }, { "authors": [ "M. Courbariaux", "Y. Bengio", "J.-P. David" ], "title": "Binaryconnect: Training deep neural networks with binary weights during propagations", "venue": "In arXiv:1511.00363,", "year": 2016 }, { "authors": [ "X. Glorot", "Y. Bengio" ], "title": "Understanding the difficulty of training deep feedforward neural networks. volume", "venue": "Proceedings of Machine Learning Research,", "year": 2010 }, { "authors": [ "S. Han", "H. Mao", "W.J. Dally" ], "title": "Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding", "venue": "In 4th International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "M. Havasi", "R. Peharz", "J.M. Hernández-Lobato" ], "title": "Minimal random code learning: Getting bits back from compressed model parameters", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "C. Jia", "X. Zhang", "S. Wang", "S. Ma" ], "title": "Light field image compression using generative adversarial network-based view synthesis", "venue": "IEEE Journal on Emerging and Selected Topics in Circuits and Systems,", "year": 2019 }, { "authors": [ "X. Jiang", "M. Le Pendu", "C. Guillemot" ], "title": "Light field compression using depth image based view synthesis", "venue": "IEEE International Conference on Multimedia Expo Workshops (ICMEW),", "year": 2017 }, { "authors": [ "D.P. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In 3rd International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "F. Li", "B. Zhang", "B. Liu" ], "title": "Ternary weight networks", "venue": "In arXiv:1605.04711,", "year": 2016 }, { "authors": [ "H. Li", "A. Kadav", "I. Durdanovic", "H. Samet", "H.P. Graf" ], "title": "Pruning filters for efficient convnets", "venue": "In arXiv:1608.08710,", "year": 2017 }, { "authors": [ "J. Liu", "S. Wang", "R. Urtasun" ], "title": "Dsic: Deep stereo image compression", "venue": "IEEE/CVF International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "L. Liu", "J. Gu", "K.Z. Lin", "T.-S. Chua", "C. Theobalt" ], "title": "Neural sparse voxel fields", "venue": "In arXiv:2007.11571,", "year": 2020 }, { "authors": [ "B. 
Mildenhall", "P. Srinivasan", "R. Ortiz-Cayon", "N.K. Kalantari", "R. Ramamoorthi", "R. Ng", "A. Kar" ], "title": "Local light field fusion: Practical view synthesis with prescriptive sampling guidelines", "venue": "ACM Transactions on Graphics (TOG),", "year": 2019 }, { "authors": [ "B. Mildenhall", "P. Srinivasan", "M. Tancik", "J.T. Barron", "R. Ramamoorthi", "R. Ng" ], "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "D. Oktay", "J. Ballé", "S. Singh", "A. Shrivastava" ], "title": "Scalable model compression by entropy penalized reparameterization", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "K. Schwarz", "Y. Liao", "M. Niemeyer", "A. Geiger. Graf" ], "title": "Generative radiance fields for 3d-aware image synthesis", "venue": "In arXiv:2007.02442,", "year": 2020 }, { "authors": [ "V. Sitzmann", "M. Zollhöfer", "G. Wetzstein" ], "title": "Scene representation networks: Continuous 3dstructure-aware neural scene representations", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "V. Sitzmann", "J. Martel", "A. Bergman", "D. Lindell", "G. Wetzstein" ], "title": "Implicit neural representations with periodic activation functions", "venue": "In arXiv,", "year": 2020 }, { "authors": [ "G.J. Sullivan", "J. Ohm", "W. Han", "T. Wiegand" ], "title": "Overview of the high efficiency video coding (hevc) standard", "venue": "IEEE Transactions on Circuits and Systems for Video Technology,", "year": 2012 }, { "authors": [ "M. Tancik", "P. Srinivasan", "B. Mildenhall", "S. Fridovich-Keil", "N. Raghavan", "U. Singhal", "R. Ramamoorthi", "J.T. Barron", "R. Ng" ], "title": "Fourier features let networks learn high frequency functions in low dimensional domains", "venue": "In arXiv:2006.10739,", "year": 2020 }, { "authors": [ "Z. Wang", "E.P. Simoncelli", "A.C. Bovik" ], "title": "Multiscale structural similarity for image quality assessment", "venue": "In The Thrity-Seventh Asilomar Conference on Signals, Systems Computers,", "year": 2003 }, { "authors": [ "R. Zhang", "P. Isola", "A.A. Efros", "E. Shechtman", "O. Wang" ], "title": "The unreasonable effectiveness of deep features as a perceptual metric", "venue": null, "year": 2018 }, { "authors": [ "Z. Zhao", "S. Wang", "C. Jia", "X. Zhang", "S. Ma", "J. Yang" ], "title": "Light field image compression based on deep learning", "venue": "IEEE International Conference on Multimedia and Expo (ICME),", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "The ability to render 3D scenes from arbitrary viewpoints can be seen as a big step in the evolution of digital multimedia, and has applications such as mixed reality media, graphic effects, design, and simulations. Often such renderings are based on a number of high resolution images taken of some original scene, and it is clear that to enable many applications, the data will need to be stored and transmitted efficiently over low-bandwidth channels (e.g., to a mobile phone for augmented reality).\nTraditionally, the need to compress this data is viewed as a separate need from rendering. For example, light field images (LFI) consist of a set of images taken from multiple viewpoints. To compress the original views, often standard video compression methods such as HEVC (Sullivan et al., 2012) are repurposed (Jiang et al., 2017; Barina et al., 2019). Since the range of views is narrow, light field images can be effectively reconstructed by “blending” a smaller set of representative views (Astola & Tabus, 2018; Jiang et al., 2017; Zhao et al., 2018; Bakir et al., 2018; Jia et al., 2019). Blending based approaches, however, may not be suitable for the more general case of arbitrary-viewpoint 3D scenes, where a very diverse set of original views may increase the severity of occlusions, and thus would require storage of a prohibitively large number of views to be effective.\nA promising avenue for representing more complete 3D scenes is through neural representation functions, which have shown a remarkable improvement in rendering quality (Mildenhall et al., 2020; Sitzmann et al., 2019; Liu et al., 2020; Schwarz et al., 2020). In such approaches, views from a scene are rendered by evaluating the representation function at sampled spatial coordinates and then applying a differentiable rendering process. Such methods are often referred to as implicit representations, since they do not specify explicitly the surface locations and properties within the scene, which would be required to apply some conventional rendering techniques like rasterization (Akenine-Möller et al., 2019). However, finding the representation function for a given scene requires training a neural network. This makes this class of methods difficult to use as a rendering method in the existing framework, since it is computationally infeasible on a low-powered end device like a mobile phone, which are often on the receiving side. Due to the data processing inequality, it may also be inefficient to compress the original views (the training data) rather than the trained\nEntropy\nPoses\nViews of 3D scene Neural representation function Loss\nRadiance field\nSender\nReceiver\nImages\nPose\n11000010...\n+ length(11000010...) Reconstruction error\nRate\nNovel rendering\nRadiance field?\nFigure 1: Overview of cNeRF. The sender trains an entropy penalized neural representation function on a set of views from a scene, minimizing a combination of rate and distortion. The receiver can use the compressed model to render novel views.\nrepresentation itself, because the training process may discard some information that is ultimately not necessary for rendering (such as redundancy in the original views, noise, etc.).\nIn this work, we propose to apply neural representation functions to the scene compression problem by compressing the representation function itself. 
We use the NeRF model (Mildenhall et al., 2020), a method which has demonstrated the ability to produce high-quality renders of novel views, as our representation function. To reduce redundancy of information in the model, we build upon the model compression approach of Oktay et al. (2020), applying an entropy penalty to the set of discrete reparameterized neural network weights. The compressed NeRF (cNeRF) describes a radiance field, which is used in conjunction with a differentiable neural renderer to obtain novel views (see Fig. 1). To verify the proposed method, we construct a strong baseline method based on the approaches seen in the field of light field image compression. cNeRF consistently outperforms the baseline method, producing simultaneously superior renders and lower bitrates. We further show that cNeRF can be improved in the low bitrate regime when compressing multiple scenes at once. To achieve this, we introduce a novel parameterization which shares parameters across models and optimize jointly across scenes." }, { "heading": "2 BACKGROUND", "text": "We define a multi-view image dataset as a set of tuplesD = {(Vn, Xn)}Nn=1, where Vn is the camera pose and Xn is the corresponding image from this pose. We refer to the 3D ground truth that the views capture as the scene. In what follows, we first provide a brief review of the neural rendering and the model compression approaches that we build upon while introducing the necessary notation.\nNeural Radiance Fields (NeRF) The neural rendering approach of Mildenhall et al. (2020) uses a neural network to model a radiance field. The radiance field itself is a learned function gθ : R5 → (R3,R+), mapping a 3D spatial coordinate and a 2D viewing direction to a RGB value and a corresponding density element. To render a view, the RGB values are sampled along the relevant rays and accumulated according to their density elements. The learned radiance field mapping gθ is parameterized with two multilayer perceptrons (MLPs), which Mildenhall et al. (2020) refer to as the “coarse” and “fine” networks, with parameters θc and θf respectively. The input locations to the coarse network are obtained by sampling regularly along the rays, whereas the input locations to the fine network are sampled conditioned on the radiance field of the coarse network. The networks are\ntrained by minimizing the distance from their renderings to the ground truth image:\nL = N∑ n=1 ∥∥X̂cn(θc;Vn)−Xn∥∥22︸ ︷︷ ︸ Lc(θc) + N∑ n=1 ∥∥X̂fn(θf ;Vn, θc)−Xn∥∥22︸ ︷︷ ︸ Lf (θf ; θc)\n(1)\nWhere || · ||2 is the Euclidean norm and the X̂n are the rendered views. Note that the rendered view from the fine network X̂fn relies on both the camera pose Vn and the coarse network to determine the spatial locations to query the radiance field. We drop the explicit dependence of Lf on θc in the rest of the paper to avoid cluttering the notation. During training, we render only a minibatch of pixels rather than the full image. We give a more detailed description of the NeRF model and the rendering process in Appendix Sec. A.\nModel Compression through Entropy Penalized Reparameterization The model compression work of Oktay et al. (2020) reparameterizes the model weights Θ into a latent space as Φ. The latent weights are decoded by a learned function F , i.e. Θ = F(Φ). The latent weights Φ are modeled as samples from a learned prior q, such that they can be entropy coded according to this prior. To minimize the rate, i.e. 
Model Compression through Entropy Penalized Reparameterization The model compression work of Oktay et al. (2020) reparameterizes the model weights $\Theta$ into a latent space as $\Phi$. The latent weights are decoded by a learned function $\mathcal{F}$, i.e. $\Theta = \mathcal{F}(\Phi)$. The latent weights $\Phi$ are modeled as samples from a learned prior $q$, such that they can be entropy coded according to this prior. To minimize the rate, i.e., the length of the bit string resulting from entropy coding these latent weights, a differentiable approximation of the self-information $I(\phi) = -\log_2(q(\phi))$ of the latent weights is penalized. The continuous $\Phi$ are quantized before being applied in the model, with the straight-through estimator (Bengio et al., 2013) used to obtain surrogate gradients of the loss function. Following Ballé et al. (2017), uniform noise is added when learning the continuous prior $q(\phi + u)$, where $u_i \sim \mathcal{U}(-\frac{1}{2}, \frac{1}{2})\ \forall i$. This uniform noise is a stand-in for the quantization, and results in a good approximation of the self-information through the negative log-likelihood of the noised continuous latent weights. After training, the quantized weights $\tilde{\Phi}$ are obtained by rounding, $\tilde{\Phi} = \lfloor\Phi\rceil$, and transmitted along with discrete probability tables obtained by integrating the density over the quantization intervals. The continuous weights $\Phi$ and any parameters in $q$ itself can then be discarded." }, { "heading": "3 METHOD", "text": "To achieve a compressed representation of a scene, we propose to compress the neural scene representation function itself. In this paper we use the NeRF model as our representation function. To compress the NeRF model, we build upon the model compression approach of Oktay et al. (2020) and jointly train for rendering as well as compression in an end-to-end trainable manner. We subsequently refer to this approach as cNeRF. The full objective that we seek to minimize is:
$$\mathcal{L}(\Phi, \Psi) = \underbrace{\mathcal{L}^c(\mathcal{F}_c(\tilde{\Phi}_c)) + \mathcal{L}^f(\mathcal{F}_f(\tilde{\Phi}_f))}_{\text{Distortion}} + \underbrace{\lambda \sum_{\phi \in \Phi} I(\phi)}_{\text{Rate}} \quad (2)$$
where $\Psi$ denotes the parameters of $\mathcal{F}$ as well as any parameters in the prior distribution $q$, and we have explicitly split $\Phi$ into the coarse $\Phi_c$ and fine $\Phi_f$ components such that $\Phi = \{\Phi_c, \Phi_f\}$. $\lambda$ is a trade-off parameter that balances between rate and distortion. A rate–distortion (RD) plot can be traced by varying $\lambda$ to explore the performance of the compressed model at different bitrates.
Compressing a single scene When training cNeRF to render a single scene, we have to choose how to parameterize and structure $\mathcal{F}$ and the prior distribution $q$ over the network weights. Since the networks are MLPs, the model parameters for a layer $l$ consist of the kernel weights and biases $\{W_l, b_l\}$. We compress only the kernel weights $W_l$, leaving the bias uncompressed since it is much smaller in size. The quantized kernel weights $\tilde{W}_l$ are mapped to the model weights by $\mathcal{F}_l$, i.e. $W_l = \mathcal{F}_l(\tilde{W}_l)$. $\mathcal{F}_l$ is constructed as an affine scalar transformation, which is applied elementwise to $\tilde{W}_l$:
$$\mathcal{F}_l(\tilde{W}_{l,ij}) = \alpha_l \tilde{W}_{l,ij} + \beta_l \quad (3)$$
We take the prior to be factored over the layers, such that we learn a prior $q_l$ per linear kernel. Within each kernel, we take the weights in $\tilde{W}_l$ to be i.i.d. from the univariate distribution $q_l$, parameterized by a small MLP, as per the approach of Ballé et al. (2017). Note that the parameters of this MLP can be discarded after training (once the probability mass functions have been built).
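To make this parameterization concrete, the following PyTorch-style sketch (our own illustration, not the authors' released code; the learned log-density log q_l is assumed to be given as a callable) shows one entropy-penalized linear layer with straight-through quantization, the affine decoding of Eq. (3), and the noise-based rate proxy:

import math
import torch

class CompressedLinear(torch.nn.Module):
    def __init__(self, d_in, d_out, log_prob):
        super().__init__()
        self.latent = torch.nn.Parameter(0.05 * torch.randn(d_out, d_in))
        self.alpha = torch.nn.Parameter(torch.tensor(1.0))   # decoder scale
        self.beta = torch.nn.Parameter(torch.tensor(0.0))    # decoder shift
        self.bias = torch.nn.Parameter(torch.zeros(d_out))   # left uncompressed
        self.log_prob = log_prob                              # elementwise log q_l

    def forward(self, x):
        # Straight-through estimator: round in the forward pass, identity gradient.
        q = self.latent + (torch.round(self.latent) - self.latent).detach()
        return torch.nn.functional.linear(x, self.alpha * q + self.beta, self.bias)

    def rate_bits(self):
        # Differentiable rate proxy: additive uniform noise stands in for rounding.
        noisy = self.latent + torch.rand_like(self.latent) - 0.5
        return -self.log_prob(noisy).sum() / math.log(2.0)

The full training loss would then combine the rendering distortion with the summed rate terms, e.g. loss = distortion + lam * sum(layer.rate_bits() for layer in compressed_layers).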
Compressing multiple scenes While the original NeRF model is trained for a single scene, we hypothesize that better rate–distortion performance can be achieved for multiple scenes, especially if they share information, by training a joint model. For a dataset of $M$ scenes, we parameterize the kernel weights of model $m$, layer $l$ as:
$$W^m_l = \mathcal{F}^m_l(\tilde{W}^m_l, \tilde{S}_l) = \alpha^m_l \tilde{W}^m_l + \beta^m_l + \gamma_l \tilde{S}_l \quad (4)$$
Compared to Eqn. 3, we have added a shift, parameterized as a scalar linear transformation of a discrete shift $\tilde{S}_l$, that is shared across all models $m \in \{1, \ldots, M\}$. $\tilde{S}_l$ has the same dimensions as the kernel $W^m_l$, and as with the discrete latent kernels, $\tilde{S}_l$ is coded by a learned probability distribution. The objective for the multi-scene model becomes:
$$\mathcal{L}(\Phi, \Psi) = \sum_{m=1}^{M}\left[\mathcal{L}^m_c(\mathcal{F}^m_c(\tilde{\Phi}^m_c, \tilde{\Phi}^s_c)) + \mathcal{L}^m_f(\mathcal{F}^m_f(\tilde{\Phi}^m_f, \tilde{\Phi}^s_f)) + \lambda \sum_{\phi \in \Phi^m} I(\phi)\right] + \lambda \sum_{\phi \in \Phi^s} I(\phi) \quad (5)$$
where $\Phi^s$ is the set of all discrete shift $\tilde{S}$ parameters, and the losses, latent weights and affine transforms are indexed by scene and model $m$. Note that this parameterization has more parameters than the total of the $M$ single scene models, which at first appears counter-intuitive, since we wish to reduce the overall model size. It is constructed as such so that the multi-scene parameterization contains the $M$ single scene parameterizations – they can be recovered by setting the shared shifts to zero. If the shifts are set to zero then their associated probability distributions can collapse to place all their mass at zero. So we expect that if there is little benefit to using the shared shifts then they can be effectively ignored, but if there is a benefit to using them then they can be utilized. As such, we can interpret this parameterization as inducing a soft form of parameter sharing." }, { "heading": "4 EXPERIMENTS", "text": "Datasets To demonstrate the effectiveness of our method, we evaluate on two sets of scenes used by Mildenhall et al. (2020):
• Synthetic. Consisting of 800 × 800 pixel views taken from either the upper hemisphere or the entire sphere around an object rendered using the Blender software package. There are 100 views taken to be in the train set and 200 in the test set.
• Real. Consisting of a set of forward facing 1008 × 756 pixel photos of a complex scene. The number of images varies per scene, with 1/8 of the images taken as the test images.
Since we are interested in the ability of the receiver to render novel views, all distortion results (for any choice of perceptual metric) presented are given on the test sets.
Architecture and Optimization We maintain the same architecture for the NeRF model as Mildenhall et al. (2020), consisting of 13 linear layers and ReLU activations. For cNeRF we use Adam (Kingma & Ba, 2015) to optimize the latent weights $\Phi$ and the weights contained in the decoding functions $\mathcal{F}$. For these parameters we use an initial learning rate of $5 \times 10^{-4}$ and a learning rate decay over the course of learning, as per Mildenhall et al. (2020). For the parameters of the learned probability distributions $q$, we find it beneficial to use a lower learning rate of $5 \times 10^{-5}$, such that the distributions do not collapse prematurely. We initialize the latent linear kernels using the scheme of Glorot & Bengio (2010), and initialize the decoders $\mathcal{F}$ near the identity.
Baseline We follow the general methodology exhibited in light field compression and take the compressed representation of the scene to be a compressed subset of the views. The receiver then decodes these views, and then renders novel views conditioned on the reconstructed subset. We use the video codec HEVC to compress the subset of views, as is done by Jiang et al. (2017). To render novel views conditioned on the reconstructed set of views, we choose the Local Light Field Fusion (LLFF) approach of Mildenhall et al. (2019). LLFF is a state-of-the-art learned approach in which a novel view is rendered by promoting nearby views to multiplane images, which are then blended. We refer to the full baseline subsequently as HEVC + LLFF."
}, { "heading": "4.1 RESULTS", "text": "Single scene compression To explore the frontier of achievable rate–distortion points for cNeRF, we evaluate at a range of entropy weights λ for four scenes – two synthetic (Lego and Ficus) and two real (Fern and Room). To explore the rate–distortion frontier for the HEVC + LLFF baseline we evaluate at a range of QP values for HEVC. We give a more thorough description of the exact specifications of the HEVC + LLFF baseline and the ablations we perform to select the hyperparameter values in Appendix Sec. B. We show the results in Fig. 4. We also plot the performance of\nthe uncompressed NeRF model – demonstrating that by using entropy penalization the model size can be reduced substantially with a relatively small increase in distortion. For these scenes we plot renderings at varying levels of compression in Fig. 2 and Fig. 8. The visual quality of the renderings does not noticeably degrade when compressing the NeRF model down to bitrates of roughly 5-6 bits per parameter (the precise bitrate depends on the scene). At roughly 1 bit per parameter, the visual quality has degraded significantly, although the renderings are still sensible and easily recognisable. We find this to be a surprising positive result, given that assigning a single bit per parameter is extremely restrictive for such a complex regression task as rendering. Indeed, to our knowledge no binary neural networks have been demonstrated to be effective on such tasks.\nAlthough the decoding functions F (Eqn. 3) are just relatively simple scalar affine transformations, we do not find any benefit to using more complex decoding functions. With the parameterization given, most of the total description length of the model is in the coded latent weights, not the parameters of the decoders or entropy models. We give a full breakdown in Tab. 5.\nFig. 4 shows that cNeRF clearly outperforms the HEVC + LLFF baseline, always achieving lower distortions at a (roughly) equivalent bitrate. Reconstruction quality is reported as peak signal-tonoise ratios (PSNR). The results are consistent with earlier demonstrations that NeRF produces much better renderings than the LLFF model (Mildenhall et al., 2020). However, it is still interesting\nto see that this difference persists even at much lower bitrates. To evaluate on the remaining scenes, we select a single λ value for cNeRF and QP value for HEVC + LLFF. We pick the values to demonstrate a reasonable trade-off between rate and distortion. The results are shown in Tab. 1. For every scene the evaluated approaches verify that cNeRF achieves a lower distortion at a lower bitrate. We can see also that cNeRF is consistently able to reduce the model size significantly without seriously impacting the distortion. Further, we evaluate the performance of cNeRF and HEVC + LLFF for other perceptual quality metrics in Tab. 3 and 4. Although cNeRF is trained to minimize the squared error between renderings and the true images (and therefore maximize PSNR), cNeRF also outperforms HEVC + LLFF in both MS-SSIM (Wang et al., 2003) and LPIPS (Zhang et al., 2018). This is significant, since the results of Mildenhall et al. (2020) indicated that for SSIM and LPIPS, the LLFF model had a similar performance to NeRF when applied to the real scenes. We display a comparison of renderings from cNeRF and HEVC + LLFF in Fig. 3.\nMulti-scene compression For the multi-scene case we compress one pair of synthetic scenes and one pair of real scenes. 
We train the multi-scene cNeRF using a single shared shift per linear kernel, as per Eqn. 4. To compare the results to the single scene models, we take the two corresponding single scene cNeRFs, sum the sizes and average the distortions. We plot the resulting rate–distortion frontiers in Fig. 5. The results demonstrate that the multi-scene cNeRF improves upon the single scene cNeRFs at low bitrates, achieving higher PSNR values with a smaller model. This meets our expectation, since the multi-scene cNeRF can share parameters via the shifts (Eqn. 4) and so decrease the code length of the scene-specific parameters. At higher bitrates we see no benefit to using the multi-scene parameterization, and in fact see slightly worse performance. This indicates that in the unconstrained rate setting, there is no benefit to using the shared shifts, and that they may slightly harm optimization." }, { "heading": "5 RELATED WORK", "text": "Scene Compression A 3D scene is typically represented as a set of images, one for each view. For a large number of views, compressing each image individually using a conventional compression method can require a large amount of space. As a result, there is a body of compression research which aims to exploit the underlying scene structure of the 3D scene to reduce space requirements. A lot of research has been focused on compressing light field image (LFI) data (Astola & Tabus, 2018; Jiang et al., 2017; Bakir et al., 2018; Jia et al., 2019; Zhao et al., 2018). LFI data generally consists of multiple views with small angular distances separating them. This set of views can be used to reconstruct a signal on the 4D domain of rays of the light field itself, thus permitting post-processing tasks such as novel view synthesis and refocusing. A majority of works select a representative subset of views to transmit from the scene. These are compressed and transmitted, typically using a video codec, with the receiver decoding these images and then rendering any novel view for an unobserved (during training) camera pose. Reconstruction for novel camera poses can be performed using traditional methods, such as optical flow (Jiang et al., 2017), or by using recent learned methods that employ convolutional neural networks (Zhao et al., 2018) and generative adversarial networks (Jia et al., 2019). A contrasting approach to multi-view image compression is proposed by Liu et al. (2019), in which a pair of images from two viewpoints is compressed by con-\nditioning the coder of the second image on the coder of the first image. It is important to emphasise that we are not studying this kind of approach in this work, since we wish the receiver to have the ability to render novel views.\nNeural Rendering is an emerging research area which combines learned components with rendering knowledge from computer graphics. Recent work has shown that neural rendering techniques can generate high quality novel views of a wide range of scenes (Mildenhall et al., 2020; Sitzmann et al., 2019; Liu et al., 2020; Schwarz et al., 2020). In this work we build upon the method of Mildenhall et al. (2020), coined as a Neural Radiance Field (NeRF), for single scene compression and then extend it with a novel reparameterization for jointly compressing multiple scenes. Training neural representation networks jointly across different scenes (without compression) has been explored by Sitzmann et al. (2019) and Liu et al. 
(2020), who use a hypernetwork (Ha et al., 2017) to map a latent vector associated with each scene to the parameters of the representation network. Liu et al. (2020) note that the hypernetwork approach results in significant degradation of performance when applied to the NeRF model (a loss of more than 4 dB PSNR). In contrast, our approach of shared reparameterization is significantly different from these methods.\nModel Compression There is a body of research for reducing the space requirements of deep neural networks. Pruning tries to find a sparse set of weights by successively removing a subset of weights according to some criterion (Han et al., 2016; Li et al., 2017). Quantization reduces the precision used to describe the weights themselves (Courbariaux et al., 2016; Li et al., 2016). In this work we focus instead on weight coding approaches (Havasi et al., 2019; Oktay et al., 2020) that code the model parameters to yield a compressed representation." }, { "heading": "6 DISCUSSION AND CONCLUSION", "text": "Our results demonstrate that cNeRF produces far better results as a compressed representation than a state-of-the-art baseline, HEVC+LLFF, which follows the paradigm of compressing the original views. In contrast, our method compresses a representation of the radiance field itself. This is important for two reasons:\n• Practically, compressing the views themselves bars the receiver from using more complex and better-performing rendering methods such as NeRF, because doing this would require training to be performed at the receiving side after decompression, which is computationally infeasible in many applications.\n• Determining the radiance field and compressing it on the sending side may have coding and/or representational benefits, because of the data processing inequality: the cNeRF parameters are a function of the original views, and as such must contain equal to or less information than the original views (the training data). The method is thus relieved of the need to encode information in the original views that is not useful for the rendering task.\nIt is difficult to gather direct evidence for the latter point, as the actual entropy of both representations is difficult to measure (we can only upper bound it by the compressed size). However, the substantial performance improvement of our method compared to HEVC+LLFF suggests that the radiance field is a more economical representation for the scene.\nLinked to our choice is also the fact that we adopt a more realistic evaluation methodology than many scene compression techniques. Rather reporting the bitrate and reconstruction quality of the original views, we evaluate our method (and the baseline) by reporting the reconstruction quality of a held-out set of views of each scene, which was not used for training. Since in a free-viewpoint scenario, the vast majority of rendered views will not correspond to one of the original ones, we believe this more accurately measures success of the compared methods.\nThe encoding time for cNeRF is long, given that a new scene must be trained from scratch. Importantly though, the decoding time is much less, as it is only required to render the views using the decompressed NeRF model. cNeRF enables neural scene rendering methods such as NeRF to be used for scene compression, as it shifts the complexity requirements from the receiver to the sender. 
In many applications, it is more acceptable to incur high encoding times than high decoding times, as one compressed data point may be decompressed many times, allowing amortization of the encoding time, and since power-constrained devices are often at the receiving side. Thus, our method represents a big step towards enabling neural scene rendering in practical applications." }, { "heading": "A NEURAL RADIANCE FIELDS", "text": "The neural rendering approach of Mildenhall et al. (2020) uses a neural network to model a radiance field. The radiance field itself is a learned mapping gθ : R5 → (R3,R+), where the input is a 3D spatial coordinate p = (x, y, z) ∈ R3 and a 2D viewing direction d = (θ, φ) ∈ R2. The NeRF model also makes use of a positional encoding into the frequency domain, applied elementwise to spatial and directional inputs\nγ(p) = (sin(20πp), cos(20πp), ..., sin(2L−1πp), cos(2L−1πp)) (6) This type of encoding has been shown to be important for implicit models, which take as input low dimensional data which contains high frequency information (Tancik et al., 2020; Sitzmann et al., 2020).\nThe network output is an RGB value c = (r, g, b) ∈ R3 and a density element σ ∈ R+. To render a particular view, the RGB values are sampled along the relevant rays and accumulated according to their density elements. In particular, the color c(r) of a ray r = {o + td : t ≥ 0}, in direction d from the camera origin o, is computed as\nc(r) = K∑ i=1 Ti(1− exp(−σiδi))ci, where Ti = exp ( − ∑i−1 j=1 σjδj ) , (7)\nwhere (ci, σi) is the output of the mapping evaluated at (pi,d), where pi = o+tid, ti is the distance of sample i from the origin along the ray, and δi = ti+1 − ti is the distance between samples. The color c(r) can be interpreted as the expected color of the point along the ray in the scene closest to the camera, if the points in the scene are distributed along the ray according to an inhomogeneous Poisson process. Since in a Poisson process with density σi, the probability that there are no points in an interval of length δi is exp(−σiδi). Thus Ti is the probability that there are no points between t1 and ti, and (1 − exp(−σiδi)) is the probability that there is a point between ti and ti+1. The rendered view X̂ comprises pixels whose colors c(r) are evaluated at rays emanating from the same camera origin o but having slightly different directions d, depending on the camera pose V ." }, { "heading": "B HEVC + LLFF SPECIFICATION AND ABLATIONS", "text": "There are many hyperparameters to select for the HEVC + LLFF baseline. The first we consider is the number of images to compress with HEVC. If too many images are compressed with HEVC then at some point the performance of LLFF will saturate and an unnecessary amount of space will be used. On the other hand, if too few images are compressed with HEVC, then LLFF will find it difficult to blend these (de)compressed images to form high quality renders. To illustrate this effect, we run an ablation on the Fern scene where we vary the number of images we compress with HEVC, rendering a held out set of images conditioned on the reconstructions. The results are displayed in Fig. 6. We can clearly see the saturation point at around 10 images, beyond which there is no benefit to compressing extra images. 
Thus when picking the number of images to compress for new scenes, we do not use more than 4 per test image (which corresponds to compressing 12 images in our ablation).
The second effect we study is the order in which images are compressed with HEVC, which affects the performance as HEVC is a video codec and is thus sensitive to image ordering. It stands to reason that the more the sequence of images resembles a natural video, the better the coding will be. As such, we consider two orderings: first the "snake scan" ordering, in which images are ordered vertically by their camera pose, going alternately left to right then right to left. The second is the "lozenge" ordering (Jiang et al., 2017), in which images are ordered by their camera pose in a spiral outwards from the centre. Both orderings appear sensible since they always step from a given camera pose to an adjacent pose. We compare results compressing and reconstructing a set of images using HEVC across a range of Quantization Parameter (QP) values for the Fern scene in Tab. 2. The difference between the two orderings is very small. Since snake scan is simpler to implement, we use this in all our experiments.
The effect of changing QP is demonstrated in Fig. 7, and we select QP=30 for the experiments in which we choose one rate–distortion point to evaluate, since it achieves almost the same performance as QP=20 and QP=10 with considerably less space.
Figure 7: Full rate–distortion curves for HEVC + LLFF, with labels showing the effect of the QP parameter. To avoid clutter, only the Lego QP labels are given, and the other scenes are similarly ordered from QP=10 on the right to QP=50 on the left." }, { "heading": "C EXTRA RESULTS", "text": "Here we present some further results from our experiments, including results on different perception metrics, a breakdown of the cNeRF model size and extra comparisons of renderings." } ]
2020
null
SP:f9b9968f2228687032c18ac27887a3c70cdfbc1d
[ "This paper proposes a semi-supervised architecture for clustering time-series data based on Convolutional Autoencoders (AEs). The model combines the regular reconstruction loss usually employed in AEs with two new losses based on the intrinsic clustering evaluation metrics, Silhouette and DBIndex. The experiments show that this setup can achieve good clustering results (ARI) even when very few labeled examples of each class (4-28) are provided." ]
Time series data is abundantly available in the real world, but there is a distinct lack of large, labeled datasets available for many types of learning tasks. Semi-supervised models, which can leverage small amounts of expert-labeled data along with a larger unlabeled dataset, have been shown to improve performance over unsupervised learning models. Existing semi-supervised time series clustering algorithms suffer from a lack of scalability, as they are limited to performing learning operations within the original data space. We propose an autoencoder-based semi-supervised learning model along with multiple semi-supervised objective functions which can be used to improve the quality of the autoencoder's learned latent space via the addition of a small number of labeled examples. Experiments on a variety of datasets show that our methods can usually improve k-Means clustering performance. Our methods achieve a maximum average ARI of 0.897, a 140% increase over an unsupervised CAE model. Our methods also achieve a maximum improvement of 44% over a semi-supervised model.
[]
[ { "authors": [ "Wei Bao", "Jun Yue", "Yulei Rao" ], "title": "A deep learning framework for financial time series using stacked autoencoders and long-short term memory", "venue": "PloS one,", "year": 2017 }, { "authors": [ "Donald J Berndt", "James Clifford" ], "title": "Using dynamic time warping to find patterns in time series", "venue": "In KDD workshop,", "year": 1994 }, { "authors": [ "Hoang Anh Dau", "Nurjahan Begum", "Eamonn Keogh" ], "title": "Semi-supervision dramatically improves time series clustering under dynamic time warping", "venue": "In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management,", "year": 2016 }, { "authors": [ "Hoang Anh Dau", "Eamonn Keogh", "Kaveh Kamgar", "Chin-Chia Michael Yeh", "Yan Zhu", "Shaghayegh Gharghabi", "Chotirat Ann Ratanamahatana", "Yanping", "Bing Hu", "Nurjahan Begum", "Anthony Bagnall", "Abdullah Mueen", "Gustavo Batista", "Hexagon-ML" ], "title": "The ucr time series classification", "venue": null, "year": 2018 }, { "authors": [ "Nat Dilokthanakul", "Pedro AM Mediano", "Marta Garnelo", "Matthew CH Lee", "Hugh Salimbeni", "Kai Arulkumaran", "Murray Shanahan" ], "title": "Deep unsupervised clustering with gaussian mixture variational autoencoders", "venue": "arXiv preprint arXiv:1611.02648,", "year": 2016 }, { "authors": [ "Hui Ding", "Goce Trajcevski", "Peter Scheuermann", "Xiaoyue Wang", "Eamonn Keogh" ], "title": "Querying and mining of time series data: experimental comparison of representations and distance measures", "venue": "Proceedings of the VLDB Endowment,", "year": 2008 }, { "authors": [ "Vincent Fortuin", "Matthias Hüser", "Francesco Locatello", "Heiko Strathmann", "Gunnar Rätsch" ], "title": "Som-vae: Interpretable discrete representation learning on time series", "venue": "arXiv preprint arXiv:1806.02199,", "year": 2018 }, { "authors": [ "Guoliang He", "Yanzhou Pan", "Xuewen Xia", "Jinrong He", "Rong Peng", "Neal N Xiong" ], "title": "A fast semisupervised clustering framework for large-scale time series data", "venue": "IEEE Transactions on Systems, Man, and Cybernetics: Systems,", "year": 2019 }, { "authors": [ "Daniel Holden", "Jun Saito", "Taku Komura", "Thomas Joyce" ], "title": "Learning motion manifolds with convolutional autoencoders", "venue": "In SIGGRAPH Asia 2015 Technical Briefs, pp", "year": 2015 }, { "authors": [ "Lawrence Hubert", "Phipps Arabie" ], "title": "Comparing partitions", "venue": "Journal of classification,", "year": 1985 }, { "authors": [ "Eamonn Keogh", "Chotirat Ann Ratanamahatana" ], "title": "Exact indexing of dynamic time warping", "venue": "Knowledge and information systems,", "year": 2005 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "arXiv preprint arXiv:1412.6980,", "year": 2014 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "arXiv preprint arXiv:1312.6114,", "year": 2013 }, { "authors": [ "Daniel Lemire" ], "title": "Faster retrieval with a two-pass dynamic-time-warping lower bound", "venue": "Pattern recognition,", "year": 2009 }, { "authors": [ "Ujjwal Maulik", "Sanghamitra Bandyopadhyay" ], "title": "Performance evaluation of some clustering algorithms and validity indices", "venue": "IEEE Transactions on pattern analysis and machine intelligence,", "year": 2002 }, { "authors": [ "Ryan McConville", "Raul Santos-Rodriguez", "Robert J Piechocki", "Ian Craddock" ], "title": "N2d:(not too) deep clustering via 
clustering the local manifold of an autoencoded embedding", "venue": null, "year": 1908 }, { "authors": [ "John Paparrizos", "Luis Gravano" ], "title": "k-shape: Efficient and accurate clustering of time series", "venue": "In Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data,", "year": 2015 }, { "authors": [ "John Paparrizos", "Luis Gravano" ], "title": "Fast and accurate time-series clustering", "venue": "ACM Transactions on Database Systems (TODS),", "year": 2017 }, { "authors": [ "F. Pedregosa", "G. Varoquaux", "A. Gramfort", "V. Michel", "B. Thirion", "O. Grisel", "M. Blondel", "P. Prettenhofer", "R. Weiss", "V. Dubourg", "J. Vanderplas", "A. Passos", "D. Cournapeau", "M. Brucher", "M. Perrot", "E. Duchesnay" ], "title": "Scikit-learn: Machine learning in Python", "venue": "Journal of Machine Learning Research,", "year": 2011 }, { "authors": [ "Mengye Ren", "Eleni Triantafillou", "Sachin Ravi", "Jake Snell", "Kevin Swersky", "Joshua B Tenenbaum", "Hugo Larochelle", "Richard S Zemel" ], "title": "Meta-learning for semi-supervised few-shot classification", "venue": "arXiv preprint arXiv:1803.00676,", "year": 2018 }, { "authors": [ "Peter J Rousseeuw" ], "title": "Silhouettes: a graphical aid to the interpretation and validation of cluster analysis", "venue": "Journal of computational and applied mathematics,", "year": 1987 }, { "authors": [ "Jake Snell", "Kevin Swersky", "Richard Zemel" ], "title": "Prototypical networks for few-shot learning", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Weiran Wang", "Raman Arora", "Karen Livescu", "Jeff Bilmes" ], "title": "On deep multi-view representation learning", "venue": "In International conference on machine learning,", "year": 2015 }, { "authors": [ "Jesin Zakaria", "Abdullah Mueen", "Eamonn Keogh" ], "title": "Clustering time series using unsupervisedshapelets", "venue": "IEEE 12th International Conference on Data Mining,", "year": 2012 } ]
[ { "heading": "1 INTRODUCTION", "text": "Time series data can be defined as any data which contains multiple sequentially ordered measurements. Real world examples of time series data are abundant throughout many domains, including finance, weather, and medicine. One common learning task is to partition a set of time series into clusters. This unsupervised learning task can be used to learn more about the underlying structure of a dataset, without the need for a supervised learning objective or ground-truth labels. Clustering time series data is a challenging problem because time series data may be high-dimensional, and is not always segmented cleanly, leading to issues with alignment and noise.\nThe most basic methods for time series clustering apply general clustering algorithms to raw time series data. Familiar clustering algorithms like hierarchical clustering or k-Means clustering algorithms may be applied using Euclidean Distance (ED) for comparisons. Although ED can perform well in some cases, it is susceptible to noise and temporal shifting. The improved Dynamic Time Warping (DTW) (Berndt & Clifford, 1994) metric provides invariance to temporal shifts, but is expensive to compute for clustering tasks. A more scalable alternative to DTW exists in k-Shape, a measure based on the shape-based distance (SBD) metric for comparing whole time series (Paparrizos & Gravano, 2017). Shapelet-based approaches such as Unsupervised Shapelets (Zakaria et al., 2012) can mitigate issues with shifting and noise but are limited to extracting a single pattern/feature from each time series.\nOne alternative approach for clustering time series data is to apply dimensionality reduction through the use of an autoencoder. Autoencoders are capable of learning low-dimensional projections of high-dimensional data. Both LSTM and convolutional autoencoders have been shown to be successful at learning latent representations of time series data. These models can extract a large number of features at each time step. After training an autoencoder model, the learned low-dimensional latent representation can then be fed to an arbitrary clustering algorithm to perform the clustering task. Because autoencoder models reduce the dimensionality of the data, they naturally avoid issues with noise, and provide a level of invariance against temporal shifting.\nRecently, the field of semi-supervised learning has shown great success at boosting the performance of unsupervised models using small amounts of labeled data. Dau et al. (2016) proposes a solution for semi-supervised clustering using DTW. However, this solution is still based on DTW, and as\nsuch suffers from scalability issues. He et al. (2019) proposes a constraint-propagation approach for semi-supervised clustering, which may (but is not required to) be used in conjunction with DTW. However, this solution still performs time series comparisons within the raw data space, which may cause issues with scalability for large datasets.\nIn this paper, we present a semi-supervised deep learning model based on a convolutional autoencoder (CAE), which may be used to perform clustering on time series datasets. We also present new semi-supervised learning objectives, adapted from well-known internal clustering metrics, which can significantly improve clustering performance when provided with a small number of labeled time series. 
We perform experiments to show that our semi-supervised model can improve performance relative to an unsupervised model when applied to clustering tasks. We also implement a lightly modified batch-based version of the semi-supervised learning solution presented in Ren et al. (2018), and show that our proposed solutions are competitive. In the best case, our semi-supervised model shows an improvement in ARI of 140% over an unsupervised CAE model when applying k-Means clustering, and an improvement of 44% over a similar model.
In the remainder of this paper, Section 2 reviews the related work on time series clustering, Section 3 presents our proposed method for semi-supervised time series clustering, and in Section 4.1 we discuss our experimental methodology and present our experimental results. Finally, Section 5 details our conclusions and avenues for future research." }, { "heading": "2 RELATED WORK", "text": "One of the most common ways to perform time series clustering is to apply the k-Means algorithm. By default, k-Means uses Euclidean Distance (ED). ED is efficient to calculate, and in many cases shows good results (Ding et al., 2008). However, ED comparison will fail when two similar time series are shifted temporally relative to one another. Additionally, ED comparisons are sensitive to noisy data. The Dynamic Time Warping (DTW) metric (Berndt & Clifford, 1994) improves on ED by computing a warping path between a pair of time series. This approach solves issues with temporal shifting, but requires $O(N^2)$ time to compute for two time series of length $N$. Recent work has provided bounds for this computation (Keogh & Ratanamahatana, 2005; Lemire, 2009), but the scalability of DTW remains an issue for large datasets and long time series. The k-Shape algorithm (Paparrizos & Gravano, 2015) is a scalable and performant alternative to DTW, and offers similar performance to DTW at a lower computational cost. The Unsupervised Shapelets (Zakaria et al., 2012) clustering method operates by forming clusters around common subsequences extracted from the data. This approach provides invariance against shifts, since the shapelet may appear anywhere within each time series, and also provides some invariance against noise or outliers within the data, since elementwise comparisons only occur between shapelets rather than the full time series. In this regard, the UShapelet algorithm has some advantages over DTW and k-Shape. However, this method is constrained to extracting a single shapelet/feature from each time series.
Recently, semi-supervised learning has shown the benefit of augmenting a large unlabeled dataset with a small amount of labeled data. There is some existing work for applying semi-supervised learning to time series clustering. The semi-supervised time series clustering solution presented in Dau et al. (2016) proposes a modified version of DTW, which operates in a semi-supervised manner using supervised constraints. However, this method still relies on performing DTW comparison within the original data space, and as such is not a scalable solution for large datasets or long time series. Another methodology for semi-supervised time series clustering is He et al. (2019), which is a graph-based approach using supervised examples to generate positive and negative constraints between points.
This approach does not rely on DTW, but the algorithm still performs comparisons in the original data space, which can be problematic as the length of the time series grows.

Popular deep learning architectures such as LSTMs and CNNs may also be applied to time series data. Both LSTM and CNN networks may be arranged as autoencoders, allowing for unsupervised feature learning for clustering, compression, or anomaly detection tasks. Holden et al. (2015) use a Convolutional Autoencoder (CAE) model to learn a featurized representation of gait data. Autoencoder architectures may also be applied for anomaly detection, as is shown in Bao et al. (2017). Performing comparisons on embedded samples avoids many of the issues of direct pairwise comparisons. Since autoencoders reduce the dimensionality of the data, distance comparisons will be performed in a lower-dimensional space than the original data, which improves scalability. Embedding the data before comparison also reduces sensitivity to noise, since raw data are not compared directly. One work that takes advantage of this approach is McConville et al. (2019), which learns a manifold for clustering on the trained latent space of an autoencoder. Recent work in the generative model field has produced the Variational Autoencoder (VAE) architecture (Kingma & Welling, 2013), which learns probability distributions for latent features within the data and may also be applied for clustering as in Dilokthanakul et al. (2016). This architecture has also been successfully applied in Fortuin et al. (2018), which uses a VAE based method to learn feature representations of time series. We base our model on a CAE, as the CAE architecture is simple and well-known, and provides a good platform for evaluating the performance of our proposed objective functions." }, { "heading": "3 SEMI-SUPERVISED LEARNING OF LATENT SPACE", "text": "Our proposed framework is a semi-supervised CAE model, which combines an unsupervised reconstruction loss Lrc with a semi-supervised loss Lsup, as shown in Figure 1. We show how an existing semi-supervised learning objective function may be adapted to fit into our model, and also propose two new objective functions based on well-known internal clustering metrics. We focus on optimizing the autoencoder’s latent space for distance-based clustering algorithms, and specifically perform our experimentation using the k-Means algorithm. The spherical, centroid-based clusters generated by k-Means are a good fit for the proposed semi-supervised losses, which encourage each cluster to converge around a single high-density point." }, { "heading": "3.1 CAE MODEL ARCHITECTURE", "text": "We base our model on a two-layer Convolutional Autoencoder (CAE) architecture. The CAE uses two 1-D convolutional layers to featurize the time series. The filter widths for both layers are calculated at runtime from the length of the time series. Each layer of the encoder contains a 1D convolution operation, followed by a non-linear activation. These layers are paired with transpose convolution layers in the decoder. After each convolution layer in the encoder, we also apply a max-pooling operation to further reduce the length of the featurized sequence. Each max-pooling operation is “reversed” using a nearest-neighbor upsampling operation in the decoder. Alternatively, large strides in the convolutional layer may be used instead of pooling operations.
This accomplishes the same goal of reducing the length of the featurized sequence, but does not require any up-sampling operations in the decoder, since the transpose convolution operation with a stride will perform the upsampling. We found that the max-pooling and large stride methods produced similar results in practice." }, { "heading": "3.2 SEMI-SUPERVISED LOSS FUNCTIONS", "text": "" }, { "heading": "3.2.1 PROTOTYPE LOSS", "text": "Snell et al. (2017) present a system for performing few-shot learning, where a small number of labeled examples for each class in the dataset are embedded using an encoder network. These points are divided into two sets, query examples and support examples. In the latent space, the support embeddings are averaged to determine a centroid or “prototype” for each class. Training occurs by measuring the distance from each query example to the support centroid for the class. Each query point is labeled with the class of the closest support prototype. The probability of a query point $i$ belonging to class $j$ is calculated as:

$$p_{ij} = \frac{e^{-d(i, C_j)}}{\sum_{j'} e^{-d(i, C_{j'})}} \quad (1)$$

In Equation 1, $-d(i, C_j)$ is the negative distance between query point $i$ and the support prototype for class $j$. Any distance metric may be used for $d$, but for the remainder of this paper, we treat $d$ as the squared Euclidean distance. Training proceeds by minimizing the cross entropy between the labels for the query points and the probability distribution $p$. The objective of this approach is to learn a latent space where similarities between examples are quantifiable using a distance metric like Euclidean distance. Examples from the same class will be close to one another in the embedded space, while examples from separate classes will have larger distances. In addition to the few-shot learning scenario, a later work by Ren et al. (2018) demonstrates that the prototypical network objective can be modified to support semi-supervised few-shot classification as well. The first objective function proposed in Section 3.1.1 of Ren et al. (2018) is calculated in a two-stage process. In the first stage, the labeled data prototypes are calculated as the centroid of the $N_j$ embedded labeled data points $h(x_i)$ for each class $j$ (whose labeled set we denote $J$), as shown in Equation 2a. The second stage assigns each unlabeled point a soft cluster weighting, based on the distances to each of the calculated prototypes $C_j$, shown in Equation 2b. Finally, the original prototypes are updated to include the unlabeled data using the soft cluster assignments from Equation 2b. In Equation 2c, the final prototype calculation is performed by adding the embedded values for the labeled class $j$ along with the weighted unlabeled values from set $U$. This sum is then divided by the sum of $N_j$ and $\sum_{i \in U} w_{ij}$, which capture the number of labeled examples and the total soft weight assigned to class $j$, respectively.

$$C_j = \frac{1}{N_j} \sum_{i \in J} h(x_i) \qquad w_{ij} = \frac{e^{-\|h(x_i) - C_j\|^2}}{\sum_{j'} e^{-\|h(x_i) - C_{j'}\|^2}} \qquad \hat{C}_j = \frac{\sum_{i \in J} h(x_i) + \sum_{i \in U} h(x_i)\, w_{ij}}{N_j + \sum_{i \in U} w_{ij}} \quad (2a,b,c)$$

We extend our vanilla CAE model into a semi-supervised learning context by calculating refined prototypes from the labeled and unlabeled embeddings within the CAE’s embedded space. The semi-supervised loss objective $L_{proto}$ can be written as Equation 3. In Equation 3, $N$ represents the total number of samples in the batch (unlabeled and labeled), $y_{ij}$ is the ground truth indicator for class $j$, and $\hat{p}_{ij}$ is the probability that sample $i$ belongs to class $j$, obtained by performing the calculation in Equation 1 using the refined prototypes from Equation 2c. A code sketch of this refinement is given below.
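To make Equations 1 and 2a-c concrete, the following is a minimal NumPy sketch of the prototype refinement and the resulting class probabilities. The function and variable names are our own illustration, not the authors' released code.

```python
import numpy as np

def refined_prototypes(h_lab, y_lab, h_unl, num_classes):
    """Refined prototypes following Eqs. (2a-c); h_* are embedded batches."""
    # Eq. (2a): initial prototypes are the per-class means of labeled embeddings.
    C = np.stack([h_lab[y_lab == j].mean(axis=0) for j in range(num_classes)])
    # Eq. (2b): soft cluster weights for the unlabeled embeddings.
    d2 = ((h_unl[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    d2 -= d2.min(axis=1, keepdims=True)  # stabilize the softmax (cancels on normalization)
    w = np.exp(-d2)
    w /= w.sum(axis=1, keepdims=True)
    # Eq. (2c): fold the weighted unlabeled embeddings into each prototype.
    N_j = np.array([(y_lab == j).sum() for j in range(num_classes)])
    lab_sum = np.stack([h_lab[y_lab == j].sum(axis=0) for j in range(num_classes)])
    return (lab_sum + w.T @ h_unl) / (N_j + w.sum(axis=0))[:, None]

def prototype_probs(h, C_hat):
    """Eq. (1): softmax over negative squared distances to the prototypes."""
    d2 = ((h[:, None, :] - C_hat[None, :, :]) ** 2).sum(-1)
    d2 -= d2.min(axis=1, keepdims=True)
    e = np.exp(-d2)
    return e / e.sum(axis=1, keepdims=True)
```

During training, $L_{proto}$ is then the cross entropy between the batch labels and the output of prototype_probs (Equation 3); in practice the same computation would be written with differentiable tensor operations so that gradients flow back into the encoder.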
Figure 1 presents the full architecture of our model.

$$L_{proto} = -\frac{1}{N} \sum_{i}^{N} \sum_{j}^{K} y_{ij} \log(\hat{p}_{ij}) \quad (3)$$" }, { "heading": "3.2.2 SILHOUETTE LOSS", "text": "When applying k-Means clustering to an unlabeled dataset, one must first choose the correct k, or number of clusters to fit using the model. One metric for determining the correct number of clusters is the Silhouette score (Rousseeuw, 1987). Silhouette score belongs to the family of internal clustering metrics (Maulik & Bandyopadhyay, 2002), which provide a method for evaluating the performance of a clustering algorithm when no ground truth labels are available. In the absence of labels, internal clustering metrics instead evaluate the partitioning’s ability to separate data into clusters which have low intra-cluster distance, but high inter-cluster distance. Silhouette is a per-sample metric calculated using the following formulae:

$$a(i) = \frac{1}{|C_k| - 1} \sum_{l \in C_k,\, l \neq i} d(i, l) \qquad b(i) = \min_{k' \neq k} \frac{1}{|C_{k'}|} \sum_{l \in C_{k'}} d(i, l) \qquad s(i) = \frac{b(i) - a(i)}{\max(a(i), b(i))} \quad (4a,b,c)$$

where $C_k$ denotes the cluster containing point $i$. As mentioned in Section 3.2.1, $d$ is an arbitrary distance metric, but we set $d$ as the squared Euclidean distance for all experiments. Equation 4a represents the average intra-cluster distance from point $i$ to all other points with the same cluster label. Equation 4b represents the average distance from $i$ to the second closest cluster, or the inter-cluster distance. The second closest cluster is defined as the cluster having the second lowest average distance. The silhouette score is then calculated as the difference between the inter-cluster distance and the intra-cluster distance in Equation 4c. To normalize the term, the difference is divided by the maximum of $a(i)$ and $b(i)$. In our configuration, we use silhouette score as a semi-supervised training objective, by providing ground-truth cluster assignments for the labeled points, and calculating cluster assignments for the unlabeled points. In this configuration, Silhouette score will represent the separability of our labeled points within the embedded space. Labeled points will be encouraged to have similar embeddings in the latent space by Equation 4a, and will also be encouraged to separate themselves from embeddings with a different label by Equation 4b. The silhouette values for the labeled points can be generated directly from Equations 4a,b,c. To calculate the silhouette values for the unlabeled points, we calculate the closest embedded centroid for each unlabeled point, then calculate Equations 4a and 4b using distances between the unlabeled point and the labeled points in the closest and second-closest clusters respectively. After the silhouette scores are calculated for both the labeled and unlabeled points, we concatenate both groups and take the mean over the batch. We can rewrite Equation 4c as a training objective:

$$L_{silh} = \frac{1}{N} \sum_{i}^{N} \big(1 - s(i)\big) \quad (5)$$

To formulate Equation 5, we take $l(i) = 1 - s(i)$ as a loss term, which has a minimum at $s(i) = 1$ and a maximum at $s(i) = -1$. We take the mean over all $l(i)$ to produce a scalar loss value, representing the separability of our labeled examples within the embedded space. We can train on Equation 5 using gradient descent methods either alone, or in combination with the CAE reconstruction loss." }, { "heading": "3.2.3 DB INDEX LOSS", "text": "The Davies-Bouldin index is another example of an internal clustering metric. Similar to the Silhouette score, the DB Index value is a measure of cluster quality, with lower values indicating higher quality clusters.
Like the Silhouette index, the DB Index is comprised of two terms, which are combined to form the metric value.

$$S(C_k) = \frac{1}{|C_k|} \sum_{i \in C_k} d(i, \bar{C}_k) \qquad M(C_k, C_l) = d(\bar{C}_k, \bar{C}_l) \qquad R(C_k, C_l) = \frac{S(C_k) + S(C_l)}{M(C_k, C_l)} \quad (6a,b,c)$$

Equations 6a and 6b capture notions of intra- and inter-cluster similarity, respectively. Equation 6c is a metric which captures the quality of the separation between clusters $C_k$ and $C_l$ as defined by their individual densities, as well as the distance between their centroids $\bar{C}_k$ and $\bar{C}_l$. Lower values of $R$ indicate a higher quality of separation between clusters $C_k$ and $C_l$, thus $R$ should be minimized. DB Index differs from Silhouette in that its terms are calculated on each pair of clusters, whereas the Silhouette index is calculated for each sample individually. Equation 7 forms our trainable loss function. As with our implementation of the Silhouette loss, we calculate Equation 7 for both labeled and unlabeled points by assigning unsupervised points a label based on the closest labeled cluster centroid.

$$L_{db} = \sum_{i \neq j} R(C_i, C_j) \quad (7)$$" }, { "heading": "4 PERFORMANCE EVALUATION", "text": "In the following section, we evaluate the performance of our method using our unsupervised CAE as the unsupervised baseline result, and the lightly modified Prototype Loss as a comparable semi-supervised result." }, { "heading": "4.1 EXPERIMENTAL METHODOLOGY", "text": "" }, { "heading": "4.1.1 MODEL SETUP", "text": "For our experimental setup, we use a two-layer CAE model, where convolutional operations are applied along the temporal dimension to featurize the data. In order to perform a fair comparison on multiple datasets, we chose the hyper-parameters for the convolutional layers as follows. For both convolutional layers, we set the filter size as $f = \lfloor T/10 \rfloor$, where $T$ represents the length of the time series. In this way, we determined the filter widths for both layers automatically from the data. For optimization, we use the Adam optimizer (Kingma & Ba, 2014) as implemented in Tensorflow 2.1.0. We use the default learning rate of lr = 0.001 for all experiments, and train for 200 epochs. In experiments which apply two loss functions simultaneously, such as the experiments using a semi-supervised loss along with the reconstruction loss, we optimize the sum of the losses. In practice, we found that weighting the losses was not necessary as the model was able to optimize both objectives simultaneously." }, { "heading": "4.1.2 DESIGN OF EXPERIMENTS", "text": "In our experiments, we integrate both the adapted Prototype Loss from Section 3.2.1 and the proposed Silhouette and DB Index Losses from Sections 3.2.2 and 3.2.3 into our CAE architecture as presented in Figure 1, and measure the clustering performance as indicated by the Adjusted Rand Index (ARI) (Hubert & Arabie, 1985), using the scikit-learn implementation of ARI (Pedregosa et al., 2011) for our experiments. In order to perform a fair comparison, we performed tests using the proposed “combined” architectures, which integrate both the semi-supervised losses Lsup and the CAE’s reconstruction loss Lrc, and also measured the effect of disabling the CAE’s reconstruction loss and training solely on the proposed semi-supervised objectives. This allows us to isolate the performance improvement for the semi-supervised architectures. Finally, we present a baseline comparison against the CAE’s unsupervised performance, using only the reconstruction loss value. A minimal code sketch of this model setup follows.
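The following is a minimal tf.keras sketch of the setup just described: a two-layer CAE with filter width ⌊T/10⌋, max-pooling in the encoder, nearest-neighbor upsampling in the decoder, and Adam with lr = 0.001. Layer sizes and names are illustrative assumptions rather than our exact implementation, and T is assumed divisible by 4 here so that the decoder output matches the input length.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_cae(T, n_filters=32):
    f = max(T // 10, 1)  # filter width computed from the series length
    inp = layers.Input(shape=(T, 1))
    # Encoder: two Conv1D layers, each followed by max-pooling.
    x = layers.Conv1D(n_filters, f, padding="same", activation="relu")(inp)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(n_filters, f, padding="same", activation="relu")(x)
    z = layers.MaxPooling1D(2)(x)  # latent featurized sequence
    # Decoder: nearest-neighbor upsampling "reverses" each pooling step.
    y = layers.UpSampling1D(2)(z)
    y = layers.Conv1D(n_filters, f, padding="same", activation="relu")(y)
    y = layers.UpSampling1D(2)(y)
    out = layers.Conv1D(1, f, padding="same")(y)
    return tf.keras.Model(inp, out), tf.keras.Model(inp, z)

autoencoder, encoder = build_cae(T=128)
autoencoder.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")
```

In the combined configuration, a semi-supervised loss from Section 3.2 would be computed on the flattened encoder output and simply added to the reconstruction loss, with no weighting between the two terms.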
The goal of these tests is to demonstrate the performance of each model as the number of supervised examples per-class increases. We perform 4 groups of tests, with each group using a fixed number of supervised examples per-class in the range [4, 28]. We initialize 5 training runs for each model within each group. Within a group we use the same 5 random seeds for each model initialization to ensure that the supervised examples chosen, as well as the parameter initializations, are all identical within the group. After training, we use the latent space of the trained model to perform k-Means clustering, and record the ARI. We use the k-Means algorithm because the centroid-based nature of k-Means is a natural fit for the proposed losses. Notably, the Prototype Loss corresponds almost exactly to a k-Means objective, and both Silhouette and DB Index loss also rely on notions of cluster density around a centroid. However, any other general clustering method may be applied. Labeled examples are included when fitting k-Means, but are not included in the ARI metric, to avoid inflating ARI artificially as the number of labeled examples increases." }, { "heading": "4.1.3 DATASETS", "text": "For our testing, we utilize three datasets chosen from the UCR Archive (Dau et al., 2018). All UCR Archive datasets are labeled, which is useful for our evaluation since we may experiment with differing amounts of labeled data. In a real-world scenario with unlabeled data, domain experts would provide label information for a small subset of the data. The three datasets that we chose are some of the largest within the UCR Archive. In the case of a trainable architecture like our proposed model, large datasets are advantageous, as larger numbers of samples will increase the quality of the latent featurization, and help to improve generalization of the features for unseen samples. All three datasets contain samples which are of the same length. In general, the CAE architecture requires that all samples be the same length, although datasets with variable-length samples can still be used by first applying interpolation or zero-padding to normalize the samples to a consistent length. FacesUCR is a dataset containing face outlines from 14 different individuals, represented as 1D time series. ECG5000 is a dataset containing ECG readings of heartbeats from a congestive heart-failure patient. UWaveGestureLibraryAll is a dataset containing accelerometer recordings of subjects performing different types of hand gestures. Table 2 in Appendix A presents a summary of the characteristics of each dataset." }, { "heading": "4.2 EXPERIMENTAL RESULTS", "text": "The results for the tests on all three datasets are presented in Figure 2. To provide a reference for the performance of our models relative to the unsupervised models, we also present Table 1, which provides ARI performance figures for the k-Means and k-Shape unsupervised clustering algorithms, as applied to the raw data for each of our chosen datasets." }, { "heading": "4.2.1 OVERALL PERFORMANCE", "text": "The results for FacesUCR are presented in Figure 2a. As shown in the figure, the semi-supervised approaches significantly improve the performance of the model, relative to the baseline CAE ARI of 0.35. According to Table 1, the CAE’s performance here is much better than the k-Means performance on raw data, but slightly worse than k-Shape, which achieves an average ARI of 0.441.
Out of all the semi-supervised models, the Silhouette Loss + AE model performs slightly better on average than the other models, including the Silhouette Loss without AE. This dataset is an excellent candidate for this type of semi-supervised model, as even providing 4 labels per class can achieve an ARI of 0.7 when using the Silhouette Loss. We can see that as the number of supervised examples per-class increases, the ARI achieved for all of the semi-supervised models also increases, approaching a 0.9 ARI. All semi-supervised models perform well on this dataset, although the two Silhouette Loss models seem to have a slight edge over the others. The results for ECG5000 in Figure 2b show that both DB Index methods perform poorly on this dataset. Both the DB Index and DB Index + AE methods perform worse than the baseline CAE at all numbers of labeled examples. The baseline CAE model performs similarly to k-Means clustering on ECG5000, and better than k-Shape. The Prototype and Prototype + AE methods do show some improvement over the baseline CAE, but these improvements are relatively minor. The Silhouette and Silhouette + AE methods outperform all other methods here, achieving an ARI of 0.8 for all numbers of labeled examples. However, two of the Silhouette Loss trials at 28 examples seem to diverge, and produce poor results. The Silhouette Loss + AE models do not suffer this same divergence, and provide a stable ARI of around 0.8 for all trials at 28 examples per-class. We suspect that the AE model and the associated Lrc were able to help mitigate the effect of the divergence in the Silhouette + AE model. The Silhouette + AE model does encounter one minor divergence at 4 examples per-class, where it only achieves an ARI of 0.5. However, we expect that smaller numbers of labeled examples will tend to be noisier, since the model performance depends heavily on the choice of supervised examples. In this case, the Silhouette models are the winners, but do suffer from some divergence issues as mentioned before. We believe that this issue is caused in part by the extremely unbalanced nature of the ECG5000 dataset. Table 3 in Appendix A shows the distribution of classes. Two of the classes are very sparse, and must be over-sampled during training in order to provide the correct number of labeled examples. Additionally, most of the cluster points are concentrated in Clusters 1 and 2. During training, the clusters with smaller numbers of ground-truth labels tend to “steal” some of the true members of Clusters 1 and 2, leading to a converged result where Clusters 3, 4, and 5 are much larger than in the ground-truth data. The results for UWaveGestureLibraryAll in Figure 2c show that DB Index performs consistently well over all numbers of labeled examples. At 4 labeled examples per-class, the DB Index and DB Index + AE methods are the only methods which outperform the standard baseline CAE. Starting at 12 examples per-class, the Silhouette and Prototype models do start to outperform the CAE baseline, but perform noticeably worse than the DB Index models until 28 examples per-class. For this dataset, even the baseline CAE outperforms k-Means on the raw data, which performs the best of the three unsupervised models in Table 1. In this experiment, the DB Index models are the clear winners since they provide excellent performance for any number of labeled examples tested."
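As a usage note, the evaluation protocol of Section 4.1.2 can be summarized in a few lines; the sketch below assumes a trained encoder like the one above and uses hypothetical helper names. Labeled points participate in the k-Means fit but are masked out of the ARI, as described earlier.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def evaluate_latent_space(encoder, X, y_true, labeled_idx, k):
    Z = encoder.predict(X).reshape(len(X), -1)   # flatten latent features
    preds = KMeans(n_clusters=k).fit_predict(Z)  # labeled points included in the fit
    mask = np.ones(len(X), dtype=bool)           # ...but excluded from the ARI
    mask[labeled_idx] = False
    return adjusted_rand_score(y_true[mask], preds[mask])
```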
}, { "heading": "4.2.2 HYPERPARAMETER STUDY", "text": "In the performance evaluation results from Section 4.2.1, we perform parameter updates at the end of each batch. In order to better understand how the frequency of parameter updates affects the overall performance, we also experiment by applying the update for the semi-supervised loss at the end of each epoch, while still updating parameters from the autoencoder loss at the end of each batch. In order to accomplish this, we calculate the semi-supervised gradients at each batch, accumulating them and applying the sum as the gradient update at the end of each epoch. For this experiment, we train the model on the UWaveGestureLibrary dataset and choose 12 supervised examples per class. Since autoencoder updates are performed at the end of each batch, the autoencoder result does not change based on gradient update method, and the results are only provided for comparison. We train all versions of the model using the same random seed, only varying the gradient update method between the two model instances. Figure 3a shows the result of the experiment. As expected, the AE model obtains identical performance between the two update methods. The DB Index and Silhouette methods see a marginal improvement when training using the per-batch methodology. The Prototype Loss method sees marginally better performance when updating at the end of the epoch. In Section 4.1, we describe our method for determining the convolutional filter size dynamically based on data input. In this experiment, we test the same model setup as the gradient test, but vary the filter size. Gradient updates are performed at the end of each epoch. Figure 3b shows the result of this experiment. Most models perform their best with the filter size of 50, but in general performance does not differ much with different choices for filter size. In the real-world use case, a sub-optimal choice for convolutional filter size should not degrade the performance of the model.\nWe also explored the usability of the learned latent space for classification by applying a KNN classification. For this series of tests, we treat the randomly chosen ”labeled” examples for each class as the training points for a KNN classifier, then predict the class of the unlabeled points. We calculate the accuracy of the KNN classifier for each of the three datasets described in 4.1.3.\nIn the results for FacesUCR as seen in Figure 4a, we see a clear distinction in performance between the semi-supervised models and the unsupervised autoencoder. Even for low numbers of supervised examples, all semi-supervised models outperform the autoencoder by a large margin, and the autoencoder never closes the gap in performance, even for larger numbers of supervised examples. The results for ECG5000 show that all models struggle to perform better than the standard CAE. We suspect that the model performs poorly here because of the imbalanced class size, as mentioned previously. The results for UWaveGesture as shown in Figure 4c show more mixed performance. At low numbers of samples, the Sihouette and Prototype losses perform marginally better than the baseline AE, but show a large variance in performance. This demonstrates that both these models’ performance is highly sensitive to the choice for labeled examples. The DB Index models perform distinctly better than any other on this dataset, and show little variance in performance even for lower numbers of labeled examples. 
We suspect that this is because the embedding clusters in UWaveGesture are more distinct from each other, so the DB Index approach, which is based on optimizing distances between pairs of clusters, performs the best. The baseline AE also shows a large improvement here as the number of labeled examples increases, although it does not outperform the semi-supervised models." }, { "heading": "5 CONCLUSION", "text": "In this paper, we proposed a framework for semi-supervised time series clustering, including three alternative semi-supervised loss functions. Our experiments show that all three implemented semi-supervised models can improve clustering performance after training. Experiments also show that training the semi-supervised losses in combination with the reconstruction loss from the autoencoder does provide a slight boost in performance, although this difference is usually small. Although all solutions have generally stable performance across multiple parameter initializations and choices of supervised examples, we do see occasional model divergences. Because these models rely on the labeled examples for training, the quality of these labels is exceedingly important. In a real-world usage scenario, we expect a data domain expert providing labels would be able to choose the most relevant examples to label. Additionally, datasets with significantly unbalanced class sizes may cause performance issues, as exhibited by our models’ performance on the ECG5000 dataset. The results in Tables 4-6 show that Silhouette Loss on average outperforms the baseline CAE, except in the 4-sample case on the UWave dataset. In addition, the DB Index loss on average outperforms the CAE on both the UWave and FacesUCR datasets.

In future work, we plan to explore methods for combining the predictions of these models by training multiple instances of the same model in parallel, then using a consensus clustering system to generate the final set of labels. We expect that this will reduce the severity of model divergences. In a similar vein, we will explore a way to determine an optimal weighting between the reconstruction and semi-supervised losses, since our method currently applies no weighting. Additionally, we believe that training multiple models simultaneously and applying multi-view learning constraints like those proposed in Wang et al. (2015) could improve the quality of the model’s generated latent space." }, { "heading": "A APPENDIX", "text": "" } ]
2020
null
SP:d15b10f7790bd3370607b620d28b13b790abd1ec
[ "This paper presents two main theoretical results as its main contributions. First, the authors provide a bound on the model generalization in terms its local explainability. This bound relates the model generalization, the training accuracy, local explainability, and the complexity of the explanations. Second the authors provide a bound on local explainability generalization. This bound describes the relation between training local explainability and test time local explainability. Both these bounds make concrete intuitive notions around local explainability generalization --- for instance, as we increase the size of the local explanation, generalization will decrease. Last, the authors provide short empirical evaluation to demonstrate the properties they describe in their bounds appear in practice." ]
In this paper, we explore connections between interpretable machine learning and learning theory through the lens of local approximation explanations. First, we tackle the traditional problem of performance generalization and bound the test-time predictive accuracy of a model using a notion of how locally explainable it is. Second, we explore the novel problem of explanation generalization, which is an important concern for a growing class of finite sample-based local approximation explanations. Finally, we validate our theoretical results empirically and show that they reflect what can be seen in practice.
[ { "affiliations": [], "name": "Jeffrey Li" }, { "affiliations": [], "name": "Vaishnavh Nagarajan" }, { "affiliations": [], "name": "Gregory Plumb" }, { "affiliations": [], "name": "Ameet Talwalkar" } ]
[ { "authors": [ "Sylvain Arlot", "Robin Genuer" ], "title": "Analysis of purely random forests bias", "venue": null, "year": 2014 }, { "authors": [ "Sanjeev Arora", "Rong Ge", "Behnam Neyshabur", "Yi Zhang" ], "title": "Stronger generalization bounds for deep nets via a compression approach", "venue": "Proceedings of Machine Learning Research,", "year": 2018 }, { "authors": [ "Dheeru Dua", "Casey Graff" ], "title": "UCI machine learning repository, 2017", "venue": "URL http://archive. ics.uci.edu/ml", "year": 2017 }, { "authors": [ "Gintare Karolina Dziugaite", "Daniel M. Roy" ], "title": "Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data", "venue": "In Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence,", "year": 2017 }, { "authors": [ "Jianqing Fan" ], "title": "Local linear regression smoothers and their minimax efficiencies", "venue": "The Annals of Statistics, 21,", "year": 1993 }, { "authors": [ "Jianqing Fan", "Irène Gijbels" ], "title": "Local polynomial modelling and its applications. Number 66 in Monographs on statistics and applied probability series", "venue": null, "year": 1996 }, { "authors": [ "John Langford", "Rich Caruana" ], "title": "Not) bounding the true error", "venue": "Advances in Neural Information Processing Systems", "year": 2002 }, { "authors": [ "John Langford", "John Shawe-Taylor" ], "title": "Pac-bayes & margins", "venue": "Advances in Neural Information Processing Systems", "year": 2003 }, { "authors": [ "David McAllester" ], "title": "Simplified pac-bayesian margin bounds", "venue": "Learning Theory and Kernel Machines,", "year": 2003 }, { "authors": [ "David A McAllester" ], "title": "Some pac-bayesian theorems", "venue": "In 11th annual conference on Computational learning theory,", "year": 1998 }, { "authors": [ "Vaishnavh Nagarajan", "J. 
Zico Kolter" ], "title": "Uniform convergence may be unable to explain generalization in deep learning", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Behnam Neyshabur", "Ryota Tomioka", "Nathan Srebro" ], "title": "In search of the real inductive bias: On the role of implicit regularization in deep learning", "venue": null, "year": 2014 }, { "authors": [ "Gregory Plumb", "Denali Molitor", "Ameet S Talwalkar" ], "title": "Model agnostic supervised local explanations", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Gregory Plumb", "Maruan Al-Shedivat", "Angel Alexander Cabrera", "Adam Perer", "Eric Xing", "Ameet Talwalkar" ], "title": "Regularizing black-box models for improved interpretability, 2020", "venue": "URL https: //arxiv.org/abs/1902.06787", "year": 1902 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "Why should i trust you?”: Explaining the predictions of any classifier", "venue": "In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,", "year": 2016 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": "Anchors: High-precision model-agnostic explanations", "venue": "In AAAI Conference on Artificial Intelligence", "year": 2018 }, { "authors": [ "Lesia Semenova", "Cynthia Rudin", "Ronald Parr" ], "title": "A study in rashomon curves and volumes: A new perspective on generalization and model simplicity in machine learning, 2020", "venue": "URL https: //arxiv.org/abs/1908.01755", "year": 1908 }, { "authors": [ "Shai Shalev-Shwartz", "Shai Ben-David" ], "title": "Understanding Machine Learning - From Theory to Algorithms", "venue": null, "year": 2014 }, { "authors": [ "Jinsung Yoon", "Sercan O. Arik", "Tomas Pfister" ], "title": "RL-LIM: Reinforcement learning-based locally interpretable modeling, 2019", "venue": "URL https://arxiv.org/abs/1909.12367", "year": 1909 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "There has been a growing interest in interpretable machine learning, which seeks to help people better understand their models. While interpretable machine learning encompasses a wide range of problems, it is a fairly uncontroversial hypothesis that there exists a trade-off between a model’s complexity and general notions of interpretability. This hypothesis suggests a seemingly natural connection to the field of learning theory, which has thoroughly explored relationships between a function class’s complexity and generalization. However, formal connections between interpretability and learning theory remain relatively unstudied.\nThough there are several notions of conveying interpretability, one common and flexible approach is to use local approximations. Formally, local approximation explanations (which we will refer to as “local explanations”) provide insight into a model’s behavior as follows: for any black-box model f ∈ F and input x, the explanation system produces a simple function, gx(x′) ∈ Glocal, which approximates f in a chosen neighborhood, x′ ∼ Nx. Crucially, the freedom to specify both Glocal andNx grants local explanations great versatility. In this paper, we provide two connections between learning theory and how well f can be approximated locally (i.e. the fidelity of local explanations).\nOur first result studies the standard problem of performance generalization by relating test-time predictive accuracy to a notion of local explainability. As it turns out, our focus on local explanations leads us to unique tools and insights from a learning theory point of view. Our second result identifies and addresses an unstudied – yet important – question regarding explanation generalization. This question pertains to a growing class of explanation systems, such as MAPLE (Plumb et al., 2018) and RL-LIM (Yoon et al., 2019), which we call finite sample-based local explanations1. These methods learn their local approximations using a common finite sample drawn from data distribution D (in contrast to canonical local approximation methods such as LIME (Ribeiro et al., 2016)) and, as a result, run the risk of overfitting to this finite sample. In light of this, we answer the following question: for these explanation-learning systems, how well do the quality of local explanations generalize to data not seen during training?\n∗Denotes equal contribution 1This terminology is not to be confused with “example-based explanations” where the explanation itself is\nin the form of data instances rather than a function.\nWe address these questions with two bounds, which we outline now. 
Regarding performance generalization, we derive our first main result, Theorem 1, which bounds the expected test mean squared error (MSE) of any f in terms of its MSE over the m samples in the training set, $S = \{(x_i, y_i)\}_{i=1}^m$:

$$\underbrace{\mathbb{E}_{(x,y)\sim D}\big[(f(x)-y)^2\big]}_{\text{Test MSE}} \;\le\; \tilde{O}\Big(\underbrace{\frac{1}{m}\sum_{i=1}^{m}(f(x_i)-y_i)^2}_{\text{Train MSE}} + \underbrace{\mathbb{E}_{x\sim D,\, x'\sim N_x}\big[(g_{x'}(x)-f(x))^2\big]}_{\text{Interpretability Term (MNF)}} + \underbrace{\rho_S\, \hat{R}_S(\mathcal{G}_{local})}_{\text{Complexity Term}}\Big)$$

Regarding explanation generalization for finite sample-based explanation-learning systems, we apply a similar proof technique to obtain Theorem 2, which bounds the quality of the system’s explanations on unseen data in terms of their quality on the data on which the system was trained:

$$\underbrace{\mathbb{E}_{x\sim D,\, x'\sim N_x}\big[(g_{x'}(x)-f(x))^2\big]}_{\text{Test MNF}} \;\le\; \underbrace{\frac{1}{m}\sum_{i=1}^{m}\mathbb{E}_{x'\sim N_{x_i}}\big[(f(x_i)-g_{x'}(x_i))^2\big]}_{\text{Train MNF}} + \underbrace{\tilde{O}\big(\rho_S\, \hat{R}_S(\mathcal{G}_{local})\big)}_{\text{Complexity Term}}$$

Before summarizing our contributions, we discuss the key terms and their relationships.

• Interpretability terms: The terms involving MNF correspond to Mirrored Neighborhood Fidelity, a metric we use to measure local explanation quality. As we discuss in Section 3, this is a reasonable modification of the commonly used Neighborhood Fidelity (NF) metric (Ribeiro et al., 2016; Plumb et al., 2018). Intuitively, we generally expect MNF to be larger when the neighborhood sizes are larger, since each g_x′ is required to extrapolate farther.

• Complexity term: This term measures the complexity of the local explanation system g in terms of (a) the complexity of the local explanation class G_local and (b) ρ_S, a quantity that we define and refer to as the neighborhood disjointedness factor. As we discuss in Section 4, ρ_S is a value in [1, √m] (where m = |S|) that is proportional to the level of disjointedness of the neighborhoods for points in the sample S. Intuitively, we expect ρ_S to be larger when the neighborhood sizes are smaller, since smaller neighborhoods will overlap less.

Notably, both our bounds capture the following key trade-off: as neighborhood widths increase, MNF increases but ρ_S decreases. As such, our bounds are non-trivial only if the neighborhoods N_x can be chosen such that MNF remains small but ρ_S grows slower than Õ(√m) (since R̂_S(G_local) typically decays as Õ(1/√m)).

We summarize our main contributions as follows:

(1) We make a novel connection between performance generalization and local explainability, arriving at Theorem 1. Given the relationship between MNF and ρ_S, this bound roughly captures that an easier-to-interpret f enjoys better generalization guarantees, a potentially valuable result when reasoning about F is difficult (e.g. for neural networks). Further, our proof technique may be of independent theoretical interest as it provides a new way to bound the Rademacher complexity of a particular class of randomized functions (see Section 4).

(2) We motivate and explore an important generalization question about expected explanation quality. Specifically, we arrive at Theorem 2, a bound for test MNF in terms of train MNF. This bound suggests that practitioners can better guarantee good local explanation quality (measured by MNF) using methods which encourage the neighborhood widths to be wider (see Section 5).

(3) We verify empirically on UCI Regression datasets that our results non-trivially reflect the two types of generalization in practice. First, we demonstrate that ρ_S can indeed exhibit slower than Õ(√m) growth without significantly increasing the MNF terms.
Also, for Theorem 2, we show that the generalization gap indeed improves with larger neighborhoods (see Section 6).

(4) Primarily to aid in our theoretical results, we propose MNF as a novel yet reasonable measure of local explainability. Additionally, we argue that this metric presents a promising avenue for future study, as it may naturally complement NF and offer a unique advantage when evaluating local explanations on “realistic” on-distribution data (see Section 3)." }, { "heading": "2 RELATED WORK", "text": "Interpretability meets learning theory. Semenova et al. (2020) study the performance generalization of models learned from complex classes when they can be globally well-approximated by simpler (e.g. interpretable) classes. In such cases, their theory argues that if the complex class has many models that perform about as optimally on training data, generalization from the complex class can be more closely bounded using the simpler class’s complexity. In our corresponding results, we similarly aim to avoid involving the larger class’s complexity. However, we directly study generalization via a model’s local explainability, rather than instantiate “complex” and “simple” classes for global approximations. The two are fundamentally different technical problems; standard learning theory results for single-function global approximations cannot be directly applied here.

Statistical localized regression. (Fan, 1993; Fan & Gijbels, 1996) are canonical results which bound the squared error of a nonparametric function defined using locally fit models. These local models are both simple (e.g. linear) and similarly trained by weighting real examples with a kernel (i.e. an implied neighborhood). However, each local model is only used to make a prediction at its source point, and the theory requires shrinking the kernel width towards 0 as the sample size grows. We instead fit local models as explanations for a trained model (which is treated as the “true regression function”) and, more importantly, care about the fidelity of each local model over whole (non-zero) neighborhoods. Unlike localized regression, this allows us to use uniform convergence to bound test error with empirical and generalization terms. While the previous results do not have empirical terms, the learning rates are exponential in the number of samples.

Learning Theory. Another line of related work also studies how to explain generalization of overparameterized classes. As standard uniform convergence on these classes often leads to vacuous bounds, a general approach that has followed from (Nagarajan & Kolter, 2019; Zhang et al., 2017; Neyshabur et al., 2014) has been to study implications of different biases placed on learned models. We study what would happen if an overparameterized model had an unexplored type of bias, one that is inspired by local explainability. Additionally, our work’s technical approach parallels another line of existing results which likewise try to apply uniform convergence on separate surrogate classes. This includes PAC-Bayesian bounds, a large family of techniques that come from looking at a stochastic version of a model in parameter space (McAllester, 1998; 2003; Langford & Caruana, 2002; Langford & Shawe-Taylor, 2003). In a different vein, some results in deep learning look at compressed, sparsified, or explicitly regularized surrogates of neural networks (Arora et al., 2018; Dziugaite & Roy, 2017). In our case, the surrogate class is a collection of local explanations."
}, { "heading": "3 MIRRORED NEIGHBORHOOD FIDELITY", "text": "In order to connect local explanations to generalization, recall that we study a measure of local interpretability which we call “mirrored neighborhood fidelity” (MNF). As we explain below, this quantity comes from a slight modification to an existing metric for local approximation explanations, namely, that of neighborhood fidelity (NF).\nTo define these quantities, we use the following notation. Let X be an input space and D be a distribution over X × Y where Y ⊆ R. Then, let F be a class of functions f : X → Y . For our theoretical results, we specifically assume that Y is bounded as Y = [−B,B] for some B > 0 (though this does not matter for the remainder of this section). In order to provide local explanations, we need to fix a nearby region around each x ∈ X . To this end, for any x, letNx correspond to some distribution denoting a local neighborhood at x (e.g. typically chosen to be a distribution centered at x). For any distribution N , we use pN (x) to denote its density at x. Finally, let G be a class of local explainers g : X × X → Y such that for each x ∈ X , the local explanation g(x, ·) : X → Y belongs to a class of (simple) functions (e.g. linear), Glocal. For convenience, we denote g(x, ·) as gx(·) and use it to locally approximate f in the neighborhood defined by Nx. The accuracy of the local explanation system g is usually quantified by a term called “neighbhorhood fidelity” which is defined as follows (Ribeiro et al., 2016; 2018; Plumb et al., 2018; 2020):\nNF(f, g) := Ex∼D [ Ex′∼Nx [ (f(x′)− gx(x′))2 ]] .\nTo verbally interpet this, let us call x as the “source” point which gives rise to a local explanation gx(·) and x′ the “target” points that we try to fit using g. To compute NF(f, g), we need to do the\nfollowing: for each source point x, we first compute the average error in the fit of gx(·) over draws of nearby target points x′ ∼ Nx; then, to get a more overall measure of g’s quality, we globally average this error across draws of the source point x ∼ D. Now, to define MNF, we take the same expression as for NF but swap x and x′ within the innermost expectation (without modifying how each variable is distributed). In other words, we now sample a target point x from D and sample source points x′ from a distribution over points near a given x. Since this local distribution is over source points rather than target points, just for the sake of distinguishing, we’ll refer to this as a mirrored neighborhood distribution and denote it as Nmirx . We define MNF more formally below, following which we explain how to better understand it: Definition 3.1. (Mirrored Neighborhood Fidelity) We define MNF : F × G → R as\nMNF(f, g) := Ex∼D [ Ex′∼Nmirx [ (f(x)− gx′(x))2 ]] .\nand with an abuse of notation, we let MNF(f, g, x) := Ex′∼Nmirx [ (f(x)− gx′(x))2 ] .\nUnderstanding MNF. It is helpful to parse the expression for MNF in two different ways. First, we can think of it as measuring the error in approximating every target point x ∈ X through a randomized locally-approximating function gx′(·), where x′ is randomly drawn from the neighborhood Nmirx . A second way to parse this is in a manner similar to how we parsed NF. To do this, first we note that the expectations in MNF can be swapped around and rewritten as:\nMNF(f, g) = Ex′∼D† [ Ex∼N†\nx′\n[ (f(x)− gx′(x))2 ]] ,\nwhere D† and N†x′ are suitably defined distributions (derived in Appendix A) that can be thought of as modified counterparts of D and Nmirx′ respectively. 
Returning to the rewritten expression above, one can read MNF like NF: for each source point (now x′), we compute the average error in the fit of the corresponding local function (g_x′(·)) over target points (x) drawn from the source’s local neighborhood (N†_x′); this error is then globally averaged over different draws of the source point (x′ ∼ D†).

Why MNF? While both NF and MNF are closely related measures of local explanation quality on f, studying MNF allows us to make connections between local explainability and different notions of generalization (Sections 4 and 5). At a high level, our results don’t apply to NF because the overall distribution of the points that are “fit” to calculate NF (i.e., the target points) is not the same as the test data distribution D, instead being D perturbed by N_x. Rather, we prefer this target distribution to be D, which is the case for MNF, for us to be able to neatly bound test-time quantities via a local explainability term. Otherwise, we would end up introducing many cumbersome terms.

Furthermore, MNF may also be of practical interest to the interpretability community, as it potentially offers a unique advantage over NF when the intended usage of local explanations is centered around understanding how f works on the specific learning task it was trained on. Specifically, we offer the exploratory argument that selecting the target point distribution to be D, rather than D perturbed by N_x (as for NF), may better emphasize the ability of g to approximate f at realistic input points. This is relevant for ML (and deep learning especially) because (a) high-dimensional datasets often exhibit significant feature dependencies and adherence to lower dimensional manifolds; and (b) f can often be highly unpredictable and unstable when extrapolating beyond training data. Thus, when one measures NF with standard neighborhood choices that ignore feature dependencies (e.g. most commonly N_x = N(x, σI)), the resulting target distribution may concentrate significantly on regions that are non-relevant to the original learning task at hand. As we illustrate in a toy example, this can lead to overemphasis on fitting noisy off-manifold behavior, deteriorating the fit of explanations relative to task-relevant input regions (we defer a more detailed discussion of this point and of other trade-offs between NF and MNF to Appendix A).

4 GENERALIZATION OF MODEL PERFORMANCE VIA MNF

The generalization error of the function f is typically bounded by some notion of the representational complexity of F. While standard results bound complexity in terms of parameter count, there is theoretical value in deriving bounds involving other novel terms. By doing so, one might understand how regularizing for those terms can affect the representation capacity, and in turn, the generalization error of f. Especially when f’s complexity may be intractable to bound on its own, introducing these terms provides a potentially useful new way to understand f’s generalization.

Here specifically, we are interested in establishing a general connection between the representation complexity and the local explainability of any f. This naturally requires coming up with a notion that appropriately quantifies the complexity of G, which we discuss in the first part of this section. As we shall see, G’s complexity can be expressed in terms of G_local, which is generally less complex and more amenable to standard analysis than F in practical settings where interpretability is desired.
In the second part, we then relate this quantity to the generalization of f to derive our first main result.

Key technical challenge: bounding the complexity of G. The overall idea behind how one could tie the notions of generalization and local explainability is fairly intuitive. For example, consider a simplified setting where we approximate f by dividing X into K disjoint pieces, i.e. neighborhoods, and then approximating each neighborhood via a simple (say, linear) model. Then, one could bound the generalization error of f as the sum of two quantities: first, the error in approximating f via the piecewise linear model, and second, a term involving the complexity of said piecewise model. It is straightforward to show that this complexity term grows polynomially with the piece-count, K, and also the complexity of the simple local approximator (see Appendix C.0.1). Similarly, one could hope to bound the generalization error of f in terms of MNF(f, g) and the complexity of G. However, the key challenge here is that the class G is a much more complex class than the above class of piecewise linear models. For example, a straightforward piece-count-based complexity bound would be infinitely large since there are effectively infinitely many unique pieces in g.

Our core technical contribution here is to bound the (Rademacher) complexity of G in this more flexible local explanation setting. At a high level, the resulting bound (to be stated shortly) grows inversely with “the level of overlap” between the neighborhoods {N^mir_x | x ∈ X}, quantified as:

Definition 4.1. (Neighborhood disjointedness factor) Given a dataset S ∈ (X × Y)^m, we define the neighborhood disjointedness factor ρ_S as

$$\rho_S := \int_{x' \in \mathcal{X}} \sqrt{\frac{1}{m} \sum_{i=1}^{m} \big(p_{N^{mir}_{x_i}}(x')\big)^2}\; dx'$$

Understanding the disjointedness factor. ρ_S can be interpreted as bounding the “effective number” of pieces induced by the set of neighborhood distributions {N^mir_x | x ∈ X}. This turns out to be a quantity that lies in [1, √m] (shown formally in Appendix Fact B.1). To reason more intuitively about this quantity, it is helpful to consider its behavior in extreme scenarios. First, consider the case where N^mir_x is the same distribution (say N) regardless of x; i.e., neighborhoods are completely overlapping. Then, $\rho_S = \int_{x' \in \mathcal{X}} p_N(x')\, dx' = 1$. In the other extreme, consider if neighborhoods centered on the training data are all disjoint with supports $\mathcal{X}_1, \ldots, \mathcal{X}_{|S|}$. Here, the integral splits into m summands as: $\rho_S = \sum_{i=1}^{m} \int_{x' \in \mathcal{X}_i} \frac{1}{\sqrt{m}}\, p_{N^{mir}_{x_i}}(x')\, dx' = \sqrt{m}$. Thus, ρ_S grows from 1 to √m as the level of overlap between the neighborhoods $N^{mir}_{x_1}, \ldots, N^{mir}_{x_{|S|}}$ reduces. For intuition at non-extreme values, we show in Appendix B.2 that in a simple setting, $\rho_S = \sqrt{m^{1-k}}$ (where 0 ≤ k ≤ 1) if every neighborhood is just large enough to encompass a $1/m^{1-k}$ fraction of the mass of D.

Rademacher complexity of G. We now use ρ_S to bound the empirical Rademacher complexity of G. Recall that the empirical Rademacher complexity of a function class H consisting of h : X → R is defined as $\hat{R}_S(\mathcal{H}) := \mathbb{E}_{\vec{\sigma}}\big[\frac{1}{m} \sup_{h \in \mathcal{H}} \sum_{i=1}^{m} \sigma_i h(x_i)\big]$, where the σ_i’s are i.i.d. and drawn uniformly from {−1, 1}. Roughly, this captures the complexity of H by measuring how well it can fit random labels. Standard results allow us to then use R̂_S(H) to bound the generalization error for h ∈ H. Now, in order to define the Rademacher complexity of G (which consists of a different kind of functions whose domain is X × X instead of X), it is useful to think of g as a randomized function.
Specifically, at any target point x, the output of g is a random variable g_x′(x) where the randomness comes from x′ ∼ N^mir_x. Then, in Lemma 4.1, we take this stochasticity into account to define and bound the complexity of G. To keep our statement general, we consider a generic loss function L : R × R → R (e.g., the squared error loss is $L(y, y') = (y - y')^2$). Indeed, whenever L satisfies a standard Lipschitz assumption, we can bound the complexity of G (composed with the loss function L) in terms of ρ_S, the complexity of G_local, and the Lipschitzness of L:

Lemma 4.1. (see Appendix Lemma D.1 for full, precise statement) Let L(·, y′) be c-Lipschitz in its first argument, in that for all $y_1, y_2 \in [-B, B]$, $|L(y_1, y') - L(y_2, y')| \le c|y_1 - y_2|$. Then, the empirical Rademacher complexity of G under the loss function L is defined and bounded as:

$$\hat{R}_S(L \circ \mathcal{G}) := \mathbb{E}_{\vec{\sigma}}\Big[\sup_{g \in \mathcal{G}} \frac{1}{m} \sum_{i=1}^{m} \sigma_i\, \mathbb{E}_{x' \sim N^{mir}_{x_i}}\big[L(g_{x'}(x_i), y_i)\big]\Big] \le O\big(c\, \rho_S\, \hat{R}_S(\mathcal{G}_{local}) \cdot \ln m\big)$$

where $\vec{\sigma}$ is uniformly distributed over $\{-1, 1\}^m$.

We note that the proof technique employed here may be of independent theoretical interest as it provides a novel way to bound the complexity of a (particular type of) randomized function. Although techniques like PAC-Bayes provide ways to bound the complexity of randomized functions, they only apply to functions where the randomization comes from stochasticity in the parameters, which is not the case here.

Main result. With the above key lemma in hand, we are now ready to prove our main result, which bounds the generalization error of f in terms of the complexity of G, thereby establishing a connection between model generalization and local explainability.

Theorem 1. (see Appendix Theorem 3 for full, precise statement) With probability 1 − δ over the draws of $S = \{(x_1, y_1), \ldots, (x_m, y_m)\} \sim D^m$, for all f ∈ F and for all g ∈ G, we have (ignoring ln 1/δ factors):

$$\mathbb{E}_{(x,y)\sim D}\big[(f(x)-y)^2\big] \le \frac{4}{m}\sum_{i=1}^{m}(f(x_i)-y_i)^2 + 2\,\underbrace{\mathbb{E}_{x\sim D}\Big[\mathbb{E}_{x'\sim N^{mir}_x}\big[(f(x)-g_{x'}(x))^2\big]\Big]}_{\text{MNF}(f,g)} + \frac{4}{m}\sum_{i=1}^{m}\underbrace{\mathbb{E}_{x'\sim N^{mir}_{x_i}}\big[(f(x_i)-g_{x'}(x_i))^2\big]}_{\text{MNF}(f,g,x_i)} + O\big(B\,\rho_S\,\hat{R}_S(\mathcal{G}_{local})\,\ln m\big).$$

This result decomposes the test error of f into four quantities. The first quantity corresponds to the training error of f on the training set S. The second and the third correspond to the MNF of f with respect to g, computed on test and training data respectively. The fourth and final quantity corresponds to a term that bounds the complexity of G in terms of the “disjointedness factor” ρ_S and the complexity of the local function class G_local.

Takeaway. A key aspect of this bound is the trade-off that it captures when varying neighborhood widths. Consider shrinking the neighborhood widths to smaller and smaller values, in turn creating less and less overlap between the neighborhoods of the training examples in S. Then, on the one hand, we’d observe that the complexity term (the fourth term on the R.H.S.) increases. Specifically, since R̂_S(G_local) typically scales as O(1/√m), as we go from the one extreme of full overlap to the other extreme of complete disjointedness, the complexity term increases from O(1/√m) to O(1). At this upper extreme, the bound becomes trivial, as such a constant upper bound would directly follow from just the O(1) bounds assumed on Y. On the other hand, as the widths decrease, the fidelity terms (the second and third terms) would likely decrease; this is because the simple functions in G_local would find it increasingly easier to approximate f on the shrinking neighborhoods. This trade-off is intuitive.
A function f that is hardly amenable to being fit by local explanations would require extremely tiny neighborhoods for G_local to locally approximate it (i.e. make the MNF terms small). For example, in an extreme case, when the neighborhoods N^mir_x are set to be point masses at x, it is trivially easy to find g_x′(·) ∈ G_local with no approximation error. Thus, the complexity term would be too large in this case, implying that a hard-to-interpret f results in a poor generalization guarantee. On the other hand, when f is easier to interpret, then we’d expect it to be well-approximated by G_local even with wider neighborhoods. This allows one to afford smaller values for both the complexity and MNF terms. In other words, an easy-to-interpret f enjoys better generalization guarantees.

Caveats. Our bound has two limitations worth noting. First, for high-dimensional datasets (like image datasets), practical choices of N^mir_x can lead to almost no overlap between neighborhoods, rendering the bound trivial in practice. This potentially poor dimension-dependence is a caveat similarly shared by bounds for non-parametric local regression, whereby increasing d results in an exponential increase in the required sample size (Fan, 1993; Fan & Gijbels, 1996). Nevertheless, we show later in our experiments that for lower-dimensional datasets and for practical choices of N^mir_x, there can be sufficient neighborhood overlap to achieve values of ρ_S that are o(√m).

A second caveat is that the second quantity, MNF(f, g), requires unlabeled test data to be computed, which may be limiting if one is interested in numerically computing this bound in practice. It is however possible to get a bound without this dependence, although only on the test error of g rather than f (see Appendix Theorem 4). Nevertheless, we believe that the above bound has theoretical value in how it establishes a connection between the interpretability of f and its generalization." }, { "heading": "5 GENERALIZATION OF LOCAL EXPLAINABILITY", "text": "We now turn our attention to a more subtle kind of generalization that is unstudied yet important. Typically, the way g_x′ is learned at any source point x′ is by fitting a finite set of points sampled near x′, with the hope that this fit generalizes to unseen, neighboring target points. Naturally, we would want to ask: how well do the explanations g_x′ themselves generalize in this sense?

This question is straightforward to answer in settings like LIME (Ribeiro et al., 2016). For instance, assume that we learn g_x′ by sampling a set S_x′ of points from a Gaussian centered at x′, and that we care about the fit of g_x′ generalizing to the same Gaussian. Here, we can make a standard argument based on Rademacher complexity to bound the error of g_x′ on the Gaussian by its training error on S_x′ and R̂_{S_x′}(G_local). We can apply the same argument individually for the explanation at each source point x′, because we have a fresh dataset S_x′ to generate each g_x′. If one has the resources to sample more points to create a larger S_x′, then these bounds will also naturally tighten.

However, consider finite sample-based local explanation settings like MAPLE (Plumb et al., 2018) and RL-LIM (Yoon et al., 2019) where the training procedure is vastly different: in these procedures, the goal is to learn local explanations g_x′ in a way that is sensitive to the local structure of the (unknown) underlying data distribution D. So, instead of fitting the g_x′ to samples drawn from an arbitrarily defined (e.g.
Gaussian) distribution, here one first draws a finite sample S from D and then labels it using f. Then, across all $x' \in \mathcal{X}$, one reuses the same dataset S, but learns each $g_{x'}$ on a correspondingly reweighted version of S (typically, points in S that are nearer to x' are weighted more heavily). Contrast this with the former setting, where for each x' one has access to a fresh dataset (namely, $S_{x'}$) with which to learn $g_{x'}$. This distinction then makes it interesting to wonder when the reuse of a common dataset S could cause the explanations to generalize poorly.

Motivated by this question, we present Theorem 2. By using Lemma 4.1, we provide a bound on the “test MNF” (which corresponds to the fit of $g_{x'}$ on unseen data, averaged across D) in terms of the “train MNF” (which corresponds to the fit of $g_{x'}$ on S) and the complexity term from Lemma 4.1. We must, however, caution the reader that this theorem does not answer the exact question posed in the above paragraph; it only addresses it indirectly, as we discuss shortly.

Theorem 2. (see Appendix Theorem 2-full for the full, precise statement) For a fixed function f, with probability at least $1-\delta$ over the draws of $S \sim D^m$, for all $g \in \mathcal{G}$, we have (ignoring $\ln(1/\delta)$ factors):
$$\underbrace{\mathbb{E}_{x\sim D,\; x'\sim N^{mir}_x}\left[(f(x)-g_{x'}(x))^2\right]}_{\text{Test MNF, i.e., } \mathrm{MNF}(f,g)} \;\le\; \underbrace{\frac{1}{m}\sum_{i=1}^m \mathbb{E}_{x'\sim N^{mir}_{x_i}}\left[(f(x_i)-g_{x'}(x_i))^2\right]}_{\text{Train MNF}} + O\!\left(\rho_S\, \hat{\mathcal{R}}_S(\mathcal{G}_{local}) \ln m\right).$$

Understanding the overall bound. We first elaborate on how this bound provides an (indirect) answer to our question about how well explanations generalize. Consider a procedure like MAPLE that learns g using the finite dataset S. For each $x' \in \mathcal{X}$, we would expect this procedure to have learned a $g_{x'}$ that fits well on at least those target points x in S that are near x'. In doing so, it is reasonable to expect the training procedure to have implicitly controlled the “train MNF” term. The reasoning for this is that the train MNF computes the error in the fit of $g_{x'}$ on S for different values of x', and sums these up in a way that errors corresponding to nearby values of (x, x') are weighted more (i.e., the weight is given by $p_{N^{mir}_x}(x')$). Now, our bound suggests that when the train MNF is minimized, this carries over to the test MNF too, provided the complexity term is not large. That is, we can say that the fit of $g_{x'}$ generalizes well to unseen, nearby target points x that lie outside of S.

The indirectness of our result. Existing finite sample-based explainers do not explicitly minimize the train MNF term (e.g., MAPLE minimizes an error based upon NF). However, as argued above, they have implicit control over the train MNF. Hence, our bound essentially treats MNF as a surrogate for reasoning about the generalization of the explanations learned by an arbitrary procedure. As such, our bound does not comment on how well the exact kind of fidelity metric used during training generalizes to test data. Nevertheless, we hope that this result offers a concrete first step towards quantifying the generalization of local explanations. Furthermore, we note that one could also imagine a novel explanation-learning procedure that does explicitly minimize the train MNF term to learn g; in such a case, our bound would provide a direct answer to how well its explanations generalize. Indeed, we derive such a theoretically principled algorithm in Appendix A.

Takeaway. While the above bound captures a similar trade-off involving neighborhood width as Theorem 1, it is worth pausing to appreciate the distinct manner in which this trade-off arises here.
In particular, when the width is too small, we know that the complexity term approaches $O(1)$ and generalization is poor. Intuitively, this is because in this case, the procedure for learning $g_{x'}$ would have been trained to fit very few datapoints from S that happened to have fallen in the small neighborhood of x'. On the other hand, when the neighborhoods are large, many datapoints from S are likely to fall in each neighborhood, thus allowing each $g_{x'}$ to be trained on an effectively larger dataset. However, with large neighborhoods, it may also be hard to find functions in $\mathcal{G}_{local}$ that fit so many points in S. Overall, one practical takeaway from this bound is that it is important not to excessively shrink the neighborhood widths if one wants explanations that generalize well for predicting how f behaves at unseen points (see Section 6).

Caveats. We remark that this particular bound applies only when the dataset S is solely used to learn g; i.e., both f and the neighborhoods must be learned beforehand with separate data. (It is for this reason that we cannot plug Theorem 2 into the right-hand side of Theorem 1, in which f depends on S, to replace its test MNF term with a train MNF term.) This sort of framework is typical when deriving theoretical results for models like random forests, where it greatly aids analysis to assume that the trees' splits and their decisions are learned from independent datasets (e.g., two halves of an original dataset) (Arlot & Genuer, 2014). Now, if one is interested in a bound where S is also used to simultaneously learn f, one would have to introduce a term corresponding to the complexity of $\mathcal{F}$. Another caveat is that our bound only tells us how well the explanations $g_{x'}$ generalize on average over different values of x'. It does not tell us anything about the generalization of the quality of $g_{x'}$ for an arbitrary value of x'. That being said, just as average accuracy remains a central metric for performance (despite its lack of sensitivity to discrepancies across inputs), average MNF can still be a useful quantity for evaluating an explainer's overall performance." }, { "heading": "6 EMPIRICAL RESULTS", "text": "We present two sets of empirical results to illustrate the usefulness of our bounds. First, we demonstrate that $\rho_S$ grows much more slowly than $O(\sqrt{m})$, which, as stated before, establishes that our bounds yield meaningful convergence rates. Second, we show that Theorem 2 accurately reflects the relationship between explanation generalization and the width of $N^{mir}_x$ used to both generate and evaluate explanations.

Setup. For both experiments, we use several regression datasets from the UCI collection (Dua & Graff, 2017) and standardize each feature to have mean 0 and variance 1. We train neural networks as our “black-box” models with the same setup as in (Plumb et al., 2020), using both their non-regularized and ExpO training procedures. The latter explicitly regularizes for NF during training, which we find also decreases MNF on all datasets. For generating explanations, we define $\mathcal{G}_{local}$ to be linear models and optimize each $g_{x'}$ using the empirical MNF minimizer (see Appendix A). Finally, we approximate $\rho_S$ using a provably accurate sampling-based approach (see Appendix E).

Growth rate of $\rho_S$. In Figure 1 (top), we track the sample dependence of $\rho_S$ for various neighborhoods of width $\sigma$, setting $N^{mir}_x = N(x, \sigma I)$. We specifically approximate the growth rate as polynomial, estimating the exponent by taking the overall slope of a log-log plot of $\rho_S$ over m. To cover a natural range for each dataset, $\sigma$ is varied between the smallest and half the largest inter-example $\ell_2$ distances; a small sketch of this exponent estimation follows.
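The following is a minimal sketch of the slope-based exponent estimation, using the mixture-based Monte Carlo estimator of $\rho_S$ described in Appendix E.1; the synthetic data and the chosen subsample sizes are illustrative assumptions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_rho(X, sigma, n_samples=None):
    """Monte Carlo estimate of rho_S for Gaussian neighborhoods N(x_i, sigma^2 I),
    importance-sampling from the uniform mixture of the m neighborhoods (Appendix E.1)."""
    m, d = X.shape
    n = n_samples or 10 * m
    centers = X[rng.integers(m, size=n)]
    xs = centers + sigma * rng.standard_normal((n, d))
    sq = ((xs[:, None, :] - X[None, :, :]) ** 2).sum(-1)        # squared distances, (n, m)
    dens = np.exp(-sq / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma) ** d
    q = dens.mean(axis=1)                                       # mixture density q(x')
    return (np.sqrt((dens**2).mean(axis=1)) / q).mean()

def growth_exponent(X_full, sigma, ms=(50, 100, 200, 400)):
    """Estimate k such that rho_S = O(m^k) from the slope of a log-log fit."""
    rhos = [estimate_rho(X_full[:m], sigma) for m in ms]
    slope, _ = np.polyfit(np.log(ms), np.log(rhos), 1)
    return slope

X = rng.standard_normal((400, 5))   # stand-in for a standardized UCI dataset
print(growth_exponent(X, sigma=1.0))
```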
In these plots, while small values of $\sigma$ result in a large exponent for $\rho_S$ and large values of $\sigma$ cause g to intuitively saturate towards a global linear model, we observe that there do exist values of $\sigma$ where both of these effects are under control; i.e., we can achieve a growth rate of approximately $O(m^{0.2})$ without causing g to saturate or the MNF metrics to rise sharply.

Generalization and neighborhood size. As per the setting of Theorem 2, we generate all explanations using data not used to learn the black-box model. Specifically, we split the original test data into two halves, using only the first half for explanation training and the second for explanation testing. We plot MNF as measured over these two subsets of examples in Figure 1 (bottom). From the results, it is evident that a generalization gap between train and test MNF exists. Further, recall that Theorem 2 predicts that this gap decreases as wider neighborhoods are used, a phenomenon reflected in most of these plots. As a result, while the training MNF monotonically increases with larger neighborhoods, the test MNF decreases over certain ranges of $\sigma$." }, { "heading": "7 CONCLUSION AND FUTURE WORK", "text": "In this work, we have studied two novel connections between learning theory and local explanations. We believe these results may be of use in guiding the following directions of future work: (1) developing new local explanation algorithms inspired by our theory and the metric of MNF; (2) resolving caveats or otherwise strengthening the theory presented in this paper; and (3) exploring applications of our techniques beyond interpretability, such as the general problem of deep learning generalization, or others that require reasoning about the complexity of randomized functions." }, { "heading": "ACKNOWLEDGMENTS", "text": "This work was supported in part by DARPA FA875017C0141, the National Science Foundation grants IIS1705121 and IIS1838017, an Okawa Grant, a Google Faculty Award, an Amazon Web Services Award, a JP Morgan A.I. Research Faculty Award, and a Carnegie Bosch Institute Research Award. Vaishnavh Nagarajan was supported by a grant from the Bosch Center for AI. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA, the National Science Foundation, or any other funding agency." }, { "heading": "A MORE ON MIRRORED NEIGHBORHOOD FIDELITY", "text": "" }, { "heading": "A.1 AN ALTERNATIVE INTERPRETATION", "text": "Here we elaborate on how the expression for MNF can be parsed in the same way as NF after juggling some terms around. Recall that MNF is defined as:
$$\mathrm{MNF}(f, g) := \mathbb{E}_{x\sim D}\left[\mathbb{E}_{x'\sim N^{mir}_x}\left[(f(x)-g_{x'}(x))^2\right]\right],$$
and, with an abuse of notation, we let $\mathrm{MNF}(f, g, x) = \mathbb{E}_{x'\sim N^{mir}_x}\left[(f(x)-g_{x'}(x))^2\right]$.

Here the outer expectation is over the target points x that the explanations try to fit, and the inner expectation is over the source points x' which give rise to the explanations $g_{x'}$.

If we can swap these expectations around, we can afford a similar parsing as NF. To get there, first consider the joint distribution over x and x' that is induced by generating $x \sim D$ and then picking $x' \sim N^{mir}_x$. Under this joint distribution, we need an expression for the marginal distribution of x'.
This distribution, which we denote by $D^\dagger$, is given by:
$$p_{D^\dagger}(x') = \int_{\mathcal{X}} p_D(x)\, p_{N^{mir}_x}(x')\, dx.$$

To get a sense of what $D^\dagger$ looks like, imagine that $N^{mir}_x$ is a Gaussian centered at x. Then $D^\dagger$ corresponds to convolving D with a Gaussian, i.e., a smoother version of D.

Next, under the same joint distribution, we also need an expression for the distribution of x conditioned on x'. This distribution, denoted as $N^\dagger_{x'}$, is given by:
$$p_{N^\dagger_{x'}}(x) = \frac{p_D(x)\, p_{N^{mir}_x}(x')}{\int_{\mathcal{X}} p_D(x)\, p_{N^{mir}_x}(x')\, dx}.$$

Intuitively, $N^\dagger_{x'}$ is a distribution that is centered around x' and is also weighted by the distribution D; i.e., points that are both close to x' and realistic under D have greater weight under $N^\dagger_{x'}$. This is because the term $p_{N^{mir}_x}(x')$ in the numerator prioritizes points that are near x' (imagine $N^{mir}_x$ being a Gaussian centered at x), and the term $p_D(x)$ prioritizes realistic points.

With these definitions in hand, we can swap the expectations around and get:
$$\mathrm{MNF}(f, g) = \mathbb{E}_{x'\sim D^\dagger}\left[\mathbb{E}_{x\sim N^\dagger_{x'}}\left[(f(x)-g_{x'}(x))^2\right]\right].$$

This then has the same structure as NF in that the outer expectation is over the source points and the inner expectation is over target points, and hence it can be interpreted similarly. The key insight here is that from the perspective of each individual explanation $g_{x'}$, the relevant distribution it needs to fit (to achieve low MNF) is $N^\dagger_{x'}$, which takes into account both the local neighborhoods and the real data distribution D. In comparison, only the former matters for NF." }, { "heading": "A.2 ALGORITHM FOR MINIMIZING EMPIRICAL MIRRORED NEIGHBORHOOD FIDELITY", "text": "We now consider how one might actually fit explanations to minimize MNF. Recall from the above discussion that, from the point of view of each source point x', MNF measures how well $g_{x'}$ fits f on the distribution with density $p_{N^\dagger_{x'}}(x) = \frac{p_D(x)\, p_{N^{mir}_x}(x')}{\int_{\mathcal{X}} p_D(x)\, p_{N^{mir}_x}(x')\, dx}$. Note that one generally does not have access to samples from this distribution due to the dependence on D. However, as we argue, one can minimize the empirical version of MNF given access to a finite sample S drawn i.i.d. from D by solving the following weighted regression problem:
$$g_{x'} = \mathop{\mathrm{argmin}}_{g_{x'} \in \mathcal{G}_{local}} \frac{1}{|S|} \sum_{i=1}^{|S|} (f(x_i) - g_{x'}(x_i))^2\, p_{N^{mir}_{x_i}}(x').$$

To see what the empirical version of MNF is, we can replace the outer expectation (over $x \sim D$) in the original definition of MNF with an empirical expectation over $S = \{x_i\}_{i=1}^{|S|}$, giving us
$$\text{Empirical MNF} = \frac{1}{|S|} \sum_{i=1}^{|S|} \mathbb{E}_{x'\sim N^{mir}_{x_i}}\left[(f(x_i)-g_{x'}(x_i))^2\right] = \frac{1}{|S|} \sum_{i=1}^{|S|} \int_{\mathcal{X}} (f(x_i)-g_{x'}(x_i))^2\, p_{N^{mir}_{x_i}}(x')\, dx' = \frac{1}{|S|} \int_{\mathcal{X}} \sum_{i=1}^{|S|} (f(x_i)-g_{x'}(x_i))^2\, p_{N^{mir}_{x_i}}(x')\, dx'.$$

To minimize the overall empirical MNF, one can thus choose a $g_{x'}$ for each x' such that it minimizes the above integrand, which is akin to performing one weighted least squares regression (a small sketch of this fit appears below). Thus, one notable difference between minimizing empirical MNF and NF is that we need to use real examples to fit $g_{x'}$ for MNF but not for NF, since the target distribution of interest there can be user-defined (i.e., it may be chosen such that it can be easily sampled from).

A.3 TRADE-OFFS BETWEEN MNF AND NF

We now discuss in further detail the comparison between MNF and NF, listing both the relative advantages and disadvantages of each. It should be noted that this discussion is of a somewhat more exploratory nature; we do not aim to make definitive value judgments (i.e., that one metric is always more useful than the other), but rather to provide a better qualitative understanding of how these two metrics might be expected to behave.
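Before turning to those trade-offs, here is a minimal sketch of the A.2 weighted-regression fit referenced above, for linear explanations and Gaussian neighborhoods; the sample S and the black box f are illustrative stand-ins:

```python
import numpy as np

def fit_mnf_explanation(X, f, x_src, sigma):
    """Fit a linear g_{x'} at source point x_src via the weighted least squares problem
    argmin_g sum_i (f(x_i) - g(x_i))^2 * p_{N_xi}(x_src), for Gaussian neighborhoods.

    The common Gaussian normalizing constant cancels in the argmin, so unnormalized
    weights suffice.
    """
    w = np.exp(-((X - x_src) ** 2).sum(axis=1) / (2 * sigma ** 2))
    A = np.hstack([X, np.ones((len(X), 1))])       # linear model with intercept
    y = np.array([f(x) for x in X])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef                                    # last entry is the intercept

# Illustrative stand-ins: a sample S from D and a black box f.
rng = np.random.default_rng(0)
S = rng.standard_normal((200, 2))
f = lambda x: np.sin(x[0]) - 0.5 * x[1]
print(fit_mnf_explanation(S, f, x_src=np.zeros(2), sigma=0.5))
```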
We hope that this discussion prompts a more careful consideration of fidelity metrics in future works involving local explanations.

A.3.1 ADVANTAGES OF MNF

In many practical situations (especially i.i.d. cases), it is reasonable to assume that practitioners will care significantly about generating explanations for predictions at realistic, on-distribution points, while hoping that those (local) models correctly approximate what the model will do at nearby points which are also realistic. Our core argument for the usefulness of MNF compared to NF is that it can be used to come closer to characterizing performance relative to the second part of this goal (i.e., predicting what the model will do at realistic points).

To reiterate Section 3, this is an especially important concern for modern ML settings, which often involve significant feature dependencies (i.e., lower-dimensional data manifolds) and models that behave unstably when extrapolating beyond the given task and training data. As we illustrate below in a toy example, when one uses NF with standard neighborhood choices that don't respect data dependencies (e.g., $N_x = N(0, \sigma I)$), one may overemphasize the ability of explanations to fit this noisy behavior on regions that are off-manifold.

Toy example. We compare the abilities of MNF and NF to serve as the basis for generating local explanations. In what follows, we refer to $g^{NF}$ and $g^{MNF}$ as the linear local explainers that minimize NF and MNF respectively. We specifically consider a simple setup where the full input space has dimension d = 2 but the data exists on a manifold of dimension k = 1. Under the task distribution D, let $x_1 \sim N(0, 1)$ while $x_2 = 0$. Further, consider the learned model $f(x) = x_1 - \beta x_1 x_2^2$, where one may assume $\beta \gg 0$. As an important note, on the task distribution D, f is functionally equivalent to a fairly simple function, i.e., $f(x) \equiv x_1$.

Minimizing NF: To learn $g^{NF}_x$, we may simply sample many $x' \sim N_x$ and find a linear $g^{NF}_x(\cdot)$ that fits these points well. Now, we can expect this process to generalize in a way that $\mathbb{E}_{x'\sim N_x}[(f(x') - g_x(x'))^2]$ is minimized. In fact, one could consider the ideal scenario where we sample infinitely many unlabeled examples, and thus find the best possible linear approximation given this neighborhood distribution. However, observe that minimizing the above quantity provides absolutely no guarantee whatsoever regarding the error committed on D, i.e., $\mathbb{E}_{x'\sim D}[(f(x') - g_x(x'))^2]$. This is because D has zero measure when considering $\mathcal{X} = \mathbb{R}^2$. This means that if f is arbitrarily volatile along the irrelevant $x_2$ direction, $g_x$ can also be arbitrarily incorrect on D. Indeed, this is the case in the setting above. Letting $g_x(x') = w_1 x'_1 + w_2 x'_2$ and $N_x = N(0, I)$, it can be shown that NF(f, g, x) is minimized by $w_1 = 1 - \beta$. Since $\beta$ can be arbitrarily large, this explanation can be unboundedly poor at recovering a function equivalent to $f(x) \equiv x_1$ on D. Indeed, it can even get the sign of the coefficient for the active feature $x_1$ wrong when $\beta > 1$.

Minimizing MNF: Note that none of the above is a problem when we learn $g^{MNF}$, because we fit each $g^{MNF}_{x'}$ only on target points x that are from the real data manifold.³ This will ensure that $g^{MNF}$ is in line with a potentially important desideratum for local explanations, i.e., that they can faithfully capture a function that is accurate along the task-relevant data directions (of course, only up to a linear approximation). The short sketch following this paragraph checks both of these claims numerically.
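As a quick numerical check of this toy example (a sketch under the stated setup; beta = 3 and the plain least-squares fits are illustrative simplifications, since any positive weighting of the on-manifold points gives the same MNF fit here):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 3.0
f = lambda X: X[:, 0] - beta * X[:, 0] * X[:, 1] ** 2

def fit_linear(X, y):
    """Least squares fit of y ~ w1*x1 + w2*x2 (no intercept, as in the example)."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# g^NF: fit on target points x' ~ N_x = N(0, I), i.e., including off-manifold points.
X_nf = rng.standard_normal((100000, 2))
w_nf = fit_linear(X_nf, f(X_nf))     # w1 -> 1 - beta = -2: wrong sign for beta > 1

# g^MNF: fit only on on-manifold target points, where x2 = 0.
X_mnf = np.column_stack([rng.standard_normal(100000), np.zeros(100000)])
w_mnf = fit_linear(X_mnf, f(X_mnf))  # w1 -> 1 (w2 is unidentified on the manifold)

print(w_nf, w_mnf)
```

Running this prints approximately [-2, 0] and [1, 0], matching the analysis.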
To illustrate more completely, recall that gMNF is learned as follows: assuming access to S = {x1, . . . , xm} ∼ Dm, we have\ngMNFx′ = argmin gx′∈Glocal\n1\nm m∑ i=1 (f(xi)− gx′(xi))2pNmirxi (x ′)\nNow since S lies on the manifold of D, we have that x2 = 0 on all those points. Therefore, for each x, we find the solution which minimizes\ngMNFx′ = argmin w1∈R\n1\nm m∑ i=1 (xi,1 − w1xi,1)2pNmirxi (x ′)\nIt is easy to see that with just two distinct datapoints from S, we would recover w1 = 1, which leads to perfect predictions for how the function behaves on D.\nAs a remark, an even more natural version of the above setting would be one where f is non-linear even on the data manifold. But even here we can still argue that gMNF would be close to the best possible linear function within the manifold up to a 1/ √ m error (e.g., a generlization bound like our Theorem 2 guarantees this on average over x). On the other hand, regardless of how many unlabeled datapoints we fit gNF with, we would learn gNF that can behave arbitrarily poorly on the manifold.\nA.3.2 LIMITATIONS OF MNF\nBelow, we discuss some limitations of MNF as well as potential future directions for possibly addressing them. At a high-level, we believe while each represents a legitimate concern, they may arguably be (depending on context) “reasonable prices to pay” for the advantages of MNF compared to NF described previously.\nMNF explanations may lose local meaning: Using MNF to evaluate/generate explanations at lowprobability source points x′ may have little to do with how f actually behaves around x′. Because the target point distribution is x|x′ ∝ pD(x)pNx(x′), very little probability mass might be placed in the vicinity around x′ when pD(x) is small. This would be the case when x′ is off-manifold or in low-density regions on the support of the real data distribution. The former might be dismissable if one cares about i.i.d. settings (which is not always the case) but the latter could be very important in applications where rare cases correspond to high-stakes decisions (e.g. disease diagnostics). In these scenarios, the explanation might still be too biased towards how the model is behaving at higher density regions. However, some potential future directions to remedy this are:\n• It might help to allow Nmirx to have smaller width around lower probability points from D (allowing one to concentrate Nmirx around x despite the form of D). It remains a challenge to see how one would actually set these widths but it could be of help if a limit can be assumed on how quickly the value pD(x) are allowed to change around a given x.\n• There also could be some utility in considering a more general definition of MNF that lets one choose an arbitrary outer distribution x ∼ Q other than simply the task distribution D. That is, if one really cares about mutually consistent explanations in some arbitrary region (which could be on or off-manifold), then this would potentially allow one to able to customize a metric for that purpose.\n3Recall notation-wise that NF and MNF swap the usage of x and x′ for target points. 
Thus, when comparing how individual explanations are generated and evaluated, we refer to them as gMNFx′ and g NF x respectively.\nLess intuitive target point neighborhoods: Very closely related to the previous limitation, in interpreting MNF-based explanations, an end-user would have to understand that gx′ are not exactly approximations for the locality around x′ but rather for the true target distribution that captures in some sense “on-manifold points near x′ (modulated by the concentration of Nmirx ).” This makes it harder for a user to know the exact window in which their explanation is directly valid for (compared to a user-specified target neighborhood for NF). In practice, this shortcoming could be at least partially mitigated as long is it is carefully communicated to users that this limitation exists, i.e. they should focus on using MNF explanations only at and for predicting what happens at realistic points.\nUnnaturalness of source points: While MNF does emphasize realistic target points, it also focuses on explanations generated at potentially off-manifold source points. Further, one could argue that the advantages of MNF are partly because Nx is chosen naively for NF. For instance if one defined Nx = N † x in the definition for MNF, then gNF and gMNF would produce the same explanations because the relevant (inner) expectations would be the same (comparing NF and the reversed expectation form of MNF from Appendix A.1). However, in this proposal, now the average metric for NF seems more natural in an additional sense since it also only reflects caring about ensuring realistic source points (as the outer expectation is over x ∼ D).\nNF = Ex∼DEx′∼N†x [ [(gx(x ′)− f(x′)]2 ]\nMNF = Ex′∼D†Ex∼N† x′\n[ [(gx′(x)− f(x)]2 ] Given these two points, might MNF be less interesting on its own? For the first point, we argue that using standard “naive” settings of Nx with NF can also be considered “unnatural” in that it takes into account how explanations at on-manifold source points perform at off-manifold target points. Second, though the specification NF proposed above may be more ideal as a metric, it also becomes less clear how to evaluate it computationally, as the inner distribution cannot be sampled from easily. On the other hand, we can use the original form of writing out MNF (without the expecations flipped) to directly approximate MNF with relevant samples from D.\nInability to reflect what the model “causally” depends on: In the second toy-example, it was shown that if f(x) = x1 − βx1x22 but the data manifold is (x1, x2) = (x1, 0), one could get arbitrarily poor fidelity and feature relevancy (for x1) on this manifold using standard neighborhoods. But MNF runs into a new problem when the feature set actually includes a highly correlated third feature: for example, consider (x1, x2, x3) where the manifold is defined by points (x1, x2, x3) = (x1, 0, x1). Thus according to MNF, g(x) = x1, g(x) = x3, and indeed g(x) = −x1 + 2x3 are all equally good explanations (because MNF only cares about whether g(x) = f(x) on manifold). However, the output of f clearly only “depends” on the input value of x1 for its decisions (in a causal sense). On the other hand, because NF samples target points both on and off manifold, it would correctly see that x3 has no effect. More broadly, the argument here is that in any conversation involving manifolds, one inherently is speaking about some sort of feature dependencies. 
Thus, one may similarly suffer from the issues of explanations not being causal w.r.t. the output of f and also not being fully identifiable. On the other hand, we note that in the new toy example, NF is not an ideal fix either, because the cost is potentially an arbitrary coefficient for $x_1$ and extremely poor fidelity on D. Overall, the intended usage of an explanation plays a large role in whether MNF or NF is indeed appropriate. An argument could be made that finding “what the model uses for its decision” (while perhaps an important goal) is simply not what MNF explanations are trying to capture. What one could describe MNF as actually looking at is: “can I build a simpler local model relevant to the actual task at hand?”" }, { "heading": "B MORE ON THE DISJOINTEDNESS FACTOR", "text": "" }, { "heading": "B.1 BOUNDS", "text": "Recall that the disjointedness factor is defined as $\rho_S := \int_{x'\in\mathcal{X}} \sqrt{\frac{1}{m}\sum_{i=1}^m \big(p_{N^{mir}_{x_i}}(x')\big)^2}\, dx'$. Here, we show that it is bounded between 1 and $\sqrt{m}$.

Fact B.1. The disjointedness factor $\rho_S$ satisfies $1 \le \rho_S \le \sqrt{m}$.
}, { "heading": "C PIECE-WISE GLOBAL APPROXIMATION", "text": "" }, { "heading": "C.0.1 GENERALIZATION BOUND ASSUMING PIECEWISENESS", "text": "We now discuss the Rademacher complexity of a simpler class of local-approximation functions: a class of piecewise-simple functions g ∈ G with K pieces. In particular, one can show that the complexity of these functions grows with K as √ K.\nTo see why, first let us call the K disjoint regions that g is defined over as R1, . . . , RK . Correspondingly, the original training set S = {xi}mi can be divided into the subsets S1 = {x1,i} m1 i=1, . . . , Sk = {xK,i}mKi=1 that happen to fall into the respective regions and the pieces of g are g1, . . . , gK ∈ Glocal are simple functions. Then, one can split the Rademacher complexity over the whole dataset in terms of these subsets, to get:\nR̂S(G) = Eσ [ sup g∈G 1 m m∑ i=1 σig(xi) ]\n= Eσ [ sup g∈G K∑ k=1 mk m m∑ i=1 1 mk σigj(xi)I{xi ∈ Sj} ]\n= Eσ [ sup g∈G K∑ k=1 mk m mk∑ i=1 1 mk σk,igj(xk,i) ]\n≤ K∑ k=1 mk m Eσ [ sup gj∈G̃ 1 mk σk,igj(xk) ]\n≤ K∑ k=1 mk m R̂Sk(Glocal)\nNow, assuming each R̂Sk(Glocal) is O ( 1√ mk ) , in the worst-case, each subset has the same number\nof points mk = m/K, the sum in the last expression can be bounded as O (√ K m ) ." }, { "heading": "D PROOFS", "text": "Below, we present the full statement and proof of Lemma 4.1 which bounds the Rademacher complexity of G. The main difference between this statement and the version in the main paper is that we replace the Rademacher complexity of Glocal with a slightly more carefully defined version of it defined below:\nR̂∗S(Glocal) := max i≤m max T⊆S,|T |=i\nR̂T (Glocal) √ i\nm (1)\nThis quantity is essentially a bound on the empirical Rademacher complexity of Glocal on all possible subsets of S, with an appropriate scaling factor.\nWe note that although this quantity is technically larger than the original quantity namely R̂S(Glocal), for all practical purposes, it is reasonable to think of R̂∗S(Glocal) as being identical to R̂S(Glocal) modulo some constant factor. For example, if we have that for all h ∈ Glocal, h(x) = w · x where\n‖w‖2 ≤ α, then one would typically bound R̂S(Glocal) by O ( α √∑m i=1 ‖xi‖22/m√ m ) . The bound on\nR̂∗S(Glocal) however would resolve to O ( α √ maxi≤m ‖xi‖22√ m ) . Now, as long as we assume that ‖xi‖\nare all bounded by some constant, both these bounds are asymptotically the same, and have the same 1/ √ m dependence on m. Additionally, we also remark that that it is possible to write our results in terms of tighter definitions of R̂∗S(Glocal), however our statements read much cleaner with the above definition. Lemma D.1. (full, precise statement of Lemma 4.1) Let L(·, y′) be a c-Lipschitz function w.r.t. y′ in that for all y1, y2 ∈ [−B,B], |L(y1, y′) − L(y2, y′)| ≤ c|y1 − y2|. Let S = {(x1, y1), . . . , (xm, ym)} ∈ (X × Y)m. Then, the empirical Rademacher complexity of G under the loss function L is defined and bounded as:\nR̂S(L ◦ G) := E~σ [ sup g∈G 1 m m∑ i σiEx′∼Nmirxi [L(gx′(xi), yi)] ] ≤ cρS(lnm+ 1) · R̂∗S(Glocal).\nwhere recall that ρS := ∫ x′∈X\n√∑m j=1(pNmirxi (x′))2\nm dx ′ is the disjointedness factor.\nOur high level proof idea is to first construct a distribution D̃ over X such that each of the inner expectations over Nmirxi (for each i) can be rewritten as an expectation over x\n′ ∼ D̃. This removes the dependence on i from this expectation, which then allows us to pull this expectation all the way out. 
This further allows us to take each x′ and compute a Rademacher complexity corresponding to the loss of a single local explanation function gx′ , and then finally average that complexity over x′ ∼ D̃.\nProof. We begin by noting that the inner expectations in the Rademacher complexity are over m unique distributions Nmirxi . our first step is to rewrite these expectations in a way that they all apply on the same distribution. Let us call this distribution D̃ and define what it is later. As long as D̃ has a support that contains the supports of the above m distributions, we can write:\nR̂S(L ◦ G) = E~σ [ sup g∈G 1 m m∑ i σiEx′∼D̃ [ L(gx′(xi), yi) pNmirxi (x′) pD̃(x ′) ]]\nthis allows us to pull the inner expectation out in front of the summation and then the supremum (which now results in an inequality):\n≤ E~σ [ Ex′∼D̃ [ sup g∈G 1 m m∑ i σiL(gx′(xi), yi) pNmirxi (x′) pD̃(x ′) ]]\nwhich further allows us rewrite the supremum to be over Glocal instead of G:\n≤ E~σ [ Ex′∼D̃ [ sup h∈Glocal 1 m m∑ i σiL(h(xi), yi) pNmirxi (x′) pD̃(x ′) ]] and finally, let us simply interchange the two outer expectations to rewrite the expression as:\n≤ Ex′∼D̃ [ E~σ [ sup h∈Glocal 1 m m∑ i σiL(h(xi), yi) pNmirxi (x′) pD̃(x ′) ]] .\nWhat we now have is an inner expectation which boils down to an empirical Rademacher complexity for a fixed x′, and an outer expectation that averages this over x′ ∼ D̃. For the rest of the discussion, we will fix x′ and focus on bounding the inner term. For convenience, let us define wi := p Nmirxi (x′)\npD̃(x ′) .\nWithout loss of generality, assume that w1 ≤ w2 ≤ . . . ≤ wm. Also define w0 := 0. We then begin by expanding wi into a telescopic summation:\nE~σ [ sup h∈Glocal 1 m m∑ i=1 σiL(h(xi), yi)wi ] = E~σ sup h∈Glocal 1 m m∑ i=1 σiL(h(xi), yi) i∑ j=1 (wj − wj−1) then, we interchange the two summations while adjusting their limits appropriately:\n= E~σ sup h∈Glocal 1 m m∑ j=1 m∑ i=j σiL(h(xi), yi)(wj − wj−1) and we pull the outer summation out in front of the supremum and expectation, making it an upper bound:\n≤ m∑ j=1 E~σ sup h∈Glocal 1 m m∑ i=j σiL(h(xi), yi)(wj − wj−1) . Intuitively, the above steps have executed the following idea. The Rademacher complexity on the LHS can be thought of as involving a dataset with weights w1, w2, . . . , wm given to the losses on each of the m datapoints. We then imagine decomposing this “weighted” dataset into multiple weighted datasets while ensuring that the weights summed across these datasets equal w1, w2, . . . , wm on the respective datapoints. Then, we could compute the Rademacher complexity for each of these datasets, and then sum them up to get an upper bound on the complexity corresponding to the original dataset.\nThe way we decomposed the datasets is as follows: first we extract a w1 weight out of all the m data points (which is possible since it’s the smallest weight), giving rise to a dataset of m points all with equal weights w1. What remains is a dataset with weights 0, w2−w1, w3−w1, . . . , wm−w1. From this, we’ll extract a w2 − w1 weight out of all but the first data point to create a dataset of m−1 datapoints all equally weighted as w2−w1. By proceeding similarly, we can generatem such datasets of cardinality m, m − 1, . . ., 1 respectively, such that all datasets have equally weighted points, and weights that follow the sequence w1 − w0, w2 − w1, . . . and so on. 
As stated before, we will eventually sum up Rademacher complexity terms computed with respect to each of these datasets.\nNow, we continue simplifying the above term by pulling out (wj−wj−1) since it is only a constant:\nE~σ [ sup h∈Glocal 1 m m∑ i=1 σiL(h(xi), yi)wi ] ≤ m∑ j=1 (wj − wj−1)E~σ sup h∈Glocal 1 m m∑ i=j σiL(h(xi), yi) \nnext, we apply the standard contraction lemma (Lemma D.2) to make use of the fact h(xi) is composed with a c-Lipschitz function to get:\n≤ c m∑ j=1 (wj − wj−1)E~σ sup h∈Glocal 1 m m∑ i=j σih(xi) using Sj:m to denote the datapoints indexed from j to m, we can rewrite this in short as:\n≤ c m∑ j=1 (wj − wj−1) m+ 1− j m R̂Sj:m(Glocal)\nand finally, we make use of the definition ofR∗S(Glocal) in Equation 1 to get:\n≤ c m∑ j=1 (wj − wj−1) √ m+ 1− j√ m R̂∗S(Glocal).\nWhat remains now is to simplify the summation over w’s. To do this, we rearrange the telescopic summation as follows:\nm∑ j=1 (wj − wj−1) √ m+ 1− j = m∑ j=1 wj( √ m+ 1− j − √ m− j)\n= m∑ j=1 wj · 1√ m+ 1− j + √ m− j\n≤ m∑ j=1 wj 1√ m+ 1− j\n≤ √√√√ m∑ j=1 w2j · √√√√ m∑ j=1 1 j\n≤ √√√√ m∑ j=1 w2j · (lnm+ 1)\nNote that in the penultimate step we’ve used the Cauchy-Schwartz inequality and in the last step, we have made use of the standard logarithmic upper bound on the m-th harmonic number. Plugging this back on the Rademacher complexity bound, we get:\nR̂S(L ◦ G) ≤ Ex′∼D̃\nc √√√√ m∑\nj=1\nw2j · (lnm+ 1) · R̂∗S(Glocal)√\nm plugging in the values of wj , we get:\n≤ Ex′∼D̃\nc √∑m j=1(pNmirxi (x′))2\n(pD̃(x ′))2\n· (lnm+ 1) · R̂ ∗ S(Glocal)√ m . ≤ cEx′∼D̃ √√√√∑mj=1 (pNmirxi (x′))2m\n(pD̃(x ′))2\n (lnm+ 1) · R̂∗S(Glocal).\nNow we finally set D̃ such that pD̃(x ′) =\n√∑m j=1 (p Nmirxi (x′))2 m\nρS where ρS is a normalization constant\nsuch that ρS = ∫ x′∈X √∑m j=1 (p Nmirxi (x′))2 m dx ′. Then, the above term would simplify as:\nR̂S(L ◦ G) ≤ cEx′∼D̃ [ρS ] (lnm+ 1) · R̂ ∗ S(Glocal)\n≤ cρS(lnm+ 1) · R̂∗S(Glocal).\nNext, we state and prove the full version of Theorem 1 which provided a generalization guarantee for the test error of f in terms of its local interpretability. This result follows by applying the previous lemma for the squared error loss. For this we need to show that the squared error loss is Lipschitz, which follows from our assumption that the range of the functions in F and Glocal and also the labels y are bounded in [−B,B]. Theorem 3. (full, precise version of Theorem 1) With probability over 1 − δ over the draws of S = {(x1, y1), . . . , (xm, ym)} ∼ Dm, for all f ∈ F and for all g ∈ G, we have (ignoring ln 1/δ factors):\nE(x,y)∼D[(f(x)− y)2] ≤ 4\nm m∑ i=1 (f(xi)− yi)2 + 2Ex∼D[Ex′∼Nmirx [ (f(x)− gx′(x))2 ] ]︸ ︷︷ ︸\nMNF(f,g)\n+ 4\nm m∑ i=1 Ex′∼Nmirx [ (f(xi)− gx′(xi))2 ]︸ ︷︷ ︸ MNF(f,g,xi) +16BρSR̂∗S(Glocal)(lnm+ 1)\n+ 2\n√ ln 1/δ\nm ,\nwhere ρS denotes the disjointedness factor defined as ρS := ∫ x′∈X √ 1 m ∑m i=1(pNmirxi (x′))2dx′ and" }, { "heading": "R̂∗S(Glocal) is defined in Equation 1.", "text": "Proof. First, we split the test error into two terms by introducing the g function as follows:\nE(x,y)∼D[(f(x)− y)2] = E(x,y)∼D[Ex′∼Nmirx [(f(x)− y) 2]] ≤ 2 ( Ex∼D[Ex′∼Nmirx [(f(x)− gx′(x)) 2]] + Ex∼D[Ex′∼Nmirx [(gx′(x)− y) 2]] )\n(2)\nIn the first step, we have introduced a dummy expectation over x′, and in the next step, we have used the following inequality: for any a, b, c ∈ R, (a− b)2 ≤ (|a− c|+ |c− b|)2 ≤ 2(|a− c|2 + |c− b|2) (the first inequality in this line is the triangle inequality and the second inequality is the root mean square inequality).\nThe first term on the RHS above is MNF(f, g). 
To simplify the second term, we first apply a generalization bound based on Rademacher complexity. Specifically, we have that w.h.p 1− δ over the draws of S, for all g ∈ G,\nEx∼D[Ex′∼Nmirx [(gx′(x)− y) 2]] ≤ 1\nm m∑ i=1 Ex′∼Nmirxi [(gx′(xi)− yi) 2] + 2R̂S(G) +\n√ ln 1/δ\nm (3)\nNow, R̂S(G) can be bounded using Lemma 4.1 under Lipschitzness of the squared error loss. Specifically, we have that for h, h′ ∈ Glocal, and for all y ∈ [−B,B], |(h(x) − y)2 − (h′(x) − y)2| ≤\n4B|h(x) − h′(x)|, since all of h(x), h′(x) and y lie in [−B,B]. Therefore, from Lemma 4.1 we have that:\nR̂S(G) ≤ 4B(lnm+ 1)ρSR̂∗S(Glocal). (4)\nThe only term that remains to be bounded is the first term on the RHS. This can bounded again using the inequality that for any a, b, c ∈ R, (a− b)2 ≤ (|a− c|+ |c− b|)2 ≤ 2(|a− c|2 + |c− b|2):\n1\nm m∑ i=1 Ex′∼Nmirxi [(gx′(xi)− yi) 2)] ≤ 2 m m∑ i=1 Ex′∼Nmirxi [(gx′(xi)− f(xi)) 2] + 2 m m∑ i=1 (f(xi)− yi)2\n(5)\nBy combining the above three chains of inequalities, we get the final bound.\nBelow, we present an alternative version of Theorem 1 where the generalization bound does not involve the test MNF and hence does not require any unlabeled data from D; however the bound is not on the test error of f but the test error of g. Theorem 4. (an alternative version of Theorem 1) With probability over 1 − δ over the draws of S = {(x1, y1), . . . , (xm, ym)} ∼ Dm, for all f ∈ F and for all g ∈ G, we have:\nE(x,y)∼D[Ex′∼Nmirx [(gx′(x)− y) 2]] ≤ 2\nm m∑ i=1 (f(xi)− yi)2 + 2 m m∑ i=1 Ex′∼Nmirx [ (f(xi)− gx′(xi))2 ]︸ ︷︷ ︸ MNF(f,g,xi)\n+ 8BρSR̂S(Glocal)(lnm+ 1) + √ ln 1/δ\nm .\nProof. The proof follows directly from the proof of Theorem 3 starting from Equation 3.\nWe now state and prove the full version of Theorem 2 which provided a generalization guarantee for the quality of explanations. Theorem 5. (full, precise statement of Theorem 2) For a fixed function f , with high probability 1− δ over the draws of S ∼ Dm, for all g ∈ G, we have:\nEx∼D [ Ex′∼Nmirx [ (f(x)− gx′(x))2 ]] ︸ ︷︷ ︸\nTest MNF i.e., MNF(f, g)\n≤ 1 m m∑ i=1 Ex′∼Nmirx [ (f(xi)− gx′(xi))2 ] ︸ ︷︷ ︸\nTrain MNF + 8BρSR̂S(Glocal) lnm+ √ ln 1/δ\nm .\nwhere R̂∗S(Glocal) is defined in Equation 1.\nProof. For this result, we need to think of f as a fixed labeling function since it is independent of the dataset S that is used to train g. Then, one can apply a standard Rademacher complexity bound and invoke Lemma 4.1 to get the final result (as invoked in Equation 4).\nBelow, we state the standard contraction lemma for Rademacher complexity (e.g., Shalev-Shwartz & Ben-David (2014) Lemma 26.9). The lemma states that composing a function class with a cLipschitz function can scale up its Rademacher complexity by a multiplicative factor of at most c. Lemma D.2. (Contraction lemma) For each i = 1, 2, . . . ,m, let φi : R → R be a c-Lipschitz function in that for all t, t′ ∈ B ⊆ R, |φi(t)− φi(t′)| ≤ |t− t′|. Then, for any class H of functions h : R→ B, we have:\nE~σ [ m∑ i=1 σiφi(h(xi)) ] ≤ cE~σ [ m∑ i=1 σi(h(xi) ] ." }, { "heading": "E EXPERIMENT DETAILS", "text": "E.1 PROCEDURE FOR CALCULATING ρS\nAs a reminder, we define ρS to be an integral over X , which is not trivial to evaluate in practice, especially when dealing with higher dimensional variables.\nρS = ∫ x′∈X √√√√ 1 m m∑ i=1 (pNmirxi (x′))2dx′\nCommon numerical integration techniques usually incur significant computational costs due to the dimension of x. 
Though a variety of methods exist, one can intuit the inherent difficulty that causes this blow-up by considering the naive approach of simply constructing a Riemann sum across a rectangular meshgrid of points for a d-dimensional X . If one wants to create a grid of c points per dimension, then cd points (and thus evaluations of the integrand) must be processed.\nInstead, we can apply Monte-Carlo Integration to evaluate ρS . As we will see, a key feature of this approach is that error will not scale with data dimension and can be bounded probabilistically via a basic Hoeffding bound. Currently, the integral does not take the form of an expectation so we must introduce a dummy distribution q(x′) as follows\nρS = ∫ x′∈X\n√ 1 m ∑m i=1(pNmirxi (x′))2\nq(x′) q(x′)dx′ = Ex′∼q\n √ 1 m ∑m i=1(pNmirxi (x′))2\nq(x′) Now, we can estimate ρS with n independent samples from q.\nρ̂S,n = 1\nn n∑ j=1\n√ 1 m ∑m i=1(pNmirxi (x′j)) 2\nq(x′j)\nThis is obviously an unbiased estimate of ρS , but that in itself is not sufficient. It is only a feasible approach if we can choose q such that (1) we can actually sample from it, (2) we can calculate q(x′)\nfor arbitrary x′ and (3) we can control the variance of\n√ 1 m ∑m i=1(pNmirxi (x′))2\nq(x′) .\nIt can be shown by choosing q to be a uniform mixture of the m training set neighborhoods (one for each xi ∈ S), we can satisfy all 3 properties. (1) and (2) are trivial if those same properties being satisfied by Nmirx (which is the case for the Gaussian neighborhoods we consider). If N mir x can be sampled from, the mixture over m such distributions can obviously be sampled from. The same goes for calculating the density, which in this case is:\nq(x′) = m∑ i=1 1 m · pNmirxi (x ′) = 1 m m∑ i=1 pNmirxi (x′)\nWe observe that (3) can also be shown because we can upper and lower bound the quantity in question. To show this, we first re-write it as\n√ 1 m ∑m i=1(pNmirxi (x′))2\nq(x′) =\n√ 1 m ∑m i=1(pNmirxi (x′))2\n1 m ∑m i=1 ·pNmirxi (x ′)\n= √ m\n√∑m i=1(pNmirxi\n(x′))2∑m i=1 ·pNmirxi (x ′)\n= √ m ||pS(x′)||2 ||pS(x′)||1\nwhere pS(x′) is a m-dimensional vector of densities each evaluated at x′ (i.e. one for each of the m training points). Since for any vector v, ||v||2 ≤ ||v||1 ≤ √ m||v||2, the upper and lower bounds for\nthis quantity in question are √ m and 1 respectively. Thus we can bound the variance of this quantity by 14 ( √ m−1)2 ≤ m4 and Var(ρ̂S,n) ≤ m 4n . This does not scale with dimension but only the number of training points!\nTo be even more concrete, for a given m and n, we can now apply a Hoeffding bound to control the error.\nP(|ρ̂S,n − ρS | > t) ≤ 2e −2nt2 m\nIn our experiments we choose n to be 10m, meaning that the probability that ρS is off by more than 0.5 is capped at about 1% (recall that ρS scales from [1, √ m], which means this is a fairly reasonable degree of accuracy)." }, { "heading": "E.2 FULL SET OF RESULTS", "text": "Note that for the Day and Music datasets, the test MNF curves behave differently than for the other three datasets. Here, test MNF rises initially for small neighborhood widths but then drops and plateaus as neighborhoods get larger. As noted by Plumb et al. (2020), Day exhibits (globally) a fairly linear relationship between inputs and outputs. Thus, we hypothesize our results here can be explained on the basis that a global linear model (which is what the explanations saturate towards as σ increases) actually can do quite well at approximating f . 
That is, in the terms of the bound, the train MNF does not rise too sharply when saturation occurs but the complexity terms become smaller as neighborhoods get wider. We hypothesize a similar effect is at play for Music." } ]
2021
A LEARNING THEORETIC PERSPECTIVE ON LOCAL EXPLAINABILITY
SP:f04f2cdfcdb5478771da0dc4f28df9f694739d3d
[ "The paper shows that a two-layer neural network (although an extension to deeper models seem unproblematic) may outperform a class of linear functions in terms of the excess risk learning rate, and in a minimax optimality analysis, and when approximating a target function from the neural network class. The paper essentially shows that linear functions have a problem with the non-convexity of the neural network class, and approximate the slow rate of 1/(n)^(1/2) for increasing dimension. A neural network trained with noisy stochastic gradient descent on the other hand has a faster rate, depending on several parameters." ]
Establishing a theoretical analysis that explains why deep learning can outperform shallow learning such as kernel methods is one of the biggest issues in the deep learning literature. Towards answering this question, we evaluate the excess risk of a deep learning estimator trained by a noisy gradient descent with ridge regularization on a mildly overparameterized neural network, and discuss its superiority to a class of linear estimators that includes the neural tangent kernel approach, the random feature model, other kernel methods, the k-NN estimator, and so on. We consider a teacher-student regression model, and eventually show that any linear estimator can be outperformed by deep learning in the sense of the minimax optimal rate, especially in high-dimensional settings. The obtained excess risk bounds are so-called fast learning rates, which are faster than the O(1/√n) rate obtained by the usual Rademacher complexity analysis. This discrepancy is induced by the non-convex geometry of the model, and the noisy gradient descent used for neural network training provably reaches a near-global optimal solution even though the loss landscape is highly non-convex. Although the noisy gradient descent does not employ any explicit or implicit sparsity-inducing regularization, it shows a preferable generalization performance that dominates linear estimators.
[ { "affiliations": [], "name": "Taiji Suzuki" } ]
[ { "authors": [ "Z. Allen-Zhu", "Y. Li" ], "title": "What can ResNet learn efficiently, going beyond kernels", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Z. Allen-Zhu", "Y. Li" ], "title": "Backward feature correction: How deep learning performs deep learning", "venue": "arXiv preprint arXiv:2001.04413,", "year": 2020 }, { "authors": [ "Z. Allen-Zhu", "Y. Li", "Z. Song" ], "title": "A convergence theory for deep learning via over-parameterization", "venue": "In Proceedings of International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "A. Andersson", "R. Kruse", "S. Larsson" ], "title": "Duality in refined Sobolev–Malliavin spaces and weak approximation of SPDE", "venue": "Stochastics and Partial Differential Equations Analysis and Computations,", "year": 2016 }, { "authors": [ "S. Arora", "S.S. Du", "W. Hu", "Z. Li", "R. Wang" ], "title": "Fine-grained analysis of optimization and generalization for overparameterized two-layer neural networks", "venue": null, "year": 1901 }, { "authors": [ "F. Bach" ], "title": "Breaking the curse of dimensionality with convex neural networks", "venue": "Journal of Machine Learning Research,", "year": 2017 }, { "authors": [ "Y. Bai", "J.D. Lee" ], "title": "Beyond linearization: On quadratic and higher-order approximation of wide neural networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "A.R. Barron" ], "title": "Universal approximation bounds for superpositions of a sigmoidal function", "venue": "IEEE Transactions on Information theory,", "year": 1993 }, { "authors": [ "P. Bartlett", "O. Bousquet", "S. Mendelson" ], "title": "Local Rademacher complexities", "venue": "The Annals of Statistics,", "year": 2005 }, { "authors": [ "C.-E. Bréhier", "M. Kopec" ], "title": "Approximation of the invariant law of SPDEs: error analysis using a Poisson equation for a full-discretization scheme", "venue": "IMA Journal of Numerical Analysis,", "year": 2016 }, { "authors": [ "Y. Cao", "Q. Gu" ], "title": "A generalization theory of gradient descent for learning over-parameterized deep ReLU networks", "venue": "arXiv preprint arXiv:1902.01384,", "year": 2019 }, { "authors": [ "A. Caponnetto", "E. de Vito" ], "title": "Optimal rates for regularized least-squares algorithm", "venue": "Foundations of Computational Mathematics,", "year": 2007 }, { "authors": [ "M. Chen", "Y. Bai", "J.D. Lee", "T. Zhao", "H. Wang", "C. Xiong", "R. Socher" ], "title": "Towards understanding hierarchical learning: Benefits of neural representations", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "L. Chizat" ], "title": "Sparse optimization on measures with over-parameterized gradient descent", "venue": "arXiv preprint arXiv:1907.10300,", "year": 2019 }, { "authors": [ "L. Chizat", "F. Bach" ], "title": "A note on lazy training in supervised differentiable programming", "venue": "arXiv preprint arXiv:1812.07956,", "year": 2018 }, { "authors": [ "L. Chizat", "F. Bach" ], "title": "Implicit bias of gradient descent for wide two-layer neural networks trained with the logistic loss", "venue": "arXiv preprint arXiv:2002.04486,", "year": 2020 }, { "authors": [ "G. Da Prato", "J. Zabczyk" ], "title": "Non-explosion, boundedness and ergodicity for stochastic semilinear equations", "venue": "Journal of Differential Equations,", "year": 1992 }, { "authors": [ "G. Da Prato", "J. 
Zabczyk" ], "title": "Ergodicity for Infinite Dimensional Systems", "venue": null, "year": 1996 }, { "authors": [ "D.L. Donoho", "R.C. Liu", "B. MacGibbon" ], "title": "Minimax risk over hyperrectangles, and implications", "venue": "The Annal of Statistics, 18(3):1416–1437,", "year": 1990 }, { "authors": [ "D.L. Donoho", "I.M. Johnstone", "G. Kerkyacharian", "D. Picard" ], "title": "Density estimation by wavelet thresholding", "venue": "The Annals of Statistics,", "year": 1996 }, { "authors": [ "S. Du", "J. Lee", "H. Li", "L. Wang", "X. Zhai" ], "title": "Gradient descent finds global minima of deep neural networks", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "S.S. Du", "X. Zhai", "B. Poczos", "A. Singh" ], "title": "Gradient descent provably optimizes over-parameterized neural networks", "venue": "International Conference on Learning Representations", "year": 2019 }, { "authors": [ "W. E", "C. Ma", "L. Wu" ], "title": "A priori estimates of the population risk for two-layer neural networks", "venue": "Communications in Mathematical Sciences,", "year": 2019 }, { "authors": [ "W. E", "C. Ma", "L. Wu" ], "title": "A comparative analysis of optimization and generalization properties of two-layer neural network and random feature models under gradient descent dynamics", "venue": "Science China Mathematics,", "year": 2019 }, { "authors": [ "M.A. Erdogdu", "L. Mackey", "O. Shamir" ], "title": "Global non-convex optimization with discretized diffusions", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "B. Ghorbani", "S. Mei", "T. Misiakiewicz", "A. Montanari" ], "title": "Linearized two-layers neural networks in high dimension", "venue": "arXiv preprint arXiv:1904.12191,", "year": 2019 }, { "authors": [ "B. Ghorbani", "S. Mei", "T. Misiakiewicz", "A. Montanari" ], "title": "When do neural networks outperform kernel methods", "venue": "arXiv preprint arXiv:2006.13409,", "year": 2020 }, { "authors": [ "E. Giné", "V. Koltchinskii" ], "title": "Concentration inequalities and asymptotic results for ratio type empirical processes", "venue": "The Annals of Probability,", "year": 2006 }, { "authors": [ "S. Gunasekar", "J.D. Lee", "D. Soudry", "N. Srebro" ], "title": "Implicit bias of gradient descent on linear convolutional networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "M. Hairer" ], "title": "Exponential mixing properties of stochastic PDEs through asymptotic coupling", "venue": "Probab. Theory Related Fields,", "year": 2002 }, { "authors": [ "S. Hayakawa", "T. Suzuki" ], "title": "On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces", "venue": "Neural Networks,", "year": 2020 }, { "authors": [ "K. Hornik", "M. Stinchcombe", "H. White" ], "title": "Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks", "venue": "Neural Networks,", "year": 1990 }, { "authors": [ "M. Imaizumi", "K. Fukumizu" ], "title": "Deep neural networks learn non-smooth functions effectively", "venue": "Proceedings of Machine Learning Research,", "year": 2019 }, { "authors": [ "B. Irie", "S. Miyake" ], "title": "Capabilities of three-layered perceptrons", "venue": "In IEEE 1988 International Conference on Neural Networks, pp", "year": 1988 }, { "authors": [ "A. Jacot", "F. Gabriel", "C. 
Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "S. Jacquot", "G. Royer" ], "title": "Ergodicité d’une classe d’équations aux dérivées partielles stochastiques", "venue": "Comptes Rendus de l’Académie des Sciences. Série I. Mathématique,", "year": 1995 }, { "authors": [ "J.M. Klusowski", "A.R. Barron" ], "title": "Risk bounds for high-dimensional ridge function combinations including neural networks", "venue": "arXiv preprint arXiv:1607.01434,", "year": 2016 }, { "authors": [ "V. Koltchinskii" ], "title": "Local Rademacher complexities and oracle inequalities in risk minimization", "venue": "The Annals of Statistics,", "year": 2006 }, { "authors": [ "Y. Li", "T. Ma", "H.R. Zhang" ], "title": "Learning over-parametrized two-layer neural networks beyond ntk", "venue": "Proceedings of Machine Learning Research,", "year": 2020 }, { "authors": [ "B. Maslowski" ], "title": "Strong Feller property for semilinear stochastic evolution equations and applications", "venue": "Notes Control Inf. Sci.,", "year": 1989 }, { "authors": [ "S. Mei", "A. Montanari", "P.-M. Nguyen" ], "title": "A mean field view of the landscape of two-layer neural networks", "venue": "Proceedings of the National Academy of Sciences,", "year": 2018 }, { "authors": [ "S. Mei", "T. Misiakiewicz", "A. Montanari" ], "title": "Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit", "venue": "Proceedings of the Thirty-Second Conference on Learning Theory,", "year": 2019 }, { "authors": [ "S. Mendelson" ], "title": "Improving the sample complexity using global data", "venue": "IEEE Transactions on Information Theory,", "year": 2002 }, { "authors": [ "M. Mohri", "A. Rostamizadeh", "A. Talwalkar" ], "title": "Foundations of Machine Learning", "venue": null, "year": 2012 }, { "authors": [ "B. Muzellec", "K. Sato", "M. Massias", "T. Suzuki" ], "title": "Dimension-free convergence rates for gradient Langevin dynamics in RKHS", "venue": null, "year": 2003 }, { "authors": [ "A. Nitanda", "T. Suzuki" ], "title": "Stochastic particle gradient descent for infinite ensembles", "venue": "arXiv preprint arXiv:1712.05438,", "year": 2017 }, { "authors": [ "M. Raginsky", "A. Rakhlin", "M. Telgarsky" ], "title": "Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis", "venue": "arXiv e-prints, pp", "year": 2017 }, { "authors": [ "G. Rotskoff", "E. Vanden-Eijnden" ], "title": "Parameters as interacting particles: long time convergence and asymptotic error scaling of neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "G.M. Rotskoff", "E. Vanden-Eijnden" ], "title": "Trainability and accuracy of neural networks: An interacting particle system approach", "venue": "arXiv preprint arXiv:1805.00915,", "year": 2019 }, { "authors": [ "W. Rudin" ], "title": "Real and Complex Analysis (third edition)", "venue": "Mathematics series. McGraw-Hill,", "year": 1987 }, { "authors": [ "J. Schmidt-Hieber" ], "title": "Nonparametric regression using deep neural networks with ReLU activation function", "venue": "The Annals of Statistics,", "year": 2020 }, { "authors": [ "T. Shardlow" ], "title": "Geometric ergodicity for stochastic PDEs", "venue": "Stochastic Analysis and Applications,", "year": 1999 }, { "authors": [ "R. 
Sowers" ], "title": "Large deviations for the invariant measure of a reaction-diffusion equation with nonGaussian perturbations", "venue": "Probab. Theory Related Fields,", "year": 1992 }, { "authors": [ "T. Suzuki" ], "title": "Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "T. Suzuki" ], "title": "Generalization bound of globally optimal non-convex neural network training: Transportation map estimation by infinite dimensional langevin dynamics", "venue": "In Advances in Neural Information Processing Systems", "year": 2020 }, { "authors": [ "T. Suzuki", "A. Nitanda" ], "title": "Deep learning is adaptive to intrinsic dimensionality of model smoothness in anisotropic Besov space", "venue": null, "year": 1910 }, { "authors": [ "M. Welling", "Y.-W. Teh" ], "title": "Bayesian learning via stochastic gradient Langevin dynamics", "venue": "In ICML, pp", "year": 2011 }, { "authors": [ "B. Woodworth", "S. Gunasekar", "J.D. Lee", "E. Moroshko", "P. Savarese", "I. Golan", "D. Soudry", "N. Srebro" ], "title": "Kernel and rich regimes in overparametrized models", "venue": "Proceedings of Machine Learning Research,", "year": 2020 }, { "authors": [ "P. Xu", "J. Chen", "D. Zou", "Q. Gu" ], "title": "Global convergence of langevin dynamics based algorithms for nonconvex optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "G. Yehudai", "O. Shamir" ], "title": "On the power and limitations of random features for understanding neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "S. Zhang", "M.-Y. Wong", "Z. Zheng" ], "title": "Wavelet threshold estimation of a regression function with random design", "venue": "Journal of Multivariate Analysis,", "year": 2002 }, { "authors": [ "D. Zou", "Y. Cao", "D. Zhou", "Q. Gu" ], "title": "Gradient descent optimizes over-parameterized deep ReLU networks", "venue": "Machine Learning,", "year": 2020 }, { "authors": [ "Zhang" ], "title": "Let μ be the Lebesgue measure. Suppose that the space Ω has even partition A such that |A| = 2 for an integer K ∈ N, each A has equivalent measure μ(A) = 2−K for all A ∈ A, andA is indeed a partition of Ω", "venue": null, "year": 2002 }, { "authors": [ "Hornik" ], "title": "exp(iw>x)f̂(w)dw = f(x), for every x ∈ Rd1. Then the Irie-Miyake itegral representation (Irie & Miyake (1988); see also the proof of Theorem", "venue": null, "year": 1990 } ]
[ { "heading": null, "text": "√ n) that is obtained by usual Rademacher\ncomplexity analysis. This discrepancy is induced by the non-convex geometry of the model and the noisy gradient descent used for neural network training provably reaches a near global optimal solution even though the loss landscape is highly non-convex. Although the noisy gradient descent does not employ any explicit or implicit sparsity inducing regularization, it shows a preferable generalization performance that dominates linear estimators." }, { "heading": "1 INTRODUCTION", "text": "In the deep learning theory literature, clarifying the mechanism by which deep learning can outperform shallow approaches has been gathering most attention for a long time. In particular, it is quite important to show that a tractable algorithm for deep learning can provably achieve a better generalization performance than shallow methods. Towards that goal, we study the rate of convergence of excess risk of both deep and shallow methods in a setting of a nonparametric regression problem. One of the difficulties to show generalization ability of deep learning with certain optimization methods is that the solution is likely to be stacked in a bad local minimum, which prevents us to show its preferable performances. Recent studies tackled this problem by considering optimization on overparameterized networks as in neural tangent kernel (NTK) (Jacot et al., 2018; Du et al., 2019a) and mean field analysis (Nitanda & Suzuki, 2017; Chizat & Bach, 2018; Rotskoff & Vanden-Eijnden, 2018; 2019; Mei et al., 2018; 2019), or analyzing the noisy gradient descent such as stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011; Raginsky et al., 2017; Erdogdu et al., 2018).\nThe NTK analysis deals with a relatively large scale initialization so that the model is well approximated by the tangent space at the initial solution, and eventually, all analyses can be reduced to those of kernel methods (Jacot et al., 2018; Du et al., 2019b; Allen-Zhu et al., 2019; Du et al., 2019a; Arora et al., 2019; Cao & Gu, 2019; Zou et al., 2020). Although this regime is useful to show its global\nconvergence, the obtained estimator looses large advantage of deep learning approaches because the estimation ability is reduced to the corresponding kernel methods. To overcome this issue, there are several “beyond-kernel” type analyses. For example, Allen-Zhu & Li (2019; 2020) showed benefit of depth by analyzing ResNet type networks. Li et al. (2020) showed global optimality of gradient descent by reducing the optimization problem to a tensor decomposition problem for a specific regression problem, and showed the “ideal” estimator on a linear model has worse dependency on the input dimensionality. Bai & Lee (2020) considered a second order Taylor expansion and showed that the sample complexity of deep approaches has better dependency on the input dimensionality than kernel methods. Chen et al. (2020) also derived a similar conclusion by considering a hierarchical representation. The analyses mentioned above actually show some superiority of deep learning, but all of these bounds are essentially Ω(1/ √ n) where n is the sample size, which is not optimal for regression problems with squared loss (Caponnetto & de Vito, 2007). The reason why only such a sub-optimal rate is considered is that the target of their analyses is mostly the Rademacher complexity of the set in which estimators exist for bounding the generalization gap. 
However, to derive a tight excess risk bound instead of the generalization gap, we need to evaluate so called local Rademacher complexity (Mendelson, 2002; Bartlett et al., 2005; Koltchinskii, 2006) (see Eq. (2) for the definition of excess risk). Moreover, some of the existing analyses should change the target function class as the sample size n increases, for example, the input dimensionality is increased against the sample size, which makes it difficult to see how the rate of convergence is affected by the choice of estimators.\nAnother promising approach is the mean field analysis. There are also some work that showed superiority of deep learning against kernel methods. Ghorbani et al. (2019) showed that, when the dimensionality d of input is polynomially increasing with respect to n, the kernel methods is outperformed by neural network approaches. Although the situation of increasing d explains well the modern high dimensional situations, this setting blurs the rate of convergence. Actually, we can show the superiority of deep learning even in a fixed dimension setting.\nThere are several studies about approximation abilities of deep and shallow models. Ghorbani et al. (2020) showed adaptivity of kernel methods to the intrinsic dimensionality in terms of approximation error and discuss difference between deep and kernel methods. Yehudai & Shamir (2019) showed that the random feature method requires exponentially large number of nodes against the input dimension to obtain a good approximation for a single neuron target function. These are only for approximation errors and estimation errors are not compared.\nRecently, the superiority of deep learning against kernel methods has been discussed also in the nonparametric statistics literature where the minimax optimality of deep learning in terms of excess risk is shown. Especially, it is shown that deep learning achieves better rate of convergence than linear estimators in several settings (Schmidt-Hieber, 2020; Suzuki, 2019; Imaizumi & Fukumizu, 2019; Suzuki & Nitanda, 2019; Hayakawa & Suzuki, 2020). Here, the linear estimators are a general class of estimators that includes kernel ridge regression, k-NN regression and Nadaraya-Watson estimator. Although these analyses give clear statistical characterization on estimation ability of deep learning, they are not compatible with tractable optimization algorithms.\nIn this paper, we give a theoretical analysis that unifies these analyses and shows the superiority of a deep learning method trained by a tractable noisy gradient descent algorithm. We evaluate the excess risks of the deep learning approach and linear estimators in a nonparametric regression setting, and show that the minimax optimal convergence rate of the linear estimators can be dominated by the noisy gradient descent on neural networks. In our analysis, the model is fixed and no explicit sparse regularization is employed. Our contributions can be summarized as follows:\n• A refined analysis of excess risks for a fixed model with a fixed input dimension is given to compare deep and shallow estimators. Although several studies pointed out the curse of dimensionality is a key factor that separates shallow and deep approaches, we point out that such a separation appears in a rather low dimensional setting, and more importantly, the non-convexity of the model essentially makes the two regimes different.\n• A lower bound of the excess risk which is valid for any linear estimator is derived. 
The analysis is considerably general because the class of linear estimators includes kernel ridge regression with any kernel and thus it also includes estimators in the NTK regime. • All derived convergence rate is a fast learning rate that is faster than O(1/ √ n). We show that\nsimple noisy gradient descent on a sufficiently wide two-layer neural network achieves a fast\nlearning rate by using a fact that the solution converges to a Bayes estimator with a Gaussian process prior, and the derived convergence rate can be faster than that of linear estimators. This is much different from such existing work that compared only coefficients with the same rate of convergence with respect to the sample size n.\nOther related work Bach (2017) analyzed the model capacity of neural networks and its corresponding reproducing kernel Hilbert space (RKHS), and showed that the RKHS is much larger than the neural network model. However, separation of the estimation abilities between shallow and deep is not proven. Moreover, the analyzed algorithm is basically the Frank-Wolfe type method which is not typically used in practical deep learning. The same technique is also employed by Barron (1993). The Frank-Wolfe algorithm is a kind of sparsity inducing algorithm that is effective for estimating a function in a model with an L1-norm constraint. It has been shown that explicit or implicit sparse regularization such as L1-regularization is beneficial to obtain better performances of deep learning under certain situations (Chizat & Bach, 2020; Chizat, 2019; Gunasekar et al., 2018; Woodworth et al., 2020; Klusowski & Barron, 2016). For example, E et al. (2019b;a) showed that the approximation error of a linear model suffers from the curse of dimensionality in a setting where the target function is in the Barron class (Barron, 1993), and showed anL1-type regularization avoids the curse of dimensionality. However, our analysis goes in a different direction where a sparse regularization is not required." }, { "heading": "2 PROBLEM SETTING AND MODEL", "text": "In this section, we give the problem setting and notations that will be used in the theoretical analysis. We consider the standard nonparametric regression problem where data are generated from the following model for an unknown true function fo : Rd → R:\nyi = f o(xi) + i (i = 1, . . . , n), (1)\nwhere xi is independently identically distributed from PX whose support is included in Ω = [0, 1]d, and i is an observation noise that is independent of xi and satisfies E[ i] = 0 and i ∈ [−U,U ] almost surely. The n i.i.d. observations are denoted by Dn = (xi, yi)ni=1. We want to estimate the true function fo through the training data Dn. To achieve this purpose, we employ the squared loss `(y, f(x)) = (y − f(x))2 and accordingly we define the expected and empirical risks as L(f) := EY,X [`(Y, f(X))] and L̂(f) := 1n ∑n i=1 `(yi, f(xi)) respectively. Throughout this paper, we are interested in the excess (expected) risk of an estimator f̂ defined by\n(Excess risk) L(f̂)− inf f :measurable L(f). (2)\nSince the loss function ` is the squared loss, the infimum of inff :measurable L(f) is achieved by fo: inff :measurable L(f) = L(fo). The population L2(PX)-norm is denoted by ‖f‖L2(PX) :=√\nEX∼PX [f(X) 2] and the sup-norm on the support of PX is denoted by ‖f‖∞ := supx∈supp(PX) |f(x)|. 
We can easily check that for an estimator f̂ , the L2-distance ‖f̂ − f o‖2L2(PX) between the estimator f̂ and the true function fo is identical to the excess risk: L(f̂) − L(fo) = ‖f̂ − fo‖2L2(PX). Note that the excess risk is different from the generalization gap L(f̂) − L̂(f̂). Indeed, the generalization gap typically converges with the rate of O(1/ √ n) which is optimal in a typical setting (Mohri et al., 2012). On the other hand, the excess risk can be faster than O(1/ √ n), which is known as a fast learning rate (Mendelson, 2002; Bartlett et al., 2005; Koltchinskii, 2006; Giné & Koltchinskii, 2006)." }, { "heading": "2.1 MODEL OF TRUE FUNCTIONS", "text": "To analyze the excess risk, we need to specify a function class (in other words, model) in which the true function fo is included. In this paper, we only consider a two layer neural network model, whereas the techniques adapted in this paper can be directly extended to deeper neural network models. We consider a teacher-student setting, that is, the true function fo can be represented by a neural network defined as follows. For w ∈ R, let w̄ be a “clipping” of w defined as w̄ :=\nR × tanh(w/R) where R ≥ 1 is a fixed constant, and let [x; 1] := [x>, 1]> for x ∈ Rd. Then, the teacher network is given by\nfW (x) = ∑∞ m=1 amw̄2,mσm(w > 1,m[x; 1]),\nwhere w1,m ∈ Rd+1 and w2,m ∈ R (m ∈ N) are the trainable parameters (where W = (w1,m, w2,m) ∞ m=1), am ∈ R (m ∈ N) is a fixed scaling parameter, and σm : R→ R is an activation function for the m-th node. The reason why we applied the clipping operation to the parameter of the second layer is just for a technical reason to ensure convergence of Langevin dynamics. The dynamics is bounded in high probability in practical situations and the boundedness condition would be removed if further theoretical development of infinite dimensional Langevin dynamics would be achieved.\nLet H be a set of parameters W such that its squared norm is bounded: H := {W = (w1,m, w2,m) ∞ m=1 | ∑∞ m=1(‖w1,m‖2 + w22,m) < ∞}. Define ‖W‖H := [ ∑∞ m=1(‖w1,m‖2 + w22,m)] 1/2 for W ∈ H. Let (µm)∞m=1 be a regularization parameter such that µm ↘ 0. Accordingly\nwe defineHγ := {W ∈ H | ‖W‖Hγ <∞} where ‖W‖Hγ := [ ∑∞ m=1 µ −γ m (‖w1,m‖2 + w22,m)]1/2 for a given 0 < γ. Throughout this paper, we analyze an estimation problem in which the true function is included in the following model:\nFγ = {fW |W ∈ Hγ , ‖W‖Hγ ≤ 1}.\nThis is basically two layer neural network with infinite width. As assumed later, am is assumed to decrease as m → ∞. Its decreasing rate controls the capacity of the model. If the first layer parameters (w1,m)m are fixed, this model can be regarded as a variant of the unit ball of some reproducing kernel Hilbert space (RKHS) with basis functions amσm(w>1,m[x; 1]). However, since the first layer (w1,m) is also trainable, there appears significant difference between deep and kernel approaches. The Barron class (Barron, 1993; E et al., 2019b) is relevant to this function class. Indeed, it is defined as the convex hull of w2σ(w>1 [x; 1]) with norm constraints on (w1, w2) where σ is an activation function. On the other hand, we will put an explicit decay rate on am and the parameter W has an L2-norm constraint, which makes the model Fγ smaller than the Barron class." 
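To make the model concrete, the following is a minimal Python sketch of a width-M truncation of the teacher network defined above (a sketch for illustration, not the authors' code; it instantiates a_m = mu_m^{alpha_1} with mu_m = m^{-2} as in Assumption 2 below, and uses a plain sigmoid for every sigma_m, dropping the b_m rescaling):

```python
import numpy as np

def clip_param(w, R=1.0):
    # Smooth "clipping" of the second-layer weights: w_bar = R * tanh(w / R).
    return R * np.tanh(w / R)

def teacher_network(x, W1, w2, alpha1=3.0, R=1.0):
    """Width-M truncation of f_W(x) = sum_m a_m * w_bar_{2,m} * sigma_m(w_{1,m}^T [x; 1]).

    x  : (d,) input, W1 : (M, d+1) first-layer weights, w2 : (M,) second-layer weights.
    a_m = mu_m^{alpha1} with mu_m = m^{-2}; sigma is the plain sigmoid here.
    """
    M = W1.shape[0]
    x1 = np.append(x, 1.0)                 # [x; 1]
    mu = np.arange(1, M + 1) ** -2.0       # mu_m = m^{-2}
    a = mu ** alpha1                       # a_m = mu_m^{alpha1}
    pre = W1 @ x1                          # w_{1,m}^T [x; 1]
    sigma = 1.0 / (1.0 + np.exp(-pre))     # sigmoid activation
    return float(np.sum(a * clip_param(w2, R) * sigma))
```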
}, { "heading": "3 ESTIMATORS", "text": "We consider two classes of estimators and discuss their differences: linear estimators and deep learning estimator with noisy gradient descent (NGD).\nLinear estimator A class of linear estimators, which we consider as a representative of “shallow” learning approach, consists of all estimators that have the following form:\nf̂(x) = ∑n i=1 yiϕi(x1, . . . , xn, x).\nHere, (ϕi)ni=1 can be any measurable function (and L2(PX)-integrable so that the excess risk can be defined). Thus, they could be selected as the “optimal” one so that the corresponding linear estimator minimizes the worst case excess risk. Even if we chose such an optimal one, the worst case excess risk should be lower bounded by our lower bound given in Theorem 1. It should be noted that the linear estimator does not necessarily imply “linear model.” The most relevant linear estimator in the machine learning literature is the kernel ridge regression: f̂(x) = Y >(KX + λI)−1k(x) where KX = (k(xi, xj))ni,j=1 ∈ Rn×n, k(x) = [k(x, x1), . . . , k(x, xn)]> ∈ Rn and Y = [y1, . . . , yn]\n> ∈ Rn for a kernel function k : Rd × Rd → R. Therefore, the ridge regression estimator in the NTK regime or the random feature model is also included in the class of linear estimators. The solution obtained in the early stopping criteria instead of regularization in the NTK regime under the squared loss is also included in the linear estimators. Other examples include the k-NN estimator and the Nadaraya-Watson estimator. All of them do not train the basis function in a nonlinear way, which makes difference from the deep learning approach. In the nonparametric statistics literature, linear estimators have been studied for estimating a wavelet series model. Donoho et al. (1990; 1996) have shown that a wavelet shrinkage estimator can outperform any linear estimator by showing suboptimality of linear estimators. Suzuki (2019) utilized such an argument to show superiority of deep learning but did not present any tractable optimization algorithm.\nNoisy Gradient Descent with regularization As for the neural network approach, we consider a noisy gradient descent algorithm. Basically, we minimize the following regularized empirical risk:\nL̂(fW ) + λ2 ‖W‖ 2 H1 .\nHere, we employ H1-norm as the regularizer. We note that the constant γ controls the relative complexity of the true function fo compared to the typical solution obtained under the regularization. Here, we define a linear operator A as λ‖W‖H1 = W>AW , that is, AW = (λµ−1m w1,m, λµ −1 m w2,m) ∞ m=1. The regularized empirical risk can be minimized\nby noisy gradient descent as Wk+1 = Wk − η∇(L̂(fWk) + λ2 ‖Wk‖ 2 H1) + √ 2η β ξk, where η > 0 is a step size and ξk = (ξk,(1,m), ξk,(2,m))∞m=1 is an infinite-dimensional Gaussian noise, i.e., ξk,(1,m) and ξk,(2,m) are independently identically distributed from the standard normal distribution (Da Prato & Zabczyk, 1996). Here, ∇L̂(fW ) = 1n ∑n i=1 2(fW (xi) − yi)(w̄2,mam[xi; 1]σ ′ m(w > 1,m[xi; 1]), am tanh ′(w2,m/R)σm(w > 1,m[xi; 1])) ∞ m=1. However, since ∇‖Wk−1‖2H1 is unbounded which makes it difficult to show convergence, we employ the semiimplicit Euler scheme defined by\nWk+1 =Wk−η∇L̂(fWk)−ηAWk+1+ √ 2η β ξk ⇔Wk+1 =Sη ( Wk−η∇L̂(fWk)+ √ 2η β ξk ) , (3)\nwhere Sη := (I+ηA)−1. It is easy to check that this is equivalent to the following update rule: Wk = Wk−1 − η ( ∇L̂(fWk−1) + SηAWk−1 + √ 2η β ξk−1 ) . 
Therefore, the implicit Euler scheme can be seen as a naive noisy gradient descent for minimizing the empirical risk with a slightly modified ridge regularization. This can be interpreted as a discrete time approximation of the following infinite dimensional Langevin dynamics:\ndWt = −∇(L̂(fWt) + λ2 ‖Wt‖ 2 H1)dt+ √ 2/βdξt, (4)\nwhere (ξt)t≥0 is the so-called cylindrical Brownian motion (see Da Prato & Zabczyk (1996) for the details). Its application and analysis for machine learning problems with non-convex objectives have been recently studied by, for example, Muzellec et al. (2020); Suzuki (2020).\nThe above mentioned algorithm is executed on an infinite dimensional parameter space. In practice, we should deal with a finite width network. To do so, we approximate the solution by a finite dimensional one: W (M) = (w1,m, w2,m)Mm=1 where M corresponds to the width of the network. We identify W (M) to the “zero-padded” infinite dimensional one, W = (w1,m, w2,m)∞m=1 with w1,m = 0 and w2,m = 0 for all m > M . Accordingly, we use the same notation fW (M) to indicate fW with zero padded vector W . Then, the finite dimensional version of the update rule is given by W (M) k+1 = S (M) η ( W\n(M) k − η∇L̂(fW (M)k ) + √ 2η β ξ (M) k ) , where ξ(M)k is the Gaussian noise vector\nobtained by projecting ξk to the first M components and S (M) η is also obtained in a similar way." }, { "heading": "4 CONVERGENCE RATE OF ESTIMATORS", "text": "In this section, we present the excess risk bounds for linear estimators and the deep learning estimator. As for the linear estimators, we give its lower bound while we give an upper bound for the deep learning approach. To obtain the result, we setup some assumptions on the model. Assumption 1.\n(i) There exists a constant cµ such that µm ≤ cµm−2 (m ∈ N). (ii) There exists α1 > 1/2 such that am ≤ µα1m (m ∈ N).\n(iii) The activation functions (σm)m is bounded as ‖σm‖∞ ≤ 1. Moreover, they are three times differentiable and their derivatives upto third order differentiation are uniformly bounded: ∃Cσ such that ‖σm‖1,3 := max{‖σ′m‖∞, ‖σ′′m‖∞, ‖σ′′′m‖∞} ≤ Cσ (∀m ∈ N).\nThe first assumption (i) controls the strength of the regularization, and combined with the second assumption (ii) and definition of the model Fγ , complexity of the model is controlled. If α1 and γ are large, the model is less complicated. Indeed, the convergence rate of the excess risk becomes\nfaster if these parameters are large as seen later. The decay rate µm ≤ cµm−2 can be generalized as m−p with p > 1 but we employ this setting just for a technical simplicity for ensuring convergence of the Langevin dynamics. The third assumption (iii) can be satisfied by several activation functions such as the sigmoid function and the hyperbolic tangent. The assumption ‖σm‖∞ ≤ 1 could be replaced by another one like ‖σm‖∞ ≤ C, but we fix this scaling for simple presentation." }, { "heading": "4.1 MINIMAX LOWER BOUND FOR LINEAR ESTIMATORS", "text": "Here, we analyze a lower bound of excess risk of linear estimators, and eventually we show that any linear estimator suffers from curse of dimensionality. To rigorously show that, we consider the following minimax excess risk over the class of linear estimators:\nRlin(Fγ) := inf f̂ :linear sup fo∈Fγ EDn [‖f̂ − fo‖2L2(PX)],\nwhere inf is taken over all linear estimators and EDn [·] is taken with respect to the training data Dn. This expresses the best achievable worst case error over the class of linear estimators to estimate a function in Fγ . 
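For concreteness, below is a minimal sketch of one member of this linear class, the kernel ridge regression recalled in Section 3 (illustrative only; the Gaussian kernel and its bandwidth are arbitrary choices, and the lower bound of Theorem 1 below applies for any choice of kernel):

```python
import numpy as np

def kernel_ridge_predict(X, y, x_new, lam=1e-3, h=1.0):
    """Kernel ridge regression f_hat(x) = Y^T (K_X + lam*I)^{-1} k(x).

    X : (n, d) training inputs, y : (n,) targets, x_new : (d,) query point.
    Uses a Gaussian kernel k(x, x') = exp(-||x - x'||^2 / (2 h^2)) for illustration.
    """
    n = X.shape[0]
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * h ** 2))                        # (n, n) Gram matrix
    k_new = np.exp(-np.sum((X - x_new) ** 2, axis=1) / (2.0 * h ** 2))
    alpha = np.linalg.solve(K + lam * np.eye(n), y)         # (K_X + lam*I)^{-1} Y
    return float(k_new @ alpha)
```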
To evaluate it, we additionally assume the following condition. Assumption 2. We assume that µm = m−2 and am = µα1m (m ∈ N) (and hence cµ = 1). There exists a monotonically decreasing sequence (bm)∞m=1 and s ≥ 3 such that bm = µα2m (∀m) with α2 > γ/2 and σm(u) = bsmσ(b −1 m u) (u ∈ R) where σ is the sigmoid function: σ(u) = 1/(1+e−u).\nIntuitively, the parameter s controls the “resolution” of each basis function σm, and the relation between parameter α1 and α2 controls the magnitude of coefficient for each basis σm. Note that the condition s ≥ 3 ensures ‖σm‖1,3 is uniformly bounded and 0 < bm ≤ 1 ensures ‖σm‖∞ ≤ 1. Our main strategy to obtain the lower bound is to make use of the so-called convex-hull argument. That is, it is known that, for a function class F , the minimax risk R(F) over a class of linear estimators is identical to that for the convex hull of F (Hayakawa & Suzuki, 2020; Donoho et al., 1990):\nRlin(F) = Rlin(conv(F)), where conv(F) = { ∑N i=1 λifi | fi ∈ F , ∑N i=1 λi = 1, λi ≥ 0, N ∈ N} and conv(·) is the closure of conv(·) with respect to L2(PX)-norm. Intuitively, since the linear estimator is linear to the observations (yi)ni=1 of outputs, a simple application of Jensen’s inequality yields that its worst case error on the convex hull of the function class F does not increase compared with that on the original one F (see Hayakawa & Suzuki (2020) for the details). This indicates that the linear estimators cannot distinguish the original hypothesis class F and its convex hull. Therefore, if the class F is highly non-convex, then the linear estimators suffer from much slower convergence rate because its convex hull conv(F) becomes much “fatter” than the original one F . To make use of this argument, for each sample size n, we pick up appropriate mn and consider a subset generated by the basis function σmn , i.e., F (n) γ := {amnw̄2,mnσm(w>1,mn [x; 1]) ∈ Fγ}. By applying the convex hull argument to this set, we obtain the relation Rlin(Fγ) ≥ Rlin(F (n)γ ) = Rlin(conv(F (n)γ )). Since F (n)γ is highly non-convex, its convex hull conv(F (n)γ ) is much larger than the original set F (n)γ and thus the minimax risk over the linear estimators would be much larger than that over all estimators including deep learning. More intuitively, linear estimators do not adaptively select the basis functions and thus they should prepare redundantly large class of basis functions to approximate functions in the target function class. The following theorem gives the lower bound of the minimax optimal excess risk over the class of linear estimators. Theorem 1. Suppose that Var( ) > 0, PX is the uniform distribution on [0, 1]d, and Assumption 2 is satisfied. Let β̃ = α1+(s+1)α2α2−γ/2 . Then for arbitrary small κ ′ > 0, we have that\nRlin(Fγ) & n − 2β̃+d 2β̃+2dn−κ ′ . (5)\nThe proof is in Appendix A. We utilized the Irie-Miyake integral representation (Irie & Miyake, 1988; Hornik et al., 1990) to show there exists a “complicated” function in the convex hull, and then we adopted the technique of Zhang et al. (2002) to show the lower bound. The lower bound is characterized by the decaying rate (α1) of am relative to that (α2) of the scaling factor bm. Indeed, the faster am decays with increasing m, the faster the rate of the minimax lower bound becomes.\nWe can see that the minimax rate of linear estimators is quite sensitive to the dimension d. 
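Since the displayed formulas of Assumption 2 and Theorem 1 above were garbled in extraction, they are restated here in clean LaTeX (same content, reconstructed from Eq. (5) and the definition of beta-tilde used in the proof of Appendix A):

```latex
\sigma_m(u) = b_m^{\,s}\,\sigma(b_m^{-1}u), \quad b_m = \mu_m^{\alpha_2},
\qquad
\tilde{\beta} = \frac{\alpha_1 + (s+1)\alpha_2}{\alpha_2 - \gamma/2},
\qquad
R_{\mathrm{lin}}(\mathcal{F}_\gamma) \;\gtrsim\; n^{-\frac{2\tilde{\beta}+d}{2\tilde{\beta}+2d}}\; n^{-\kappa'}.
```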
Actually, for relatively high dimensional settings, this lower bound becomes close to a slow rate Ω(1/ √ n), which corresponds to the curse of dimensionality.\nIt has been pointed out that the sample complexity of kernel methods suffers from the curse of dimensionality while deep learning can avoid that by a tractable algorithms (e.g., Ghorbani et al. (2019); Bach (2017)). Among them, Ghorbani et al. (2019) showed that if the dimensionality d is polynomial against n, then the excess risk of kernel methods is bounded away from 0 for all n. On the other hand, our analysis can be applied to any linear estimator including kernel methods, and it shows that even if the dimensionality d is fixed, the convergence rate of their excess risk suffers from the curse of dimensionality. This can be accomplished thanks to a careful analysis of the rate of convergence. Bach (2017) derived an upper bound of the Rademacher complexity of the unit ball of the RKHS corresponding to a neural network model. However, it is just an upper bound and there is still a large gap from excess risk estimates. Allen-Zhu & Li (2019; 2020); Bai & Lee (2020); Chen et al. (2020) also analyzed a lower bound of sample complexity of kernel methods. However, their lower bound is not for the excess risk of the squared loss. Eventually, the sample complexities of all methods including deep learning take a form of O(C/ √ n) and dependency of coefficient C to the dimensionality or other factors such as magnitude of residual components is compared. On the other hand, our lower bound properly involves the properties of squared loss such as strong convexity and smoothness and the bound shows the curse of dimensionality occurs even in the rate of convergence instead of just the coefficient. Finally, we would like to point out that several existing work (e.g., Ghorbani et al. (2019); Allen-Zhu & Li (2019)) considered a situation where the target function class changes as the sample size n increases. However, our analysis reveals that separation between deep and shallow occurs even if the target function class Fγ is fixed." }, { "heading": "4.2 UPPER BOUND FOR DEEP LEARNING", "text": "Here, we analyze the excess risk of deep learning trained by NGD and its algorithmic convergence rate. Our analysis heavily relies on the weak convergence of the discrete time gradient Langevin dynamics to the stationary distribution of the continuous time one (Eq. (4)). Under some assumptions, the continuous time dynamics has a stationary distribution (Da Prato & Zabczyk, 1992; Maslowski, 1989; Sowers, 1992; Jacquot & Royer, 1995; Shardlow, 1999; Hairer, 2002). If we denote the probability measure onH corresponding to the stationary distribution by π∞, then it is given by\ndπ∞ dνβ (W ) ∝ exp(−βL̂(fW )),\nwhere νβ is the Gaussian measure in H with mean 0 and covariance (βA)−1 (see Da Prato & Zabczyk (1996) for the rigorous definition of the Gaussian measure on a Hilbert space). Remarkably, this can be seen as the Bayes posterior for a prior distribution νβ and a “log-likelihood” function exp(−βL̂(W )). Through this view point, we can obtain an excess risk bound of the solution Wk. The proofs of all theorems in this section are in Appendix B.\nUnder Assumption 1, the distribution ofWk derived by the discrete time gradient Langevin synamics satisfies the following weak convergence property to the stationary distribution π∞. This convergence rate analysis depends on the techniques by Bréhier & Kopec (2016); Muzellec et al. (2020). Proposition 1. 
Assume Assumption 1 holds and β > η. Then, there exist spectral gaps Λ∗η and Λ∗0 (defined in Eq. (10) of Appendix B.1) and a constant C0 such that, for any 0 < a < 1/4, the following convergence bound holds for almost sure observation Dn:\n|EWk [L(fWk)|Dn]− EW∼π∞ [L(fW )|Dn]| ≤ C0 exp(−Λ∗ηηk) + C1 √ β\nΛ∗0 η1/2−a =: Ξk, (6)\nwhere C1 is a constant depending only on cµ, R, α1, Cσ, U, a (independent of η, k, β, λ, n).\nThis proposition indicates that the expected risk of Wk can be almost identical to that of the “Bayes posterior solution” obeying π∞ after sufficiently large iterations k with sufficiently small step size η even though L̂(fW ) is not convex. The definition of Λ∗η can be found in Eq. (10). We should note that its dependency on β is exponential. Thus, if we take β = Ω(n), then the computational cost until a sufficiently small error could be exponential with respect to the sample size n. The same convergence holds also for finite dimensional oneW (M)k with a modified stationary distribution. The\nconstants appearing in the bound are independent of the model size M (see the proof of Proposition 1 in Appendix B). In particular, the convergence can be guaranteed even ifW is infinite dimensional. This is quite different from usual finite dimensional analyses (Raginsky et al., 2017; Erdogdu et al., 2018; Xu et al., 2018) which requires exponential dependency on the dimension, but thanks to the regularization term, we can obtain the model size independent convergence rate. Xu et al. (2018) also analyzed a finite dimensional gradient Langevin dynamics and obtained a similar bound where O(η) appears in place of the second term η1/2−a which corresponds to time discretization error. In our setting the regularization term is ‖W‖2H1 = ∑ m(‖w1,m‖2 + w22,m)/µm with µm . m−2, but\nif we employ ‖W‖2Hp/2 = ∑ m(‖w1,m‖2 + w22,m)/µ p/2 m for p > 1, then the time discretization error term would be modified to η(p−1)/p−a (Andersson et al., 2016). We can interpret the finite dimensional setting as the limit of p → ∞ which leads to η(p−1)/p → η that recovers the finite dimensional result (O(η)) as shown by Xu et al. (2018).\nIn addition to the above algorithmic convergence, we also have the following convergence rate for the excess risk bound of the finite dimensional solution W (M)k .\nTheorem 2. Assume Assumption 1 holds, assume η < β ≤ min{n/(2U2), n}, and 0 < γ < 1/2 + α1. Then, if the width satisfies M ≥ min { λ1/4γ(α1+1)β1/2γ , λ−1/2(α1+1), n1/2γ } , the expected excess risk of Wk is bounded as\nEDn [ E W\n(M) k\n[‖f W\n(M) k\n−fo‖2L2(PX)|Dn] ] ≤C max { (λβ) 1/γ 1+1/2γ n− 1 1+1/2γ, λ − 1 2(α1+1) β−1, λ γ 1+α1 } +Ξk,\nwhere C is a constant independent of n, β, λ, η, k. In particular, if we set β = min{n/(2U2), n} and λ = β−1, then for M ≥ n1/2(α1+1), we obtain\nEDn [ E W\n(M) k\n[‖f W\n(M) k\n− fo‖2L2(PX)|Dn] ] . n− γ α1+1 + Ξk.\nIn addition to this theorem, if we further assume Assumption 2, we obtain a refined bound as follows. Corollary 1. Assume Assumptions 1 and 2 hold and η < β, and let β = min{n/(2U2), n} and λ = β−1. Suppose that there exists 0 ≤ q ≤ s − 3 such that 0 < γ < 1/2 + α1 + qα2. Then, the excess risk bound of W (M)k for M ≥ n1/2(α1+qα2+1) can be refined as\nEDn [ E W\n(M) k\n[‖f W\n(M) k\n− fo‖2L2(PX)|Dn] ] . n− γ α1+qα2+1 + Ξk. (7)\nThese theorem and corollary shows that the tractable NGD algorithm achieves a fast convergence rate of the excess risk bound. Indeed, if q is chosen so that γ > (α1+qα2+1)/2, then the excess risk bound converges faster thanO(1/ √ n). 
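As a quick numerical illustration of this threshold (a sketch; the parameter values are arbitrary examples chosen to satisfy gamma < 1/2 + alpha_1 + q*alpha_2, not values from the paper):

```python
def nn_rate_exponent(gamma, alpha1, alpha2, q=0):
    # Corollary 1: the excess risk decays as n^{-gamma / (alpha1 + q*alpha2 + 1)}.
    return gamma / (alpha1 + q * alpha2 + 1.0)

for gamma, alpha1, alpha2, q in [(3.0, 3.0, 1.0, 0), (1.5, 3.0, 1.0, 0)]:
    e = nn_rate_exponent(gamma, alpha1, alpha2, q)
    tag = "faster than O(1/sqrt(n))" if e > 0.5 else "not faster than O(1/sqrt(n))"
    print(f"gamma={gamma}, alpha1={alpha1}, q={q}: n^(-{e:.3f}) -> {tag}")
```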
Remarkably, the convergence rate is not affected by the input dimension d, which creates a clear discrepancy from linear estimators. The bound of Theorem 2 is tightest when γ is close to 1/2 + α1 (γ ≈ 1/2 + α1 + 3α2 for Corollary 1), and a smaller γ yields a looser bound. This relation between γ and α1 reflects misspecification of the “prior” distribution. When γ is small, the regularization λ‖W‖^2_{H^1} is not strong enough, so the variance of the posterior distribution becomes unnecessarily large for estimating the true function f° ∈ Fγ. Therefore, the best achievable bound is obtained when the regularization is correctly specified. The fast-rate analysis is in contrast to some existing work (Allen-Zhu & Li, 2019; 2020; Li et al., 2020; Bai & Lee, 2020) that basically evaluated the Rademacher complexity. This is because we essentially evaluated a local Rademacher complexity instead." }, { "heading": "4.3 COMPARISON BETWEEN LINEAR ESTIMATORS AND DEEP LEARNING", "text": "Here, we compare the convergence rates of the excess risks of the linear estimators and the neural network method trained by NGD, using the bounds obtained in Theorem 1 and Corollary 1 respectively. We write the lower bound (5) of the minimax excess risk of linear estimators as R∗lin and the excess risk of the neural network approach (7) as R∗NN. To make the discussion concise, we consider a specific situation where s = 3 and α1 = γ = (1/4)α2. In this case, β̃ = 17/3 ≈ 5.667, which gives

R∗lin ≳ n^{-(1 + d/(2β̃+d))^{-1}} n^{-κ'} ≈ n^{-(1 + d/(11.3+d))^{-1}} n^{-κ'}.

On the other hand, by setting q = 0, we have

R∗NN ≲ n^{-α1/(α1+1)} = n^{-(1 + 1/α1)^{-1}}.

Thus, as long as α1 > 11.3/d + 1 ≈ 2β̃/d + 1, we have that

R∗lin ≳ R∗NN, and lim_{n→∞} R∗NN/R∗lin = 0.

In particular, as d gets larger, R∗lin approaches Ω(n^{-1/2}), while R∗NN is not affected by d and gets close to O(n^{-1}) as α1 gets larger. Moreover, the inequality α1 > 11.3/d + 1 can be satisfied in a relatively low dimensional setting; for example, d = 10 is sufficient when α1 = 3. As α1 becomes large, the model becomes “simpler” because (a_m)_{m=1}^∞ decays faster. However, the linear estimators cannot take advantage of this information, whereas deep learning can. From the convex hull argument, this discrepancy stems from the non-convexity of the model. We also note that the superiority of deep learning is shown without sparse regularization, while several existing works showed favorable estimation properties of deep learning through sparsity-inducing regularization (Bach, 2017; Chizat, 2019; Hayakawa & Suzuki, 2020). However, our analysis indicates that sparse regularization is not necessary as long as the model has non-convex geometry, i.e., sparsity is just one sufficient condition for non-convexity but not a necessary condition. The parameter setting above is just a sufficient condition, and the lower bound R∗lin may not be tight. The superiority of deep learning would hold in much wider situations." }, { "heading": "5 CONCLUSION", "text": "In this paper, we studied the excess risks of linear estimators, as a representative of shallow methods, and of a neural network estimator trained by a noisy gradient descent, where the model is fixed and no sparsity-inducing regularization is imposed. Our analysis revealed that deep learning can outperform any linear estimator even for a relatively low dimensional setting. 
Essentially, non-convexity of the model induces this difference and the curse of dimensionality for linear estimators is a consequence of a fact that the geometry of the model becomes more “non-convex” as the dimension of input gets higher. All derived bounds are fast rate because the analyses are about the excess risk with the squared loss, which made it possible to compare the rate of convergence. The fast learning rate of the deep learning approach is derived through the fact that the noisy gradient descent behaves like a Bayes estimator with model size independent convergence rate." }, { "heading": "ACKNOWLEDGMENTS", "text": "TS was partially supported by JSPS Kakenhi (18K19793, 18H03201, and 20H00576), Japan Digital Design and JST-CREST." }, { "heading": "A PROOF OF THEOREM 1", "text": "We basically combine the “convex hull argument” and the minimax optimal rate analysis for linear estimators developed by Zhang et al. (2002).\nZhang et al. (2002) essentially showed the following statement in their Theorem 1. Proposition 2 (Theorem 1 of Zhang et al. (2002)). Let µ be the Lebesgue measure. Suppose that the space Ω has even partition A such that |A| = 2K for an integer K ∈ N, each A has equivalent measure µ(A) = 2−K for all A ∈ A, andA is indeed a partition of Ω, i.e., ∪A∈A = Ω, A∩A′ = ∅ for A,A′ ∈ Ω and A 6= A′. Then, if K is chosen as n−γ1 ≤ 2−K ≤ n−γ2 for constants γ1, γ2 > 0 that are independent of n, then there exists an event E such that, for a constant C ′ > 0,\nP (E) ≥ 1 + o(1) and |{xi | xi ∈ A (i ∈ {1, . . . , n})}| ≤ C ′n/2K (∀A ∈ A). Moreover, suppose that, for a class F◦ of functions on Ω, there exists ∆ > 0 that satisfies the following conditions:\n1. There exists F > 0 such that, for any A ∈ A, there exists g ∈ F◦ that satisfies g(x) ≥ 1 2∆F for all x ∈ A,\n2. There exists K ′ and C ′′ > 0 such that 1n ∑n i=1 g(xi)\n2 ≤ C ′′∆22−K′ for any g ∈ F◦ on the event E .\nThen, there exists a constant F1 such that at least one of the following inequalities holds:\nF 2 4F1C ′′ 2K ′ n ≤ Rlin(F◦), (8a) F 3 32 ∆22−K ≤ Rlin(F◦), (8b)\nfor sufficiently large n.\nBefore we show the main assertion, we prepare some additional lemmas. For a sigmoid function σ, let F̃ (σ)C,τ := {x ∈ Rd 7→ aσ(τ(w>x+ b))) | |a| ≤ 2C, ‖w‖ ≤ 1, |b| ≤ 2 (a, b ∈ R, w ∈ Rd)} for C > 0, τ > 0. Lemma 1. Let ψ(x) = 12 (σ(x + 1) − σ(x − 1)) and ψ̂ be its Fourier transform: ψ̂(ω) := (2π)−1 ∫ e−iωxψ(x)dx. Let h > 0 and Dw > 0. Then, by setting τ = h−1(2 √ d + 1)Dw and C = (2 √ d+1)Dw\nπh|ψ̂(1)| , the Gaussian RBF kernel can be approximated by\ninf ǧ∈conv(F̃(σ)C,τ ) sup x∈[0,1]d\n∣∣∣∣ǧ(x)− exp(−‖x− c‖22h2 )∣∣∣∣\n≤ 4 |2πψ̂(1)|\n[ CdD 2(d−2) w exp(−D2w/2) + exp(−Dw) ] for any c ∈ [0, 1]d, where Cd is a constant depending only on d. In particular, the right hand side is O(exp(−nκ)) if Dw = nκ.\nProof of Lemma 1. Let ψh(x) = ψ(h−1x). Suppose that, for f ∈ L1(Rd), its Fourier transform f̂(ω) = (2π)−d ∫ e−iω\n>xf(x)dx (ω ∈ Rd) gives∫ Rd exp(iw>x)f̂(w)dw = f(x),\nfor every x ∈ Rd1. Then the Irie-Miyake itegral representation (Irie & Miyake (1988); see also the proof of Theorem 3.1 in Hornik et al. (1990)) gives\nf(x) = ∫ a∈Rd ∫ b∈R ψ(a>x+ b)dν(a, b) (a.e.),\n1If f̂ is integrable, this inversion formula holds for almost every x ∈ Rd (Rudin, 1987). However, we assume a stronger condition that it holds for every x ∈ Rd.\nwhere dν(a, b) is given by\ndν(a, b) = Re\n( |ω|de−iwb\n2πψ̂(ω)\n) f̂(wa)dadb\nfor any ω 6= 0. 
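Because the density just displayed was garbled in extraction, the Irie-Miyake representation and its mixing measure are restated in clean LaTeX (a reconstruction, to be checked against Hornik et al. (1990)):

```latex
f(x) = \int_{a \in \mathbb{R}^d} \int_{b \in \mathbb{R}} \psi(a^\top x + b)\, d\nu(a,b)
\quad \text{(a.e.)},
\qquad
d\nu(a,b) = \mathrm{Re}\!\left( \frac{|\omega|^d e^{-i\omega b}}{2\pi \hat{\psi}(\omega)} \right) \hat{f}(\omega a)\, da\, db
\quad (\omega \neq 0).
```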
Since the characteristic function of the multivariate normal distribution gives that∫ Rd exp(iw>(x− c)) √ h2d (2π)d exp ( −h 2‖w‖2 2 ) ︸ ︷︷ ︸\n=f̂(w)\ndw = exp ( −‖x− c‖ 2\n2h2\n) =: f(x) (∀x ∈ Rd),\nwe have that\nexp ( −‖x− c‖ 2\n2h2\n) =\n∫ a∈Rd ∫ b∈R ψh(a >(x− c) + b)Re ( e−iwb 2πψ̂h(ω) )√ |ωh|2d (2π)d exp ( − (ωh) 2‖a‖2 2 ) dadb,\nfor all x ∈ Rd. Since ψh(·) = ψ(h−1·) and ψ̂h(·) = hψ̂(h·) by its definition, the right hand side is equivalent to∫\na∈Rd ∫ b∈R ψ(h−1[a>(x− c) + b])Re ( e−iwb 2πhψ̂(hω) )√ |ωh|2d (2π)d exp ( − (ωh) 2‖a‖2 2 ) dadb.\nHere, we set ω = h−1. Let Nσ2 be the probability measure corresponding to the multivariate normal with mean 0 and covariance σ2I, and let AD := {w ∈ Rd | ‖w‖ ≤ D}. Let Da > 0 and Db = Da( √ 2d+ 1), and define\nfDa(x) := 1\n2DbN1(ADa) ∫ ‖a‖≤Da,|b|≤Db ψ(h−1[a>(x− c) + b])Re ( e−ib/h 2πhψ̂(1) ) ×√\n1\n(2π)d exp\n( −‖a‖ 2\n2\n) dadb.\nThen, we can see that, for any x ∈ [0, 1]d, it holds that∣∣∣∣ 12DbN1(ADa)f(x)− fDa(x) ∣∣∣∣\n≤ 1 2DbN1(ADa)|2πhψ̂(1)|\n[ N1(A c Da) ∫ 2 exp(−h−1|x|)dx+ ∫ |b|>Db 2 exp(−[h−1(|b| − 2 √ dDa)])db ]\n≤ 1 2DbN1(ADa)|2πhψ̂(1)|\n[ 4hN1(A c Da) + 4h exp(−Da) ] = 4h\n2DbN1(ADa)|2πhψ̂(1)|\n[ CdD 2(d−2) a exp(−D2a/2) + exp(−Da) ] ,\nwhere Cd > 0 is a constant depending on only d, and we used |a>(x − c) + b| ≥ |b| − |a>(x − c)| ≥ |b| − 2 √ dDa and ψ(x) ≤ 2 exp(−|x|). Note that if Da = nκ, then the right hand side is O(h exp(−nκ)). Therefore, since N1(ADa) ≤ 1, by setting τ = h−1Db, C = Dbπh|ψ̂(1)| , we have that\ninf ǧ∈conv(F̃(σ)C,τ ) sup x∈[0,1]d\n∣∣∣∣ǧ(x)− exp(−‖x− c‖22h2 )∣∣∣∣\n≤ 4 |2πψ̂(1)|\n[ CdD 2(d−2) a exp(−D2a/2) + exp(−Da) ] .\nHence, by rewriting Dw ← Da, we obtain the assertion. As noted above, the right hand is O(exp(−nκ)) if Da = nκ.\nProof of Theorem 1. For a sample size n, we fix mn which will be determined later and use Proposition 2 with F◦ = F (n)γ . If w2,mn = b √ µγmn/2 with |b| ≤ 1 and w1,m = µ γ/2 mn [u;−u>c]/( √ 2(d+ 1)) for u ∈ Rd such that ‖u‖ ≤ 1 and c ∈ [0, 1]d, then ‖(w1,mn , w2,mn)‖2 ≤ µγmn(1/2 + (1 + |u >c|2)/2(d + 1)) ≤ µγmn . Therefore, ϕ̃u,c(x) = amnw̄2,mnσmn(w > 1,mn [x; 1]) = µ α1 mn(bµ γ/2 mn / √ 2)µsα2mn σ ( µ −α2+γ/2 mn u >(x− c)/ √ 2(d+ 1) ) ∈\nF (n)γ ⊂ Fγ for all b ∈ R with |b| ≤ 1, u ∈ Rd with ‖u‖ ≤ 1, and c ∈ [0, 1]d. In other words, µ α1+γ/2+sα2 mn (2C) −1F (σ)C,τ ⊂ F (n) γ for any C > 0 and τ = 1√ 2(d+1) µ −α2+γ/2 mn . Therefore, by setting C = ( √\n2d + 1)Dw/(πh|ψ̂(1)|) for Dw > 0, Lemma 1 yields that for any c ∈ [0, 1]d and given h > 0, there exists g ∈ conv(F (n)γ ) such that∥∥∥∥∥∥µα1+γ/2+sα2mn ( 2( √ 2d+ 1)Dw πh|ψ̂(1)| )−1 exp ( −‖ · −c‖ 2 2h2 ) − g ∥∥∥∥∥∥ ∞\n≤ µα1+γ/2+sα2mn\n( 2( √\n2d+ 1)Dw\nπh|ψ̂(1)|\n)−1 4\n|2πψ̂(1)|\n[ CdD 2(d−2) w exp(−D2w/2) + exp(−Dw) ] = µα1+γ/2+sα2mn h\n( √\n2d+ 1)Dw\n[ CdD 2(d−2) a exp(−D2w/2) + exp(−Dw) ] .\nWe let Dw = nκ for any κ > 0 and choose µmn as τ ' µ −α2+γ/2 mn = Dwh −1 = h−1nκ. We write ∆ := µ α1+γ/2+sα2 mn (2C) −1 ' h α1+sα2+γ/2 α2−γ/2 +1 n −κ(α1+sα2+γ/2\nα2−γ/2 +1). Then, it holds that∥∥∥∥∆ exp(−‖ · −c‖22h2 ) − g ∥∥∥∥ ∞ . ∆ exp(−nκ). (9)\nHere, we set h as h = 2−k with a positive integer k. Accordingly, we define a partition A of Ω so that any A ∈ A can be represented as A = [2−kj1, 2−k(j1 + 1)] × · · · × [2−kjd, 2−k(jd + 1)] by non-negative integers 0 ≤ ji ≤ 2k − 1 (i = 1, . . . , d). Note that |A| = 2dk = h−d. For each A ∈ A, we define cA as cA = (2−k(j1 + 1/2), . . . , 2−k(jd + 1/2))> where (j1, . . . , jd) is a set of indexes that satisfies A = [2−kj1, 2−k(j1 + 1)] × · · · × [2−kjd, 2−k(jd + 1)]. 
For each A ∈ A, we define gA ∈ conv(F (n)γ ) as a function that satisfies Eq. (9) for c = cA.\nNow, we apply Proposition 2 with F◦ = conv(F (n)γ ) and K = K ′ = dk. Let R∗ := Rlin(conv(F (n)γ )). First, we can see that there exits a constant F > 0 such that\ngA(x) ≥ F∆ (∀x ∈ A),\nwhere we used exp(−nκ) 1. Second, in the event E introduced in the statement of Proposition 2, there exists C such that |{i ∈ {1, . . . , n} | xi ∈ A′}| ≤ Cn/2−dk for all A′ ∈ A. In this case, we can check that\n1\nn n∑ i=1 [ ∆ exp ( −‖xi − cA‖ 2 2h2 )]2 . ∆2hd = ∆22−kd,\nby the uniform continuity of the Gaussian RBF. Therefore, we also have\n1\nn n∑ i=1 gA(xi) 2 ≤ 2 n n∑ i=1 [ ∆ exp ( −‖xi − cA‖ 2 2h2 )]2 + c∆2 exp(−2nκ)\n. ∆2(hd + exp(−2nκ)),\nwhere c > 0 is a constant. Thus, as long as h is polynomial to n like h = Θ(n−a), the right hand side is O(∆2hd).\nNow, if we write\nβ̃ = α1 + sα2 + γ/2\nα2 − γ/2 + 1 = α1 + (s+ 1)α2 α2 − γ/2 ,\nthen we have ∆ ' hβ̃n−κβ̃ by its definition.\nHere, we choose k as a maximum integer that satisfies F 3 32 ∆ 22−dk > R∗. In this situation, it holds that h2β̃+dn−2κβ̃ ' R∗.\nSince Eq. (8b) is not satisfied, Eq. (8a) must hold, and hence we have\nn−1h−d . R∗ ' h2β̃+dn−2κβ̃\n⇒ h ' n− 1−2κβ̃ 2β̃+2d .\nTherefore, we obtain that\nR∗ & n − 2β̃+d 2β̃+2dn − 2κdβ̃ 2β̃+2d\n≥ n− 2β̃+d 2β̃+2dn−κ ′ ,\nby setting κ′ = κ 2dβ̃ 2β̃+2d . This gives the assertion." }, { "heading": "B PROOFS OF PROPOSITION 1, THEOREM 2 AND COROLLARY 1", "text": "Proposition 1, Theorem 2 and Corollary 1 can be shown by using Propositions 3 and 4 given in Appendix B.1 shown below.\nLet TαW = (µαmw1,m, µ α mw2,m) ∞ m=1 for W = (w1,m, w2,m) ∞ m=1 for α > 0, and let us consider a model hW := fT−α/2W . Then, the training error can be rewritten as\nL̂(fW ) = L̂(hTα/2W ).\nFor notational simplicity, we let L̂(W ) := L̂(fW ).\nLet H(M) be {W (M) = (w1,m, w2,m)Mm=1 | w1,m ∈ Rd+1, w2,m ∈ R, 1 ≤ m ≤ M} and ι : H(M) → H be the zero padding of W (M), that is, ι(W (M)) = (w′1,m, w′2,m)∞m=1 ∈ H satisfies w′1,m = w1,m, w ′ 2,m = w2,m (m ≤ M) and w′1,m = 0, w′2,m = 0 (m > M). Moreover, we define ι∗ : H → H(M) as the map that extracts first M components. By abuse of notation, we write fW (M) for W (M) ∈ H(M) to indicate fι(W (M)). Finally, let A(M) : H(M) → H(M) be a linear operator such that A(M)W (M) = ι∗(Aι(W (M))), which is just a truncation of A. Similarly, let T aMW\n(M) for W (M) ∈ H(M) be the operator corresponding to T aW for W ∈ H, i.e., T aMW (M) = ι∗(T aι(W (M))).\nB.1 AUXILIARY LEMMAS\nFirst, we show some key propositions to show the main results. To do so, we utilize the result by Muzellec et al. (2020) and Suzuki (2020).\nAssumption 3.\n(i) There exists a constant cµ such that µm ≤ cµm−2.\n(ii) There exist B,U > 0 such that the following two inequalities hold for some a ∈ (1/4, 1) almost surely:\n‖∇L̂(W )‖H ≤ B (∀W ∈ H), ‖∇L̂(W )−∇L̂(W ′)‖H ≤ L‖W −W ′‖H−a (∀W,W ′ ∈ H).\n(iii) For any dataDn, L̂ is three times differentiable. Let∇3L̂(W ) be the third-order derivative of L̂(W ). This can be identified with a third-order linear form and∇3L̂(W )·(h, k) denotes the Riesz representor of l ∈ H 7→ ∇3L̂(W ) · (h, k, l). There exists α′ ∈ [0, 1), Cα′ ∈ (0,∞) such that ∀W,h, k ∈ H, ‖∇3L̂(W ) · (h, k)‖H−α′ ≤ Cα′‖h‖H‖k‖H, ‖∇\n3L̂(W ) · (h, k)‖H ≤ Cα′‖h‖Hα′‖k‖H (a.s.).\nRemark 1. In the analysis of Bréhier & Kopec (2016); Muzellec et al. (2020); Suzuki (2020), Assumption 3-(iii) is imposed for any finite dimensional projectionL(W (M)) as a function onH(M)) for all M ≥ 1 instead of L(W ) as a function of H. 
However, the condition on L(W ) gives a sufficient condition for any finite dimensional projection in our setting. Thus, we employed the current version.\nAssumption 4. For the loss function `(y, f(x)) = (y − f(x))2, the following conditions holds:\n(i) There exists C > 0 such that for any fW (W ∈ H), it holds that\nEX,Y [(`(Y, fW (X))− `(Y, f∗(X)))2] ≤ C(L(fW )− L(f∗)).\n(ii) β > 0 is chosen so that, for any h : Rd → R and x ∈ supp(PX), it holds that EY |X=x [ exp ( − βn (`(Y, h(x))− `(Y, f ∗(x))) )] ≤ 1.\n(iii) There exists Lh > 0 such that ‖∇W `(Y, hW (X)) − ∇W `(Y, hW ′(X))‖H ≤ Lh‖W − W ′‖H (∀W,W ′ ∈ H) almost surely.\n(iv) There exists Ch such that ‖hW − hW ′‖∞ ≤ Ch‖W −W ′‖H (W,W ′ ∈ H). Proposition 3. Assume Assumption 3 holds and β > η. Suppose that ∃R̄ > 0, 0 ≤ `(Y, fW (X)) ≤ R̄ for any W ∈ H (a.s.). Let ρ = 11+λη/µ1 and b = µ1 λ B + cµ βλ . Accordingly, let b̄ = max{b, 1}, κ = b̄+ 1 and V̄ = 4b̄/( √ (1+ρ1/η)/2−ρ1/η). Then, the spectral gap of the dynamics is given by\nΛ∗η = min\n( λ\n2µ1 , 12 ) 4 log(κ(V̄ + 1)/(1− δ)) δ (10)\nwhere 0 < δ < 1 is a real number satisfying δ = Ω(exp(−Θ(poly(λ−1)β))). We define Λ∗0 = limη→0 Λ ∗ η (i.e., V̄ is replaced by 4b̄/( √ (1+exp(− λµ1 ))/2−exp(− λ µ1 ))). We also define CW0 = κ[V̄ + 1] + √\n2(R̄+b)√ δ . Then, for any 0 < a < 1/4, the following convergence bound holds for almost sure\nobservation Dn: for either L = L or L = L̂,\n|EWk [L(Wk)|Dn]− EW∼π∞ [L(W )|Dn]| (11) ≤ C1 [ CW0 exp(−Λ∗ηηk) + √ β\nΛ∗0 η1/2−a\n] = Ξ′k, (12)\nwhere C1 is a constant depending only on cµ, B, L,Cα′ , a, R̄ (independent of η, k, β, λ).\nProposition 4. Assume that Assumptions 3 and 4 hold. Let α̃ := 1/{2(α + 1)} for a given α > 0 and θ be an arbitrary real number satisfying 0 < θ < 1 − α̃. Assume that the true function fo can be represented by hW∗ = fo for W ∗ ∈ Hθ(α+1). Then, if M ≥ min { λα̃/2[θ(α+1)]β1/2[θ(α+1)], λ−1/2(α+1), n1/2[θ(α+1)] } , the expected excess risk is bounded by\nEDn [ E W\n(M) k [L(h T α/2 M W (M) k\n)|Dn]− L(fo) ]\n≤ C max { (λβ) 2α̃/θ 1+α̃/θ n− 1 1+α̃/θ , λ−α̃β−1, λθ, 1/n }\n+ Ξ′k, (13)\nwhere C is a constant independent of n, β, λ, η, k.\nProof. Repeating the same argument in Proposition 1 and using the same notation, Proposition 3 gives\n|E W\n(M) k [L(W (M)k )|Dn]− EW∼π(M)∞ [L(W )|Dn]| ≤ Ξ ′ k,\nfor any 1 ≤ M ≤ ∞. Therefore, we just need to bound the following quantity:∣∣∣EDn [EW (M)∼π(M)∞ [L(hTα/2M W (M))|Dn]]− L(fo)∣∣∣. We define ‖W (M)‖H(M) := ‖ι∗(W (M))‖H for W (M) ∈ H(M). For a > 0, we define H (M) a be the projection of Ha to the first M components, H(M)a = {ι(W ) | W ∈ Ha}, and we define ‖W (M)‖H(M)a := ‖ι ∗(W (M))‖Ha (note that since H (M) a is a finite dimensional linear space, it is same as H as a set). Let ν(M)β be the Gaussian measure on H(M) with mean 0 and covariance (βA(M))−1, and ν̃(M)β be the Gaussian measure corresponding to the random variable T α/2 M W (M) with W (M) ∼ ν(M)β . Let the concentration function be\nφ (M) β,λ ( ) := inf\nW∈H(M)α+1: L(hW )−L(fo)≤ 2\nβλ‖W‖2 H(M)α+1 − log ν̃(M)β ({W ∈ H (M) : ‖W‖H(M) ≤ }) + log(2),\nwhere, if there does not exist W ∈ H(M)α+1 that satisfies the condition inf , then we define φ (M) β,λ ( ) = ∞, then Let ∗ > 0 be\n∗ := max{inf{ > 0 | φβ,λ( ) ≤ β 2}, 1/n}.\nThen, Suzuki (2020) showed the following bound:∣∣∣∣EDn [EW (M)∼π(M)∞ [L(hTα/2(M)W (M))|Dn]− L(fo) ]∣∣∣∣\n≤ C max { ∗2, ( β n ∗2 + n− 1 1+α̃/θ (λβ) 2α̃/θ 1+α̃/θ ) , 1\nn\n} . (14)\nThey also showed that, for M =∞, it holds that ∗2 . max { (λβ)−α̃β−(1−α̃), λθ, n−1 } = max { λ−α̃β−1, λθ, n−1 } .\nSubstituting this bound of ∗ to Eq. 
(14), we obtain Eq. (13) for M =∞. Moreover, in their proof, if M ≥ ( ∗)−1/[θ(α+1)], then\ninf W∈H(M)α+1:\nL(hW )−L(fo)≤ 2\nβλ‖W‖2 H(M)α+1 . β( ∗)2.\nFinally, since ν̃(M)β is a marginal distribution of ν̃ (∞) β , it holds that\n− log ν̃(M)β ({W ∈ H (M) : ‖W‖H(M) ≤ }) ≤ − log ν̃ (∞) β ({W ∈ H : ‖W‖H ≤ }).\nTherefore, as long as M ≥ ( ∗)−1/[θ(α+1)], the rate of ∗ is not deteriorated from M = ∞. In other words, if M ≥ min { λα̃/2[θ(α+1)]β1/2[θ(α+1)], λ−θ/2[θ(α+1)], n1/2[θ(α+1)] } , the bound (13) holds.\nRemark 2. Suzuki (2020) showed Proposition 4 under a condition α > 1/2. However, this is used only to ensure Assumption 3. In our setting, we can show Assumption 3 holds directly and thus we may omit the condition α > 1/2.\nB.2 PROOFS OF PROPOSITION 1, THEOREM 2 AND COROLLARY 1\nHere, we give the proofs of Proposition 1 and Theorem 2 simultaneously. Proof of Proposition 1 and Theorem 2. Let R̄ = (2 ∑∞ m=1 amR + U)\n2. Then, we can easily check that (yi − fW (xi))2 ≤ R̄. As stated above, we use Propositions 3 and 4 to show the statements.\nFirst, we show Proposition 1 for the dynamics of W (M)k for any 1 ≤ M ≤ ∞. However, it suffices to show the statement only for M = ∞ because the finite dimensional version can be seen as a\nspecific case of the infinite dimensional one. Actually, the dynamics of W (M)k is same as that of ι(W̃k) where W̃k ∈ H obeys the following dynamics:\nW̃k+1 = Sη ( W̃k − η∇L̂(fι(W̃k)) + √ 2η\nβ ξk\n) .\nThis is because fι(W̃k) is determined by only the first M components ι(W̃k), ι(∇L̂(fι(W̃k))) = ∇W (M)L̂(fW (M))|W (M)=ι(W̃k) and Sη is a diagonal operator. Since the components of W̃k with indexes higher than M does not affect the objective, smoothness of the objective is not lost. The stationary distribution π(M)∞ of the continuous dynamics corresponding to W (M) is a probability measure onH(M) that satisfies\ndπ (M) ∞\ndν (M) β\n(W (M)) ∝ exp(−βL̂(fW (M))),\nwhere ν(M)β is the Gaussian measure on RM×(d+2) with mean 0 and covariance (βA(M))−1. We can notice that this is the marginal distribution of the stationary distribution of the continuous time counterpart of W̃k: dπ̃∞(W̃ ) ∝ exp(−βL̂(fι(W̃ )))dνβ . Therefore, we just need to consider an infinite dimensional one. For this reasoning, we show the convergence for the original infinite dimensional dynamics (Wk)∞k=1. The convergence of the finite dimensional one (W (M) k ) ∞ k=1 can be shown by the same manner using the argument above.\nTo show Proposition 1, we use Propositions 3. To do so, we need to check validity of Assumptions 3. First, we check Assumption 3. Assumption 3-(i) is ensured by Assumption 1. Next, we check Assumption 3-(ii). The boundedness of the gradient can be shown as follows:\n‖∇L̂(fW )‖2H\n= ∞∑ m=1 (∥∥∥ 1 n n∑ i=1 2(fW (xi)− yi)w̄2,mam[xi; 1]σ′m(w>1,m[xi; 1]) ∥∥∥2\n+ ∣∣∣ 1 n n∑ i=1 2(fW (xi)− yi)am tanh′(w2,m/R)σm(w>1,m[xi; 1]))∞m=1 ∣∣∣2)\n≤ ∞∑ m=1 4R̄R2a2m(d+ 1)C 2 σ + 4R̄a 2 m\n(∵ |fW (xi)− yi| ≤ R̄, ‖σ′m‖∞ ≤ Cσ, ‖ tanh ′ ‖∞ ≤ 1)\n≤4R̄[R2C2σ(d+ 1) + 1] ∞∑ m=1 a2m <∞.\nSimilarly, we can show the Lipschitz continuity of the gradient as\n‖∇L̂(fW )−∇L̂(fW ′)‖2H\n≤ ∞∑ m=1 µ−2α1m µ 2α1 m { 4R̄a2m(d+ 1)C 2 σ[(w2,m − w′2,m)2 +R2‖w1,m − w′1,m‖2]\n+ 4R̄a2m[(w2,m − w′2,m)2/R2 + C2σ(d+ 1)‖w1,m − w′1,m‖2] } (∵ ‖ tanh′′ ‖∞ ≤ 1)\n≤ 4R̄[(d+ 1)C2σ(1 +R2) + 1/R2 + C2σ(d+ 1)] max m∈N {µ−2α1m a2m}\n× ∞∑ m=1 µ2α1m [(w2,m − w′2,m)2 + ‖w1,m − w′1,m‖2]\n. ‖W −W ′‖2H−α1 .\nWe can also verify Assumption 3-(iii) in a similar way. Then, we have verified Assumption 3. 
Therefore, we may apply Proposition 3, and then we obtain Proposition 1.\nNext, we show Theorem 2 by using Proposition 4. For that purpose, we need to we verify Assumption 4. The first condition can be verified as\nEX,Y [((Y − fW (X))2 − (Y − fo(X))2)2] = EX, [((f\no(X) + − fW (X))2 − 2)2] = EX [((f\no(X)− fW (X))2 + 2 (fo(X)− fW (X)))2] = EX [(f\no(X)− fW (X))4 + 2 (fo(X)− fW (X))(fo(X)− fW (X))2 + 2(fo(X)− fW (X))2] = ‖fo − fW ‖2∞EX [(fo(X)− fW (X))2] + U2EX [(fo(X)− fW (X))2] ≤ R̄EX [(fo(X)− fW (X))2] = R̄(L(fW )− L(fo)).\nThe second condition can be checked as follows. Note that\nEY |X=x\n( exp { −β n [(Y − fW (x))2 − (Y − fo(x))2] })\n= E ( exp [ −β n (fo(x)− fW (x))2 − 2 (fW (x)− fo(x))] })\n= exp [ −β n (fo(x)− fW (x))2 ] E { exp [ 2β n (fW (x)− fo(x)) ]} ≤ exp [ −β n (fo(x)− fW (x))2 ] exp [ 1 8 4β2 n2 4U2(fW (x)− fo(x))2 ] .\nThus, under the condition β ≤ n/(2U2), the right hand side can be upper bounded by\nexp [ −β n ( 1− 2U 2β n ) (fW (x)− fo(x))2 ] ≤ 1.\nNext, we check the third and fourth conditions. Noting that\n∇WhW (X) = ( am(µ −α/2 m w2,m)µ −α/2 m [xi; 1]σ ′ m(µ −α/2 m w > 1,m[xi; 1]),\namµ −α/2 m tanh ′(µ−α/2m w2,m/R)σm(µ −α/2 m w > 1,m[xi; 1])) ∞ m=1 )∞ m=1 ,\nwe have that\n‖∇WhW (X)‖2H\n≤ ∞∑ m=1 a2mµ −α m [(d+ 1)R 2C2σ + 1]\n≤ [(d+ 1)R2C2σ + 1] ∞∑ m=1 µ−α+2α1m\n≤ [(d+ 1)R2C2σ + 1]c−α+2α1µ ∞∑ m=1 m−2(−α+2α1) =: C1 <∞\n(∵ −α+ 2α1 = α1 > 1/2), and\n‖∇WhW (X)−∇WhW ′(X)‖2H\n≤ ∞∑ m=1 a2mµ −α m (d+ 1)[µ −α m (w2,m − w′2,m)2 +R2µ−αm ‖w1,m − w′1,m‖2]\n+ a2mµ −α m [µ −α m (w2,m − w′2,m)2/R2 + C2σ(d+ 1)µ−αm ‖w1,m − w′1,m‖2]\n≤ ∞∑ m=1 a2mµ −2α m [(d+ 1)(1 +R 2) + 1/R2 + C2σ(d+ 1)][‖w1,m − w′1,m‖2 + (w2,m − w′2,m)2]\n≤ c2α1µ max m {µ2(α1−α)m }[(d+ 1)(1 +R2) + 1/R2 + C2σ(d+ 1)]‖W −W ′‖2H =: C2‖W −W ′‖2H,\nfor a constant 0 < C2 <∞. Therefore, it holds that\n|hW (X)− hW ′(X)|2 ≤ C1‖W −W ′‖2H,\nwhich yields the forth condition, and we also have\n‖∇W `(Y, hW (X))−∇W `(Y, hW ′(X))‖2H =‖2(hW (X)− Y )∇WhW (X)− 2(hW ′(X)− Y )∇WhW ′(X)‖2H ≤2‖2(hW (X)− Y )(∇WhW (X)−∇WhW (X))‖2H\n+ 2‖2(hW (X)− hW ′(X))∇WhW ′(X)‖2H ≤ 8R̄C2‖W −W ′‖2H + 8C21‖W −W ′‖2H . ‖W −W ′‖2H,\nwhich yields the third condition.\nSince fo ∈ Fγ , there exists W ∗ ∈ Hγ such that fo = fW∗ . Therefore, applying Proposition 4 with α = α1 (α̃ = 1/[2(α1 + 1)]) and θ = γ/(1 + α1) (since γ < 1/2 + α1, the condition θ < 1 − α̃ is satisfied), we obtain that for M ≥ min { λ1/4γ(α1+1)β1/2γ , λ−1/2(α1+1), n1/2γ } , the following excess risk bound holds:\nEDn [ E W\n(M) k\n[L(W (M)k )|Dn]− L(f ∗) ] . max { (λβ) 2α̃/θ 1+α̃/θ n− 1 1+α̃/θ , λ−α̃β−1, λθ, 1/n } + Ξk.\nFinally, by noting L(W (M)k )− L(f∗) = ‖fW (M)k − f ∗‖2L2(PX), we obtain the assertion.\nFinally, we give the proof of Corollary 1.\nProof of Corollary 1. Note that\nfW (x)\n= ∞∑ m=1 amw̄2,mσm(w > 1,m[x; 1])\n= ∞∑ m=1 µα1m w̄2,mµ qα2 m µ −qα2 m µ sα2 m σ(µ −α2 m w > 1,m[x; 1]) (∵ am = µ α1 m , bm = µ α2 m )\n= ∞∑ m=1 µα1+qα2m w̄2,mµ −(s−q)α2 m σ(µ −α2 m w > 1,m[x; 1]).\nTherefore, we may redefine α′1 ← α1 +qα2 and s′ ← s−q so that we obtain another representation of the model Fγ :\nFγ = { fW (x) =\n∞∑ m=1 µ α′1 m w̄2,mσ̌m(w > 1,m[x; 1]) ∣∣∣W ∈ Hγ , ‖W‖Hγ ≤ 1 } ,\nwhere σ̌m(·) = µ−s ′α2 m σ(µ −α2 m ·). Note that the condition 0 ≤ q ≤ s−3 gives s− q ≥ 3. Therefore, Assumptions 3 and 4 are valid even for the redefined parameters α′1, s ′ and σ̌m instead of α1, s and σm. Therefore, we can apply Theorem 2 by simply replacing α1 by α′1 = α1 + qα2." } ]
2021
BENEFIT OF DEEP LEARNING WITH NON-CONVEX NOISY GRADIENT DESCENT: PROVABLE EXCESS RISK BOUND AND SUPERIORITY TO KERNEL METHODS
SP:cc7b030c76352bfec247751d011c0a6d02c8147e
[ "The main goal of the paper is to establish theoretically some previous known results that for scale invariant networks the weight norm has a fixed point with ||w||^4=eta/lambda ||\\tilde{g}|| . They also discuss the angular update, which because of scale invariance is basically equivalent to arccos (1-eta lambda) |w_t|^2/|w_t+1|^2 and it thus comes mainly from the gradient. They have some experiments which they compare with the predicted equilibrium values for the angular update/ weight norm. " ]
In this work, we comprehensively reveal the learning dynamics of neural networks with normalization, weight decay (WD), and SGD (with momentum), which we name Spherical Motion Dynamics (SMD). Most related works study SMD by focusing on the “effective learning rate” under the “equilibrium” condition, i.e., by assuming the convergence of the weight norm. However, their discussions of why this equilibrium condition can be reached in SMD are either absent or unconvincing. Our work investigates SMD by directly exploring the cause of the equilibrium condition. Specifically, 1) we introduce the assumptions that can lead to the equilibrium condition in SMD, and prove that under these assumptions the weight norm approaches its theoretical value in a linear rate regime; 2) we propose the “angular update” as a substitute for the effective learning rate to measure the evolution of a neural network in SMD, and prove that the angular update also approaches its theoretical value in a linear rate regime; 3) we verify our assumptions and theoretical results on various computer vision tasks, including ImageNet and MSCOCO, with standard settings. Experimental results show that our theoretical findings agree well with empirical observations.
[]
[ { "authors": [ "Sanjeev Arora", "Zhiyuan Li", "Kaifeng Lyu" ], "title": "Theoretical analysis of auto rate-tuning by batch normalization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Yoshua Bengio", "Yann LeCun" ], "title": "Scaling learning algorithms towards AI", "venue": "In Large Scale Kernel Machines. MIT Press,", "year": 2007 }, { "authors": [ "Yongqiang Cai", "Qianxiao Li", "Zuowei Shen" ], "title": "A quantitative analysis of the effect of batch normalization on gradient descent", "venue": "Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Vitaliy Chiley", "Ilya Sharapov", "Atli Kosson", "Urs Koster", "Ryan Reece", "Sofia Samaniego de la Fuente", "Vishal Subbiah", "Michael James" ], "title": "Online normalization for training neural networks", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Yonatan Dukler", "Quanquan Gu", "Guido Montúfar" ], "title": "Optimization theory for relu neural networks trained with normalization layers", "venue": "In International conference on machine learning,", "year": 2020 }, { "authors": [ "Priya Goyal", "Piotr Dollár", "Ross Girshick", "Pieter Noordhuis", "Lukasz Wesolowski", "Aapo Kyrola", "Andrew Tulloch", "Yangqing Jia", "Kaiming He" ], "title": "Accurate, large minibatch sgd: Training imagenet in 1 hour", "venue": "arXiv preprint arXiv:1706.02677,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Kaiming He", "Georgia Gkioxari", "Piotr Dollár", "Ross Girshick" ], "title": "Mask r-cnn", "venue": "In Proceedings of the IEEE international conference on computer vision,", "year": 2017 }, { "authors": [ "Kaiming He", "Ross Girshick", "Piotr Dollar" ], "title": "Rethinking imagenet pre-training", "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Elad Hoffer", "Ron Banner", "Itay Golan", "Daniel Soudry" ], "title": "Norm matters: efficient and accurate normalization schemes in deep networks", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Sergey Ioffe" ], "title": "Batch renormalization: Towards reducing minibatch dependence in batch-normalized models", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Sergey Ioffe", "Christian Szegedy" ], "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "venue": "In ICML, pp", "year": 2015 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clement Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Alex Krizhevsky", "Hinton Geoffrey" ], "title": "Learning multiple layers of features from tiny images", "venue": null, "year": 2009 }, { "authors": [ "Aitor Lewkowycz", "Guy Gur-Ari" ], "title": "On the training dynamics of deep networks with l 2 regularization", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Xiang Li", "Chen Shuo", "Yang Jian" ], "title": "Understanding the disharmony between weight normalization family and weight 
decay", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Zhiyuan Li", "Sanjeev Arora" ], "title": "An exponential learning rate schedule for deep learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Zhiyuan Li", "Kaifeng Lyu", "Sanjeev Arora" ], "title": "Reconciling modern deep learning with traditional optimization analyses: The intrinsic learning rate", "venue": "Advances in Neural Information Processing Systems,", "year": 2020 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In European conference on computer vision,", "year": 2014 }, { "authors": [ "Tsung-Yi Lin", "Piotr Dollár", "Ross Girshick", "Kaiming He", "Bharath Hariharan", "Serge Belongie" ], "title": "Feature pyramid networks for object detection", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2017 }, { "authors": [ "Ping Luo", "Xinjiang Wang", "Wenqi Shao", "Zhanglin Peng" ], "title": "Towards understanding regularization in batch normalization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Ningning Ma", "Xiangyu Zhang", "Hai-Tao Zheng", "Jian Sun" ], "title": "Shufflenet v2: Practical guidelines for efficient cnn architecture design", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Chao Peng", "Tete Xiao", "Zeming Li", "Yuning Jiang", "Xiangyu Zhang", "Kai Jia", "Gang Yu", "Jian Sun" ], "title": "Megdet: A large mini-batch object detector", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Boris T Polyak" ], "title": "Some methods of speeding up the convergence of iteration methods", "venue": "USSR Computational Mathematics and Mathematical Physics,", "year": 1964 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Tim Salimans", "Durk P Kingma" ], "title": "Weight normalization: A simple reparameterization to accelerate training of deep neural networks", "venue": "In Advances in Neural Information Processing Systems,", "year": 2016 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2018 }, { "authors": [ "Shibani Santurkar", "Dimitris Tsipras", "Andrew Ilyas", "Aleksander Madry" ], "title": "How Does Batch Normalization Help Optimization", "venue": "arXiv e-prints, art", "year": 2018 }, { "authors": [ "Wenqi Shao", "Tianjian Meng", "Jingyu Li", "Ruimao Zhang", "Yudian Li", "Xiaogang Wang", "Ping Luo" ], "title": "Ssn: Learning sparse switchable normalization via sparsestmax", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Saurabh Singh", "Shankar Krishnan" ], "title": "Filter response normalization layer: Eliminating batch dependence in the training of deep neural 
networks", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Dmitry Ulyanov", "Andrea Vedaldi", "Victor Lempitsky" ], "title": "Instance normalization: The missing ingredient for fast stylization", "venue": "arXiv preprint arXiv:1607.08022,", "year": 2016 }, { "authors": [ "Twan van Laarhoven" ], "title": "L2 regularization versus batch and weight normalization", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Junjie Yan", "Ruosi Wan", "Xiangyu Zhang", "Wei Zhang", "Yichen Wei", "Jian Sun" ], "title": "Towards stabilizing batch statistics in backward propagation of batch normalization", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Guodong Zhang", "Chaoqi Wang", "Bowen Xu", "Roger Grosse" ], "title": "Three mechanisms of weight decay regularization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Goyal" ], "title": "2017) shows that combining with gradual warmup, LSP can enlarge the batch size up to 8192(256× 32) without severe degradation on ImageNet experiments. LSP has been proven extremely effective in a wide range of applications. However, from the perspective of SMD", "venue": null, "year": 2017 } ]
[ { "heading": "1 INTRODUCTION AND BACKGROUND", "text": "Normalization techniques (e.g. Batch Normalization (Ioffe & Szegedy, 2015) or its variants) are one of the most commonly adopted techniques for training deep neural networks (DNN). A typical normalization can be formulated as following: consider a single unit in a neural network, the input is X , the weight of linear layer is w (bias is included in w), then its output is\ny(X;w; γ;β) = g( Xw − µ(Xw)\nσ(wX) γ + β), (1)\nwhere g is a nonlinear activation function like ReLU or sigmoid, µ, σ are mean and standard deviation computed across specific dimension of Xw (like Batch Normalization (Ioffe & Szegedy, 2015), Layer Normalization Ba et al. (2016), Group Normalization (Wu & He, 2018), etc.). β, γ are learnable parameters to remedy for the limited range of normalized feature map. Aside from normalizing feature map, Salimans & Kingma (2016) normalizes weight by l2 norm instead:\ny(X;w; γ;β) = g(X w\n||w||2 γ + β), (2)\nwhere || · ||2 denotes l2 norm of a vector.\nCharacterizing evolving of networks during training. Though formulated in different manners, all normalization techniques mentioned above share an interesting property: the weight w affiliated with a normalized unit is scale-invariant: ∀α ∈ R+, y(X;αW ; γ, β) = y(X;w; γ, β). Due to the scale-invariant property of weight, the Euclidean distance defined in weight space completely fails to measure the evolving of DNN during learning process. As a result, original definition of learning rate η cannot sufficiently represent the update efficiency of normalized DNN.\nTo deal with such issue, van Laarhoven (2017); Hoffer et al. (2018); Zhang et al. (2019) propose “effective learning rate” as a substitute for learning rate to measure the update efficiency of normalized\nneural network with stochastic gradient descent (SGD), defined as\nηeff = η\n||w||22 . (3)\nJoint effects of normalization and weight decay. van Laarhoven (2017) explores the joint effect of normalization and weight decay (WD), and obtains the magnitudes of weight by assuming the convergence of weight, i.e. if wt = wt+1, the weight norm can be approximated as ||wt||2 = O( 4 √ η/λ), where λ is WD coefficient. Combining with Eq.(3), we have ηeff = √ ηλ. A more intuitive demonstration about relationship between normalization and weight decay is presented in Chiley et al. (2019) (see Figure 1): due to the fact that the gradient of scale invariant weight ∂L/∂w (L is the loss function of normalized network without WD part) is always perpendicular to weight w, one can infer that gradient component ∂L/∂w always tends to increase the weight norm, while the gradient component provided by WD always tends to reduce weight norm. Thus if weight norm remains unchanged, or “equilibrium has been reached”1, one can obtain\nwt −wt+1 ||wt||2\n= √ 2ηλ ∂L/∂w\nE||∂L/∂w||2 . (4)\nEq.(4) implies the magnitude of update is scale-invariant of gradients, and effective learning rate should be √ 2ηλ. Li & Arora (2020) manages to estimate the magnitude of update in SGDM, their\nresult is presented in limit and accumulation manner: if both limT−→∞RT = 1/T ∑T t=0 ||wt|| and\nlimT−→∞DT = 1/T ∑T t=0 ||wt −wt+1|| exist, then we have\nlim T−→∞ DT RT = 2ηλ 1 + α . (5)\nThough not rigorously, one can easily speculate from Eq.(5) the magnitude of update in SGDM cases should be √ 2ηλ/(1 + α) in equilibrium condition. 
However, the proof of Eq.(5) requires stronger assumptions: not only the convergence of the weight norm, but also the convergence of the update norm $\|w_{t+1}-w_t\|_2$ (both in the accumulation sense).

As discussed above, all previous qualitative results about the "effective learning rate" (van Laarhoven, 2017; Chiley et al., 2019; Li & Arora, 2020) rely heavily on the equilibrium condition, but none of them explores why this equilibrium condition can be achieved. Only van Laarhoven (2017) briefly interprets the occurrence of equilibrium as a natural result of the convergence of optimization, i.e., when optimization is close to finishing, $w_t = w_{t+1}$, resulting in the equilibrium condition. However, this interpretation contains an apparent contradiction: according to Eq.(4) and (5), when the equilibrium condition is reached, the magnitude of the update is a constant determined only by the hyper-parameters, which means the optimization process has not converged yet. Li & Arora (2020) also notice the non-convergence of SGD with BN and WD, so they do not discuss the reasonableness of the assumptions adopted for Eq.(5). In a word, previous results about the "effective learning rate" under the equilibrium condition can only provide vague insights; they are difficult to connect with empirical observations.

In this work, we comprehensively reveal the learning dynamics of normalized neural networks trained with stochastic gradient descent without/with momentum (SGD/SGDM) and weight decay, named Spherical Motion Dynamics (SMD). Our investigation aims to answer the following question:

Why and how can the equilibrium condition be reached in Spherical Motion Dynamics?

¹"Weight norm remains unchanged" means $\|w_t\|_2 \approx \|w_{t+1}\|_2$; Chiley et al. (2019) call this condition "equilibrium", a term which will also be used in the remainder of this paper. Note that the equilibrium condition is not mathematically rigorous; we only use it for intuitive analysis.

Specifically, our contributions are:

• We introduce the assumptions that can lead to the equilibrium condition in SMD, and justify their reasonableness with extensive experiments. We also prove that, under the given assumptions, the equilibrium condition can be reached as the weight norm approaches its theoretical value in a linear rate regime in SMD. Our assumptions show the equilibrium condition can occur long before the whole optimization finishes;

• We define a novel index, the angular update, to measure the change of a normalized neural network within a single iteration, and derive its theoretical value under the equilibrium condition in SMD. We also prove that the angular update approaches its theoretical value in a linear rate regime along with the weight norm. Our results imply that the update efficiency of SGD/SGDM on a normalized neural network depends only on pre-defined hyper-parameters, in both the SGD and SGDM cases;

• We verify our theorems on different computer vision tasks (including two of the most challenging datasets, ImageNet (Russakovsky et al., 2015) and MSCOCO (Lin et al., 2014)) with various network structures and normalization techniques. Experiments show the theoretical values of the angular update and the weight norm agree well with empirical observations.

Recently, a parallel work (Li et al., 2020b) was published that analyzes the equilibrium condition of normalized neural networks with weight decay in the SGD case. They model the learning dynamics of a normalized network as an SDE, and propose the concept of an "intrinsic learning rate" ($\lambda_e = \lambda\eta$) to measure the convergence rate of the weight norm and the effective learning rate. 
Their main result, in which converging time to equilibrium in SGD case is O(1/(λη)), is consistent with our theory. The major difference between Li et al. (2020b) and our work is that all results from Li et al. (2020b) are qualitatively results or conjectures limited on SGD case. While our work derives the quantitative results on both SGD/SGDM cases, our analysis allows us to precisely predict the value of weight norm/angular update in large scale data experiments, and clarify the difference of approaching rate between SGD and SGDM case. Besides, Li et al. (2020b) conducts empirical study to discuss the connection between the equilibrium state and accuracy of trained model. Our work mainly focuses on the equilibrium condition itself, connection between equilibrium condition and performance of trained model is beyond our discussion in this paper. The experiments we present in main text are only for verification of our theory.\nOur theory on equilibrium condition in Spherical Motion Dynamics implies equilibrium condition mostly relies on the update rules of SGD/SGDM with WD, and scale-invariant property. The cause of equilibrium condition is independent of decrease of loss or trajectory of optimization, but equilibrium condition turns out to significantly affect update efficiency of normalized network by controlling the relative update (Eq.(4)). We believe equilibrium condition is one of the key reason why learning dynamic of normalized neural network is not consistent with traditional optimization theory (Li et al., 2020b). We think it has great potential to study the leaning dynamic of normalized network or develop novel efficient learning strategy under the view of Spherical Motion Dynamics." }, { "heading": "2 RELATED WORK", "text": "Normalization techniques Batch normalization(BN (Ioffe & Szegedy, 2015)) is proposed to deal with gradient vanishing/explosion, and accelerate the training of DNN. Rapidly, BN has been widely used in almost all kinds of deep learning tasks. Aside from BN, more types of normalization techniques have been proposed to remedy the defects of BN (Ioffe, 2017; Wu & He, 2018; Chiley et al., 2019; Yan et al., 2020) or to achieve better performance (Ba et al., 2016; Ulyanov et al., 2016; Salimans & Kingma, 2016; Shao et al., 2019; Singh & Krishnan, 2020). Though extremely effective, the mechanism of BN still remains as a mystery. Existing works attempt to analyze the function of BN: Ioffe & Szegedy (2015) claims BN can reduce the Internal Covariance Shift (ICS) of DNN; Santurkar et al. (2018) argue that the effectiveness of BN is not related to ICS, but the smoothness of normalized network; Luo et al. (2019) shows BN can be viewed as an implicit regularization technique; Cai et al. (2019) proves that with BN orthogonal least square problem can converge at linear rate; Dukler et al. (2020) proves weight normalization can speed up training in a two-layer ReLU network.\nWeight decay Weight decay (WD) is well-known as l2 regularization, or ridge regression, in statistics. WD is also found to be extreme effective when applied in deep learning tasks. Krizhevsky & Geoffrey (2009) shows WD sometimes can even improve training accuracy not just generalization performance; Zhang et al. (2019) show WD can regularize the input-output Jacobian norm and reduce the effective damping coefficient; Li et al. (2020a) discusses the disharmony between WD and weight normalization. 
A more recent work Lewkowycz & Gur-Ari (2020) empirically finds the number of SGD steps T until a model achieves maximum performance satisfies T ∝ 1λη , where λ, η are weight decay factor and learning rate respectively, they interpret this phenomenon under the view of Neural Tangent Kernel (Jacot et al., 2018), showing that weight decay can accelerate the training process. Notice their result has no connection with equilibrium condition discussed in this work. Our results shows the cause of equilibrium condition can be reached long before neural network can get its highest performance.\nEffective learning rate Due to the scale invariant property caused by normalization, researchers start to study the behavior of effective learning rate. van Laarhoven (2017); Chiley et al. (2019) estimate the magnitude of effective learning rate under equilibrium assumptions in SGD case; Hoffer et al. (2018) quantify effective learning rate without equilibrium assumptions, so their results are much weaker; Arora et al. (2019) proves that without WD, normalized DNN can still converge with fixed/decaying learning rate in GD/SGD cases respectively; Zhang et al. (2019) shows WD can increase effective learning rate; Li & Arora (2020) proves standard multi-stage learning rate schedule with BN and WD is equivalent to an exponential increasing learning rate schedule without WD. As a proposition, Li & Arora (2020) quantifies the magnitude of effective learning rate in SGDM case. But none of them have ever discussed why equilibrium condition can be reached. A recent work Li et al. (2020b) studies the convergence of effective learning rate by SDE, proving that the convergence time is of O(1/(λη)), where λ, η are weight decay factor and learning rate respectively. Their result can only provide intuitive understanding, and is limited on SGD case." }, { "heading": "3 PRELIMINARY ON SPHERICAL MOTION DYNAMICS", "text": "First of all, we review the property of scale invariant weight, and depict Spherical Motion Dynamics (SMD) in SGD case. Notice except definitions, all intuitive statements or derivations in this section mostly comes from previous literature, they are not mathematically rigorous. We summarize them to provide background of our topic and preliminary knowledge for readers. Lemma 1. If w is scale-invariant with respect to L(w) , then for all k > 0, we have:\n〈wt, ∂L ∂w ∣∣∣ w=wt 〉 = 0 (6)\n∂L ∂w ∣∣∣ w=kwt = 1 k · ∂L ∂w ∣∣∣ w=wt . (7)\nProof can be seen in Appendix B.1. Lemma 1 is also discussed in Hoffer et al. (2018); van Laarhoven (2017); Li & Arora (2020). Eq.(7) implies gradient norm is influenced by weight norm, but weight norm does not affect the output of DNN, thus we define unit gradient to eliminate the effect of weight norm. Definition 1 (Unit Gradient). If wt 6= 0, w̃ = w/||w||2, the unit gradient of ∂L/∂w|w=wt is ∂L/∂w|w=w̃t .\nSetting k as 1/||wt||2 inEq.(7), we have ∂L ∂w ∣∣∣ w=wt = 1 ||wt|| · ∂L ∂w ∣∣∣ w=w̃t . (8)\nA typical SGD update rule without WD is\nwt+1 = wt − η ∂L ∂w ∣∣∣ w=wt , (9)\nif ||wt||2 = ||wt+1||2, dividing both side of Eq.(9) by ||wt||2, we have\nw̃t+1 = w̃t − η ||wt||22 ∂L ∂w ∣∣∣ w=w̃t = w̃t − ηeff · ∂L ∂w ∣∣∣ w=w̃t . (10)\nEq.(10) shows effective learning rate can be viewed as learning rate of SGD on unit sphere Sp−1 (Hoffer et al., 2018). But effective learning rate still cannot properly represent the magnitude of update, since unit gradient norm is unknown. Therefore we propose the angular update defined below. Definition 2 (Angular Update). 
Assume $w_t$ is a scale-invariant weight from a neural network at iteration $t$; then the angular update $\Delta_t$ is defined as
$$\Delta_t = \angle(w_t,w_{t+1}) = \arccos\left(\frac{\langle w_t,w_{t+1}\rangle}{\|w_t\|\cdot\|w_{t+1}\|}\right), \qquad (11)$$
where $\angle(\cdot,\cdot)$ denotes the angle between two vectors and $\langle\cdot,\cdot\rangle$ denotes the inner product.

According to Eq.(6), $\partial\mathcal L/\partial w$ is perpendicular to the weight $w$. Therefore, if the angular update $\Delta_t$ is small enough, it can be approximated by the first-order Taylor expansion of $\tan\Delta_t$, which reveals its connection to the effective learning rate and the unit gradient norm:
$$\Delta_t \approx \tan(\Delta_t) = \frac{\eta}{\|w_t\|}\cdot\Big\|\frac{\partial\mathcal L}{\partial w}\Big|_{w=w_t}\Big\|_2 = \eta_{\mathrm{eff}}\cdot\Big\|\frac{\partial\mathcal L}{\partial w}\Big|_{w=\tilde w_t}\Big\|_2. \qquad (12)$$
Another deduction from Eq.(6) is that the weight norm always increases, because
$$\|w_{t+1}\|_2^2 = \|w_t\|_2^2 + \Big(\eta\,\Big\|\frac{\partial\mathcal L}{\partial w}\Big|_{w=w_t}\Big\|_2\Big)^2 > \|w_t\|_2^2. \qquad (13)$$
From Eq.(7) we can infer that an increasing weight norm leads to a smaller gradient norm if the unit gradient norm is unchanged. Zhang et al. (2019) state the potential risk that GD/SGD with BN but without WD will converge to a stationary point not by reducing the loss but by reducing the effective learning rate due to the increasing weight norm. Arora et al. (2019) prove that full gradient descent can avoid this risk and converge to a stationary point defined on $S^{p-1}$, but their results still require a sophisticated learning rate decay schedule in the SGD case. Besides, practical experience suggests that training DNNs without WD often suffers from poor generalization (Zhang et al., 2019; Bengio & LeCun, 2007; Lewkowycz & Gur-Ari, 2020).

Now consider the update rule of SGD with WD:
$$w_{t+1} = w_t - \eta\Big(\frac{\partial\mathcal L}{\partial w}\Big|_{w=w_t} + \lambda w_t\Big). \qquad (14)$$
We can approximate the update of the weight norm by
$$\|w_{t+1}\|_2 \approx \|w_t\|_2 - \lambda\eta\|w_t\|_2 + \frac{\eta^2}{2\|w_t\|_2^3}\cdot\Big\|\frac{\partial\mathcal L}{\partial w}\Big|_{w=\tilde w_t}\Big\|_2^2. \qquad (15)$$
The derivation of Eq.(15) is presented in Appendix A.1. Eq.(15) implies that WD provides a direction that reduces the weight norm; hence Chiley et al. (2019) and Zhang et al. (2019) point out the possibility that the weight norm can be steady, but do not explain this clearly. Here we demonstrate the mechanism in more depth (see Figure 1): if the unit gradient norm remains unchanged, note that the "centripetal force" ($-\lambda\eta\|w_t\|_2$) is proportional to the weight norm, while the "centrifugal force" ($\frac{\eta^2}{2\|w_t\|_2^3}\cdot\|\frac{\partial\mathcal L}{\partial w}|_{w=\tilde w_t}\|_2^2$) is inversely proportional to the cube of the weight norm. As a result, the dynamics of the weight norm resemble a spherical motion in physics: an overly large weight norm makes the centripetal force larger than the centrifugal force, decreasing the weight norm, while a too small weight norm makes the centripetal force smaller than the centrifugal force, increasing the weight norm. Intuitively, the equilibrium condition tends to be reached once the number of iterations is sufficiently large.

Notice that the core assumption above is that "the unit gradient norm is unchanged". In fact, this assumption resolves the contradiction presented in Section 1: under the equilibrium condition the relative update $\|w_{t+1}-w_t\|_2/\|w_t\|_2$ stays nonzero, so the convergence of the weight norm is not equivalent to the convergence of the weight itself; a steady unit gradient norm can also make the weight norm converge, and such steadiness does not rely on the optimization having reached an optimum solution. However, a problem with the unit gradient assumption arises: the unit gradient norm cannot remain strictly unchanged during training in practice, so as stated it is not a reasonable assumption. In the next section, we formulate this assumption in a reasonable manner and rigorously prove the existence of the equilibrium condition in the SGD case. 
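As a sanity check on the spherical-motion intuition above, the following is a minimal simulation sketch of the weight-norm recursion implied by Eq.(14)-(15) (equivalently, $x_{t+1}=(1-2\lambda\eta)x_t+L_t\eta^2/x_t$ for $x_t=\|w_t\|_2^2$); all constants and the noise model are illustrative assumptions:

```python
import numpy as np

# Minimal sketch simulating the weight-norm recursion implied by Eq.(15),
# i.e. x_{t+1} = (1 - 2*lam*eta) * x_t + L_t * eta**2 / x_t with x_t = ||w_t||_2^2.
# All constants and the noise model for L_t are illustrative assumptions.

rng = np.random.default_rng(0)
eta, lam, L = 0.2, 1e-4, 1.0   # assumed lr, weight decay, E||unit gradient||^2
x = 5.0                        # assumed initial squared weight norm

for t in range(200000):
    L_t = L * (1.0 + 0.1 * rng.standard_normal())  # noisy squared unit gradient norm
    x = (1.0 - 2.0 * lam * eta) * x + L_t * eta**2 / x

print("simulated ||w||^2:", x)
print("theoretical (w*)^2 = sqrt(L*eta/(2*lam)):", np.sqrt(L * eta / (2 * lam)))
```

Under these assumed values the simulated squared norm settles near $\sqrt{L\eta/(2\lambda)}\approx 31.6$, matching the equilibrium value derived in the next section.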
The discussion of SGD cannot be directly extended to the SGDM case, because the momentum is not always perpendicular to the weight, unlike the unit gradient. However, we can still prove the existence of the equilibrium condition in the SGDM case under modified assumptions." }, { "heading": "4 MAIN RESULTS", "text": "First of all, we prove the existence of the equilibrium condition in the SGD case and provide the approaching rate of the weight norm.

Theorem 1. (Equilibrium condition in SGD) Assume the loss function is $\mathcal L(X;w)$ with scale-invariant weight $w$, and denote $g_t = \frac{\partial\mathcal L}{\partial w}\big|_{X_t,w_t}$, $\tilde g_t = g_t\cdot\|w_t\|_2$. Consider the update rule of SGD with weight decay,
$$w_{t+1} = w_t - \eta\cdot(g_t+\lambda w_t), \qquad (16)$$
where $\lambda,\eta\in(0,1)$. If the following assumptions hold: 1) $\lambda\eta \ll 1$; 2) $\exists L,V\in\mathbb R^+$, $\mathbb E[\|\tilde g_t\|_2^2\,|\,w_t]=L$, $\mathbb E[(\|\tilde g_t\|_2^2-L)^2\,|\,w_t]\le V$; 3) $\exists l\in\mathbb R^+$, $\|\tilde g_t\|_2^2 > l$, $l > 2\big[\frac{2\lambda\eta}{1-2\lambda\eta}\big]^2 L$; then there exists $B>0$ such that, with $w^* = \sqrt[4]{L\eta/(2\lambda)}$, we have
$$\mathbb E\big[\|w_T\|_2^2-(w^*)^2\big]^2 \le (1-4\lambda\eta)^T B + \frac{V\eta^2}{l}. \qquad (17)$$

Remark 1. The theoretical value of the weight norm $w^*$ in Theorem 1 is consistent with the derivation of the weight norm in equilibrium in van Laarhoven (2017), though van Laarhoven (2017) assumes the equilibrium condition has been reached in advance and hence cannot provide the approaching rate or the scale of the bias/variance. The vanishing term $(1-4\lambda\eta)^T B$ in Eq.(17) is consistent with the convergence time $O(1/(\lambda\eta))$ presented in Li et al. (2020b).

The proof of Theorem 1 is given in Appendix B.2. It shows that the square of the weight norm approaches its theoretical value in a linear rate regime (when the vanishing term is larger than the noise term in Eq.(17)), and its variance is bounded by $\frac{V\eta^2}{l}$, which is empirically small. Now we discuss the reasonableness of the assumptions in Theorem 1. Assumption 1 is consistent with commonly used settings; assumptions 2 and 3 imply that $\mathbb E\|\tilde g_t\|_2^2$ remains unchanged within several iterations and that its lower bound cannot be far from its expectation.

We need to clarify assumption 2 further: we do not require $\mathbb E\|\tilde g_t\|_2^2$ to be constant across the whole training process, only that it remains unchanged locally. On one hand, a small learning rate $\eta$ guarantees that $\mathbb E\|\tilde g_t\|_2^2$ changes slowly; on the other hand, when $\mathbb E\|\tilde g_t\|_2^2$ changes, the square of the weight norm approaches its new theoretical value as Theorem 1 describes. The experimental results in Figure 2 strongly justify our analysis. We also conduct extensive experiments to verify our claim further; please refer to Appendix C.1.

Now we extend Theorem 1 to the SGDM case. SGDM is more complex than SGD since the momentum is not always perpendicular to the weight; therefore we need to modify the assumptions.

Theorem 2. (Equilibrium condition in SGDM) Consider the update rule of SGDM (the heavy ball method (Polyak, 1964)):
$$v_t = \alpha v_{t-1} + g_t + \lambda w_t, \qquad (18)$$
$$w_{t+1} = w_t - \eta v_t, \qquad (19)$$
where $\lambda,\eta,\alpha\in(0,1)$. If the following assumptions hold: 4) $\lambda\eta \ll 1$, $\lambda\eta < (1-\sqrt\alpha)^2$; 5) defining $h_t = \|g_t\|_2^2 + 2\alpha\langle v_{t-1},g_t\rangle$ and $\tilde h_t = h_t\cdot\|w_t\|_2^2$, $\exists L,V\in\mathbb R^+$, $\mathbb E[\tilde h_t\,|\,w_t]=L$, $\mathbb E[(\tilde h_t-L)^2\,|\,w_t]\le V$; 6) $\exists l\in\mathbb R^+$, $h_t > l$, $l > 2\big[\frac{6\lambda\eta}{(1-\alpha)^3(1+\alpha)-8\lambda\eta(1-\alpha)}\big]^2 L$; then there exists $B>0$ such that, with $w^* = \sqrt[4]{L\eta/\big(\lambda(1-\alpha)(2-\lambda\eta/(1+\alpha))\big)}$, we have
$$\mathbb E\big[\|w_T\|_2^2-(w^*)^2\big]^2 \le 3B\cdot\Big(1-\frac{4\lambda\eta}{1-\alpha}\Big)^T + \frac{3V\eta^2(1+4\alpha^2+\alpha^4)}{l(1-\alpha)^4}. \qquad (20)$$

Remark 2. So far, no other work has rigorously proven that the equilibrium condition can be reached in the SGDM case. 
Even the most relevant work (Li et al., 2020b) only provides their conjecture on approaching rate of weight norm in SGDM, they speculate that the time of approaching to equilibrium should be O(1/(λη)), same order as approaching time in SGD case, their conjecture cannot provide further insight. While our results (vanishing terms in Eq.(17), (20) respectively) can clearly reflect the difference: the approaching rate of SGDM should be 1/(1 − α) times faster than rate of SGD with same ηλ. α is usually set as 0.9, so SGDM can reach equilibrium condition much faster than SGD.\nProof can be seen in Appendix B.3. Like assumption 1, assumption 4 is also satisfied for commonly used hyper-parameter settings. Besides, λη < (1 − √ α)2 is also mentioned in Li & Arora (2020) for other purpose; Assumption 5 shows not unit gradient gradient norm ||g̃t||22 but an adjusted value h̃t dominates the expectation and variance of the weight norm square. We empirically find the expectation of 〈vt−1, gt〉 is very close to 0, therefore the behavior of h̃t is similar to that of ||g̃t||22 (see (d) in Figure 2), making square of weight norm approaching its theoretical value in SGDM case. We leave theoretical analysis on h̃t as future work. As for assumption 6, commonly used settings (ηλ << 1) can make 2[ 6λη(1−α)3(1+α)−8λη(1−α) ]\n2 as an very small lower bound for l/L. The experiments on justification of assumptions 4,5,6 can be seen in Figure 2 and appendix C.1. Comparing with Eq.(17) and Eq.(20), it implies that with same η, λ, SGDM can reach equilibrium condition much faster than SGD, but may have a larger variance, our experiments also verify that(see (b), (e) in Figure 2).\nSince we have proven weight norm will approach to its theoretical value in SMD in a linear rate regime, we can derive the theoretical value of angular update ∆t and its variance. Theorem 3. (Theoretical value of Angular Update) In SGD case, if assumptions in theorem 1 holds, then ∃C > 0, we have\nE(∆T − √\n2λη)2 < (1− 4ηλ)TC + V Ll\n(21)\nIn SGDM case, if assumptions in theorem 2 holds, ∃C > 0, we have E(∆t − √ 2λη\n1 + α )2 < (1− 4ηλ 1− α )TC + (1− α2)(1 + 4α2 + α4)V 4Llα4\n(22)\nRemark 3. Theoretical value of angular update in Theorem 3 is consistent with Eq.(4, 5) from Chiley et al. (2019); Li & Arora (2020) respectively. Notice the variance term in Eq.(21,22) is of O(V/Ll), it is too large comparing with its empirical value, we leave it as a future work to improve the bound of variance term. Though connection between performance of neural network and equilibrium condition is beyond the main discussion of this paper, the findings of a parallel work (Li et al., 2020b) can inspire us the possible advantage of momentum: Li et al. (2020b) interprets that smaller λη can get higher performance when training DNN, but the approaching rate (O(λη)) is slow; larger λη has faster approaching rate but it can only get bad performance. Inspired by Li et al. (2020b), we suspect that ∆t ∝ √ λη is highly correlated to the performance of DNN: smaller angular update in equilibrium condition can lead to better performance of DNN. According to theorem 3, with same λη, momentum method has faster approaching rate(λη/(1− α)) and smaller angular update ( √ 2λη/(1 + α)) than pure SGD. This means momentum method can simultaneously accelerate training process and improve the final performance, comparing pure SGD method. We leave this conjecture as a future work.\nProof of theorem 3 is shown in Appendix B.4. 
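To illustrate how the angular update of Definition 2 can be measured against the theoretical values in Theorem 3, here is a minimal sketch; the hyper-parameters and the synthetic weight snapshots are assumptions for illustration only:

```python
import math
import numpy as np

# Sketch (assumed values, not the paper's code): compare the measured angular
# update between consecutive weight snapshots with the theoretical values of
# Theorem 3: sqrt(2*lam*eta) for SGD and sqrt(2*lam*eta/(1+alpha)) for SGDM.

def angular_update(w_t, w_next):
    cos = np.dot(w_t, w_next) / (np.linalg.norm(w_t) * np.linalg.norm(w_next))
    return math.acos(np.clip(cos, -1.0, 1.0))  # Eq.(11)

eta, lam, alpha = 0.1, 1e-4, 0.9
print("SGD  target:", math.sqrt(2 * lam * eta))
print("SGDM target:", math.sqrt(2 * lam * eta / (1 + alpha)))

# Hypothetical snapshots: w_next is w_t rotated by roughly the SGD target angle.
w_t = np.array([1.0, 0.0])
theta = math.sqrt(2 * lam * eta)
w_next = np.array([math.cos(theta), math.sin(theta)])
print("measured:", angular_update(w_t, w_next))
```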
According to theorem 3, the theoretical value of angular update and its approaching rate almost only depends on hyper-parameters: weight decay factor λ, learning rate η, (and momentum factor α). It implies update efficiency of scale-invariant weights within a single step is totally controlled by predefined hyper-parameters in equilibrium condition, regardless other attributes of the weights (shape, size, position in network structure, or effects from other weights). That’s the key reasons why we propose Angular Update (Eq.(11)) to replace effective learning rate (Eq.(3)): effective learning rate can only reflect the influence of weight norm, it cannot reveal how unit gradient norm affect the relative update (see Eq.(10)), while angular update in equilibrium condition implies that both weight norm and unit gradient norm do not affect the scale of update within a single iteration, only hyper-parameters (learning rate, weight decay factor, momentum factor) matter. Our experiment results strongly prove our claim (see Figure 3(a), 3(b), 3(d), 3(e))." }, { "heading": "5 EXPERIMENTS", "text": "In this section, we show that our theorems can agree well with empirical observations on ImageNet (Russakovsky et al., 2015) and MSCOCO (Lin et al., 2014). We conduct experiments in two cases. In the first case we train neural network with fixed learning rate to verify our theorems in SGD and SGDM, respectively; in the second case we investigate SMD with more commonly used settings, multi-stage learning rate schedule." }, { "heading": "5.1 FIXED LEARNING RATE", "text": "With fixed learning rate, we train Resnet50 (He et al., 2016) with SGD/SGDM on ImageNet. Learning rate is fixed as η = 0.2; WD factor is λ = 10−4; with SGDM, the momentum factor is α = 0.9. Figure 2 presents the unit gradient norm square, weight norm, and angular update of the weights from LAYER.2.0.CONV2 of Resnet50 in SGD and SGDM cases, respectively. It can be inferred from Figure 2 the behavior of ||g̃t||22, h̃t and hyper-parameter settings satisfy our assumptions in Theorem 1 and 2, therefore theoretical value of weight norm and angular update agree with empirical value very well. We also can observe SGDM can achieve equilibrium more quickly than SGD. According to Eq.(17),(20), the underlying reason might be with same learning rate η and WD factor λ, approaching rate of SGDM (1− λη1−α ) is smaller than that of SGD (1− λη)." }, { "heading": "5.2 MULTI-STAGE LEARNING RATE SCHEDULE", "text": "Now we turn to study the behavior of angular update with SGDM and multi-stage leanring rate schedule on Imagenet (Russakovsky et al., 2015) and MSCOCO (Lin et al., 2014). In ImageNet classification task, we still adopt Resnet50 as baseline for it is a widely recognized network structure.The training settings rigorously follow Goyal et al. (2017): learning rate is initialized as 0.1, and divided by 10 at 30, 60, 80-th epoch; the WD factor is 10−4; the momentum factor is 0.9. In MSCOCO experiment, we conduct experiments on Mask-RCNN (He et al., 2017) benchmark using a Feature Pyramid Network (FPN) (Lin et al., 2017), ResNet50 backbone and SyncBN (Peng et al., 2018) following the 4x setting in He et al. (2019): total number of iteration is 360, 000, learning rate is initialized as 0.02, and divided by 10 at iteration 300, 000, 340, 000; WD coefficient is 10−4.\nThere appears to be some mismatch between theorems and empirical observations in (a), (b) of Figure 3: angular update in the last two stages is smaller than its theoretical value. 
This phenomenon can be well interpreted by our theory: according to Theorem 1, 2, when equilibrium condition is reached, theoretical value of weight norm satisfies ||wt||2 ∝ 4 √ η λ , therefore when learning rate is\ndivided by k, equilibrium condition is broken, theoretical value of weight norm in new equilibrium condition should get 4 √ 1/k smaller. But new equilibrium condition cannot be reached immediately (see (c), (f) in Figure 3), following corollary gives the least number of iterations to reach the new equilibrium condition. Corollary 3.1. In SGD case with learning rate η, WD coefficient λ, if learning rate is divided by k, and unit gradient norm remains unchanged, then at least d[log(k)]/(4λη)e iterations are required to reach the new equilibrium condition; In SGDM case with momentum coefficient α, then at least d[log(k)(1− α)]/(4λη)e iterations are required to reach the new equilibrium condition.\nCorollary 3.1 also implies SGD/SGDM with smaller learning rate requires more iterations to reach new equilibrium condition. Hence, in second learning rate stage in Imagenet experiments, angular update can reach its new theoretical value within 15 epochs, but during last two learning rate stages of Imagenet/MSCOCO experiments, SGDM cannot completely reach new equilibrium by the end of training procedure. As a result, we observe empirical value of angular update seems smaller than its theoretical value. Based on our theorem, we can bridge the gap by skipping the intermediate process from old equilibrium to new one. Specifically, when learning rate is divided by k, norm of scale-invariant weight is also divided by 4 √ k, SGDM can reach in new equilibrium immediately. Experiments((b),(e) in Figure 3) show this simple strategy can make angular update always approximately equal to its theoretical value across the whole training process though learning rate changes." }, { "heading": "6 CONCLUSION", "text": "In this paper, we comprehensively reveal the learning dynamics of DNN with normalization, WD, and SGD/SGDM, named as Spherical Motion Dynamics (SMD). Different from most related works (van Laarhoven, 2017; Hoffer et al., 2018; Chiley et al., 2019; Li & Arora, 2020), we directly explore the cause of equilibrium. Specifically, we introduce the assumptions that can lead to equilibrium condition, and show these assumptions are easily satisfied by practical implementation of DNN; Under given assumptions, we prove equilibrium condition can be reached at linear rate, far before optimization has converged. Most importantly, we show our theorem is widely valid, they can be verified on one of most challenging computer vision tasks, beyond synthetic datasets. We believe our theorems on SMD can bridge the gap between current theoretical progress and practical usage on deep learning techniques." }, { "heading": "A MISSING DERIVATIONS", "text": "" }, { "heading": "A.1 APPROXIMATION OF WEIGHT NORM UPDATE", "text": "Now considering the update rule of SGD with WD:\nwt+1 = wt − η( ∂L ∂w ∣∣∣ w=wt + λwt). (23)\nSince ∂L∂w ∣∣∣ w=wt is perpendicular to wt, we have\n||wt+1||22 = ||wt − η( ∂L ∂w ∣∣∣ w=wt + λwt)||22 (24)\n= (1− λη)2||wt||22 + η2|| ∂L ∂w ∣∣∣ w=wt ||22. (25)\nTherefore ||wt+1||2 − ||wt||2 = √\n(1− λη)2||wt||22 + η2|| ∂L ∂w ∣∣∣ w=wt ||22 − ||wt||2 (26)\n= (1− λη)2||wt||22 + η2|| ∂L∂w ∣∣∣ w=wt\n||22 − ||wt||22√ (1− λη)2||wt||22 + η2|| ∂L∂w ∣∣∣ w=wt ||22 + ||wt||2 . 
(27)\nNow assume η, λ is extremely small, so that\n(1− λη)2||wt||22 + η2|| ∂L ∂w ∣∣∣ w=wt ||22 − ||wt||22 ≈ −2λη||wt||22 + η2|| ∂L ∂w ∣∣∣ w=wt\n||22 (28)√ (1− λη)2||wt||22 + η2||\n∂L ∂w ∣∣∣ w=wt ||22 + ||wt||2 ≈ 2||wt||2. (29)\nTherefore, we have\n||wt+1||2−||wt||2 ≈ −λη||wt||2+ η2 2||wt||2 || ∂L ∂w ∣∣∣ w=wt ||22 = −λη||wt||2+ η2 2||wt||32 || ∂L ∂w ∣∣∣ w=w̃t\n||22 (30)\nwhere ∂L∂w ∣∣∣ w=w̃t is the unit gradient defined in Definition 1." }, { "heading": "B PROOF OF THEOREMS", "text": "Remark 4. In the following context, we will use the following conclusion multiple times: ∀δ, ε ∈ R, if |δ| 1, |ε| 1, then we have:\n(1 + δ)2 ≈ 1 + 2δ, √\n1 + δ ≈ 1 + δ 2 ,\n1\n1 + δ ≈ 1− δ, (1 + δ)(1 + ε) ≈ 1 + δ + ε. (31)" }, { "heading": "B.1 PROOF OF LEMMA 1", "text": "Proof. Given w0 ∈ Rp\\{0}, since ∀k > 0,L(w0) = L(kw0), then we have\n∂L(w) ∂w ∣∣∣ w=w0 = ∂L(kw) ∂w ∣∣∣ w=w0 = ∂L(w) ∂w ∣∣∣ w=kw0 · k (32)\n∂L(kw) ∂k ∣∣∣ w=w0 = 〈∂L(w) ∂w ∣∣∣ w=kw0 ,w0〉 = 1 k · 〈∂L(w) ∂w ∣∣∣ w=w0 ,w0〉 = 0 (33)" }, { "heading": "B.2 PROOF OF THEOREM 1", "text": "Lemma 2. If the sequence {xt}∞t=1 satisfies\nxt ≥ αxt−1 + L\nxt−1 (34)\n. where x1 > 0, L > 0\nThen, we have\nxt ≥ √ L\n1− α − αt−1|\n√ L\n1− α − x1| (35)\nProof. If xt ≥ √ L 1−α , since √ L/(1− α) ≥ 2 √ αL then we have\nxt+1 ≥ αxt + L\nxt ≥ α\n√ L\n1− α + L√ L/(1− α) =\n√ L\n1− α , (36)\nwhich means ∀k > t, xk ≥ √ L/(1− α). If xt < √ L/(1− α), then we have√\nL\n1− α − xt+1 ≤ (α− √ L(1− α) xt )( √ L 1− α − xt) < α( √ L 1− α − xt). (37)\nTherefore, by deduction method we have if xT < √ L/(1− α), then ∀t ∈ [1, T − 1], we have\n0 < xt < xt+1 < XT <\n√ L\n1− α , (38)\n(\n√ L\n1− α − xt) < αt−1(\n√ L\n1− α − x1). (39)\nIn summary, we have\nxt ≥ √ L\n1− α − αt−1|\n√ L\n1− α − x1| (40)\nProof of Theorem 1. since 〈wt, gt〉 = 0, then we have:\n||wt+1||22 = (1− ηλ)2||wt||22 + ||g̃t||22η2\n||wt||22 (41)\nDenote xt as ||wt||22, Lt as ||g̃t||22 and omit O((ηλ)2) part. Then Lt > l, E[Lt|xt] = L, V ar(Lt|xt) = E[(Lt − L)2|xt] < V . Eq.(41) is equivalent to\nxt+1 = (1− 2λη)xt + Ltη\n2\nxt . (42)\nAccording to Lemma 2, we have\nxt >\n√ lη\n2λ − (1− 2λη)t−1|x0 −\n√ lη\n2λ |. (43)\nEq.(43) implies when t > 1 + log(( √ 2−1) √ lη/(4λ))−log(|x0− √ lη/(4λ)|) log(1−2λη) , we have xt > √ lη 4λ .\nNow, denote x∗ as the √\nLη 2λ , then we have\nE[(xt+1 − x∗)2|xt] = E[((1− 2λη − Lη2\nxtx∗ )(xt − x∗) + Lt − L xt )2|xt]\n= (1− 2λη − Lη 2\nxtx∗ )2(xt − x∗)2 +\nE[(Lt − L)2|xt]η4\nx2t\n(44)\nIf t > T (α, λ, η, l, x0) = 1 + log((\n√ 2−1) √ lη/(4λ))−log(|x0− √ lη/(4λ)|)\nlog(1−2λη) , Eq.(43) implies 1− 2λη − √ 2L\nl · 2λη < 1− 2λη − Lη\n2\nx∗xt . (45)\nCombining with assumption 3 in Theorem 1, we have 1− 2λη − √\n2L l · 2λη > 0, which means\n0 < 1− 2λη − L x∗xt . (46)\nCombining with Eq.(44), (43), (46), if t > T (α, λ, η, l, x0), we have\nE[(xt+1 − x∗)2|xt] < (1− 2λη)2(xt − x∗)2 + 4V η3λ\nl . (47)\nConsidering the expectation with respect to the distribution of xt, we have\nE(xt+1 − x∗)2 < (1− 2λη)2E(xt − x∗)2 + 4V η3λ\nl . (48)\nApproximate (1− 2λη)2 = 1− 4λη, and iterate Eq.(48) for t− T (α, λ, η, l, x0) times, we have\nE(xt − x∗)2 < (1− 4λη)t−T (α,λ,η,l,x0)E(xT (α,λ,η,l,x0) − x ∗)2 +\nV η2\nl . (49)\nT (α, λ, η, l, x0) is finite, and only depends on α, λ, η, l, x0. Hence we can set B = max{(1 − 4λη)−tE(xt − x∗)2|t = 0, 1, 2, ..., T (α, λ, η, l, x0)}, note B is finite by\nEx2t+1 = E[(1−2λη)2x2t+2(1−2λη)Lt+ L2t x2t ] < (1−4λη)Ex2t+2(1−2λη)L+ V + L2 min(x20, lη/(2λ)) . (50) therefore, ∀t > 0, we have\nE(xt − x∗)2 < (1− 4λη)tB + V η2\nl . (51)" }, { "heading": "B.3 PROOF OF THEOREM 2", "text": "Lemma 3. 
Assume α, β, ε ∈ (0, 1), where β << 1. Denote diag(1 − 2β1−α , α, α 2 + 2α\n2\n1−αβ as Λ,\nk = ( 1(1−α)2 ,− 2α (1−α)2 , α2 (1−α)2 ) T , e = (1, 1, 1)T . If ε < 13 [ 1−α2 β − 8 1−α ], then ∀d ∈ R p, we have\n||(Λ− εβ(1− α)2keT )d||22 < (1− 4β\n1− α )||d||22 (52)\nProof. Omit O(β2) part, we have\n||(Λ−εβ(1−α)2keT )d||22 = (1− 4β\n1− α )d21+α 2d22+α 4(1+\n4β\n1− α )d23−2εβ(d1+d2+d3)(d1−2α2d2+α4d3)\n(53) (d1 + d2 + d3)(d1 − 2α2d2 + α4d3)\n= [d1 + (1− 2α2)d2]2\n2 +\n[d1 + (1 + α 4)d3] 2\n2 − (1/2 + 2α4)d22 −\n1 + α8\n2 + (α4 − 2α2)d2d3\n≥− (1 2 + 2α4 + α4 2 )d22 − ( α8 2 + α4 2 − 2α2 + 5 2 )d23 ≥− 3d22 − 5\n2 d23.\n(54)\nThen we have\n||(Λ− εβ(1− α)2keT )d||22 ≤ (1− 4β\n1− α )d21 + (α 2 + 3βε)d22 + (α 4 +\n4βα4 1− α + 5βε 2 )d23. (55)\nSince ε < 13 [ 1−α2 β − 8 1−α ], we have\nα2 + 3βε < 1− 4β 1− α , (56)\nα4 + 4βα4\n1− α +\n5βε 2 < 1− 4β 1− α . (57)\nHence, we have\n||(Λ− εβ(1− α)2keT )d||22 < (1− 4β\n1− α )||d||22 (58)\nProof of Theorem 2. The update rule is\nwt+1 = wt − ηvt\n= wt − η(αvt−1 + g̃t ||wt|| + λwt) = wt − η(α wt−1 −wt\nη + g̃t ||wt|| + λwt)\n= (1− ηλ+ α)wt − αwt−1 − gtη.\n(59)\nThen we have\n||wt+1||22 = (1− ηλ+ α)2||wt||22 − 2α(1 + α− ηλ)〈wt,wt−1〉+ α2||wt−1||22 + ||gt||22η2 + 2〈αwt−1, gtη〉 = (1− ηλ+ α)2||wt||22 − 2α(1 + α− ηλ)〈wt,wt−1〉+ α2||wt−1||22 + ||gt||22η2 + 2〈α(wt + ηvt−1), gtη〉 = (1− ηλ+ α)2||wt||22 − 2α(1 + α− ηλ)〈wt,wt−1〉+ α2||wt−1||22\n+ h̃tη\n2\n||wt||22 .\n(60)\n〈wt+1,wt〉 = (1 + α− λη)||wt||2 − α〈wt,wt−1〉, (61) Let Xt,A, e denote:\nXt = ( at bt ct ) = ||wt||22〈wt,wt−1〉 ||wt−1||22 , (62) A =\n (1 + α− λη)2 −2α(1 + α− λη) α21 + α− λη −α 0 1 0 0 , (63) e =\n( 1 0 0 ) , (64)\n(65)\nrespectively. The Eq.(60), (61) is formulated as a iterative map:\nXt+1 = AXt + h̃tη\n2\neTXt e. (66)\nWhen λη < (1− √ α)2, the eigen value of A are all real number:\nλ1 = (1 + α− λη)2 + (1 + α− λη)\n√ (1 + α− λη)2 − 4α\n2 − α = 1− 2λη 1− α +O(λ2η2), (67)\nλ2 = α, (68)\nλ3 = (1 + α− λη)2 − (1 + α− λη)\n√ (1 + α− λη)2 − 4α\n2 − α = α2 + 2α\n2\n(1− α) λη +O(λ2η2),(69)\nand they satisfy 0 < λ3 < λ2 = α < λ1 < 1. (70)\nTherefore, A can be formulated as S−1AS = Λ, (71)\nwhere Λ is a diagonal matrix whose diagonal elements are the eigen value of A; the column vector of S is the eigen vectors of A, note the fromulation of S,Λ are not unique. Specifically, we set Λ,S as\nΛ = ( λ1 0 0 0 λ2 0 0 0 λ3 ) , (72)\nS = 1 1 11+α−λη α+λ1 1+α−λη α+λ2 1+α−λη α+λ3\n1 λ1\n1 λ2\n1 λ3 . (73) Moreover, the inverse of S exists, and can be explicitly expressed as\nS−1 = (α+λ1)λ1 (λ1−α)(λ1−λ3) − 2λ1(α+λ1)(α+λ3) (λ1−λ3)(λ1−α)(1+α−β) λ1λ3(α+λ1) (λ1−α)(λ1−λ3) − 2α 2\n(λ1−α)(α−λ3) 2α(α+λ1)(α+λ3) (λ1−α)(α−λ3)(1+α−β) − 2αλ1λ3\n(λ1−α)(α−λ3) (α+λ3)λ3\n(α−λ3)(λ1−λ3) − 2λ3(α+λ3)(α+λ1) (λ1−λ3)(α−λ3)(1+α−β) (α+λ3)λ1λ3 (α−λ3)(λ1−λ3) . (74) let Yt = S−1Xt, combining with Eq.(66), we have\nYt+1 = ΛYt + h̃tη\n2\n(STe)TYt S−1e. (75)\nCombining with Eq.(73) and Eq.(74), and set Yt = (ãt, b̃t, c̃t)T , we rewrite Eq.(75) as\nãt+1 = λ1ãt + h̃tη\n2\nãt + b̃t + c̃t · (α+ λ1)λ1 (λ1 − α)(λ1 − λ3) , (76)\nb̃t+1 = αb̃t − h̃tη\n2\nãt + b̃t + c̃t · 2α\n2\n(λ1 − α)(α− λ3) , (77)\nc̃t+1 = λ3c̃t + h̃tη\n2\nãt + b̃t + c̃t · (α+ λ3)λ3 (α− λ3)(λ1 − λ3) . (78)\nNote ||wt||22 = ãt + b̃t + c̃t. Now we prove the following inequations by mathematical induction\nb̃t < 0, (79) c̃t > 0, (80)\n(α− λ1)b̃t > (λ1 − λ3)c̃t, (81) ãt + b̃t + c̃t > 0, (82)\nãt+1 + b̃t+1 + c̃t+1, > λ1(ãt + b̃t + c̃t) + h̃tη\n2\nãt + b̃t + c̃t , (83)\nSince the start point X1 = (a1, a1, a1)T (a1 > 0), the start point Y1 = S−1X1. 
Combining with Eq.(74), we have\nb̃1 = − 2α2λη\n(λ1 − α)(α− λ3) a1, (84)\nc̃1 = λ3(λ3 + α)(1− α+ λη) (λ3 − α)(λ1 − λ3)(1 + α− λη) ( 1− α− λη 1− α+ λη − λ1)a1, (85)\nby which we have (α− λ1)b̃1 > (λ1 − λ3)c̃1. Besides ã1 + b̃1 + c̃1 = eTSY1 = eTX1 = a1 > 0. Suppose for t = T , Eq. (79), (80), (81), (82) hold, combining with Eq.(77), (78), we can derive b̃T+1 < 0, ãT+1 > 0, so Eq.(79), (80) hold for t = T + 1; Besides, we have\n(α− λ1)b̃T+1 = α(α− λ1)b̃T + h̃tη\n2\nãT + b̃T + c̃T · 2α\n2\n(α− λ3)\n> λ3(λ1 − λ3)c̃T + h̃tη\n2\nãT + b̃T + c̃T · (α+ λ3)λ3\n(α− λ3) = (λ1 − λ3)c̃T+1,\n(86)\nthus Eq.(81) holds for t = T + 1. Sum Eq.(76), Eq.(77), Eq.(78), due to Eq.(81) we have\nãT+1 + b̃T+1 + c̃T+1 = λ1ãT + αb̃T + λ3c̃T + h̃tη\n2\nãT + b̃T + c̃T\n> λ1(ãT + b̃T + c̃T ) + h̃tη\n2\nãT + b̃T + c̃T ,\n(87)\nEq.(83) holds for t = T + 1, combining with the fact that ãT + b̃T + c̃T > 0, we have ãT+1 + b̃T+1 + c̃T+1 > 0.\nAccording to Lemma 2, we can estimate the lower bound of ãt + b̃t + c̃t: when t > 1 + log(( √ 2−1) √ lη/(4λ))−log(|||w0||22− √ lη/(4λ)|)\nlog(1−2λη) ,\nãt + b̃t + c̃t ≥\n√ lη 2(1− λ1) ≈ √ lη(1− α) 4λ (88)\nNow we can analyze the expectation of distance(l2 norm) between Yt = (ãt, b̃t, c̃t)T and the fixed point Y ∗ = (ã∗, b̃∗, c̃∗)T which satisfies\nY ∗ = ΛY ∗ + Lη2\n(STe)TY ∗ S−1e. (89)\nAssume xt = ãt + b̃t + c̃t, x∗ = ã∗ + b̃∗ + c̃∗ > 0, then we have:\nYt+1 − Y ∗ = (Λ− Lη2\nxtx∗ keT )(Yt − Y ∗) +\n(h̃t − L)η2\nxt k (90)\nwhere k = (k1, k2, k3)T = S−1e. In the following context, we will omit the O(λ2η2) part since λη 1. k1, k2, k3 can be approximated as\nk1 = 1\n(1− α)2 +O(β), (91)\nk2 = − 2α\n(1− α)2 +O(β), (92)\nk3 = α2\n(1− α)2 +O(β). (93)\nThen we have\nE[||Yt+1 − Y ∗||22|Yt] = ||(Λ− L\nxtx∗ keT )(Yt − Y ∗)||22 +\nE[(h̃t − L)2|Yt]η4\nx2t ||k||22 (94)\nThe fixed point of Eq.(89) is computed as x∗ = ã∗ + b̃∗ + c̃∗ = √\nLη\nλ(1−α)(2− λη1+α ) , and we have\nknown that if t > 1 + log(( √ 2−1) √ lη/(4λ))−log(|||w0||22− √ lη/(4λ)|) log(1−2λη) , xt > √ lη(1−α) 4λ ,, therefore we have\nLη2 xtx∗ <\n√ 2L\nl · 2λη, (95)\nE[(h̃t − L)2|Yt] x2t η4 < 4V η3λ l(1− α) . (96)\nAccording to assumption 3, we can prove√ 2L\nl <\n(1− α)2 3 [ 1− α2 β − 8 1− α ], (97)\ncombining with Lemma 3, we have\nE[||Yt+1 − Y ∗||22|Yt] < (1− 4λη\n1− α )||Yt − Y ∗||22 + 4V η3λ||k||22 l(1− α) , (98)\nwhich implies\nE||Yt+1 − Y ∗||22 < (1− 4λη\n1− α )t−TE||YT − Y ∗||22 +\nV η2(1 + 4α2 + α4)\nl(1− α)4 , (99)\nwhere T = [1 + log(( √ 2−1) √ lη/(4λ))−log(|||w0||22− √ lη/(4λ)|)\nlog(1−2λη) ]. Therefore, similar to the proof of theorem 1, ∃B > 0,\nE||Yt+1 − Y ∗||22 < (1− 4λη\n1− α )tB +\nV η2(1 + 4α2 + α4)\nl(1− α)4 . (100)\nRecall ||wt||22 = eTYt, therefore\nE[||wt||22 − (w∗)2]2 ≤ 3E||Yt+1 − Y ∗||22 < 3(1− 4λη\n1− α )tB +\n3V η2(1 + 4α2 + α4)\nl(1− α)4 . (101)\nRemark 5. By Eq.(101), the variance of ||wt||22 is bounded by 3V η2(1+4α2+α4)\nl(1−α)4 , which is not small enough. But we somehow can reduce it: according to Eq.(96), if xt is close to its theoretical value, then E[(h̃t−L)\n2|Yt] x2t η4 < 4V η 3λ(1−α) l , hence variance of ||wt|| 2 2 can be bounded by 3V η2(1+4α2+α4) 2L(1−α)2 ." }, { "heading": "B.4 PROOF OF THEOREM 3", "text": "Proof. In SGD case, we have\n〈wt+1,wt〉 = (1− λη)||wt||22, (102)\nthen we have\ncos2 ∆t = 〈wt+1,wt〉2 ||wt||22 · ||wt+1||22 = (1− 2λη) ||wt|| 2 2 ||wt+1||22 . 
(103)\nAccording to the definition of ∆t, ∆t ≥ 0, and ∆t is very close to 0, hence we have ∆t = sin ∆t = √ 1− cos2 ∆t (104)\n= √ 1− (1− λη)2 ||wt|| 2 2\n||wt+1||22 (105)\n= √ 1− (1− λη)2 xt\nxt+1 (106)\nwhere xt, xt+1 denotes ||wt||22, ||wt+1||22 respectively as in Eq. (42). Assume t is sufficiently large so that xt, xt+1 are close to x∗ = √ Lη 2λ , the first order of Taylor series expansion of Eq.(106) at xt = xt+1 = x ∗ is\n∆t = √ 2λη + (1− λη)2\n2 √ 2λη · 1 x∗ · [(xt+1 − x∗)− (xt − x∗)]. (107)\nReorganizing Eq.(107), and applying Cauchy Inequality, we have\n|∆t − √ 2λη|2 = (1− λη) 4\n8λη · 1 (x∗)2 · [(xt+1 − x∗)− (xt − x∗)]2 (108)\n≤ 1 8λη · 2λ Lη · 2[(xt − x∗)2 + (xt+1 − x∗)2] (109)\n= 1\n2Lη2 [(xt − x∗)2 + (xt+1 − x∗)2] (110)\nCombining with Eq.(51) and Eq.(108), we have\nE|∆t − √\n2λη|2 = 1 2Lη2 [E(xt − x∗)2 + E(xt+1 − x∗)2] ≤ (1− 4λη)t · C + V Ll , (111)\nwhere C = 12Lη2 ·B, B is defined in Eq.(51).\nIn SGDM case, the angular update can be computed by\n∆t = sin(∆t) (112) = √ 1− cos2 ∆t (113)\n= √ 1− 〈wt,wt+1〉 2\n||wt||22 · ||wt+1||22 (114)\n= √ 1− b 2 t\natct , (115)\nwhere (at, bt, ct) = (||wt||22, 〈wt,wt+1〉, ||wt+1||22). According to the proof of theorem 2, when t is sufficiently large, (at, bt, ct) will be close to the fixed point of Eq.(66), (a∗, b∗, c∗), where a∗ =\nc∗ = (\n√ Lη\nλ(1−α)(2− λη1+α ) , b∗ = 1+α−λ1+α a ∗. Then the first order Taylor series expansion of Eq.(115)\nat (at = a∗, bt = b∗, ct = c∗) is\n∆t =\n√ 1− b 2 t\natct (116)\n=\n√ 2λη\n1 + α +\n√ 1 + α 2 √ 2λη · (1− λη 1 + α )2[ at − a∗ a∗ − 2(bt − b ∗) b∗ + ct − c∗ c∗ ]. (117)\nNow substituting (at, bt, ct) with (ãt, b̃t, c̃t) defined in Eq.(75, 76,77, 78), Eq.(107) is rewritten as\n∆t =\n√ 2λη\n1 + α +\n√ 1 + α 2 √ 2λη · (1− λη 1 + α )2 1 a∗ [ (1− α)2 α2 (c̃t − c̃∗) +O(λη)] (118)\nwhere (ã∗, b̃∗, c̃∗) is the fixed point of Eq.(89). Omit O(λη), we have\n|∆t − √ 2λη\n1 + α |2 = 1 + α 8λη · (1− λη 1 + α )4 λ(1− α)(2− λη1+α ) Lη [ (1− α)4 α4 (c̃− c̃∗)2] (119)\n≤ (1− α 2)(1− α)4\n4Lη2α4 ||Yt − Y ∗||22, (120)\nwhere Yt, Y ∗ is defined in Eq.(75, 89) respectively. According to Eq.(100), the mean square error E|∆t − √ 2λη 1+α | 2 can be bounded by\nE|∆t − √ 2λη\n1 + α |2 ≤ (1− α 2)(1− α)4 4Lη2α4 E||Yt − Y ∗||22 (121)\n≤ (1− 4λη 1− α )tC + (1− α2)(1− α)4 4Lη2α4 · V η 2(1 + 4α2 + α4) l(1− α)4 (122)\n= (1− 4λη 1− α )tC + V (1− α2)(1 + 4α2 + α4) 4Llα4 , (123)\nwhere C = (1−α 2)(1−α)4\n4Lη2α4 ·B, B is defined in Eq.(101)." }, { "heading": "B.5 PROOF OF COROLLARY 3.1", "text": "Proof of Corollary 3.1. In SGD case, and Eq.(42), we have\n||wt+1||22 > (1− 2λη)||wt||22, (124)\nwhich means ||wt+T ||22 > (1− 2λη)T ||wt||22. (125) On the other hand, we know that when η is divided by k, ||wt||22 should be divided by √ k to reach the new equilibrium condition, therefore we have\n||wt+T ||2\n||wt||2 = 1√ k > (1− 2λη)T . (126)\nSince λη 1, log(1− 2λη) ≈ −2λη, thus\nT > log(k)\n4λη . (127)\n.\nIn SGDM case, by Eq.(87), we have\n||wt+1||22 > (1− 2λη\n1− α )||wt||22, (128)\nSimilar to SGD case, we have\nT > log(k)(1− α)\n4λη . (129)\n." }, { "heading": "C EXPERIMENTS", "text": "" }, { "heading": "C.1 EXPERIMENTS ON SYTHETIC DATA", "text": "In this section we apply experiments on sythetic data to prove our claim in section 4: theorem 1 only requires expected square norm of unit gradient is locally steady, the expectations do not need to remain unchanged across the whole training process. 
The proof of theorem implies square norm of weight is determined by the following iterative map:\nxt+1 = (1− 2λη)xt + Ltη\n2\nxt , (130)\nwhere λ, η ∈ (0, 1), Lt denotes the square of unit gradient norm. Hence we simulate xt with different type of {Lt}∞t=1. Results in Figure 4 shows as long as the local variance of square norm of unit gradient is not too much, and expectation of Lt changes smoothly, weight norm can quickly converge to its theoretical value base on expectation of square norm of unit gradient.\nWe also simulate SGDM case by following iteration map\nXt+1 = AXt + Ltη\n2\nXt[0] · e, (131)\nwhere A, Xt, e is defined as Eq.(62), (63), (64). Simulation results is shown in Figure 5." }, { "heading": "C.2 COMPLEMENTARY IN MULTI-STAGE LEARNING RATE SCHEDULE", "text": "In this section we present complementary results in Multi-Stage Learning Rate Schedule experiment.\nThe plots of weight norm (empirical and predicted values) in multi-learning rate stage is shown in Figure 6. We also present the test performance of resent50/maskrcnn on Imagenet/MSCOCO with multi-stage learning rate schedule mentioned in Section 5.2. We only provide complementary results for reference only. We do not intend to prove the advantages or disadvantages of rescaling strategy here, it is beyond the discussion of this paper." }, { "heading": "C.3 RETHINKING LINEAR SCALING PRINCIPLE IN SPHERICAL MOTION DYNAMICS", "text": "In this section, we will discuss the effect of Linear Scaling Principle (LSP) under the view of SMD. Linear Scaling Principle is proposed by Goyal et al. (2017) to tune the learning rate η with batch size B by η ∝ B. The intuition of LSP is if weights do not change too much within k iterations, then k iterations of SGD with learning rate η and minibatch size B (Eq.(132)) can be approximated by a single iteration of SGD with learning rate kη and minibatch size kB (Eq.(133):\nwt+k = wt − η ∑ j<k ( 1 B ∑ x∈Bj ∂L ∂w ∣∣∣ wt+j ,x + λwt+j), (132)\nwt+1 = wt − kη( 1\nkB ∑ j<k ∑ x∈Bj ∂L ∂w ∣∣∣ wt,x + λwt). (133)\nGoyal et al. (2017) shows that combining with gradual warmup, LSP can enlarge the batch size up to 8192(256× 32) without severe degradation on ImageNet experiments. LSP has been proven extremely effective in a wide range of applications. However, from the perspective of SMD, the angular update mostly relies on the pre-defined hyper-parameters, and it is hardly affected by batch size. To clarify the connection between LSP and SMD, we explore the learning dynamics of DNN with different batch size by conducting extensive experiments with ResNet50 on ImageNet, the training settings rigorously follow Goyal et al. (2017): momentum coefficient is α = 10−4; WD coefficient is λ = 10−4; Batch size is denoted by B; learning rate is initialized as B 256 · 0.1; Total training epoch is 90 epoch, and learning rate is divided by 10 at 30, 60, 80 epoch respectively.\nThe results of experiments(Figure 7, 8) suggests that the assumption of LSP does not always hold in practice because of three reasons: first, the approximate equivalence between a single iteration in large batch setting, and multiple iterations in small batch setting can only hold in pure SGD formulation, but momentum method is far more commonly used; Second, according Theorem 2, the enlargement ratio of angular update is only determined by the increase factor of learning rate. 
" }, { "heading": "C.2 COMPLEMENTARY RESULTS FOR THE MULTI-STAGE LEARNING RATE SCHEDULE", "text": "In this section we present complementary results for the multi-stage learning rate schedule experiment.

The plots of the weight norm (empirical and predicted values) under the multi-stage learning rate schedule are shown in Figure 6. We also present the test performance of ResNet50/Mask R-CNN on ImageNet/MS COCO with the multi-stage learning rate schedule mentioned in Section 5.2. These complementary results are provided for reference only; we do not intend to prove the advantages or disadvantages of the rescaling strategy here, as that is beyond the scope of this paper." }, { "heading": "C.3 RETHINKING THE LINEAR SCALING PRINCIPLE UNDER SPHERICAL MOTION DYNAMICS", "text": "In this section, we discuss the effect of the Linear Scaling Principle (LSP) from the viewpoint of SMD. The Linear Scaling Principle was proposed by Goyal et al. (2017) to tune the learning rate η with the batch size B as η ∝ B. The intuition behind LSP is that if the weights do not change too much within k iterations, then k iterations of SGD with learning rate η and minibatch size B (Eq. (132)) can be approximated by a single iteration of SGD with learning rate kη and minibatch size kB (Eq. (133)):

w_{t+k} = w_t - \eta\sum_{j<k}\Big(\frac{1}{B}\sum_{x\in\mathcal{B}_j}\frac{\partial L}{\partial w}\Big|_{w_{t+j},x} + \lambda w_{t+j}\Big), \quad (132)

w_{t+1} = w_t - k\eta\Big(\frac{1}{kB}\sum_{j<k}\sum_{x\in\mathcal{B}_j}\frac{\partial L}{\partial w}\Big|_{w_t,x} + \lambda w_t\Big). \quad (133)

Goyal et al. (2017) show that, combined with gradual warmup, LSP can enlarge the batch size up to 8192 (256 × 32) without severe degradation in ImageNet experiments. LSP has been proven extremely effective in a wide range of applications. However, from the perspective of SMD, the angular update mostly relies on the pre-defined hyper-parameters and is hardly affected by the batch size. To clarify the connection between LSP and SMD, we explore the learning dynamics of DNNs with different batch sizes by conducting extensive experiments with ResNet50 on ImageNet; the training settings rigorously follow Goyal et al. (2017): the momentum coefficient is α = 0.9; the WD coefficient is λ = 10^{-4}; the batch size is denoted by B; the learning rate is initialized as (B/256) · 0.1; training lasts 90 epochs in total, and the learning rate is divided by 10 at epochs 30, 60, and 80 respectively.

The results of these experiments (Figures 7, 8) suggest that the assumption behind LSP does not always hold in practice, for three reasons. First, the approximate equivalence between a single iteration in the large-batch setting and multiple iterations in the small-batch setting can only hold in the pure SGD formulation, but the momentum method is far more commonly used. Second, according to Theorem 2, the enlargement ratio of the angular update is only determined by the increase factor of the learning rate; Figure 7 shows that, in practice, the accumulated angular update ∠(w_t, w_{t+k}) in the small-batch setting is much larger than the angular update ∠(w_t, w_{t+1}) of a single iteration in the large-batch setting when using the Linear Scaling Principle. Third, even in pure SGD cases, the enlargement of the angular update still relies on the increase of the learning rate, and has no obvious connection to the enlargement of the gradient norm once the equilibrium condition is reached (see Figure 8).

In conclusion, though LSP usually works well in practical applications, SMD suggests we can find more sophisticated and reasonable schemes to tune the learning rate when the batch size increases." }, { "heading": "C.4 SPHERICAL MOTION DYNAMICS WITH DIFFERENT NETWORK STRUCTURES", "text": "We also verify our theory on other commonly used network structures (MobileNet-V2 (Sandler et al., 2018), ShuffleNet-V2+ (Ma et al., 2018)) with standard training settings. The results are shown in Figure 9. (Figure 9 caption: the predicted angular update is \sqrt{2\lambda\eta/(1+\alpha)}; the learning rate η is initialized as 0.5 and divided by 10 at epochs 30, 60, and 80 respectively; the WD coefficient λ is 4 × 10^{-5}; the momentum parameter α is set to 0.9.)" } ]
2020
SPHERICAL MOTION DYNAMICS: LEARNING DYNAM-
SP:9afb51b717b926a92c9f2a1b3dc7aceb960ff80a
[ "The paper targets to demonstrate social perception and human-AI collaboration in common household activities. It shows the development of a multi-agent virtual environment that is used to test an AI agent’s ability to reason about other agents’ mental states and help them in unfamiliar scenarios. This is performed by presenting an experimental study over specifically selected scenarios which involve aspects of social intelligence." ]
In this paper, we introduce Watch-And-Help (WAH), a challenge for testing social intelligence in agents. In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently. To succeed, the AI agent needs to i) understand the underlying goal of the task by watching a single demonstration of the human-like agent performing the same task (social perception), and ii) coordinate with the human-like agent to solve the task in an unseen environment as fast as possible (human-AI collaboration). For this challenge, we build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning- and learning-based baselines. We evaluate the performance of AI agents with the human-like agent as well as with real humans using objective metrics and subjective user ratings. Experimental results demonstrate that the proposed challenge and virtual environment enable a systematic evaluation on the important aspects of machine social intelligence at scale.1
[ { "affiliations": [], "name": "HUMAN-AI COLLABORATION" }, { "affiliations": [], "name": "Xavier Puig" }, { "affiliations": [], "name": "Tianmin Shu" }, { "affiliations": [], "name": "Shuang Li" }, { "affiliations": [], "name": "Zilin Wang" }, { "affiliations": [], "name": "Yuan-Hong Liao" }, { "affiliations": [], "name": "Joshua B. Tenenbaum" }, { "affiliations": [], "name": "Sanja Fidler" }, { "affiliations": [], "name": "Antonio Torralba" } ]
[ { "authors": [ "Alexandre Alahi", "Kratarth Goel", "Vignesh Ramanathan", "Alexandre Robicquet", "Li Fei-Fei", "Silvio Savarese" ], "title": "Social lstm: Human trajectory prediction in crowded spaces", "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition,", "year": 2016 }, { "authors": [ "Stefano V Albrecht", "Peter Stone" ], "title": "Autonomous agents modelling other agents: A comprehensive survey and open problems", "venue": "Artificial Intelligence,", "year": 2018 }, { "authors": [ "Bowen Baker", "Ingmar Kanitscheider", "Todor Markov", "Yi Wu", "Glenn Powell", "Bob McGrew", "Igor Mordatch" ], "title": "Emergent tool use from multi-agent autocurricula", "venue": null, "year": 1909 }, { "authors": [ "Chris L Baker", "Julian Jara-Ettinger", "Rebecca Saxe", "Joshua B Tenenbaum" ], "title": "Rational quantitative attribution of beliefs, desires and percepts in human mentalizing", "venue": "Nature Human Behaviour,", "year": 2017 }, { "authors": [ "Nolan Bard", "Jakob N Foerster", "Sarath Chandar", "Neil Burch", "Marc Lanctot", "H Francis Song", "Emilio Parisotto", "Vincent Dumoulin", "Subhodeep Moitra", "Edward Hughes" ], "title": "The hanabi challenge: A new frontier for ai research", "venue": "Artificial Intelligence,", "year": 2020 }, { "authors": [ "Simon Brodeur", "Ethan Perez", "Ankesh Anand", "Florian Golemo", "Luca Celotti", "Florian Strub", "Jean Rouat", "Hugo Larochelle", "Aaron C. Courville" ], "title": "Home: a household multimodal environment", "venue": "CoRR, abs/1711.11017,", "year": 2017 }, { "authors": [ "Cameron B Browne", "Edward Powley", "Daniel Whitehouse", "Simon M Lucas", "Peter I Cowling", "Philipp Rohlfshagen", "Stephen Tavener", "Diego Perez", "Spyridon Samothrakis", "Simon Colton" ], "title": "A survey of monte carlo tree search methods", "venue": "IEEE Transactions on Computational Intelligence and AI in games,", "year": 2012 }, { "authors": [ "Fabian Caba Heilbron", "Victor Escorcia", "Bernard Ghanem", "Juan Carlos Niebles" ], "title": "Activitynet: A large-scale video benchmark for human activity understanding", "venue": "In Proceedings of the ieee conference on computer vision and pattern recognition,", "year": 2015 }, { "authors": [ "Micah Carroll", "Rohin Shah", "Mark K Ho", "Tom Griffiths", "Sanjit Seshia", "Pieter Abbeel", "Anca Dragan" ], "title": "On the utility of learning about humans for human-ai coordination", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Devendra Singh Chaplot", "Kanthashree Mysore Sathyendra", "Rama Kumar Pasumarthi", "Dheeraj Rajagopal", "Ruslan Salakhutdinov" ], "title": "Gated-attention architectures for task-oriented language grounding", "venue": "In Thirty-Second AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Wongun Choi", "Silvio Savarese" ], "title": "Understanding collective activitiesof people from videos", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2013 }, { "authors": [ "Abhishek Das", "Samyak Datta", "Georgia Gkioxari", "Stefan Lee", "Devi Parikh", "Dhruv Batra" ], "title": "Embodied question answering", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2018 }, { "authors": [ "Kerstin Dautenhahn" ], "title": "Socially intelligent robots: dimensions of human–robot interaction", "venue": "Philosophical transactions of the royal society B: Biological sciences,", "year": 2007 }, { "authors": [ "David F 
Fouhey", "Wei-cheng Kuo", "Alexei A Efros", "Jitendra Malik" ], "title": "From lifestyle vlogs to everyday interactions", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Chuang Gan", "Jeremy Schwartz", "Seth Alter", "Martin Schrimpf", "James Traer", "Julian De Freitas", "Jonas Kubilius", "Abhishek Bhandwaldar", "Nick Haber", "Megumi Sano", "Kuno Kim", "Elias Wang", "Damian Mrowca", "Michael Lingelbach", "Aidan Curtis", "Kevin Feigelis", "Daniel M. Bear", "Dan Gutfreund", "David Cox", "James J. DiCarlo", "Josh McDermott", "Joshua B. Tenenbaum", "Daniel L.K. Yamins" ], "title": "Threedworld: A platform for interactive multi-modal physical simulation, 2020", "venue": null, "year": 2020 }, { "authors": [ "Xiaofeng Gao", "Ran Gong", "Tianmin Shu", "Xu Xie", "Shu Wang", "Song-Chun Zhu" ], "title": "Vrkitchen: an interactive 3d virtual environment for task-oriented learning", "venue": null, "year": 1903 }, { "authors": [ "Michael A Goodrich", "Alan C Schultz" ], "title": "Human-robot interaction: a survey", "venue": "Foundations and trends in human-computer interaction,", "year": 2007 }, { "authors": [ "Daniel Gordon", "Aniruddha Kembhavi", "Mohammad Rastegari", "Joseph Redmon", "Dieter Fox", "Ali Farhadi" ], "title": "IQA: visual question answering in interactive environments", "venue": "CoRR, abs/1712.03316,", "year": 2017 }, { "authors": [ "Barbara Grosz", "Sarit Kraus" ], "title": "Collaborative plans for complex group action", "venue": "Artificial Intelligence,", "year": 1996 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Guy Hoffman" ], "title": "Evaluating fluency in human–robot collaboration", "venue": "IEEE Transactions on HumanMachine Systems,", "year": 2019 }, { "authors": [ "Mostafa S Ibrahim", "Srikanth Muralidharan", "Zhiwei Deng", "Arash Vahdat", "Greg Mori" ], "title": "A hierarchical deep temporal model for group activity recognition", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2016 }, { "authors": [ "Max Jaderberg", "Wojciech M. Czarnecki", "Iain Dunning", "Luke Marris", "Guy Lever", "Antonio Garcia Castañeda", "Charles Beattie", "Neil C. Rabinowitz", "Ari S. Morcos", "Avraham Ruderman", "Nicolas Sonnerat", "Tim Green", "Louise Deason", "Joel Z. 
Leibo", "David Silver", "Demis Hassabis", "Koray Kavukcuoglu", "Thore Graepel" ], "title": "Human-level performance in 3d multiplayer games with population-based reinforcement learning", "venue": null, "year": 2019 }, { "authors": [ "Matthew Johnson", "Katja Hofmann", "Tim Hutton", "David Bignell" ], "title": "The malmo platform for artificial intelligence experimentation", "venue": "In IJCAI,", "year": 2016 }, { "authors": [ "Henry A Kautz" ], "title": "A formal theory of plan recognition and its implementation", "venue": "Reasoning about plans,", "year": 1991 }, { "authors": [ "Kris M Kitani", "Brian D Ziebart", "James Andrew Bagnell", "Martial Hebert" ], "title": "Activity forecasting", "venue": "In European Conference on Computer Vision,", "year": 2012 }, { "authors": [ "Eric Kolve", "Roozbeh Mottaghi", "Winson Han", "Eli VanderBilt", "Luca Weihs", "Alvaro Herrasti", "Daniel Gordon", "Yuke Zhu", "Abhinav Gupta", "Ali Farhadi" ], "title": "AI2-THOR: An Interactive 3D Environment for Visual AI", "venue": null, "year": 2017 }, { "authors": [ "Richard E Korf" ], "title": "Planning as search: A quantitative approach", "venue": "Artificial intelligence,", "year": 1987 }, { "authors": [ "Yuan-Hong Liao", "Xavier Puig", "Marko Boben", "Antonio Torralba", "Sanja Fidler" ], "title": "Synthesizing environment-aware activities via activity sketches", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Ryan Lowe", "Yi Wu", "Aviv Tamar", "Jean Harb", "OpenAI Pieter Abbeel", "Igor Mordatch" ], "title": "Multi-agent actor-critic for mixed cooperative-competitive environments", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Dipendra Kumar Misra", "Andrew Bennett", "Valts Blukis", "Eyvind Niklasson", "Max Shatkhin", "Yoav Artzi" ], "title": "Mapping instructions to actions in 3d environments with visual goal prediction", "venue": "CoRR, abs/1809.00786,", "year": 2018 }, { "authors": [ "Volodymyr Mnih", "Adria Puigdomenech Badia", "Mehdi Mirza", "Alex Graves", "Timothy Lillicrap", "Tim Harley", "David Silver", "Koray Kavukcuoglu" ], "title": "Asynchronous methods for deep reinforcement learning", "venue": "In International conference on machine learning,", "year": 2016 }, { "authors": [ "Stefanos Nikolaidis", "Ramya Ramakrishnan", "Keren Gu", "Julie Shah" ], "title": "Efficient model learning from joint-action demonstrations for human-robot collaborative tasks", "venue": "In 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI),", "year": 2015 }, { "authors": [ "Xavier Puig", "Kevin Ra", "Marko Boben", "Jiaman Li", "Tingwu Wang", "Sanja Fidler", "Antonio Torralba" ], "title": "Virtualhome: Simulating household activities via programs", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Neil C Rabinowitz", "Frank Perbet", "H Francis Song", "Chiyuan Zhang", "SM Eslami", "Matthew Botvinick" ], "title": "Machine theory of mind", "venue": "arXiv preprint arXiv:1802.07740,", "year": 2018 }, { "authors": [ "Miquel Ramırez", "Hector Geffner" ], "title": "Plan recognition as planning", "venue": "In Proceedings of the 21st international joint conference on Artifical intelligence. 
Morgan Kaufmann Publishers Inc,", "year": 2009 }, { "authors": [ "Cinjon Resnick", "Wes Eldridge", "David Ha", "Denny Britz", "Jakob Foerster", "Julian Togelius", "Kyunghyun Cho", "Joan Bruna" ], "title": "Pommerman: A multi-agent playground", "venue": "arXiv preprint arXiv:1809.07124,", "year": 2018 }, { "authors": [ "Leonel Rozo", "Sylvain Calinon", "Darwin G Caldwell", "Pablo Jimenez", "Carme Torras" ], "title": "Learning physical collaborative robot behaviors from human demonstrations", "venue": "IEEE Transactions on Robotics,", "year": 2016 }, { "authors": [ "Mikayel Samvelyan", "Tabish Rashid", "Christian Schroeder de Witt", "Gregory Farquhar", "Nantas Nardelli", "Tim GJ Rudner", "Chia-Man Hung", "Philip HS Torr", "Jakob Foerster", "Shimon Whiteson" ], "title": "The starcraft multi-agent challenge", "venue": "In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems,", "year": 2019 }, { "authors": [ "Manolis Savva", "Abhishek Kadian", "Oleksandr Maksymets", "Yili Zhao", "Erik Wijmans", "Bhavana Jain", "Julian Straub", "Jia Liu", "Vladlen Koltun", "Jitendra Malik" ], "title": "Habitat: A platform for embodied ai research", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2019 }, { "authors": [ "Mohit Shridhar", "Jesse Thomason", "Daniel Gordon", "Yonatan Bisk", "Winson Han", "Roozbeh Mottaghi", "Luke Zettlemoyer", "Dieter Fox" ], "title": "Alfred: A benchmark for interpreting grounded instructions for everyday tasks", "venue": "arXiv preprint arXiv:1912.01734,", "year": 2019 }, { "authors": [ "Tianmin Shu", "Dan Xie", "Brandon Rothrock", "Sinisa Todorovic", "Song Chun Zhu" ], "title": "Joint inference of groups, events and human roles in aerial videos", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2015 }, { "authors": [ "Tianmin Shu", "Caiming Xiong", "Richard Socher" ], "title": "Hierarchical and interpretable skill acquisition in multi-task reinforcement learning", "venue": "arXiv preprint arXiv:1712.07294,", "year": 2017 }, { "authors": [ "Michael Shum", "Max Kleiman-Weiner", "Michael L Littman", "Joshua B Tenenbaum" ], "title": "Theory of minds: Understanding behavior in groups through inverse planning", "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence,", "year": 2019 }, { "authors": [ "Gunnar A Sigurdsson", "Abhinav Gupta", "Cordelia Schmid", "Ali Farhadi", "Karteek Alahari" ], "title": "Charades-ego: A large-scale dataset of paired third and first person videos", "venue": "arXiv preprint arXiv:1804.09626,", "year": 2018 }, { "authors": [ "Joseph Suarez", "Yilun Du", "Phillip Isola", "Igor Mordatch" ], "title": "Neural mmo: A massively multiagent game environment for training and evaluating intelligent agents", "venue": null, "year": 2019 }, { "authors": [ "Tijmen Tieleman", "Geoffrey Hinton" ], "title": "Lecture 6.5—rmsprop: Divide the gradient by a running average of its recent magnitude", "venue": "COURSERA: Neural Networks for Machine Learning,", "year": 2012 }, { "authors": [ "Tomer Ullman", "Chris Baker", "Owen Macindoe", "Owain Evans", "Noah Goodman", "Joshua B Tenenbaum" ], "title": "Help or hinder: Bayesian models of social goal inference", "venue": "In Advances in neural information processing systems,", "year": 2009 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need",
"venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Felix Warneken", "Michael Tomasello" ], "title": "Altruistic helping in human infants and young chimpanzees", "venue": null, "year": 2006 }, { "authors": [ "Erik Wijmans", "Samyak Datta", "Oleksandr Maksymets", "Abhishek Das", "Georgia Gkioxari", "Stefan Lee", "Irfan Essa", "Devi Parikh", "Dhruv Batra" ], "title": "Embodied question answering in photorealistic environments with point cloud perception", "venue": "URL http: //arxiv.org/abs/1904.03461", "year": 1904 }, { "authors": [ "Yi Wu", "Yuxin Wu", "Georgia Gkioxari", "Yuandong Tian" ], "title": "Building generalizable agents with a realistic and rich 3d environment", "venue": "arXiv preprint arXiv:1801.02209,", "year": 2018 }, { "authors": [ "Fei Xia", "Amir R Zamir", "Zhiyang He", "Alexander Sax", "Jitendra Malik", "Silvio Savarese. Gibson" ], "title": "env: Real-world perception for embodied agents", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Yuke Zhu", "Daniel Gordon", "Eric Kolve", "Dieter Fox", "Li Fei-Fei", "Abhinav Gupta", "Roozbeh Mottaghi", "Ali Farhadi" ], "title": "Visual semantic planning using deep successor representations", "venue": "In Proceedings of the IEEE International Conference on Computer Vision,", "year": 2017 } ]
[ { "heading": "1 INTRODUCTION", "text": "Humans exhibit altruistic behaviors at an early age (Warneken & Tomasello, 2006). Without much prior experience, children can robustly recognize goals of other people by simply watching them act in an environment, and are able to come up with plans to help them, even in novel scenarios. In contrast, the most advanced AI systems to date still struggle with such basic social skills.\nIn order to achieve the level of social intelligence required to effectively help humans, an AI agent should acquire two key abilities: i) social perception, i.e., the ability to understand human behavior, and ii) collaborative planning, i.e., the ability to reason about the physical environment and plan its actions to coordinate with humans. In this paper, we are interested in developing AI agents with these two abilities.\nTowards this goal, we introduce a new AI challenge, Watch-And-Help (WAH), which focuses on social perception and human-AI collaboration. In this challenge, an AI agent needs to collaborate with a human-like agent to enable it to achieve the goal faster. In particular, we present a 2-stage framework as shown in Figure 1. In the first, Watch stage, an AI agent (Bob) watches a human-like agent (Alice) performing a task once and infers Alice’s goal from her actions. In the second, Help stage, Bob helps Alice achieve the same goal in a different environment as quickly as possible (i.e., with the minimum number of environment steps).\nThis 2-stage framework poses unique challenges for human-AI collaboration. Unlike prior work which provides a common goal a priori or considers a small goal space (Goodrich & Schultz, 2007; Carroll et al., 2019), our AI agent has to reason about what the human-like agent is trying to achieve by watching a single demonstration. Furthermore, the AI agent has to generalize its acquired knowl-\n1Code and documentation for the VirtualHome-Social environment are available at https:// virtual-home.org. Code and data for the WAH challenge are available at https://github.com/ xavierpuigf/watch_and_help. A supplementary video can be viewed at https://youtu.be/ lrB4K2i8xPI.\nedge about the human-like agent’s goal to a new environment in the Help stage. Prior work does not investigate such generalization.\nTo enable multi-agent interactions in realistic environments, we extend an open source virtual platform, VirtualHome (Puig et al., 2018), and build a multi-agent virtual environment, VirtualHomeSocial. VirtualHome-Social simulates realistic and rich home environments where agents can interact with different objects (e.g, by opening a container or grabbing an object) and with other agents (e.g., following, helping, avoiding collisions) to perform complex tasks. VirtualHome-Social also provides i) built-in agents that emulate human behaviors, allowing training and testing of AI agents alongside virtual humans, and ii) an interface for human players, allowing evaluation with real humans and collecting/displaying human activities in realistic environments (a functionality key to machine social intelligence tasks but not offered by existing multi-agent platforms). We plan to open source our environment.\nWe design an evaluation protocol and provide a benchmark for the challenge, including a goal inference model for the Watch stage, and multiple planning and deep reinforcement learning (DRL) baselines for the Help stage. 
Experimental results indicate that to achieve success in the proposed challenge, AI agents must acquire strong social perception and generalizable helping strategies. These fundamental aspects of machine social intelligence have been shown to be key to human-AI collaboration in prior work (Grosz & Kraus, 1996; Albrecht & Stone, 2018). In this work, we demonstrate how we can systematically evaluate them in more realistic settings at scale.

The main contributions of our work are: i) a new social intelligence challenge, Watch-And-Help, for evaluating AI agents' social perception and their ability to collaborate with other agents, ii) a multi-agent platform allowing AI agents to perform complex household tasks by interacting with objects and with built-in agents or real humans, and iii) a benchmark consisting of multiple planning and learning based approaches which highlights important aspects of machine social intelligence." }, { "heading": "2 RELATED WORK", "text": "Human activity understanding. An important part of the challenge is to understand human activities. Prior work on activity recognition has been mostly focused on recognizing short actions (Sigurdsson et al., 2018; Caba Heilbron et al., 2015; Fouhey et al., 2018), predicting pedestrian trajectories (Kitani et al., 2012; Alahi et al., 2016), recognizing group activities (Shu et al., 2015; Choi & Savarese, 2013; Ibrahim et al., 2016), and recognizing plans (Kautz, 1991; Ramırez & Geffner, 2009). We are interested in the kinds of activity understanding that require inferring other people's mental states (e.g., intentions, desires, beliefs) from observing their behaviors. Therefore, the Watch stage of our challenge focuses on the understanding of humans' goals in a long sequence of actions instead. This is closely related to work on computational Theory of Mind that aims at inferring humans' goals by observing their actions (Baker et al., 2017; Ullman et al., 2009; Rabinowitz et al., 2018; Shum et al., 2019). However, in prior work, activities were simulated in toy environments (e.g., 2D grid worlds). In contrast, this work provides a testbed for conducting Theory-of-Mind type of activity understanding in simulated real-world environments.

Human-robot interaction. The helping aspect of the WAH challenge has been extensively studied in human-robot interaction (HRI). However, prior work in HRI has been mainly restricted to lab environments (Goodrich & Schultz, 2007; Dautenhahn, 2007; Nikolaidis et al., 2015; Rozo et al., 2016), and the goals in the collaborative tasks were either shared by both agents or were defined in a small space. The setup in WAH is much more challenging – the goal is sampled from a large space, needs to be inferred from a single demonstration, and must be performed in realistic and diverse household environments through a long sequence of actions.

Multi-agent virtual environments. There has been a large body of platforms for various multi-agent tasks (Jaderberg et al., 2019; Samvelyan et al., 2019; OpenAI, 2018; Lowe et al., 2017; Resnick et al., 2018; Shu & Tian, 2018; Carroll et al., 2019; Suarez et al., 2019; Baker et al., 2019; Bard et al., 2020). However, these multi-agent platforms can only simulate simple or game-like environments and do not support human-AI collaboration on real-life activities. 
Existing platforms for realistic virtual environments mainly focus on single-agent settings for tasks such as navigation (Savva et al., 2019; Xia et al., 2018; Brodeur et al., 2017; Zhu et al., 2017), embodied question answering (Gordon et al., 2017; Wijmans et al., 2019; Das et al., 2018), or single-agent task completion (Puig et al., 2018; Shridhar et al., 2019; Misra et al., 2018; Gao et al., 2019). In contrast, the proposed VirtualHome-Social environment allows AI agents to engage in multi-agent household activities by i) simulating realistic and interactive home environments, ii) incorporating humanoid agents with human-like behaviors into the system, iii) providing a wide range of commands and animations for navigation and object manipulation, and iv) allowing human participation. Because of these features, VirtualHome-Social can serve as a testbed for complex social perception and human-AI collaboration tasks, which is complementary to existing virtual environments." }, { "heading": "3 THE WATCH-AND-HELP CHALLENGE", "text": "The Watch-And-Help challenge aims to study AI agents' ability to help humans in household activities. To do that, we design a set of tasks defined by predicates describing the final state of the environment. For each task, we first provide Bob a video that shows Alice successfully performing the activity (Watch stage), and then place both agents in a new environment where Bob has to help Alice achieve the same goal with the minimum number of time steps (Help stage).

Figure 2 provides an overview of the system setup for the Watch-And-Help challenge. For this challenge, we build a multi-agent platform, VirtualHome-Social (Section 4), that i) supports concurrent actions from multiple agents and ii) provides observations for the agents. Alice represents a built-in agent in the system; she plans her actions based on her own goal and a partial observation of the environment. Bob serves as an external AI agent, who does not know Alice's ground-truth goal and only has access to a single demonstration of Alice performing the same task in the past. During the Help stage, Bob receives his observation from the system at each step and sends an action command back to control the avatar in the environment. Alice, on her part, updates her plan at each step based on her latest observation to reflect any world state change caused by Bob. We also allow a human to control Alice in our system. We discuss how the system and the built-in agent work in Section 4.

Problem Setup. Formally, each task in the challenge is defined by Alice's goal g (i.e., a set of goal predicates), a demonstration of Alice taking actions to achieve that goal, D = {s^t_Alice, a^t_Alice}_{t=1}^{T} (i.e., a sequence of states s^t_Alice and actions a^t_Alice), and a new environment where Bob collaborates with Alice and helps achieve the same goal as quickly as possible. During training, the ground-truth goal of Alice is shown to Bob as supervision; during testing, Bob no longer has access to the ground-truth goal and thus has to infer it from the given demonstration.

Goal Definitions. We define the goal of a task as a set of predicates and their counts, which describes the target state. Each goal has 2–8 predicates. For instance, "ON(plate, dinnertable):2; ON(wineglass, dinnertable):1" means "putting two plates and one wine glass onto the dinner table." The objects in a predicate refer to object classes rather than instances, meaning that any object of a specified class is acceptable, as illustrated by the sketch below. 
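For illustration, a goal of this form can be represented as a multiset of predicates; here is a minimal sketch (names are ours, not from the released code) of a satisfaction check:

```python
from collections import Counter

# Goal: two plates and one wine glass on the dinner table.
goal = Counter({("ON", "plate", "dinnertable"): 2,
                ("ON", "wineglass", "dinnertable"): 1})

def satisfied(goal, state_predicates):
    """state_predicates lists (relation, object_class, location) facts, one per
    object instance; any instance of the required class counts."""
    counts = Counter(state_predicates)
    return all(counts[p] >= n for p, n in goal.items())

state = [("ON", "plate", "dinnertable"), ("ON", "plate", "dinnertable"),
         ("ON", "wineglass", "dinnertable"), ("ON", "fork", "dinnertable")]
assert satisfied(goal, state)
```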
This goal definition reflects different preferences of agents (when setting up a dinner table, some prefer to put water glasses, others may prefer to put wine glasses), increasing the diversity in tasks. We design five predicate sets representing five types of household activities: 1) setting up a dinner table, 2) putting groceries / leftovers into the fridge, 3) preparing a simple meal, 4) washing dishes, and 5) reading a book while having snacks or drinks. In total, there are 30 different types of predicates. In each task, the predicates of a goal are sampled from one of the five predicate sets (as a single household activity). More details about the predicate sets and goal definitions are listed in Appendix B.1." }, { "heading": "4 VIRTUALHOME-SOCIAL", "text": "Building machine social intelligence for real-life activities poses additional challenges compared to typical multi-agent settings, such as far more unconstrained goal and action spaces, and the need to display human actions realistically for social perception.

With that in mind, we create VirtualHome-Social, a new environment where multiple agents (including real humans) can execute actions concurrently and observe each other's behaviors. Furthermore, we embed planning-based agents in the environment as virtual humans that AI agents can reason about and interact with.

In the rest of this section, we describe the observations, actions, and the built-in human-like agent provided in VirtualHome-Social. Appendix A includes more information.

Observation space. The environment supports symbolic and visual observations, allowing agents to learn helping behaviors under different conditions. The symbolic observations consist of a scene graph, with nodes representing objects and edges describing spatial relationships between them.

Action space. Agents can navigate in the environment and interact with objects in it. To interact with objects, agents need to specify an action and the index of the intended object (e.g., "grab 〈3〉" stands for grabbing the object with id 3). An agent can only interact with objects that are within its field of sight, and therefore its action space changes at every step.

Human-like agents. To enable a training and testing environment for human-AI interactions, it is critical to incorporate built-in agents that emulate humans when engaging in multi-agent activities. Carroll et al. (2019) have attempted to train policies imitating human demonstrations, but those policies would not reliably perform complex tasks in partially observable environments. Therefore, we devise a planning-based agent with bounded rationality, provided as part of the platform. This agent operates on the symbolic representation of its partial observation of the environment. As shown in Figure 3 (overview of the human-like agent), it relies on two key components: 1) a belief of object locations in the environment (Figure 13 in Appendix A.3), and 2) a hierarchical planner, which uses Monte Carlo Tree Search (MCTS) (Browne et al., 2012) and regression planning (RP) (Korf, 1987) to find a plan for a given goal based on its belief. At every step, the human-like agent updates its belief based on the latest observation, finds a new plan, and executes the first action of the plan concurrently with other agents. The proposed design allows agents to robustly perform tasks in partially observable environments while producing human-like behaviors2. 
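To make the belief component concrete, here is a runnable toy sketch of a location belief with the resampling behavior described in Appendix A.3 (the class and method names are ours, not the actual VirtualHome-Social API):

```python
import random

class LocationBelief:
    """Per-object uniform belief over candidate container locations."""
    def __init__(self, objects, locations):
        self.support = {o: set(locations) for o in objects}
        self.sampled = {}                 # currently sampled world state

    def update(self, observed_empty, observed_at):
        # observed_empty: object -> locations seen NOT to contain it;
        # observed_at: object -> location where it was actually seen.
        for obj, locs in observed_empty.items():
            self.support[obj] -= locs
        for obj, loc in observed_at.items():
            self.support[obj] = {loc}
        # Resample only locations that now contradict the belief, keeping the
        # sampled state consistent between steps.
        for obj, locs in self.support.items():
            if self.sampled.get(obj) not in locs:
                self.sampled[obj] = random.choice(sorted(locs))
        return dict(self.sampled)

belief = LocationBelief(["plate"], ["fridge", "cabinet", "table"])
print(belief.update({"plate": {"fridge"}}, {}))  # the plate is not in the fridge
```

The hierarchical planner then runs MCTS and regression planning on a state sampled this way, rather than on the true (unobserved) state.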
We provide more details of this agent in Appendix A.3.

2We conducted a user study rating how realistic the trajectories of the agents and those created by humans were, and found no significant difference between the two groups. More details can be found in Appendix D.4." }, { "heading": "5 BENCHMARK", "text": "" }, { "heading": "5.1 EVALUATION PROTOCOL", "text": "Training and Testing Setup. We create a training set with 1011 tasks and 2 testing sets (test-1, test-2). Each test set has 100 tasks. We make sure that i) the helping environment in each task is different from the environment in the pairing demonstration (we sample a different apartment and randomize the initial state), and ii) goals (predicate combinations) in the test set are unseen during training. To evaluate generalization, we also hold out 2 apartments for the Help stage in the test sets. For the training set and the test-1 set, all predicates in each goal are from the same predicate set, whereas a goal in test-2 consists of predicates sampled from two different predicate sets representing multi-activity scenarios (e.g., putting groceries into the fridge and washing dishes). Note that during testing, the ground-truth goals are not shown to the evaluated Bob agent. More details can be found in Appendix B. An episode is terminated once all predicates in Alice's goal are satisfied (i.e., a success) or the time limit (250 steps) is reached (i.e., a failure).

Evaluation Metrics. We evaluate the performance of an AI agent by three types of metrics: i) success rate, ii) speedup, and iii) a cumulative reward. For speedup, we compare the episode length when Alice and Bob are working together (L_Help) with the episode length when Alice is working alone (L_Alice), i.e., the speedup is L_Alice/L_Help − 1. To account for both the success rate and the speedup, we define the cumulative reward of an episode with T steps as R = \sum_{t=1}^{T} [1(s^t = s_g) − 0.004], where s^t is the state at step t and s_g is the goal state. R ranges from -1 (failure) to 1 (achieving the goal in zero steps).
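Concretely, the two quantitative metrics can be computed as follows (a small sketch; the helper names are ours):

```python
def speedup(len_alice_alone, len_help):
    """Relative speedup when Alice and Bob work together."""
    return len_alice_alone / len_help - 1.0

def cumulative_reward(episode_len, success, step_penalty=0.004):
    """R = sum_t [1(s_t = s_g) - 0.004]: the indicator fires at most once, at the
    final step, since the episode terminates as soon as the goal is reached."""
    return (1.0 if success else 0.0) - step_penalty * episode_len

print(cumulative_reward(250, success=False))  # ~= -1: failing at the 250-step limit
```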
" }, { "heading": "5.2 BASELINES", "text": "To address this challenge, we propose a set of baselines that consist of two components as shown in Figure 4: a goal inference model and a goal-conditioned helping planner / policy. In this paper, we assume that the AI agent has access to the ground-truth states of objects within its field of view (but one could also use raw pixels as input). We describe our approach for the two components below.

Goal inference. We train a goal inference model based on the symbolic representation of states in the demonstration. At each step, we first encode the state using a Transformer (Vaswani et al., 2017) over visible objects and feed the encoded state into a long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997). We use average pooling to aggregate the latent states from the LSTM over time and build a classifier for each predicate to infer its count. Effectively, we build 30 classifiers, corresponding to the 30 predicates in our taxonomy and the fact that each can appear multiple times.

Helping policy/planner. Due to the nature of the tasks in our challenge – e.g., partial observability, a large action space, sparse rewards, strict preconditions for actions – it is difficult to search for a helping plan or learn a helping policy directly over the agent's actions. To mitigate these difficulties, we propose a hierarchical architecture with two modules for both planning and RL-based approaches as shown in Figure 4. At every step, given the goal inferred from the demonstration, ĝ, and the current observation of Bob, a high-level policy or planner will output a predicate as the best subgoal to pursue for the current step; the subgoal is subsequently fed to a low-level policy or planner which will yield Bob's action a^t_Bob at this step. In our baselines, we use either a learned policy or a planner for each module. We use the symbolic representation of visible objects as Bob's observation o^t_Bob for all models. We summarize the overall design of the baseline models as follows (please refer to Appendix C for the details of models and training procedures):

HP: A hierarchical planner, where the high-level planner and the low-level planner are implemented by MCTS and regression planning (RP) respectively. This is the same planner as the one for Alice, except that i) it has its own partial observation and thus a different belief from Alice, and ii) when given the ground-truth goal, the high-level planner uses Alice's plan to avoid overlapping with her.

Hybrid: A hybrid model of RL and planning, where an RL policy serves as the high-level policy and an RP is deployed to generate plans for each subgoal sampled from the RL-based high-level policy. This is to train an agent equipped with basic skills for achieving subgoals to help Alice through RL.

HRL: A hierarchical RL baseline where the high-level and low-level policies are all learned.

Random: A naive agent that takes a random action at each step.

To show the upper bound performance in the challenge, we also provide two oracles:

Oracle_B: An HP-based Bob agent with full knowledge of the environment and the true goal of Alice.

Oracle_A,B: Alice has full knowledge of the environment too." }, { "heading": "5.3 RESULTS", "text": "We evaluate the Watch stage by measuring the recognition performance of the predicates. The proposed model achieves a precision and recall of 0.85 and 0.96 over the test-1 set. To evaluate the importance of seeing the full demonstration, we test a model that takes as input the graph representation of the last observation, leading to a precision and recall of 0.79 and 0.75. When using the actions taken by Alice as the input, the performance increases to a precision and recall of 0.99 and 0.99. The chance precision and recall are 0.08 and 0.09.

We report the performance of our proposed baselines (average and standard error across all episodes) in the Help stage in Figure 5. In addition to the full challenge setup, we also report the performance of the helping agents using true goals (indicated by the subscript TG) and using random goals (RG), and the performance of Alice working alone. Results show that planning-based approaches are the most effective in helping Alice. Specifically, HP_TG achieves the best performance among non-oracle baselines by using the true goals and reasoning about Alice's future plan, avoiding redundant actions and collisions with her (Figure 6 illustrates an example of collaboration). Using the inferred goals, both HP and Hybrid can offer effective help. However, with a random goal inference (HP_RG), a capable Bob agent becomes counterproductive – frequently undoing what Alice has achieved due to their conflicting goals (conflicts appear in 40% of the overall episodes, and in 65% for Put Groceries and Set Meal). 
This calls for an AI agent with the ability to adjust its goal inference dynamically by observing Alice's behavior in the new environment (e.g., Alice correcting a mistake made by Bob signals incorrect goal inference). HRL works no better than Random, even though it shares the same global policy with Hybrid. While the high-level policy selects reasonable predicates to perform the task, the low-level policy does not manage to achieve the desired goal. In most of the cases, this is due to the agent picking the right object, but failing to put it to the target location afterwards. This suggests that it is crucial for Bob to develop robust abilities to achieve the subgoals. There is no significant difference between the Random and Alice baselines (t(99) = −1.38, p = 0.17). We also evaluate the baselines in the test-2 set, containing tasks with multiple activities. The goal inference model achieves a precision and recall of 0.68 and 0.64. The performance gap from test-1 indicates that the model fails to generalize to multi-activity scenarios, overfitting to predicate combinations seen during training. For the Help stage, we evaluate the performance of Alice alone, as well as the best performing baseline, HP. Alice achieves a success rate of 95.40 ± 0.01, while the HP baseline achieves a success rate of 88.60 ± 0.02 and a speedup of 0.21 ± 0.04. Compared to its performance in the test-1 set, the HP baseline suffers a significant performance degradation in the test-2 set, which is a result of the lower goal recognition accuracy in the Watch stage.

(Figure 7, panels a–c: Bob and Alice both try to grab the fork; Bob avoids the conflict; Bob's actions change Alice's belief.)

To better understand the important factors for the effectiveness of helping, we analyze the helping behaviors exhibited in our experiments and how they affect Alice from the following aspects.

Predicting Alice's Future Action. When coordinating with Alice, Bob should be able to predict Alice's future actions to efficiently distribute the work and avoid conflicts (Figure 7ab).

Helping Alice's Belief Update. In addition to directly achieving predicates in Alice's goal, Bob can also help by influencing Alice's belief update. A typical behavior is that when Bob opens containers, Alice can update her belief accordingly and find the goal object more quickly (Figure 7c). This is the main reason why Bob with random actions can sometimes help speed up the task too.

Multi-level Actions. The current baselines do not consider plans over low-level actions (e.g., pathfinding). This strategy significantly decreases the search space, but also results in inefficient pathfinding and an inability to predict other agents' future paths. Consequently, the Bob agent sometimes unintentionally blocks Alice (Figure 7d). A better AI agent should consider actions on both levels.

False Belief. Actions taken by an agent may cause another agent to have false beliefs (Figure 7e)." }, { "heading": "6 HUMAN EXPERIMENTS", "text": "Our ultimate goal is to build AI agents that can work with real humans. Thus, we further conduct the following two human experiments, where Alice is controlled by a real human.

Experiment 1: Human performing tasks alone. In this experiment, we recruited 6 subjects to perform tasks alone by controlling Alice. Subjects were given the same observation and action space as what the human-like agent had access to. 
They could click one of the visible objects (including all rooms) and select a corresponding action (e.g., "walking towards", "open") from a menu to perform. They could also choose to move forward or turn left/right by pressing arrow keys. We evaluated 30 tasks in the test set. Each task was performed by 2 subjects, and we used the average number of steps they took as the single-agent performance for that task, which is then used for computing the speedup when AI agents help humans. The performance of a single agent when controlled by a human or by the human-like agent in these 30 tasks is shown in Fig. 8ab with the label Alice. Human players are slightly more efficient than the human-like agent but the difference is not significant, as reported by the t-test over the number of steps they took (t(29) = −1.63, p = .11).

Experiment 2: Collaboration with real humans. This experiment evaluates how helpful AI agents are when working with real humans. We recruited 12 subjects and conducted 90 trials of human-AI collaboration using the same 30 tasks as in Exp. 1. In each trial, a subject was randomly paired with one of three baseline agents, HP, Hybrid, and HP_RG, to perform a task. After each trial, subjects were asked to rate the AI agent they just worked with on a scale of 1 to 7 based on three criteria commonly used in prior work (Hoffman, 2019): i) how much the agent knew about the true goal (1 - no knowledge, 4 - some knowledge, 7 - perfect knowledge), ii) how helpful you found the agent was (1 - hurting, 4 - neutral, 7 - very helpful), and iii) whether you would trust the agent to do its job (1 - no trust, 4 - neutral, 7 - full trust). For a fair comparison, we made sure that the random goal predictions for HP_RG were the same as the ones used in the evaluation with the human-like agent.

As shown in Figure 8, the ranking of the three baseline AI agents remains the same when the human-like agent is replaced by real humans, and the perceived performance (subjective ratings) is consistent with the objective scores. We found no significant difference in the objective metrics between helping humans and helping the human-like agent; the only exception is that, when paired with real humans, HP_RG had a higher success rate (and consequently a higher average cumulative reward). This is because humans recognized that the AI agent might have conflicting subgoals and would finish other subgoals first instead of competing over the conflicting ones with the AI agent forever, whereas the human-like agent was unable to do so. Appendix D.3 shows an example. This adaptation gave humans a better chance to complete the full goal within the time limit. We provide more details of the procedures, results, and analyses of the human experiments in Appendix D." }, { "heading": "7 CONCLUSION", "text": "In this work, we proposed an AI challenge to demonstrate social perception and human-AI collaboration in common household activities. We developed a multi-agent virtual environment to test an AI agent's ability to reason about other agents' mental states and help them in unfamiliar scenarios. Our experimental results demonstrate that the proposed challenge can systematically evaluate key aspects of social intelligence at scale. We also show that our human-like agent behaves similarly to real humans in the proposed tasks and that the objective metrics are consistent with the subjective ratings.

Our platform opens up exciting directions for future work, such as online goal inference and direct communication between agents. 
We hope that the proposed challenge and virtual environment can promote future research on building more sophisticated machine social intelligence." }, { "heading": "ACKNOWLEDGMENTS", "text": "The information provided in this document is derived from an effort sponsored by the Defense Advanced Research Projects Agency (DARPA), and awarded to Raytheon BBN Technologies under Contract Number HR001120C0022." }, { "heading": "A.1 COMPARISON WITH EXISTING PLATFORMS", "text": "There have been many virtual environments designed for single-agent and multi-agent tasks. Table 1 summarizes the key features of the proposed VirtualHome-Social in comparison with existing virtual platforms. The key features of our environment include i) multiple camera views, ii) both high-level and low-level actions, iii) humanoid avatars with realistic motion simulations, iv) built-in human-like agents emulating human behaviors in household activities, and v) multi-agent capacities.

Critically, VirtualHome-Social enables collecting and displaying human activities in realistic environments, which is a key function necessary for social perception and human-AI collaboration. In contrast, existing multi-agent platforms do not offer such functionality." }, { "heading": "A.2 ENVIRONMENT DESCRIPTION", "text": "The environment is composed of different apartments with objects that can be placed to generate diverse scenes for the Watch and Help stages. Each object contains a class name, a set of states, 3D coordinates and an index for identification, which is needed for action commands that involve object interaction. The object indices are unique and consistent in the scene so that an agent can track the identities of individual objects throughout an episode." }, { "heading": "A.2.1 APARTMENTS", "text": "We provide 7 distinctive apartments in total as shown in Figure 9. For the purpose of testing agents' generalization abilities, in the Watch-And-Help challenge, the last two apartments are held out for the helping environments in the testing set exclusively." }, { "heading": "A.2.2 AVATARS", "text": "VirtualHome-Social provides a pool of diverse humanoid avatars (see Figure 10). This allows us to randomly sample different avatars for both agents in the Watch-And-Help challenge. We hope this can help reduce the biases in the environment. The supplementary video shows an example of this, where the clothing color indicates the role of each agent. For the public release of the platform, we intend to further increase the diversity of the avatar pool." }, { "heading": "A.2.3 OBSERVATION", "text": "The environment supports symbolic and visual observations (Figure 11a), allowing agents to learn helping behaviors under different conditions. The visual observations provide RGB, depth, semantic and instance segmentation, albedo and luminance, normal maps, 3D skeletons and bounding boxes. Building upon Liao et al. (2019), we represent the symbolic observations as a state graph with each node representing the class label and physical state of an object, and each edge representing the spatial relation of two objects. The environment also provides multiple views and supports both full observability and partial observability settings.

We show examples of the observations in the supplementary video. In addition to the world states, our system also allows users to include direct messages from other agents as part of the observation for an agent.
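A minimal sketch of what one node and one edge of such a state graph could look like (the field names are illustrative, not the exact schema):

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: int                                       # unique and consistent across the episode
    class_name: str                               # e.g. "plate"
    states: list = field(default_factory=list)    # physical states, e.g. ["CLEAN"]
    position: tuple = (0.0, 0.0, 0.0)             # 3D coordinates

@dataclass
class Edge:
    from_id: int
    relation: str                                 # spatial relation, e.g. "ON", "INSIDE"
    to_id: int

observation = {
    "nodes": [Node(12, "plate", ["CLEAN"]), Node(31, "dinnertable")],
    "edges": [Edge(12, "ON", 31)],
}
```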
}, { "heading": "A.2.4 ACTION SPACE", "text": "As shown in Figure 11b, agents in VirtualHome-Social can perform both high-level actions, such as navigating towards a known location, or interacting with an observed object, and low-level actions,\nsuch as turning or moving forward for a small step. For actions involving interactions with entities (objects or other agents), an agent needs to specify the indices of the intended entities (e.g., “grab 〈3〉” stands for grabbing the object with id 3). An agent can only interact with objects that are within its field of sight, and therefore its action space changes at every step. When executing navigation actions, an agent can only move 1 meter towards the target location within one step. On average, an agent’s action space includes 167 different actions per step." }, { "heading": "A.3 HUMAN-LIKE AGENT", "text": "We discuss how the human-like agent works in more details here. The agent pipeline can be seen in Figure 12. The agent has access to a partial observation of the environment, limited to the objects that are in the same room and not in some closed container. The agent is equipped with a belief module (Figure 13), that gives information about the unseen objects, under the assumption that the existence of objects in the environment is known, but not their location. For each object in the environment, the belief contains a distribution of the possible locations where it could be. We adopt uniform distributions as the initial belief when the agent has not observed anything.\nAt each time, the agent obtains a partial observation, and updates its belief distribution accordingly. Then, the belief module samples a possible world state from the current distribution. To ensure that the belief state is consistent between steps, we only resample object locations that violate the current belief (e.g. an object was believed to be in the fridge but the agent sees that the fridge is in fact empty).\nBased on the sampled state, a hierarchical planner will search for the optimal plan for reaching the goal, based on the goal definition. Specifically, we use MCTS to search for a sequence of subgoals\n(i.e., predicates), and then each subgoal is fed to a regression planner (RP) that will search for an action sequence to achieve the subgoal. For the high-level planner, the subgoal space is obtained by the intersection between what predicates remained to be achieved and what predicates could be achieved based on the sampled state. Note here each subgoal would specify an object instance instead of only the object class defined in the goal so that the low-level planner will be informed which object instances it needs to interact with. For instance, in the example illustrated in Figure 12, there are two plates (whose indices are 12, 52) and the dinner table’s index is 31 according to the sampled state. There are two unsatisfied goal predicates (i.e., two ON(plate, dinnertable)), then a possible subgoal space for the high-level planner would be {ON(12, 31), ON(52, 31)}. For RP, it starts from the state defined by the subgoal and searches for the low-level plan backward until it finds an action that is part of the current action space of the agent.\nTo mimic human behaviors in a home setting, we also expect the human-like agent to close containers unless it needs to look inside or put objects into them. 
For that, we augment the MCTS-based high-level planner with heuristics for the closing behavior – the agent will close a container when it finds no relevant goal objects inside, or when it has already grabbed all the target objects from (or put all the target objects into) that container. We find that this augmentation makes the overall agent behaviors closer to what a real human would do in a household environment.

Thanks to the hierarchical design, the planner for the human-like agent can run in real time (on average, replanning at each step only takes 0.05 seconds). This also gives the agent a bounded rationality, in that the plan is not optimal but is reasonably efficient. The optimality of the planner can be further tuned by the hyper-parameters of MCTS, such as the number of simulations, the maximum number of steps in the rollouts, and the exploration coefficients." }, { "heading": "A.4 SPECIFICATIONS", "text": "The environment can be run in a single process or in multiple processes. A single process runs at 10 actions per second. We train our models using 10 processes in parallel." }, { "heading": "B MORE DETAILS ON THE CHALLENGE SETUP", "text": "" }, { "heading": "B.1 PREDICATE SETS FOR GOAL DEFINITIONS", "text": "Table 2 summarizes the five predicate sets used for defining goals. Note that VirtualHome-Social supports more predicates for potential future extensions of the goal definitions." }, { "heading": "B.2 TRAINING AND TESTING SETUP", "text": "During training, we randomly sample one of the 1011 training tasks for setting up a training episode. For evaluating an AI agent on the testing set, we run each testing task five times using different random seeds and report the average performance.

For training goal inference, we also provide an additional training set of 5303 demonstrations (without paired helping environments) synthesized in the 5 training apartments. Note that these demonstrations are exclusively used for training goal inference models and are not used for the helping tasks." }, { "heading": "B.3 DISTRIBUTION OF INITIAL OBJECT LOCATIONS", "text": "Figure 14 shows the initial location distribution of all objects in the helping environments sampled for the challenge, and Figure 15 shows the initial location distributions for only the objects involved in the goal predicates." }, { "heading": "C IMPLEMENTATION DETAILS OF BASELINES", "text": "" }, { "heading": "C.1 GOAL INFERENCE MODULE", "text": "Figure 16 shows the architecture of the goal inference model described in the paper, where d = 128 indicates the dimension of the vectors. In this network, the LSTM has 128 hidden units and the MLP units are comprised of two 128-dim fully connected layers. For both node embeddings and the latent states from the LSTM, we use average pooling." }, { "heading": "C.2 HIERARCHICAL PLANNER", "text": "The hierarchical planner (HP) baseline is similar to the planner designed for the human-like agent (Section A.3) but has its own observation and belief. When given the ground-truth goal of Alice, the MCTS-based high-level planner will remove the subgoal that Alice is going to pursue from its own subgoal space." }, { "heading": "C.3 GENERAL TRAINING PROCEDURE FOR RL-BASED APPROACHES", "text": "We train the high-level RL policy by giving ground-truth goals and by using RP as the low-level planner to reach the subgoals sampled from the high-level policy. Whenever a goal predicate is satisfied (either by Alice or by Bob), Bob gets a reward of +2; it also gets a -0.1 penalty after each time step, as sketched below. 
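A sketch of this per-step reward (our own helper, mirroring the description above):

```python
def high_level_reward(prev_satisfied, curr_satisfied):
    """+2 for every goal predicate newly satisfied at this step (by either agent),
    minus a constant 0.1 time penalty per step."""
    return 2.0 * (curr_satisfied - prev_satisfied) - 0.1

# e.g., satisfying one predicate this step yields 2.0 - 0.1 = 1.9
```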
We adopt the multi-task RL approach introduced in Shu et al. (2017) to train the low-level policy in a single-agent setting, where we randomly sample one of the predicates in the goal in each training episode and set it to be the objective for Bob. This is to ensure that Bob can learn to achieve subgoals through the low-level policy by himself. The HRL baseline is implemented by combining the high-level and low-level policies that are trained separately." }, { "heading": "C.4 LOW-LEVEL POLICY", "text": "Figure 17 illustrates the network architecture for the low-level policy. We use the symbolic observation (only the visible object nodes) as input, and encode it in the same way as in Figure 16. We encode the two object classes in the given subgoal s_g (i.e., a predicate) through word2vec encoding, yielding two 128-dim vectors. We then concatenate these two vectors and feed them to a fully connected layer to get a 128-dim goal encoding. Based on the goal encoding, we further get two attention vectors, σ_object and σ_type. Each element of the attention vectors ranges from 0 to 1. For each object node, we use the element-wise product of σ_object and its node embedding to get its reshaped representation. Similarly, we get the reshaped context representation through an element-wise product of the context embedding and σ_type. This is inspired by a common goal-conditioned policy network architecture (Chaplot et al., 2018; Shu et al., 2017), which helps extract state information relevant to the goal. From each reshaped node representation, we get a scalar for each object representing the log-likelihood of selecting that object to interact with for the current action. After a softmax over all the object logits, we get the object selection policy π_object(k | o^t, s_g), where k is the index of the object instance selected from all visible objects (which also includes "Null" for actions that do not involve an object). For encoding the history, we feed the reshaped context representation to an LSTM with 128 hidden units. Based on the latent state from the LSTM, we get i) the action type policy π_type(a | o^t, s_g), which selects an action type (i.e., "open," "close," "grab," "put," "walk," or "follow"), and ii) the value function V(o^t, s_g). The sampled k and a jointly define the action for the AI agent. Note that some sampled combinations may not be valid actions, and these will not be executed by the VirtualHome-Social environment.

In addition to the policy and value outputs, we also build a binary classifier for each visible node to predict whether it is close enough for the agent to interact with, according to the symbolic graphs. This closeness prediction serves as an auxiliary prediction which helps the network learn a better state representation and consequently greatly improves the sample efficiency.

In each training episode, we randomly sample a predicate from the complete goal definition as the final goal of the agent. The agent gets a reward of 0.05 for being close to the target object and/or location, and a reward of 10.0 when it grabs the correct object or puts it at the correct location. Note that when training the low-level policy, we set up a single-agent environment to ensure that the AI agent can learn to achieve a predicate by itself.

We adopt a 2-phase curriculum learning scheme similar to Shu et al. (2017): In the first phase, we train a policy for grabbing the target object indicated in the goal. During this phase, a training episode terminates whenever the agent grabs the correct type of object. 
In the second phase, we train another policy which learns to reuse the learned grabbing policy (which is deployed whenever the “grab” action type is sampled) to get the goal object and then put the grabbed object at the target location specified in the goal.\nWe use off-policy advantage actor-critic (A2C) (Mnih et al., 2016) for policy optimization. The network is updated by RMSprop (Tieleman & Hinton, 2012) with a learning rate of 0.001 and a batch size of 32. The first phase is trained with 100,000 episodes and the second phase is trained with 26,000 episodes." }, { "heading": "C.5 HIGH-LEVEL POLICY", "text": "As Figure 18 depicts, the high-level policy (used by the Hybrid and HRL baselines) has a similar architecture design to the low-level policy. Compared with the low-level policy, it does not need to define an object selection policy; instead, based on the latent state from the LSTM, it outputs the policy for selecting the first and the second object class in a predicate to form a subgoal3. It also augments the goal encoder in the low-level policy with a sum pooling (i.e., Bag of Words) to aggregate the encoding of all predicates in a goal, where predicates are duplicated w.r.t. their counts in the goal definition (e.g., in Figure 18, ON(plate, dinnertable) appears twice, which means there should be 2 plates on the dinnertable).\n3Note that this is different from the subgoals generated from the high-level planner (Section A.3), which would specify object instances.\nSimilar to the low-level policy, we get an attention vector σg from the goal encoding to reshape the state representation. In total, the network has three outputs: the object subgoal policy for sampling the object class name in the subgoal, the location subgoal policy for sampling the target location class name in the subgoal, and a value function.\nThe high-level policy is trained with a regression planner deployed to find a low-level plan for reaching each sampled subgoal. Note that the regression planner searches for a plan based on a state sampled from the agent’s belief maintained by the belief module discussed in Section A.3. It will also randomly select object instances from the sampled state that fit the defined object classes in the subgoals sampled from the high-level policy.\nSimilar to the low-level policy, we use off-policy A2C for policy optimization, and the network is updated by RMSprop with a learning rate of 0.001 and a batch size of 16. We first train the high-level policy in a single-agent setting where the AI agent is trained to perform a task by itself; we then finetune the high-level policy in the full training setting where the human-like agent is also present and works alongside the AI agent. During training, we always provide the ground-truth goal of Alice to the AI agent." }, { "heading": "D ADDITIONAL DETAILS OF HUMAN EXPERIMENTS", "text": "" }, { "heading": "D.1 HUMAN SUBJECTS", "text": "Both the collection of human plans and the evaluations in our user studies were conducted by recruited participants, who gave informed consent." }, { "heading": "D.2 PROCEDURE FOR COLLECTING HUMAN PLANS", "text": "To collect the tasks for both experiments, we built a web interface on top of VirtualHome-Social, allowing humans to control the characters in the environment. Specifically, the subjects in our human experiments were always asked to control Alice.
At every step, humans were given a set of visible objects, and the corresponding actions that they could perform with those objects (in addition to the low-level actions), matching the observation and action space of the human-like agent. When working with an AI agent, both the human player and the AI agent took actions concurrently.\nIn both experiments, human players were given a short tutorial and had a chance to get familiar with the controls. They were shown the exact goals to be achieved, and were instructed to finish the task as fast as possible. For each task, we set the same time limit, i.e., 250 steps. A task is terminated when it exceeds the time limit or when all the goals specified have been reached.\nThe 30 tasks used in the human experiments were randomly sampled from the test set and were evenly distributed across 5 task categories (i.e., 6 tasks for each category).\nIn Experiment 2, each subject was asked to perform 7 or 8 trials. We made sure that each subject got to play with all three baseline AI agents in at least 2 trials." }, { "heading": "D.3 EXAMPLE OF HUMAN ADAPTING TO AI AGENTS WITH CONFLICTING GOALS", "text": "The main reason why real humans work better than the human-like agent when paired with an AI agent that has a conflicting goal (in particular, the HPRG baseline) is that they can recognize the conflicting goal and avoid competing over the same objects forever. Figure 20 depicts an example of this adaptive behavior from a real human player in Experiment 2, which results in the completion of the task within the time limit. Note that in our experiments, a task is considered successful and terminated once all the predicates in a goal have been achieved.\nThis also calls for an AI agent with the ability to adjust its goal inference dynamically by observing Alice’s behavior in the new environment (e.g., Alice correcting a mistake made by Bob signals incorrect goal inference)." }, { "heading": "D.4 SUBJECTIVE EVALUATION OF SINGLE AGENT PLANS", "text": "To evaluate whether people think the human-like agent behaves similarly to humans given the same goals, we recruited another 8 subjects. We showed each subject 15 videos, each of which is a video replay of a human or the human-like agent performing one of the 30 tasks (we randomly selected one human video and one built-in agent video for each task). For each video, subjects were given the goal and asked to rate how much they agreed with the statement, “the character in the video behaves similarly to a human given the same goal in this apartment,” on a 5-point Likert scale (1 is “strongly disagree,” 3 is “neutral,” and 5 is “strongly agree”)4. The average ratings for the characters controlled by the human-like agent and by the real humans are 3.38 (±0.93) and 3.72 (±0.92) respectively. We found no significant difference between the ratings for the human-like agent’s plans and the ratings for the real humans’ plans in our tasks, as reported by a paired, two-tailed t-test (t(29) = −1.35, p = .19). This demonstrates that the proposed human-like agent can produce plans that are similar to real humans’ plans in our challenge.\nBased on the free responses collected from the subjects who rated these videos, human plans sometimes look slightly more efficient since they do not look for objects in unlikely places and avoid moving back and forth between rooms frequently.
The human-like agent behaves similarly most of the time but would occasionally search through the rooms in a counter-intuitive order due to its bounded rationality and the fact that plans are sampled stochastically." }, { "heading": "D.5 ADDITIONAL QUANTITATIVE ANALYSES OF HUMAN EXPERIMENT RESULTS", "text": "To evaluate whether the performance of a baseline AI agent helping the human-like agent reflects its performance when helping real humans, we conduct paired, two-tailed t-tests for the three baselines in Experiment 2 based on their cumulative rewards. For HPRG, there is a significant difference between helping the human-like agent and helping real humans (t(29) = −2.36, p = .03) as discussed in Section 6 and Appendix D.3. However, there is no significant difference for HP (t(29) = −1.78, p = .1) and Hybrid (t(29) = −0.5, p = .62). This validates that, in general, collaboration with the human-like agent is comparable to collaboration with real humans. Given these analyses, the training and evaluation procedure5 presented in this paper is both scalable and comprehensive.\n4Since we focus on the agents’ plans in this work, users were instructed to focus on the actions taken by the agents, rather than the graphical display of their body motion.\n5I.e., i) training AI agents with the human-like agent, and then ii) evaluating them both with the human-like agent (in a larger test set) and with real humans (in a smaller but representative test set)." } ]
2021
null
SP:bfe85369cfa71f6b26477f26d133751ac05b0536
[ "This paper proposes a new approach to learning control policies with improved data efficiency and fewer number of data collection sessions (with each session using a different policy). Further, the authors proposed a new concept of “deployment efficiency”, with a new “deployment” referring to using a new policy to interact with the real environment, for example, for data collection. The new approach belongs to the family of model-based online reinforcement learning algorithms and seems to primarily augment a prior approach, called ME-TRPO, by using a helper policy trained by behavior cloning data collected after the most recent deployment of learner policy. The experiment results validate that the proposed approach achieves better data efficiency and deployment efficiency compared to prior approaches." ]
Most reinforcement learning (RL) algorithms assume online access to the environment, in which one may readily interleave updates to the policy with experience collection using that policy. However, in many real-world applications such as health, education, dialogue agents, and robotics, the cost or potential risk of deploying a new data-collection policy is high, to the point that it can become prohibitive to update the data-collection policy more than a few times during learning. With this view, we propose a novel concept of deployment efficiency, measuring the number of distinct data-collection policies that are used during policy learning. We observe that naïvely applying existing model-free offline RL algorithms recursively does not lead to a practical deployment-efficient and sample-efficient algorithm. We propose a novel model-based algorithm, Behavior-Regularized Model-ENsemble (BREMEN), that not only performs better than or comparably to the state-of-the-art dynamic-programming-based and concurrently-proposed model-based offline approaches on existing benchmarks, but can also effectively optimize a policy offline using 10-20 times less data than prior works. Furthermore, the recursive application of BREMEN achieves impressive deployment efficiency while maintaining the same or better sample efficiency, learning successful policies from scratch on simulated robotic environments with only 5-10 deployments, compared to typical values of hundreds to millions in standard RL baselines. 1
[ { "affiliations": [], "name": "Tatsuya Matsushima" }, { "affiliations": [], "name": "Hiroki Furuta" }, { "affiliations": [], "name": "Yutaka Matsuo" }, { "affiliations": [], "name": "Ofir Nachum" }, { "affiliations": [], "name": "Shixiang Shane Gu" } ]
[ { "authors": [ "Fabian Abel", "Yashar Deldjoo", "Mehdi Elahi", "Daniel Kohlsdorf" ], "title": "Recsys challenge 2017: Offline and online evaluation", "venue": "In ACM Conference on Recommender Systems,", "year": 2017 }, { "authors": [ "Rishabh Agarwal", "Dale Schuurmans", "Mohammad Norouzi" ], "title": "An optimistic perspective on offline reinforcement learning", "venue": "arXiv preprint arXiv:1907.04543,", "year": 2019 }, { "authors": [ "Yu Bai", "Tengyang Xie", "Nan Jiang", "Yu-Xiang Wang" ], "title": "Provably efficient q-learning with low switching cost", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Gabriel Barth-Maron", "Matthew Hoffman", "David Budden", "Will Dabney", "Dan Horgan", "Dhruva TB", "Alistair Muldal", "Nicolas Heess", "Timothy Lillicrap" ], "title": "Distributed distributional deterministic policy gradients", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Rinu Boney", "Juho Kannala", "Alexander Ilin" ], "title": "Regularizing model-based planning with energy-based models", "venue": "In Conference on Robot Learning,", "year": 2019 }, { "authors": [ "Serkan Cabi", "Sergio Gómez Colmenarejo", "Alexander Novikov", "Ksenia Konyushova", "Scott Reed", "Rae Jeong", "Konrad Zolna", "Yusuf Aytar", "David Budden", "Mel Vecerik", "Oleg Sushkov", "David Barker", "Jonathan Scholz", "Misha Denil", "Nando de Freitas", "Ziyu Wang" ], "title": "Scaling data-driven robotics with reward sketching and batch reinforcement learning", "venue": "In Robotics: Science and Systems,", "year": 2020 }, { "authors": [ "Yinlam Chow", "Aviv Tamar", "Shie Mannor", "Marco Pavone" ], "title": "Risk-sensitive and robust decisionmaking: a cvar optimization approach", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Yinlam Chow", "Ofir Nachum", "Edgar Duenez-Guzman", "Mohammad Ghavamzadeh" ], "title": "A lyapunovbased approach to safe reinforcement learning", "venue": "In Advances in neural information processing systems,", "year": 2018 }, { "authors": [ "Yinlam Chow", "Ofir Nachum", "Aleksandra Faust", "Edgar Duenez-Guzman", "Mohammad Ghavamzadeh" ], "title": "Lyapunov-based safe policy optimization for continuous control", "venue": null, "year": 1901 }, { "authors": [ "Kurtland Chua", "Roberto Calandra", "Rowan McAllister", "Sergey Levine" ], "title": "Deep reinforcement learning in a handful of trials using probabilistic dynamics models", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ignasi Clavera", "Jonas Rothfuss", "John Schulman", "Yasuhiro Fujita", "Tamim Asfour", "Pieter Abbeel" ], "title": "Model-based reinforcement learning via meta-policy optimization", "venue": "In Conference on Robot Learning,", "year": 2018 }, { "authors": [ "Thomas Degris", "Martha White", "Richard S Sutton" ], "title": "Off-policy actor-critic", "venue": "arXiv preprint arXiv:1205.4839,", "year": 2012 }, { "authors": [ "Marc Deisenroth", "Carl E Rasmussen" ], "title": "PILCO: A model-based and data-efficient approach to policy search", "venue": "In International Conference on Machine Learning,", "year": 2011 }, { "authors": [ "Gabriel Dulac-Arnold", "Daniel Mankowitz", "Todd Hester" ], "title": "Challenges of real-world reinforcement learning", "venue": "arXiv preprint arXiv:1904.12901,", "year": 2019 }, { "authors": [ "Damien Ernst", "Pierre Geurts", "Louis Wehenkel" ], "title": "Tree-based batch mode reinforcement 
learning", "venue": "Journal of Machine Learning Research,", "year": 2005 }, { "authors": [ "Lasse Espeholt", "Hubert Soyer", "Remi Munos", "Karen Simonyan", "Vlad Mnih", "Tom Ward", "Yotam Doron", "Vlad Firoiu", "Tim Harley", "Iain Dunning" ], "title": "IMPALA: Scalable distributed deep-rl with importance weighted actor-learner architectures", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Lasse Espeholt", "Raphaël Marinier", "Piotr Stanczyk", "Ke Wang", "Marcin Michalski" ], "title": "SEED RL: Scalable and efficient deep-rl with accelerated central inference", "venue": null, "year": 1910 }, { "authors": [ "Benjamin Eysenbach", "Shixiang Gu", "Julian Ibarz", "Sergey Levine" ], "title": "Leave no trace: Learning to reset for safe and autonomous reinforcement learning", "venue": "International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Rasool Fakoor", "Pratik Chaudhari", "Alexander J. Smola" ], "title": "P3o: Policy-on policy-off policy optimization", "venue": "arXiv preprint arXiv:1905.01756,", "year": 2019 }, { "authors": [ "Justin Fu", "Aviral Kumar", "Ofir Nachum", "George Tucker", "Sergey Levine" ], "title": "D4RL: Datasets for deep data-driven reinforcement learning", "venue": "arXiv preprint arXiv:2004.07219,", "year": 2020 }, { "authors": [ "Scott Fujimoto", "David Meger", "Doina Precup" ], "title": "Off-policy deep reinforcement learning without exploration", "venue": "In International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Omer Gottesman", "Fredrik Johansson", "Joshua Meier", "Jack Dent", "Donghun Lee", "Srivatsan Srinivasan", "Linying Zhang", "Yi Ding", "David Wihl", "Xuefeng Peng", "Jiayu Yao", "Isaac Lage", "Christopher Mosch", "Li wei H. Lehman", "Matthieu Komorowski", "Aldo Faisal", "Leo Anthony Celi", "David Sontag", "Finale Doshi-Velez" ], "title": "Evaluating reinforcement learning algorithms in observational health settings", "venue": "arXiv preprint arXiv:1805.12298,", "year": 2018 }, { "authors": [ "Shixiang Gu", "Timothy Lillicrap", "Ilya Sutskever", "Sergey Levine" ], "title": "Continuous deep q-learning with model-based acceleration", "venue": "In International Conference on Machine Learning,", "year": 2016 }, { "authors": [ "Shixiang Gu", "Ethan Holly", "Timothy Lillicrap", "Sergey Levine" ], "title": "Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates", "venue": "In International Conference on Robotics and Automation,", "year": 2017 }, { "authors": [ "Shixiang Gu", "Timothy Lillicrap", "Zoubin Ghahramani", "Richard E. 
Turner", "Sergey Levine" ], "title": "Q-Prop: Sample-efficient policy gradient with an off-policy critic", "venue": "In International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Zhaohan Guo", "Emma Brunskill" ], "title": "Concurrent pac rl", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2015 }, { "authors": [ "Tuomas Haarnoja", "Aurick Zhou", "Pieter Abbeel", "Sergey Levine" ], "title": "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor", "venue": "In International Conference on Machine Learning,", "year": 2018 }, { "authors": [ "Nicolas Heess", "Gregory Wayne", "David Silver", "Timothy Lillicrap", "Tom Erez", "Yuval Tassa" ], "title": "Learning continuous control policies by stochastic value gradients", "venue": "In Advances in Neural Information Processing Systems,", "year": 2015 }, { "authors": [ "Matteo Hessel", "Joseph Modayil", "Hado Van Hasselt", "Tom Schaul", "Georg Ostrovski", "Will Dabney", "Dan Horgan", "Bilal Piot", "Mohammad Azar", "David Silver" ], "title": "Rainbow: Combining improvements in deep reinforcement learning", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Michael Janner", "Justin Fu", "Marvin Zhang", "Sergey Levine" ], "title": "When to trust your model: Model-based policy optimization", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Natasha Jaques", "Asma Ghandeharioun", "Judy Hanwen Shen", "Craig Ferguson", "Agata Lapedriza", "Noah Jones", "Shixiang Gu", "Rosalind Picard" ], "title": "Way off-policy batch deep reinforcement learning of implicit human preferences in dialog", "venue": "arXiv preprint arXiv:1907.00456,", "year": 2019 }, { "authors": [ "Dmitry Kalashnikov", "Alex Irpan", "Peter Pastor", "Julian Ibarz", "Alexander Herzog", "Eric Jang", "Deirdre Quillen", "Ethan Holly", "Mrinal Kalakrishnan", "Vincent Vanhoucke", "Sergey Levine" ], "title": "QT-Opt: Scalable deep reinforcement learning for vision-based robotic manipulation", "venue": "In Conference on Robot Learning,", "year": 2018 }, { "authors": [ "Rahul Kidambi", "Aravind Rajeswaran", "Praneeth Netrapalli", "Thorsten Joachims" ], "title": "MOReL : Modelbased offline reinforcement learning", "venue": "arXiv preprint arXiv:2005.05951,", "year": 2020 }, { "authors": [ "D. Kingma", "J. Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations,", "year": 2014 }, { "authors": [ "Aviral Kumar", "Justin Fu", "Matthew Soh", "George Tucker", "Sergey Levine" ], "title": "Stabilizing off-policy q-learning via bootstrapping error reduction", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Aviral Kumar", "Aurick Zhou", "George Tucker", "Sergey Levine" ], "title": "Conservative q-learning for offline reinforcement learning", "venue": "arXiv preprint arXiv:2006.04779,", "year": 2020 }, { "authors": [ "Thanard Kurutach", "Ignasi Clavera", "Yan Duan", "Aviv Tamar", "Pieter Abbeel" ], "title": "Model-Ensemble Trust-Region Policy Optimization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Sascha Lange", "Thomas Gabel", "Martin Riedmiller" ], "title": "Batch reinforcement learning", "venue": "In Reinforcement learning. 
Springer,", "year": 2012 }, { "authors": [ "Sergey Levine", "Aviral Kumar", "George Tucker", "Justin Fu" ], "title": "Offline reinforcement learning: Tutorial, review, and perspectives on open problems", "venue": "arXiv preprint arXiv:2005.01643,", "year": 2020 }, { "authors": [ "Timothy P. Lillicrap", "Jonathan J. Hunt", "Alexander Pritzel", "Nicolas Heess", "Tom Erez", "Yuval Tassa", "David Silver", "Daan Wierstra" ], "title": "Continuous control with deep reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Long-Ji Lin" ], "title": "Self-improving reactive agents based on reinforcement learning, planning and teaching", "venue": "Machine learning,", "year": 1992 }, { "authors": [ "Yao Liu", "Adith Swaminathan", "Alekh Agarwal", "Emma Brunskill" ], "title": "Off-policy policy gradient with state distribution correction", "venue": "arXiv preprint arXiv:1904.08473,", "year": 2019 }, { "authors": [ "Travis Mandel", "Yun-En Liu", "Sergey Levine", "Emma Brunskill", "Zoran Popovic" ], "title": "Offline policy evaluation across representations with applications to educational games", "venue": "In International Conference on Autonomous Agents and Multiagent Systems,", "year": 2014 }, { "authors": [ "Ajay Mandlekar", "Fabio Ramos", "Byron Boots", "Li Fei-Fei", "Animesh Garg", "Dieter Fox" ], "title": "IRIS: Implicit reinforcement without interaction at scale for learning control from offline robot manipulation data", "venue": null, "year": 1911 }, { "authors": [ "Volodymyr Mnih", "Koray Kavukcuoglu", "David Silver", "Andrei A Rusu", "Joel Veness", "Marc G Bellemare", "Alex Graves", "Martin Riedmiller", "Andreas K Fidjeland", "Georg Ostrovski" ], "title": "Human-level control through deep reinforcement learning", "venue": null, "year": 2015 }, { "authors": [ "Ofir Nachum", "Shixiang Shane Gu", "Honglak Lee", "Sergey Levine" ], "title": "Data-efficient hierarchical reinforcement learning", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Ofir Nachum", "Michael Ahn", "Hugo Ponte", "Shixiang Gu", "Vikash Kumar" ], "title": "Multi-agent manipulation via locomotion using hierarchical sim2real", "venue": "In Conference on Robot Learning,", "year": 2019 }, { "authors": [ "Anusha Nagabandi", "Gregory Kahn", "Ronald S. 
Fearing", "Sergey Levine" ], "title": "Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning", "venue": "International Conference on Robotics and Automation,", "year": 2018 }, { "authors": [ "Vaishnavh Nagarajan", "J Zico Kolter" ], "title": "Generalization in deep networks: The role of distance from initialization", "venue": "arXiv preprint arXiv:1901.01672,", "year": 2019 }, { "authors": [ "Arun Nair", "Praveen Srinivasan", "Sam Blackwell", "Cagdas Alcicek", "Rory Fearon", "Alessandro De Maria", "Vedavyas Panneershelvam", "Mustafa Suleyman", "Charles Beattie", "Stig Petersen" ], "title": "Massively parallel methods for deep reinforcement learning", "venue": "arXiv preprint arXiv:1507.04296,", "year": 2015 }, { "authors": [ "Ashvin Nair", "Murtaza Dalal", "Abhishek Gupta", "Sergey Levine" ], "title": "Accelerating online reinforcement learning with offline datasets", "venue": "arXiv preprint arXiv:2006.09359,", "year": 2020 }, { "authors": [ "Xue Bin Peng", "Aviral Kumar", "Grace Zhang", "Sergey Levine" ], "title": "Advantage-weighted regression: Simple and scalable off-policy reinforcement learning", "venue": null, "year": 1910 }, { "authors": [ "Matthias Plappert", "Marcin Andrychowicz", "Alex Ray", "Bob McGrew", "Bowen Baker", "Glenn Powell", "Jonas Schneider", "Josh Tobin", "Maciek Chociej", "Peter Welinder" ], "title": "Multi-goal reinforcement learning: Challenging robotics environments and request for research", "venue": "arXiv preprint arXiv:1802.09464,", "year": 2018 }, { "authors": [ "Doina Precup", "Richard S Sutton", "Sanjoy Dasgupta" ], "title": "Off-policy temporal-difference learning with function approximation", "venue": "In International Conference on Machine Learning,", "year": 2001 }, { "authors": [ "Aravind Rajeswaran", "Vikash Kumar", "Abhishek Gupta", "Giulia Vezzani", "John Schulman", "Emanuel Todorov", "Sergey Levine" ], "title": "Learning complex dexterous manipulation with deep reinforcement learning and demonstrations", "venue": "arXiv preprint arXiv:1709.10087,", "year": 2017 }, { "authors": [ "Aravind Rajeswaran", "Igor Mordatch", "Vikash Kumar" ], "title": "A game theoretic framework for model based reinforcement learning", "venue": "arXiv preprint arXiv:2004.07804,", "year": 2020 }, { "authors": [ "Alex Ray", "Joshua Achiam", "Dario Amodei" ], "title": "Benchmarking safe exploration in deep reinforcement learning", "venue": "arXiv preprint arXiv:1910.01708,", "year": 2019 }, { "authors": [ "Stéphane Ross", "Geoffrey Gordon", "Drew Bagnell" ], "title": "A reduction of imitation learning and structured prediction to no-regret online learning", "venue": "In International conference on artificial intelligence and statistics,", "year": 2011 }, { "authors": [ "Nicolas Le Roux" ], "title": "Efficient iterative policy optimization", "venue": "arXiv preprint arXiv:1612.08967,", "year": 2016 }, { "authors": [ "John Schulman", "Sergey Levine", "Philipp Moritz", "Michael Jordan", "Pieter Abbeel" ], "title": "Trust region policy optimization", "venue": "In International Conference on Machine Learning,", "year": 2015 }, { "authors": [ "John Schulman", "Filip Wolski", "Prafulla Dhariwal", "Alec Radford", "Oleg Klimov" ], "title": "Proximal policy optimization algorithms", "venue": "arXiv preprint arXiv:1707.06347,", "year": 2017 }, { "authors": [ "Noah Y. Siegel", "Jost Tobias Springenberg", "Felix Berkenkamp", "Abbas Abdolmaleki", "Michael Neunert", "Thomas Lampe", "Roland Hafner", "Martin A. 
Riedmiller" ], "title": "Keep doing what worked: Behavioral modelling priors for offline reinforcement learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Satinder P Singh", "Tommi Jaakkola", "Michael I Jordan" ], "title": "Learning without state-estimation in partially observable markovian decision processes", "venue": "In Machine Learning Proceedings. Elsevier,", "year": 1994 }, { "authors": [ "Satinder P. Singh", "Tommi Jaakkola", "Michael I. Jordan" ], "title": "Reinforcement learning with soft state aggregation", "venue": "In Advances in Neural Information Processing Systems,", "year": 1995 }, { "authors": [ "Sungryull Sohn", "Yinlam Chow", "Jayden Ooi", "Ofir Nachum", "Honglak Lee", "Ed Chi", "Craig Boutilier" ], "title": "BRPO: Batch residual policy optimization", "venue": "arXiv preprint arXiv:2002.05522,", "year": 2020 }, { "authors": [ "Richard S Sutton" ], "title": "Dyna, an integrated architecture for learning, planning, and reacting", "venue": "ACM Sigart Bulletin,", "year": 1991 }, { "authors": [ "Emanuel Todorov", "Tom Erez", "Yuval Tassa" ], "title": "Mujoco: A physics engine for model-based control", "venue": "In International Conference on Intelligent Robots and Systems,", "year": 2012 }, { "authors": [ "Tingwu Wang", "Xuchan Bao", "Ignasi Clavera", "Jerrick Hoang", "Yeming Wen", "Eric Langlois", "Shunshi Zhang", "Guodong Zhang", "Pieter Abbeel", "Jimmy Ba" ], "title": "Benchmarking model-based reinforcement learning", "venue": null, "year": 1907 }, { "authors": [ "Yifan Wu", "George Tucker", "Ofir Nachum" ], "title": "Behavior Regularized Offline Reinforcement Learning", "venue": "arXiv preprint arXiv:1911.11361,", "year": 2019 }, { "authors": [ "Tianhe Yu", "Garrett Thomas", "Lantao Yu", "Stefano Ermon", "James Zou", "Sergey Levine", "Chelsea Finn", "Tengyu Ma" ], "title": "MOPO: Model-based offline policy optimization", "venue": "arXiv preprint arXiv:2005.13239,", "year": 2020 }, { "authors": [ "Janner" ], "title": "2019), we denote the generalization error of a dynamics model on the state distribution under the true behavior policy as m = maxt", "venue": null, "year": 2019 }, { "authors": [ "Wu" ], "title": "however, termination function is enabled in Hopper and Walker2d. The batch size of transitions for policy update is 50,000 in BREMEN and ME-TRPO, following Kurutach et al", "venue": null, "year": 2018 }, { "authors": [ "Wang" ], "title": "DEPLOYMENT-EFFICIENT RL Table 4 shows the hyper-parameters of BREMEN. The rollout length is searched from {250, 500, 1000}, and max step size δ is searched from {0.001", "venue": "As for the discount factor γ and GAE λ,", "year": 2019 }, { "authors": [ "Wu" ], "title": "BRAC applies a primal form of KL value penalty, and BRAC (max Q) means sampling multiple actions and taking the maximum according to the learned Q function", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Reinforcement learning (RL) algorithms have recently demonstrated impressive success in learning behaviors for a variety of sequential decision-making tasks (Barth-Maron et al., 2018; Hessel et al., 2018; Nachum et al., 2019). Virtually all of these demonstrations have relied on highly-frequent online access to the environment, with the RL algorithms often interleaving each update to the policy with additional experience collection of that policy acting in the environment. However, in many real-world applications of RL, such as health (Murphy et al., 2001), education (Mandel et al., 2014), dialog agents (Jaques et al., 2019), and robotics (Gu et al., 2017a; Kalashnikov et al., 2018), the deployment of a new data-collection policy may be associated with a number of costs and risks. If we can learn tasks with a small number of data collection policies, we can substantially reduce them.\nBased on this idea, we propose a novel measure of RL algorithm performance, namely deployment efficiency, which counts the number of changes in the data-collection policy during learning, as illustrated in Figure 1. This concept may be seen in contrast to sample efficiency or data efficiency (Precup et al., 2001; Degris et al., 2012; Gu et al., 2017b; Haarnoja et al., 2018; Lillicrap et al., 2016; Nachum et al., 2018), which measures the amount of environment interactions incurred during training, without regard to how many distinct policies were deployed to perform those interactions. Even when the\n∗Equal contribution. 1Codes and pre-trained models are available at https://github.com/matsuolab/BREMEN.\ndata efficiency is high, the deployment efficiency could be low, since many on-policy and off-policy algorithms alternate data collection with each policy update (Schulman et al., 2015; Lillicrap et al., 2016; Gu et al., 2016; Haarnoja et al., 2018). Such dependence on high-frequency policy deployments is best illustrated in the recent works in offline RL (Fujimoto et al., 2019; Jaques et al., 2019; Kumar et al., 2019; Levine et al., 2020; Wu et al., 2019), where baseline off-policy algorithms exhibited poor performance when trained on a static dataset. These offline RL works, however, limit their study to a single deployment, which is enough for achieving high performance with data collected from a sub-optimal behavior policy, but often not from a random policy. In contrast to those prior works, we aim to learn successful policies from scratch in a manner that is both sample and deployment-efficient.\nMany existing model-free offline RL algorithms (Levine et al., 2020) are tuned and evaluated on massive datasets (e.g., one million transitions). In order to develop an algorithm that is both sample and deployment-efficient, each iteration of the algorithm between successive deployments has to work effectively on much smaller dataset sizes. We believe model-based RL is better suited to this setting due to its higher demonstrated sample efficiency than model-free RL (Kurutach et al., 2018; Nagabandi et al., 2018). Although the combination of model-based RL and offline or limiteddeployment settings seems straight-forward, we find this naïve approach leads to poor performance. This problem can be attributed to extrapolation errors (Fujimoto et al., 2019) similar to those observed in model-free methods. 
Specifically, the learned policy may choose sequences of actions which lead it to regions of the state space where the dynamics model cannot predict accurately due to poor coverage of the dataset. This can lead the policy to exploit approximation errors of the dynamics model and be disastrous for learning. In model-free settings, similar data distribution shift problems are typically remedied by regularizing policy updates explicitly with a divergence from the observed data distribution (Jaques et al., 2019; Kumar et al., 2019; Wu et al., 2019), which, however, can overly limit policies’ expressivity (Sohn et al., 2020).\nIn order to better approach these problems arising in limited deployment settings, we propose Behavior-Regularized Model-ENsemble (BREMEN), which learns an ensemble of dynamics models in conjunction with a policy using imaginary rollouts while implicitly regularizing the learned policy via appropriate parameter initialization and conservative trust-region learning updates. We evaluate BREMEN on standard offline RL benchmarks of high-dimensional continuous control tasks, where only a single static dataset is used. In this fixed-batch setting, our experiments show that BREMEN can not only achieve performance competitive with the state-of-the-art when using standard dataset sizes but also learn with 10-20 times smaller datasets, which previous methods are unable to attain. Enabled by such stable and sample-efficient offline learning, we show that BREMEN can learn successful policies with only 5-10 deployments in the online setting, significantly outperforming existing off-policy and offline RL algorithms in deployment efficiency while keeping sample efficiency." }, { "heading": "2 PRELIMINARIES", "text": "We consider a Markov Decision Process (MDP) setting, characterized by the tuple M = (S, A, p, r, γ), where S is the state space, A is the action space, p(s′|s, a) is the transition probability distribution or dynamics, r(s) is the reward function and γ ∈ (0, 1) is the discount factor. A policy π is a function that determines the agent behavior, mapping from states to probability distributions over actions. The goal is to obtain the optimal policy π∗, which maximizes the expected discounted sum of rewards. The transition probability p(s′|s, a) is usually unknown and is estimated with a parameterized dynamics model fφ (e.g. a neural network) in model-based RL. For simplicity, we assume that the reward function r(s) is known, and the reward can be computed for any arbitrary state, but we may extend to the unknown setting and predict it using a parameterized function.\nOn-policy vs Off-policy, Online vs Offline At a high level, most RL algorithms alternate many times between collecting a batch of transitions (deployments) and optimizing the policy (learning). If the algorithms discard data after each policy update, they are on-policy (Schulman et al., 2015; 2017), while if they accumulate data in a buffer D, i.e. experience replay (Lin, 1992), they are off-policy (Mnih et al., 2015; Lillicrap et al., 2016; Gu et al., 2016; 2017b; Haarnoja et al., 2018; Fujimoto et al., 2019; Fakoor et al., 2019) because not all the data in the buffer comes from the current policy. However, we consider all these algorithms to be online RL algorithms, since they involve many deployments during learning, ranging from hundreds to millions.
On the other hand, in pure offline RL, one does not assume direct interaction and learns a policy from only a fixed dataset, which effectively corresponds to a single deployment allowed for learning. Classically, interpolating these two extremes were semi-batch RL algorithms (Lange et al., 2012; Singh et al., 1995), which improve the policy through repetitions of collecting a large batch of transitions D = {(s, a, s′, r)} and performing many or full policy updates. While these semi-batch RL algorithms also realize good deployment efficiency, they have not been extensively studied with neural network function approximators or in off-policy settings with experience replay for scalable sample-efficient learning. In our work, we aim to have both high deployment efficiency and sample efficiency by developing an algorithm that can solve the tasks with minimal policy deployments as well as transition samples." }, { "heading": "3 DEPLOYMENT EFFICIENCY", "text": "Deploying a new policy for data collection can be associated with a number of costs and risks for many real-world applications like medicine, dialogue systems, or robotic control (Murphy et al., 2001; Mandel et al., 2014; Gu et al., 2017a; Kalashnikov et al., 2018; Nachum et al., 2019; Jaques et al., 2019). While there are abundant works on safety for RL (Chow et al., 2015; Eysenbach et al., 2018; Chow et al., 2018; Ray et al., 2019; Chow et al., 2019), they often do not provide guarantees in practice when combined with neural networks and stochastic optimization. It is therefore necessary to verify each policy before deployment (e.g. measuring the variance of rewards or checking out-of-bounds actions). Due to such costs associated with each deployment, it is desirable to minimize the number of distinct deployments needed during the learning process. Even ignoring safety considerations, frequent updates to a deployed policy can exacerbate communication bottlenecks in large-scale distributed RL systems, which are becoming more prevalent (Nair et al., 2015; Espeholt et al., 2018; 2019). We additionally discuss the importance of deployment efficiency in real-world applications in Appendix C.\nIn order to focus research on these practical bottlenecks, we propose a novel measure of RL algorithms, namely, deployment efficiency, which counts how many times the data-collection policy is changed while improving from a random policy to one that solves the task. For example, if an RL algorithm operates by using its learned policy to collect transitions from the environment I times, each time collecting a batch of B new transitions, then the number of deployments is I, while the total number of samples collected is I × B. The lower I is, the more deployment-efficient the algorithm is; in contrast, sample efficiency looks at I × B. Online RL algorithms, whether they are on-policy or off-policy, typically update the policy and acquire new transitions by deploying the newly updated policy at every iteration. This corresponds to performing hundreds to millions of deployments during learning on standard benchmarks (Haarnoja et al., 2018), which is severely deployment-inefficient. On the other hand, the offline RL literature only studies the case of a single deployment.
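To make the deployment/sample accounting above concrete, here is a small illustrative snippet; the magnitudes are the rough figures quoted in this paper and are used purely for illustration.

```python
# Deployment vs. sample accounting from the definition above: I deployments,
# each collecting a batch of B transitions, yield I * B total samples. The
# magnitudes below are the rough figures quoted in this paper and are used
# purely for illustration.

settings = {
    "online off-policy, per-step collection (SAC-like)": (1_000_000, 1),
    "model-based online (ME-TRPO-like)": (300, 3_000),
    "deployment-efficient target (BREMEN-like)": (10, 100_000),
}

for name, (num_deployments, batch) in settings.items():
    samples = num_deployments * batch
    print(f"{name}: I = {num_deployments:,} deployments, "
          f"I x B = {samples:,} samples")
```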
A deployment-efficient algorithm would stand in the middle of these two extremes and ideally learn a successful policy from scratch while deploying only a few distinct policies, as illustrated in Figure 1.\nRecent deep RL literature seldom emphasizes deployment efficiency, with few exceptions in specific applications (Kalashnikov et al., 2018) where such a learning procedure is necessary. Deployment-inefficient algorithms will fail in scenarios where the deployment of each new policy is exorbitantly expensive, such as safety-critical robotics or user-facing products. Although current state-of-the-art algorithms on continuous control have substantially improved sample or data efficiency, they have not been optimized for deployment efficiency. For example, SAC (Haarnoja et al., 2018), an efficient model-free off-policy algorithm, performs half a million to one million policy deployments during learning on MuJoCo (Todorov et al., 2012) benchmarks. ME-TRPO (Kurutach et al., 2018), a model-based algorithm, performs a much smaller number of policy deployments (100-300), although this is still relatively high for practical settings.2 In our work, we demonstrate successful learning on standard benchmark environments with only 5-10 deployments." }, { "heading": "4 BEHAVIOR-REGULARIZED MODEL-ENSEMBLE", "text": "To achieve a favorable combination of both high deployment and sample efficiency, we propose Behavior-Regularized Model-ENsemble (BREMEN). BREMEN incorporates Dyna-style (Sutton, 1991; Kurutach et al., 2018) model-based RL, learning an ensemble of dynamics models in conjunction with a policy using imaginary rollouts and behavior regularization via conservative trust-region updates." }, { "heading": "4.1 IMAGINARY ROLLOUT FROM MODEL ENSEMBLE", "text": "As in recent Dyna-style model-based RL methods (Kurutach et al., 2018; Wang et al., 2019), BREMEN uses an ensemble of $K$ deterministic dynamics models $\hat{f}_\phi = \{\hat{f}_{\phi_1}, \ldots, \hat{f}_{\phi_K}\}$ to alleviate the problem of model bias. Each model $\hat{f}_{\phi_i}$ is parameterized by $\phi_i$ and trained by the following objective, which minimizes the mean squared error between the predicted next state $\hat{f}_{\phi_i}(s_t, a_t)$ and the true next state $s_{t+1}$ over a dataset $D$:\n$$\min_{\phi_i} \frac{1}{|D|} \sum_{(s_t, a_t, s_{t+1}) \in D} \frac{1}{2} \left\| s_{t+1} - \hat{f}_{\phi_i}(s_t, a_t) \right\|_2^2. \quad (1)$$\nDuring training of a policy $\pi_\theta$, imagined trajectories of states and actions are generated sequentially, using a dynamics model $\hat{f}_{\phi_i}$ that is randomly selected at each time step:\n$$a_t \sim \pi_\theta(\cdot|\hat{s}_t), \quad \hat{s}_{t+1} = \hat{f}_{\phi_i}(\hat{s}_t, a_t), \quad \text{where } i \sim \{1, \cdots, K\}. \quad (2)$$" }, { "heading": "4.2 POLICY UPDATE WITH BEHAVIOR REGULARIZATION", "text": "In order to manage the discrepancy between the true dynamics and the learned model caused by the distribution shift in batch settings, we propose to use iterative policy updates via a trust-region constraint, re-initialized with a behavior-cloned policy after every deployment. Specifically, after each deployment, we are given an updated dataset of experience transitions $D$. With this dataset, we approximate the true behavior policy $\pi_b$ through behavior cloning (BC), utilizing a neural network $\hat{\pi}_\beta$ parameterized by $\beta$, where we implicitly assume a fixed variance, a common practice in BC (Rajeswaran et al., 2017):\n$$\min_{\beta} \frac{1}{|D|} \sum_{(s_t, a_t) \in D} \frac{1}{2} \left\| a_t - \hat{\pi}_\beta(s_t) \right\|_2^2. \quad (3)$$\nAfter obtaining the estimated behavior policy, we initialize the target policy $\pi_\theta$ as a Gaussian policy with mean from $\hat{\pi}_\beta$ and standard deviation of 1.
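As an illustration of Equations 1-3, the following is a minimal PyTorch-style sketch of the ensemble fitting, randomized imaginary rollouts, and behavior-cloning initialization described above. It is one possible reading of the method, not the authors' code; the layer widths follow Appendix F.1, while everything else (activations, optimizer settings, toy data) is an assumption.

```python
import torch
import torch.nn as nn

# Sketch of BREMEN's components: Eq. 1 fits each ensemble member by MSE on
# next-state prediction; Eq. 2 rolls out imaginary states with a randomly
# chosen member per step; Eq. 3 behavior-clones a mean network that
# initializes the unit-variance Gaussian target policy.

def mlp(inp, out, hidden):
    return nn.Sequential(nn.Linear(inp, hidden), nn.ReLU(),
                         nn.Linear(hidden, hidden), nn.ReLU(),
                         nn.Linear(hidden, out))

class DynamicsEnsemble:
    def __init__(self, s_dim, a_dim, K=5, hidden=1024):  # 2x1024 per F.1
        self.models = [mlp(s_dim + a_dim, s_dim, hidden) for _ in range(K)]

    def fit(self, s, a, s_next, epochs=5, lr=1e-3):  # Eq. 1
        for f in self.models:
            opt = torch.optim.Adam(f.parameters(), lr=lr)
            for _ in range(epochs):
                loss = 0.5 * ((s_next - f(torch.cat([s, a], -1))) ** 2).sum(-1).mean()
                opt.zero_grad(); loss.backward(); opt.step()

    def step(self, s, a):  # Eq. 2: random ensemble member at each time step
        f = self.models[torch.randint(len(self.models), (1,)).item()]
        with torch.no_grad():
            return f(torch.cat([s, a], -1))

def behavior_clone(s, a, hidden=200, epochs=5, lr=1e-3):  # Eq. 3
    pi = mlp(s.shape[-1], a.shape[-1], hidden)  # mean network, unit std assumed
    opt = torch.optim.Adam(pi.parameters(), lr=lr)
    for _ in range(epochs):
        loss = 0.5 * ((a - pi(s)) ** 2).sum(-1).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return pi  # target policy is then initialized as Normal(pi(s), 1)

# Toy usage on random data
s, a, s_next = torch.randn(256, 4), torch.randn(256, 2), torch.randn(256, 4)
ens = DynamicsEnsemble(4, 2)
ens.fit(s, a, s_next)
pi0 = behavior_clone(s, a)
print(ens.step(s[:1], a[:1]).shape, pi0(s[:1]).shape)
```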
This BC initialization in conjunction with gradient-descent-based optimization may be seen as implicitly biasing the optimized $\pi_\theta$ to be close to the data-collection policy (Nagarajan & Kolter, 2019), and thus works as a remedy for the distribution shift problem (Ross et al., 2011). To further bias the learned policy to be close to the data-collection policy, we opt to use a KL-based trust-region optimization (Schulman et al., 2015). Therefore, the optimization of BREMEN becomes\n$$\theta_{k+1} = \operatorname*{argmax}_{\theta} \ \mathbb{E}_{s, a \sim \pi_{\theta_k}, \hat{f}_{\phi_i}} \left[ \frac{\pi_\theta(a|s)}{\pi_{\theta_k}(a|s)} A_{\pi_{\theta_k}}(s, a) \right] \quad (4)$$\n$$\text{s.t.} \ \ \mathbb{E}_{s \sim \pi_{\theta_k}, \hat{f}_{\phi_i}} \left[ D_{KL}\left( \pi_\theta(\cdot|s) \,\|\, \pi_{\theta_k}(\cdot|s) \right) \right] \le \delta, \qquad \pi_{\theta_0} = \mathrm{Normal}(\hat{\pi}_\beta, 1),$$\nwhere $A_{\pi_{\theta_k}}(s, a)$ is the advantage of $\pi_{\theta_k}$ computed using model-based rollouts in the learned dynamics model and $\delta$ is the maximum step size.\n2We examined the number of deployments by checking their original implementations, while the frequency of data collection is a tunable hyper-parameter.\nAlgorithm 1 BREMEN for Deployment-Efficient RL\nInput: Empty datasets Dall and D, initial parameters φ = {φ1, · · · , φK} and β, number of policy optimization steps T, number of deployments I.\n1: Randomly initialize the target policy πθ.\n2: for deployment i = 1, · · · , I do\n3: Collect B transitions in the true environment using πθ and add them to the datasets: Dall ← Dall ∪ {st, at, rt, st+1}, D ← {st, at, rt, st+1}.\n4: Train K dynamics models f̂φ using Dall via Equation 1.\n5: Train the estimated behavior policy π̂β using D by behavior cloning via Equation 3.\n6: Re-initialize the target policy πθ0 = Normal(π̂β, 1).\n7: for policy optimization k = 1, · · · , T do\n8: Generate imaginary rollouts via Equation 2.\n9: Optimize the target policy πθ satisfying Equation 4 with the rollouts.\nThe combination of BC for initialization and finite iterative trust-region updates serves as an implicit KL regularization. This is in contrast to many previous offline RL algorithms that augment the value function with an explicit KL-divergence penalty (Siegel et al., 2020; Wu et al., 2019) or a maximum mean discrepancy penalty (Kumar et al., 2019). Empirically, we found that our regularization technique outperforms the explicit KL penalty (Section 5.3). Furthermore, we provide a mathematical intuition explaining how our method works as an implicit regularization of distributional shift in Appendix A.\nBy recursively performing this offline procedure, BREMEN can be used for deployment-efficient learning as shown in Algorithm 1, starting from a randomly initialized policy, collecting experience data, and performing offline policy updates." }, { "heading": "5 EXPERIMENTS", "text": "In order to realize a deployment-efficient RL algorithm, the batch policy optimizer has to be stable and sample-efficient. We first evaluate BREMEN in the offline setting, where the algorithm learns the policy from a static dataset. Standard benchmarks on the MuJoCo physics simulator from Wu et al. (2019) and more recent datasets (Fu et al., 2020) are used in the evaluation, and we compare the asymptotic performance of BREMEN with other offline RL methods, including the concurrent model-based approaches. We then test the sample-efficiency of offline algorithms using smaller datasets. We lastly extend the experiments to deployment-efficient settings, where the algorithms learn their policies from scratch via a limited number of deployments, and perform some ablations to see how components in BREMEN affect performance. See Appendix F for further details.
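As a complement to Algorithm 1, its outer loop can be sketched schematically as follows; every helper function here is a stub standing in for the corresponding step in the text, not a real API.

```python
import random

# Runnable-on-stubs schematic of Algorithm 1's outer loop. All helpers are
# placeholders for the corresponding steps in the text.

def collect(policy, batch_size):          # line 3: deploy policy, gather B transitions
    return [("s", policy("s"), 0.0, "s_next") for _ in range(batch_size)]

def fit_ensemble(data_all):               # line 4: Eq. 1 on all data collected so far
    return ["model"] * 5                  # K = 5 dynamics models

def behavior_clone(data_last):            # lines 5-6: Eq. 3 on the latest batch only
    return lambda s: 0.0

def trpo_step(policy, models):            # lines 7-9: Eq. 4, KL step size <= delta
    return policy

def bremen(num_deployments=5, batch_size=1000, policy_updates=25):
    policy = lambda s: random.random()    # line 1: random initial policy
    data_all = []
    for _ in range(num_deployments):      # I deployments in total
        batch = collect(policy, batch_size)
        data_all += batch                 # D_all accumulates; D is the latest batch
        models = fit_ensemble(data_all)
        policy = behavior_clone(batch)    # re-initialize from behavior cloning
        for _ in range(policy_updates):
            policy = trpo_step(policy, models)
    return policy

bremen()
```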
}, { "heading": "5.1 EVALUATING OFFLINE RL PERFORMANCES", "text": "Standard Benchmarks We evaluate BREMEN on standard offline RL benchmarks following and identical protocol as in Wu et al. (2019): We first train online SAC to a certain cumulative reward threshold, 4,000 in HalfCheetah, 1,000 in Ant, Hopper, and Walker2d, and collect offline datasets. We evaluate agents with the offline dataset of one million (1M) transitions, which is standard for BCQ and BRAC. Table 1 (top) shows that BREMEN can achieve performance competitive with state-of-the-art model-free offline RL algorithms when using the standard dataset size of 1M. We also test BREMEN with more recent benchmarks of D4RL (Fu et al., 2020) and compared the performance with the existing model-free and model-based methods. See Appendix D for the results.\nEvaluating Sample-Efficiency We then evaluate the sample-efficiency by making much smaller datasets of 50k and 100k transitions (5∼10 % of Wu et al. (2019)). Surprisingly, Table 1 (middle and bottom) shows that BREMEN can also learn with smaller datasets, where BCQ and BRAC are unable to exceed even BC baseline. This is a novel evaluation protocol we proposed, and our BREMEN’s superior performance here is exactly what enables recursive BREMEN in the next section to be an effective algorithm in deployment-constrained settings." }, { "heading": "5.2 EVALUATING DEPLOYMENT EFFICIENCY IN ONLINE RL BENCHMARKS", "text": "We compare BREMEN to ME-TRPO, SAC, BCQ, and BRAC applied to limited deployment settings. To adapt offline methods (BCQ, BRAC) to this setting, we simply apply them in a recursive fashion;3 at each deployment iteration, we collect a batch of data with the most recent policy and then run the offline update with this dataset. As for SAC, we simply change the replay buffer to update only at specific deployment intervals. For the sake of comparison, we align the number of deployments and the amount of data collection at each deployment (either 100k or 200k) for all methods.4\nFigure 2 shows the results with 200k (top) and 100k (bottom) batched transitions per deployment. Regardless of the environments and the batch size per update, BREMEN achieves remarkable performance while existing online and offline RL methods struggle to make any progress. As a point of comparison, we also include results for online SAC and ME-TRPO without deployment-limits but using the same number of transitions. We additionally compare BREMEN to the model-based offline RL methods with uncertainty-based penalties. See Appendix E for further details.\nFollowing the motivation of deployment efficiency, obtaining a successful policy under data-collection constraint conditions in the real application, we extensively evaluate our algorithm on more realistic robotics environments in OpenAI Gym. The experimental procedure is the same as above, while\n3Recursive BCQ and BRAC also do behavioral cloning-based policy initialization after each deployment. 4We evaluate the trade-off between sample and deployment efficiency in Appendix B.\nwe limit the batch size at each deployment as only 25k. Figure 3 presents the reaching tasks with Fetch robot and 20-DoF shadow hand (Plappert et al., 2018), and the experimental results in both environments. Only BREMEN shows stable improvement and high performance, while other offline and online algorithms fail to learn. These results suggest that a model-based method is a desirable approach for satisfying practical requirements in robotics, i.e. sample and deployment efficiency." 
}, { "heading": "5.3 ABLATION: EVALUATING EFFECTIVENESS OF IMPLICIT KL CONTROL", "text": "In this section, we present an experiment to better understand the effect of BREMEN’s implicit regularization. Figure 4 shows the KL divergence of learned policies from the last deployed policy. We compare BREMEN to variants of BREMEN that use an explicit KL penalty on value instead of BC initialization (conservative KL trust-region updates are still used). We find that the explicit KL without behavior initialization variants learn policies that move farther away from the last deployed policy than behavior initialized policies. This suggests that the implicit behavior regularization employed by BREMEN is more effective as a conservative policy learning protocol. In addition, to assess the effect of repeated behavior cloning initialization, we also evaluate a variant of BREMEN without behavior cloning re-initialization (grey). This variant works in easier environments (Ant, Halfcheetah), but does not show remarkable progress in more challenging ones with termination (Hopper, Walker2d).\nThis result empirically supports the need for repeated behavior initialization after each deployment. The results of further experiments are shown in Appendix G." }, { "heading": "6 RELATED WORK", "text": "Deployment Efficiency and Offline RL Although we are not aware of any previous works which explicitly proposed the concept of deployment efficiency, its necessity in many real-world applications has been generally known. One may consider previously proposed semi-batch RL algorithms (Ernst et al., 2005; Lange et al., 2012; Singh et al., 1994; Roux, 2016) or theoretical analysis of switching cost under the tabular PAC-MDP settings (Bai et al., 2019; Guo & Brunskill, 2015) as approaching this issue. More recently, a related but distinct problem known as offline RL has gained popularity (Levine et al., 2020; Wu et al., 2019; Agarwal et al., 2019; Kumar et al., 2020). These works consider an extreme version of 1 deployment, and typically collect the static batch with a partially trained policy rather than a random policy. While offline RL has shown promising results for a variety of real-world applications, such as robotics (Mandlekar et al., 2019), dialogue systems (Jaques et al., 2019), or medical treatments (Gottesman et al., 2018), these algorithms struggle when learning a policy from scratch or when the dataset is small. Nevertheless, common themes of many offline RL algorithms – regularizing the learned policy to the behavior policy (Fujimoto et al., 2019; Kumar et al., 2019; Siegel et al., 2020; Wu et al., 2019) and utilizing ensembles to handle uncertainty (Kumar et al., 2019; Wu et al., 2019) – served as inspirations for the proposed our algorithm. A major difference of BREMEN from prior works is that the target policy is not explicitly forced to stick close to the estimated behavior policy through the policy update except for the initial iteration. Rather, BREMEN employs a more implicit regularization by initializing the learned policy with a behavior cloned policy and then applying conservative trust-region updates. Another major difference is the application of model-based approaches to fully offline settings, which has not been extensively studied in prior works (Levine et al., 2020), except the two concurrent works (Kidambi et al., 2020; Yu et al., 2020) that study pessimistic or uncertainty penalized MDPs with guarantees – closely related to Liu et al. (2019). 
By contrast, our work shows that a simple technique can already enable model-based offline algorithms to significantly outperform the prior model-free methods, and is, to the best of our knowledge, the first to define and extensively evaluate deployment efficiency with recursive experiments.\nModel-Based RL There are many types of model-based RL algorithms (Sutton, 1991; Deisenroth & Rasmussen, 2011; Heess et al., 2015). A simple algorithmic choice is Dyna-style (Sutton, 1991), which uses a parameterized model to estimate the true MDP transition function, stochastically mapping states and actions to next states. The dynamics model can then serve as a simulator of the environment during policy updates. Dyna-style algorithms often suffer from distributional shift, also known as model bias, which leads RL agents to exploit regions where the data is insufficient and causes significant performance degradation. A variety of remedies have been proposed to relieve the issue of model bias, such as the use of multiple dynamics models as an ensemble (Chua et al., 2018; Kurutach et al., 2018; Janner et al., 2019), meta-learning (Clavera et al., 2018), energy-based regularizers (Boney et al., 2019), game-theoretic frameworks (Rajeswaran et al., 2020), and explicit penalties for unknown states (Kidambi et al., 2020; Yu et al., 2020). Notably, we have employed a subset of these remedies – model ensembles and trust-region updates (Kurutach et al., 2018) – for BREMEN. Compared to prior works, our work is notable for using BC initialization in conjunction with trust-region updates to alleviate the distribution shift of the learned policy from the dataset used to train the dynamics model." }, { "heading": "7 CONCLUSION", "text": "In this work, we introduced deployment efficiency, a novel measure for RL performance that counts the number of changes in the data-collection policy during learning. To enhance deployment efficiency, we proposed a novel model-based offline algorithm, Behavior-Regularized Model-ENsemble (BREMEN), combining model ensembles and trust-region updates from the model-based RL literature (Kurutach et al., 2018) with policy initialization via behavior cloning from the offline RL literature (Fujimoto et al., 2019; Wu et al., 2019). Crucially, BREMEN can improve policies offline sample-efficiently even when the batch size is 10-20 times smaller than in prior works, allowing BREMEN to achieve impressive results in limited deployment settings, obtaining successful policies from scratch in only 5-10 deployments. Not only can this help alleviate costs and risks in real-world applications, but it can also reduce the amount of communication required during distributed learning and could form the basis for communication-efficient large-scale RL in contrast to prior works (Nair et al., 2015; Espeholt et al., 2018; 2019). Most critically, we show that under deployment efficiency constraints, most prior algorithms – model-free or model-based, online or offline – fail to achieve successful learning. One possible direction for future work is to take verification efficiency into consideration, since a stochastic multi-modal policy could collect more diverse transitions while it takes more trajectories to be verified for safety than a uni-modal policy. While we presented promising results on some realistic simulated environments, validating BREMEN on real robots is another direction.
We hope our work can encourage the research community to value deployment efficiency as an important criterion for RL algorithms, and to eventually achieve sample efficiency and asymptotic performance similar to state-of-the-art algorithms like SAC (Haarnoja et al., 2018) while attaining the deployment efficiency well-suited for safe and practical real-world reinforcement learning." }, { "heading": "APPENDIX", "text": "A IMPLICIT KL CONTROL FROM A MATHEMATICAL PERSPECTIVE\nWe can intuitively understand that behavior cloning initialization with trust-region updates works as a regularization of distributional shift, and this can be supported by theory. Following the notation of Janner et al. (2019), we denote the generalization error of a dynamics model on the state distribution under the true behavior policy as $\epsilon_m = \max_t \mathbb{E}_{s \sim d_t^{\pi_b}} D_{TV}(p(s_{t+1}|s_t, a_t) \,\|\, p_\phi(s_{t+1}|s_t, a_t))$, where $D_{TV}$ represents the total variation distance between the true dynamics $p$ and the learned model $p_\phi$. We also denote the distribution shift on the target policy as $\max_s D_{TV}(\pi_b \,\|\, \pi) \le \epsilon_\pi$. A bound relating the true returns $\eta[\pi]$ and the model returns $\hat{\eta}[\pi]$ on the target policy is given in Janner et al. (2019) as\n$$\eta[\pi] \ge \hat{\eta}[\pi] - \left[ \frac{2\gamma r_{\max}(\epsilon_m + 2\epsilon_\pi)}{(1-\gamma)^2} + \frac{4 r_{\max} \epsilon_\pi}{1-\gamma} \right]. \quad (5)$$\nThis bound guarantees improvement under the true returns as long as the improvement under the model returns exceeds the slack in the bound due to $\epsilon_m, \epsilon_\pi$ (Janner et al., 2019; Levine et al., 2020).\nWe may relate this bound to the specific learning employed by BREMEN, which includes dynamics model learning, behavior cloning policy initialization, and conservative KL-based trust-region policy updates. To do so, we consider an idealized version of BREMEN, where the expectations over states in Equations 1, 3, and 4 are replaced with supremums and the dynamics model is set to have unit variance.\nProposition 1 (Policy and model error bound). Suppose we apply the idealized BREMEN on a dataset $D$, and define $\epsilon_\beta$, $\epsilon_\phi$ in terms of the behavior cloning and dynamics model losses as\n$$\epsilon_\beta := \sup_s \mathbb{E}_{a \sim D(\cdot|s)}\left[\|a - \hat{\pi}_\beta(s)\|_2^2 / 2\right] - \mathcal{H}(\pi_b(\cdot|s))$$\n$$\epsilon_\phi := \sup_{s,a} \mathbb{E}_{s' \sim D(\cdot|s,a)}\left[\|s' - \hat{f}_\phi(s, a)\|_2^2 / 2\right] - \mathcal{H}(p(\cdot|s,a)),$$\nwhere $\mathcal{H}$ denotes the Shannon entropy. If one then applies $T$ KL-based trust-region steps of step size $\delta$ (Equation 4) using stochastic dynamics models with mean $\hat{f}_\phi$ and standard deviation 1, then\n$$\epsilon_\pi = \sqrt{\tfrac{1}{2}\epsilon_\beta + \tfrac{d_a}{4}\log 2\pi} + T\sqrt{\tfrac{1}{2}\delta}; \qquad \epsilon_m \le \sqrt{\tfrac{1}{2}\epsilon_\phi + \tfrac{d_s}{4}\log 2\pi},$$\nwhere $d_a$ and $d_s$ denote the dimensions of the action and state spaces.\nProof. We first consider $\epsilon_\pi$. The behavior cloning objective in its supremum form is\n$$\epsilon_\beta = \sup_{s \in D} \mathbb{E}_{a \sim D(\cdot|s)}\left[\|a - \hat{\pi}_\beta(s)\|_2^2 / 2\right] - \mathcal{H}(\pi_b(\cdot|s)) = \sup_{s \in D} \mathbb{E}_{a \sim D(\cdot|s)}\left[-\log \pi_{\theta_0}(a|s)\right] - \mathcal{H}(\pi_b(\cdot|s)) - \tfrac{d_a}{2}\log 2\pi = \sup_{s \in D} D_{KL}(\pi_b(\cdot|s) \,\|\, \pi_{\theta_0}(\cdot|s)) - \tfrac{d_a}{2}\log 2\pi.$$\nWe apply Pinsker’s inequality to the true and estimated behavior policies to yield\n$$\sup_s D_{TV}(\pi_b(\cdot|s) \,\|\, \pi_{\theta_0}(\cdot|s)) \le \sqrt{\tfrac{1}{2}\epsilon_\beta + \tfrac{d_a}{4}\log 2\pi}.$$\nBy the same Pinsker’s inequality, we have\n$$\sup_s D_{TV}(\pi_{\theta_k}(\cdot|s) \,\|\, \pi_{\theta_{k+1}}(\cdot|s)) \le \sqrt{\tfrac{1}{2}\delta}.$$\nTherefore, by the triangle inequality, we have\n$$\sup_s D_{TV}(\pi_b(\cdot|s) \,\|\, \pi_{\theta_T}(\cdot|s)) \le \sqrt{\tfrac{1}{2}\epsilon_\beta + \tfrac{d_a}{4}\log 2\pi} + T\sqrt{\tfrac{1}{2}\delta} = \epsilon_\pi,$$\nas desired.\nWe perform similarly for $\epsilon_m$. The model dynamics loss is\n$$\epsilon_\phi = \sup_{s,a} \mathbb{E}_{s' \sim D(\cdot|s,a)}\left[\|s' - \hat{f}_\phi(s,a)\|_2^2 / 2\right] - \mathcal{H}(p(\cdot|s,a)) = \sup_{s,a} \mathbb{E}_{s' \sim D(\cdot|s,a)}\left[-\log p_\phi(s'|s,a)\right] - \mathcal{H}(p(\cdot|s,a)) - \tfrac{d_s}{2}\log 2\pi = \sup_{s,a} D_{KL}(p(\cdot|s,a) \,\|\, p_\phi(\cdot|s,a)) - \tfrac{d_s}{2}\log 2\pi.$$\nWe apply Pinsker’s inequality to the true dynamics and the learned model to yield\n$$\epsilon_m \le \sup_{s,a} D_{TV}(p(\cdot|s,a) \,\|\, p_\phi(\cdot|s,a)) \le \sqrt{\tfrac{1}{2}\epsilon_\phi + \tfrac{d_s}{4}\log 2\pi},$$\nas desired."
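The Pinsker steps used in the proof above can be sanity-checked numerically. The snippet below is an independent check (not part of the paper) that $D_{TV} \le \sqrt{D_{KL}/2}$ holds for pairs of unit-variance Gaussians, the policy class assumed in the idealized BREMEN.

```python
import math

# Independent numerical sanity check (not from the paper) of the Pinsker
# steps used in the proof: D_TV <= sqrt(D_KL / 2). For two unit-variance
# Gaussians N(m1, 1) and N(m2, 1) with mean gap d = |m1 - m2|, closed forms:
#   D_KL = d**2 / 2
#   D_TV = 2 * Phi(d / 2) - 1   (the densities cross at the midpoint)

def std_normal_cdf(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

for d in [0.01, 0.1, 0.5, 1.0, 2.0, 5.0]:
    kl = d ** 2 / 2.0
    tv = 2.0 * std_normal_cdf(d / 2.0) - 1.0
    bound = math.sqrt(kl / 2.0)  # Pinsker bound, equals d / 2 here
    assert tv <= bound + 1e-12
    print(f"gap={d:4.2f}  D_TV={tv:.4f}  Pinsker bound={bound:.4f}")
```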
}, { "heading": "B TRADE-OFF BETWEEN SAMPLE AND DEPLOYMENT EFFICIENCY", "text": "An important aspect of deployment efficiency is the trade-off between sample and deployment efficiency. To collect multiple data points per experiment and show this trade-off, we run recursive BREMEN with different batch sizes, and record how many samples are required to cross different reward thresholds.\nHalfCheetah (Reward 7,000 result) and other results from Figure 5 generally show that high deployment efficiency lowers sample efficiency, confirming the inherent trade-off. However, in rare cases, e.g. Ant (Reward 5,000 result), it could be possible to achieve both high deployment efficiency and high sample efficiency through the right choice of the batch size hyper-parameter." }, { "heading": "C DISCUSSION: IMPORTANCE OF DEPLOYMENT EFFICIENCY IN REAL-WORLD APPLICATIONS", "text": "Our notion of deployment-efficiency is necessitated by cost and safety constraints typical in many real world scenarios. Namely, a common approach to real-world applications (Cabi et al., 2020; DulacArnold et al., 2019; Kalashnikov et al., 2018) is the following iterative training and data-collection paradigm:\n1. Aggregate past previous dataset from worker(s) 2. Update policy based on the collected data 3. Deploy the policy to the worker(s) 4. Monitor the policy works as expected e.g. checking if it does not violate safety criterion\n(this safety verification step may alternatively happen before step 3) 5. Let the worker(s) collect experiences with the latest policy.\nIt is easy to see that the number of deployments is a critical bottleneck, as it involves both monitoring of the policy (Step 4) and communication to the workers (Step 3), and both of these steps can incur significant cost. Specifically, Step 4 requires evaluating the policy for safety, and often requires human monitors (Atkeson et al., 2015). As for Step 3, communication to workers can also be a bottleneck, especially in highly-parallelized distributed RL systems (Nair et al., 2015; Espeholt et al., 2018; 2019). Every policy deployment requires a potentially expensive communication between different machines/processes, and this can be a bottleneck on the whole system.\nAs a concrete example of the necessity of good deployment efficiency, consider optimization of personalization in web apps or recommender systems (Abel et al., 2017). Once a policy is learned on a batch of past experiences, it is deployed to a collection of web-hosting servers. In this scenario, both safety and communication concerns are relevant: Safety of the new policy is typically ensured by initially deploying the policy to a small percentage of users; after monitoring the results for some length of time (e.g. the newly deployed policy does not deteriorate user experiences), one can expand the target user set. As for communication, deploying a new policy to web-hosting servers can be time intensive, especially in large-scale web applications where the policy must be deployed to a network of servers around the world. Thus, in this setting, it is clear that online updating of the policy is infeasible due to both safety and communication constraints. Accordingly, the deployment-efficiency of any candidate RL algorithm is of tantamount importance.\nThe safe exploration might be mentioned as a potential alternative to deployment-efficiency. 
While safe exploration can arguably tackle the first concern above (safety risks of the policy), it does nothing to mitigate the latter (the engineering and communication costs associated with online deployment of a policy). Furthermore, this still ignores the fact that in many scenarios the ability to do safe exploration is not a given. While some safe RL algorithms can provide guarantees in tabular cases, these guarantees no longer hold when using function approximation with neural networks (Chow et al., 2018). In these cases, it can be much more difficult to perform “safe exploration” than it is to develop a deployment-efficient algorithm." }, { "heading": "D EVALUATING OFFLINE PERFORMANCES ON D4RL DATASETS", "text": "We compare BREMEN to MOPO (Yu et al., 2020), a concurrently proposed model-based offline method that penalizes rewards with model epistemic uncertainty, and to state-of-the-art model-free offline algorithms, namely CQL (Kumar et al., 2020), BEAR (Kumar et al., 2019), BRAC (Wu et al., 2019), AWR (Peng et al., 2019), and BCQ (Fujimoto et al., 2019), on the D4RL MuJoCo locomotion datasets (Fu et al., 2020), which are used as standard offline RL benchmarks (Kumar et al., 2020; Nair et al., 2020). These benchmarks contain several types of offline data collected with different strategies. We use the hyper-parameters of BREMEN from Sections 5.1 and F.2.2. Table 2 shows that BREMEN achieves the highest normalized score (a score around 100 corresponds to an expert) in several tasks, while none of the methods consistently achieves the best performance. This result suggests that implicit regularization with the model-based method performs surprisingly well in offline settings despite its simplicity.

E INCORPORATING PESSIMISTIC MODEL-BASED OFFLINE METHODS INTO BREMEN

Concurrent model-based offline RL methods prescribe the use of uncertainty-based penalties (Kidambi et al., 2020; Yu et al., 2020), which can be incorporated into BREMEN. We therefore augmented BREMEN with either a hard (MOReL-like, green) or soft (MOPO-like, orange) reward penalty according to model uncertainty. MOReL quantifies uncertainty by measuring the maximum discrepancy of the predictions across the ensemble of models, and assigns a constant negative reward (-5.0 in our experiments) if the discrepancy is larger than a threshold (set to 3.0). MOPO measures uncertainty by the maximum standard deviation of the model ensemble and uses this as a reward penalty with a coefficient (0.1 in our experiments). Evaluations in Figure 6 reveal that the soft reward penalty yields notable results in Hopper and Walker2d, where model uncertainty is more crucial due to episode termination. The hard reward penalty seems overly pessimistic in deployment-efficient settings." }, { "heading": "F DETAILS OF EXPERIMENTAL SETTINGS", "text": "F.1 IMPLEMENTATION DETAILS

For our baseline methods, we use the open-source implementations of SAC, BC, BCQ, and BRAC published in Wu et al. (2019). SAC and BRAC have a (300, 300) Q-network and a (200, 200) policy network. BC has a (200, 200) policy network, and BCQ has a (300, 300) Q-network, a (300, 300) policy network, and a (750, 750) conditional VAE. As for online ME-TRPO, we utilize the codebase of the model-based RL benchmark (Wang et al., 2019). BREMEN and online ME-TRPO use a policy consisting of two hidden layers with 200 units. The dynamics model also consists of two hidden layers with 1,024 units.
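As a rough sketch of this architecture (our illustration, not the authors' code; the ReLU activations and the exact ensemble construction are assumptions, while the layer sizes and the ensemble size K = 5 follow the values reported in this appendix):

```python
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """One ensemble member: predicts the next-state mean from (s, a).

    Two hidden layers with 1,024 units, as reported above; the model is
    treated as a Gaussian over next states with unit standard deviation."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, s: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s, a], dim=-1))  # predicted next-state mean

# K = 5 dynamics models form the ensemble used for imaginary rollouts.
K, state_dim, action_dim = 5, 17, 6  # HalfCheetah-like dimensions, for illustration
ensemble = nn.ModuleList(DynamicsModel(state_dim, action_dim) for _ in range(K))
```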
We use Adam (Kingma & Ba, 2014) as the optimizer, with a learning rate of 0.001 for the dynamics model and 0.0005 for behavior cloning in BREMEN. In particular, in BREMEN and online ME-TRPO, we adopt a linear feature value function to stabilize training. BREMEN in deployment-efficient settings takes about two or three hours per deployment on an NVIDIA TITAN V.

To leverage neural networks as Dyna-style (Sutton, 1991) dynamics models, we modify the reward and termination functions so that they do not depend on the internal physics engine for their calculation, following the model-based benchmark codebase (Wang et al., 2019); see Table 3. Note that the scores of the baselines (e.g., BCQ, BRAC) are slightly different from Wu et al. (2019) due to this modification of the reward function. We re-ran each algorithm in our environments and obtained appropriate convergence.

The maximum length of one episode is 1,000 steps without any termination in Ant and HalfCheetah; however, the termination function is enabled in Hopper and Walker2d. The batch size of transitions for the policy update is 50,000 in BREMEN and ME-TRPO, following Kurutach et al. (2018). The batch size of BC and BRAC is 256, and that of BCQ is 100, also following Wu et al. (2019)." }, { "heading": "F.2 HYPER PARAMETERS", "text": "In this section, we describe the hyper-parameters in both deployment-efficient RL (Section F.2.1) and offline RL (Section F.2.2) settings. We run all of our experiments with five random seeds, and the results are averaged." }, { "heading": "F.2.1 DEPLOYMENT-EFFICIENT RL", "text": "Table 4 shows the hyper-parameters of BREMEN. The rollout length is searched over {250, 500, 1000}, and the maximum step size $\delta$ is searched over {0.001, 0.01, 0.05, 0.1, 1.0}. As for the discount factor $\gamma$ and GAE $\lambda$, we follow Wang et al. (2019)." }, { "heading": "Table 4 header: Parameter | Ant | HalfCheetah | Hopper | Walker2d", "text": "Stationary Noise in BREMEN To achieve effective exploration, a stochastic Gaussian policy is a good choice. We found that adding stationary Gaussian noise to the policy in the imaginary trajectories and during data collection led to notable improvement. The stationary Gaussian policy is written as

$$a_t = \tanh(\mu_\theta(s_t)) + \epsilon, \quad \epsilon \sim \mathcal{N}(0, \sigma^2).$$

Another choice is a learned Gaussian policy, which parameterizes not only $\mu_\theta$ but also $\sigma_\theta$. The learned Gaussian policy is written as

$$a_t = \tanh(\mu_\theta(s_t)) + \sigma_\theta(s_t)\,\epsilon, \quad \epsilon \sim \mathcal{N}(0, \sigma^2).$$

We utilize the zero-mean Gaussian $\mathcal{N}(0, \sigma^2)$ and tune $\sigma$ in Figure 10 on HalfCheetah, comparing the stationary and learned strategies. From this experiment, we found that stationary noise with a scale of 0.1 consistently performs well, and we therefore used it for all our experiments.

Other Hyper-parameters in the Existing Methods As for online ME-TRPO, we collect 3,000 steps through online interaction with the environment per 25 iterations and split these transitions into a 2-to-1 ratio of training and validation data for learning the dynamics models. In the batch size 100,000 settings, we collect 2,000 steps and split with a 1-to-1 ratio. In total, we run 12,500 policy optimization iterations, which is equivalent to 500 deployments of the policy. Note that we carefully tuned the hyper-parameters of online ME-TRPO, improving on the performance reported in Wang et al. (2019).

Tables 5 and 6 show the tunable hyper-parameters of BCQ and BRAC, respectively. We refer to Wu et al. (2019) for the choice of these values.
In this work, BRAC applies a primal form of the KL value penalty, and BRAC (max Q) means sampling multiple actions and taking the maximum according to the learned Q function." }, { "heading": "Table 5 header: Parameter | Ant | HalfCheetah | Hopper | Walker2d", "text": "" }, { "heading": "Table 6 header: Parameter | Ant | HalfCheetah | Hopper | Walker2d", "text": "" }, { "heading": "F.2.2 OFFLINE RL", "text": "In the offline experiments, we apply the same hyper-parameters as in the deployment-efficient settings described above, except for the number of iterations per batch. Algorithm 2 is pseudocode for BREMEN in offline RL settings, where policies are updated with only one fixed batch dataset. The number of iterations $T$ is set to 6,250 in BREMEN, and 500,000 in BC, BCQ, and BRAC.

The datasets for the 50k and 100k experiments are sliced from the beginning of the 1M batched datasets without shuffling, but we observed that the distribution of rewards in the 50k and 100k datasets is not different from that of the 1M dataset.

Algorithm 2 BREMEN for Offline RL
Input: Offline dataset $D = \{s_t, a_t, r_t, s_{t+1}\}$, initial parameters $\phi = \{\phi_1, \cdots, \phi_K\}$, $\beta$, number of policy optimization iterations $T$.
1: Train $K$ dynamics models $\hat{f}_\phi$ using $D$ via Equation 1.
2: Train the estimated behavior policy $\hat{\pi}_\beta$ using $D$ by behavior cloning via Equation 3.
3: Initialize the target policy $\pi_{\theta_0} = \mathrm{Normal}(\hat{\pi}_\beta, 1)$.
4: for policy optimization $k = 1, \cdots, T$ do
5:   Generate imaginary rollouts.
6:   Optimize the target policy $\pi_\theta$ satisfying Equation 4 with the rollouts." }, { "heading": "G ADDITIONAL EXPERIMENTAL RESULTS", "text": "" }, { "heading": "G.1 PERFORMANCE ON THE DATASET WITH DIFFERENT NOISE", "text": "Following Wu et al. (2019) and Kidambi et al. (2020), we additionally compare BREMEN in offline settings to the other baselines (BC, BCQ, BRAC) on five datasets with different exploration noise. Each dataset also has one million transitions.

• eps1: 40% of the dataset is collected by the data-collection policy (a partially trained SAC policy) $\pi_b$, 40% is collected by an epsilon-greedy policy with $\epsilon = 0.1$ of taking a random action, and 20% is collected by a uniformly random policy.
• eps3: Same as eps1, except that 40% of the dataset is collected by $\pi_b$, 40% by an epsilon-greedy policy with $\epsilon = 0.3$, and 20% by a uniformly random policy.
• gaussian1: 40% of the dataset is collected by the data-collection policy $\pi_b$, 40% by the policy with zero-mean Gaussian noise $\mathcal{N}(0, 0.1^2)$ added to each action sampled from $\pi_b$, and 20% by a uniformly random policy.
• gaussian3: 40% of the dataset is collected by the data-collection policy $\pi_b$, 40% by the policy with zero-mean Gaussian noise $\mathcal{N}(0, 0.3^2)$, and 20% by a uniformly random policy.
• random: All of the dataset is collected by a uniformly random policy.

Table 7 shows that BREMEN also achieves performance competitive with state-of-the-art model-free offline RL algorithms even with noisy datasets. The training curves of each experiment are shown in Appendix G.4." }, { "heading": "G.2 COMPARISON AMONG DIFFERENT NUMBER OF ENSEMBLES", "text": "To deal with the distribution shift during policy optimization, also known as model bias, we introduce dynamics model ensembles. We validate the performance of BREMEN with different numbers of dynamics models $K$. Figures 11 and 12 show the performance of BREMEN with different numbers of ensemble members in deployment-efficient and offline settings.
Ensembles with more dynamics models resulted in better performance due to the mitigation of distributional shift, except for $K = 10$; we therefore choose $K = 5$.

G.3 IMPLICIT KL CONTROL IN OFFLINE SETTINGS

Similar to Section 5.3, we present offline RL experiments to better understand the effect of implicit KL regularization. In contrast to the implicit KL regularization with Equation 4, the optimization of BREMEN with an explicit KL value penalty becomes

$$\theta_{k+1} = \operatorname*{argmax}_\theta \; \mathbb{E}_{s,a \sim \pi_{\theta_k}, \hat{f}_{\phi_i}}\left[ \frac{\pi_\theta(a|s)}{\pi_{\theta_k}(a|s)} \left( A^{\pi_{\theta_k}}(s,a) - \alpha D_{KL}(\pi_\theta(\cdot|s) \,\|\, \hat{\pi}_\beta(\cdot|s)) \right) \right] \quad (6)$$
$$\text{s.t.} \quad \mathbb{E}_{s \sim \pi_{\theta_k}}\left[ D_{KL}(\pi_\theta(\cdot|s) \,\|\, \pi_{\theta_k}(\cdot|s)) \right] \le \delta,$$

where $A^{\pi_{\theta_k}}(s,a)$ is the advantage of $\pi_{\theta_k}$ computed using imaginary rollouts with the learned dynamics model, and $\delta$ is the maximum step size. Note that BREMEN with the explicit KL penalty does not utilize behavior cloning initialization.

We empirically conclude that the explicit constraint $-\alpha D_{KL}(\pi_\theta(\cdot|s) \,\|\, \hat{\pi}_\beta(\cdot|s))$ is unnecessary, and that the TRPO update with behavior cloning initialization as implicit regularization is sufficient in the BREMEN algorithm. Figure 13 shows the KL divergence between learned policies and the last deployed policies (top row) and model errors measured by the mean squared error of the predicted next state from the true state (second row). We find that the behavior-initialized policy with conservative KL trust-region updates stays close to the last deployed policy during improvement, even without an explicit KL penalty. The policy initialized with behavior cloning also tended to suppress the increase of model error, which implies that behavior initialization alleviates the effect of the distribution shift. In Walker2d, the model error of BREMEN is relatively large, which may relate to the poor performance with noisy datasets in Section G.1." }, { "heading": "G.4 TRAINING CURVES FOR OFFLINE RL WITH DIFFERENT NOISES", "text": "In this section, we present training curves for all of our experiments in offline settings. Figure 14 shows the results in Section 5.1. Figures 15, 16, 17, 18, and 19 show the results in Section G.1." }, { "heading": "G.5 DEPLOYMENT-EFFICIENT RL EXPERIMENT WITH DIFFERENT REWARD FUNCTION", "text": "In addition to the main results in Section 5.2 (Figure 2), we also evaluate BREMEN in the deployment-efficient setting with a different reward function. We modified the HalfCheetah environment to be similar to the cheetah-run task in the DeepMind Control Suite.5 The reward function is defined as

$$r_t = \begin{cases} 0.1\,\dot{x}_t & (0 \le \dot{x}_t \le 10) \\ 1 & (\dot{x}_t > 10), \end{cases}$$

and termination is turned off. Figure 20 shows the performance of BREMEN and the existing methods. BREMEN again shows better deployment efficiency than the other existing offline methods and online ME-TRPO (with SAC as the exception), the same trend as in the main results." }, { "heading": "", "text": "5 https://github.com/deepmind/dm_control/blob/master/dm_control/suite/cheetah.py" } ]
2021
DEPLOYMENT-EFFICIENT REINFORCEMENT LEARNING VIA MODEL-BASED OFFLINE OPTIMIZATION
SP:1d642e5532adea5cd782f529fed197448e60c458
[ "This paper proposes Deep Autoencoding Predictive Components (DAPC), a self-supervised representation learning approach for sequential data. In this approach, the model learns to maximize the predictive information, which is the mutual information between past and future time windows. In order to avoid degenerate solutions, the proposed approach relies on a second loss that optimizes masked reconstructions." ]
We propose Deep Autoencoding Predictive Components (DAPC) – a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space. We encourage this latent structure by maximizing an estimate of predictive information of latent feature sequences, which is the mutual information between the past and future windows at each time step. In contrast to the mutual information lower bound commonly used by contrastive learning, the estimate of predictive information we adopt is exact under a Gaussian assumption. Additionally, it can be computed without negative sampling. To reduce the degeneracy of the latent space extracted by powerful encoders and keep useful information from the inputs, we regularize predictive information learning with a challenging masked reconstruction loss. We demonstrate that our method recovers the latent space of noisy dynamical systems, extracts predictive features for forecasting tasks, and improves automatic speech recognition when used to pretrain the encoder on large amounts of unlabeled data.
[ { "affiliations": [], "name": "Junwen Bai" }, { "affiliations": [], "name": "Weiran Wang" } ]
[ { "authors": [ "Galen Andrew", "Raman Arora", "Jeff Bilmes", "Karen Livescu" ], "title": "Deep canonical correlation analysis", "venue": "In ICML,", "year": 2013 }, { "authors": [ "Alexei Baevski", "Michael Auli", "Abdelrahman Mohamed" ], "title": "Effectiveness of self-supervised pretraining for speech recognition", "venue": "arXiv preprint arXiv:1911.03912,", "year": 2019 }, { "authors": [ "Alexei Baevski", "Henry Zhou", "Abdelrahman Mohamed", "Michael Auli" ], "title": "wav2vec 2.0: A framework for self-supervised learning of speech representations", "venue": "arXiv preprint arXiv:2006.11477,", "year": 2020 }, { "authors": [ "Anthony J Bell", "Terrence J Sejnowski" ], "title": "An information-maximization approach to blind separation and blind deconvolution", "venue": "Neural computation,", "year": 1995 }, { "authors": [ "David Beniaguev" ], "title": "Historical hourly weather data 2012-2017, 2017", "venue": "URL www.kaggle.com/ selfishgene/historical-hourly-weather-data", "year": 2017 }, { "authors": [ "William Bialek", "Ilya Nemenman", "Naftali Tishby" ], "title": "Predictability, complexity, and learning", "venue": "Neural computation,", "year": 2001 }, { "authors": [ "Tom B Brown", "Benjamin Mann", "Nick Ryder", "Melanie Subbiah", "Jared Kaplan", "Prafulla Dhariwal", "Arvind Neelakantan", "Pranav Shyam", "Girish Sastry", "Amanda Askell" ], "title": "Language models are few-shot learners", "venue": null, "year": 2005 }, { "authors": [ "W. Chan", "N. Jaitly", "Q.V. Le", "O. Vinyals" ], "title": "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition", "venue": null, "year": 2016 }, { "authors": [ "Ting Chen", "Simon Kornblith", "Mohammad Norouzi", "Geoffrey Hinton" ], "title": "A simple framework for contrastive learning of visual representations", "venue": "arXiv preprint arXiv:2002.05709,", "year": 2020 }, { "authors": [ "J. Chorowski", "D. Bahdanau", "D. Serdyuk", "K. Cho", "Y. Bengio" ], "title": "Attention-based models for speech recognition", "venue": "In NIPS,", "year": 2015 }, { "authors": [ "J. Chorowski", "R.J. Weiss", "S. Bengio", "A.V.D. Oord" ], "title": "Unsupervised speech representation learning using wavenet autoencoders", "venue": "IEEE Trans. Audio, Speech and Language Process.,", "year": 2019 }, { "authors": [ "Junyoung Chung", "Caglar Gulcehre", "Kyunghyun Cho", "Yoshua Bengio" ], "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "venue": "In NIPS 2014 Workshop on Deep Learning,", "year": 2014 }, { "authors": [ "Junyoung Chung", "Kyle Kastner", "Laurent Dinh", "Kratarth Goel", "Aaron C Courville", "Yoshua Bengio" ], "title": "A recurrent latent variable model for sequential data", "venue": "In Advances in neural information processing systems,", "year": 2015 }, { "authors": [ "Y-A. Chung", "W-N. Hsu", "H. Tang", "J. 
Glass" ], "title": "An unsupervised autoregressive model for speech representation learning", "venue": "In Interspeech,", "year": 2019 }, { "authors": [ "Yu-An Chung", "James Glass" ], "title": "Generative pre-training for speech with autoregressive predictive coding", "venue": "In ICASSP,", "year": 2020 }, { "authors": [ "Yu-An Chung", "Wei-Ning Hsu", "Hao Tang", "James Glass" ], "title": "An unsupervised autoregressive model for speech representation learning", "venue": null, "year": 2019 }, { "authors": [ "David Clark", "Jesse Livezey", "Kristofer Bouchard" ], "title": "Unsupervised discovery of temporal structure in noisy data with dynamical components analysis", "venue": null, "year": 2019 }, { "authors": [ "Djork-Arné Clevert", "Thomas Unterthiner", "Sepp Hochreiter" ], "title": "Fast and accurate deep network learning by exponential linear units (elus)", "venue": "arXiv preprint arXiv:1511.07289,", "year": 2015 }, { "authors": [ "Peter Dayan", "Laurence F Abbott" ], "title": "Theoretical neuroscience: computational and mathematical modeling of neural systems", "venue": "Computational Neuroscience Series,", "year": 2001 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "In NAACL-HLT,", "year": 2019 }, { "authors": [ "Joshua I Glaser", "Ari S Benjamin", "Raeed H Chowdhury", "Matthew G Perich", "Lee E Miller", "Konrad P Kording" ], "title": "Machine learning for neural decoding", "venue": "eNeuro,", "year": 2020 }, { "authors": [ "A. Graves", "S. Fernández", "F. Gomez", "J. Schmidhuber" ], "title": "Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks", "venue": "In ICML,", "year": 2006 }, { "authors": [ "Jean-Bastien Grill", "Florian Strub", "Florent Altché", "Corentin Tallec", "Pierre H Richemond", "Elena Buchatskaya", "Carl Doersch", "Bernardo Avila Pires", "Zhaohan Daniel Guo", "Mohammad Gheshlaghi Azar" ], "title": "Bootstrap your own latent: A new approach to self-supervised learning", "venue": "arXiv preprint arXiv:2006.07733,", "year": 2020 }, { "authors": [ "Michael Gutmann", "Aapo Hyvärinen" ], "title": "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2010 }, { "authors": [ "Kaiming He", "Haoqi Fan", "Yuxin Wu", "Saining Xie", "Ross Girshick" ], "title": "Momentum contrast for unsupervised visual representation learning", "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Geoffrey Hinton", "Li Deng", "Dong Yu", "George Dahl", "Abdel rahman Mohamed", "Navdeep Jaitly", "Andrew Senior", "Vincent Vanhoucke", "Patrick Nguyen", "Tara N. 
Sainath", "Brian Kingsbury" ], "title": "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups", "venue": "IEEE Signal Processing Magazine,", "year": 2012 }, { "authors": [ "R Devon Hjelm", "Alex Fedorov", "Samuel Lavoie-Marchildon", "Karan Grewal", "Phil Bachman", "Adam Trischler", "Yoshua Bengio" ], "title": "Learning deep representations by mutual information estimation and maximization", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Takaaki Hori", "Jaejin Cho", "Shinji Watanabe" ], "title": "End-to-end speech recognition with word-based RNN language models", "venue": "In SLT,", "year": 2018 }, { "authors": [ "Wei-Ning Hsu", "Yu Zhang", "James Glass" ], "title": "Unsupervised learning of disentangled and interpretable representations from sequential data", "venue": "In Advances in neural information processing systems,", "year": 2017 }, { "authors": [ "Dongwei Jiang", "Xiaoning Lei", "Wubo Li", "Ne Luo", "Yuxuan Hu", "Wei Zou", "Xiangang Li" ], "title": "Improving transformer-based speech recognition using unsupervised pre-training", "venue": null, "year": 1910 }, { "authors": [ "Shigeki Karita", "Nanxin Chen", "Tomoki Hayashi", "Takaaki Hori", "Hirofumi Inaguma", "Ziyan Jiang", "Masao Someki", "Nelson Enrique Yalta Soplin", "Ryuichi Yamamoto", "Xiaofei Wang", "Shinji Watanabe", "Takenori Yoshimura", "Wangyou Zhang" ], "title": "A comparative study on transformer vs rnn in speech applications", "venue": "ASRU,", "year": 2019 }, { "authors": [ "Kazuya Kawakami", "Luyu Wang", "Chris Dyer", "Phil Blunsom", "Aaron van den Oord" ], "title": "Learning robust and multilingual speech representations", "venue": "arXiv preprint arXiv:2001.11128,", "year": 2020 }, { "authors": [ "Diederik P Kingma", "Max Welling" ], "title": "Auto-encoding variational bayes", "venue": "In ICLR,", "year": 2014 }, { "authors": [ "Alexander Kraskov", "Harald Stögbauer", "Peter Grassberger" ], "title": "Estimating mutual information", "venue": "Physical review E,", "year": 2004 }, { "authors": [ "Ming Li", "Paul Vitányi" ], "title": "An introduction to Kolmogorov complexity and its applications", "venue": null, "year": 2008 }, { "authors": [ "Yingzhen Li", "Stephan Mandt" ], "title": "Disentangled sequential autoencoder", "venue": "arXiv preprint arXiv:1803.02991,", "year": 2018 }, { "authors": [ "Shaoshi Ling", "Yuzong Liu", "Julian Salazar", "Katrin Kirchhoff" ], "title": "Deep contextualized acoustic representations for semi-supervised speech recognition", "venue": "In ICASSP,", "year": 2020 }, { "authors": [ "Andy T Liu", "Shang-Wen Li", "Hung-yi Lee" ], "title": "Tera: Self-supervised learning of transformer encoder representation for speech", "venue": "arXiv preprint arXiv:2007.06028,", "year": 2020 }, { "authors": [ "Lu Liu", "Yiheng Huang" ], "title": "Masked pre-trained encoder base on joint ctc-transformer", "venue": "arXiv preprint arXiv:2005.11978,", "year": 2020 }, { "authors": [ "David McAllester", "Karl Stratos" ], "title": "Formal limitations on the measurement of mutual information", "venue": "In AISTATS,", "year": 2020 }, { "authors": [ "Ishan Misra", "Laurens van der Maaten" ], "title": "Self-supervised learning of pretext-invariant representations", "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition,", "year": 2020 }, { "authors": [ "Joseph E. O’Doherty", "Mariana M.B. Cardoso", "Joseph G. Makin", "Philip N. 
Sabes" ], "title": "Nonhuman primate reaching with multichannel sensorimotor cortex electrophysiology: Broadband for indy", "venue": null, "year": 2018 }, { "authors": [ "Aaron van den Oord", "Yazhe Li", "Oriol Vinyals" ], "title": "Representation learning with contrastive predictive coding", "venue": "arXiv preprint arXiv:1807.03748,", "year": 2018 }, { "authors": [ "Sherjil Ozair", "Corey Lynch", "Yoshua Bengio", "Aaron Van den Oord", "Sergey Levine", "Pierre Sermanet" ], "title": "Wasserstein dependency measure for representation learning", "venue": null, "year": 2019 }, { "authors": [ "Vassil Panayotov", "Guoguo Chen", "Daniel Povey", "Sanjeev Khudanpur" ], "title": "Librispeech: an asr corpus based on public domain audio books", "venue": "In ICASSP,", "year": 2015 }, { "authors": [ "Daniel S. Park", "William Chan", "Yu Zhang", "Chung-Cheng Chiu", "Barret Zoph", "Ekin D. Cubuk", "Quoc V. Le" ], "title": "Specaugment: A simple data augmentation method for automatic speech recognition", "venue": null, "year": 2019 }, { "authors": [ "S. Pascual", "M. Ravanelli", "J. Serrà", "A. Bonafonte", "Y. Bengio" ], "title": "Learning problem-agnostic speech representations from multiple self-supervised tasks", "venue": "In Interspeech,", "year": 2019 }, { "authors": [ "Santiago Pascual", "Mirco Ravanelli", "Joan Serrà", "Antonio Bonafonte", "Yoshua Bengio" ], "title": "Learning problem-agnostic speech representations from multiple self-supervised tasks", "venue": "arXiv preprint arXiv:1904.03416,", "year": 2019 }, { "authors": [ "Douglas B. Paul", "Janet M. Baker" ], "title": "The design for the wall street journal-based CSR corpus", "venue": "In Proceedings of the workshop on Speech and Natural Language,", "year": 1992 }, { "authors": [ "AN Pchelintsev" ], "title": "Numerical and physical modeling of the dynamics of the lorenz system", "venue": "Numerical analysis and Applications,", "year": 2014 }, { "authors": [ "Matthew E Peters", "Mark Neumann", "Mohit Iyyer", "Matt Gardner", "Christopher Clark", "Kenton Lee", "Luke Zettlemoyer" ], "title": "Deep contextualized word representations", "venue": "In Proceedings of NAACL-HLT,", "year": 2018 }, { "authors": [ "Mirco Ravanelli", "Jianyuan Zhong", "Santiago Pascual", "Pawel Swietojanski", "Joao Monteiro", "Jan Trmal", "Yoshua Bengio" ], "title": "Multi-task self-supervised learning for robust speech recognition", "venue": "In ICASSP,", "year": 2020 }, { "authors": [ "Steffen Schneider", "Alexei Baevski", "Ronan Collobert", "Michael Auli" ], "title": "wav2vec: Unsupervised pre-training for speech recognition", "venue": null, "year": 2019 }, { "authors": [ "Rui Shu", "Hung H. Bui", "Shengjia Zhao", "Mykel J. 
Kochenderfer", "Stefano Ermon" ], "title": "Amortized inference regularization", "venue": "In NIPS,", "year": 2018 }, { "authors": [ "Xingchen Song", "Guangsen Wang", "Zhiyong Wu", "Yiheng Huang", "Dan Su", "Dong Yu", "Helen Meng" ], "title": "Speech-xlnet: Unsupervised acoustic model pretraining for self-attention networks", "venue": null, "year": 2019 }, { "authors": [ "Steven H Strogatz" ], "title": "Nonlinear dynamics and chaos with student solutions manual: With applications to physics, biology, chemistry, and engineering", "venue": "CRC press,", "year": 2018 }, { "authors": [ "Yonglong Tian", "Dilip Krishnan", "Phillip Isola" ], "title": "Contrastive multiview coding", "venue": "arXiv preprint arXiv:1906.05849,", "year": 2019 }, { "authors": [ "Jakub Tomczak", "Max Welling" ], "title": "Vae with a vampprior", "venue": "In International Conference on Artificial Intelligence and Statistics,", "year": 2018 }, { "authors": [ "Michael Tschannen", "Josip Djolonga", "Paul K Rubenstein", "Sylvain Gelly", "Mario Lucic" ], "title": "On mutual information maximization for representation learning", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Ashish Vaswani", "Noam Shazeer", "Niki Parmar", "Jakob Uszkoreit", "Llion Jones", "Aidan N Gomez", "Łukasz Kaiser", "Illia Polosukhin" ], "title": "Attention is all you need", "venue": null, "year": 2017 }, { "authors": [ "P. Vincent", "H. Larochelle", "I. Lajoie", "Y. Bengio", "P.A. Manzagol" ], "title": "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion", "venue": "Journal of Machine Learning Research,", "year": 2010 }, { "authors": [ "Weiran Wang", "Raman Arora", "Karen Livescu", "Jeff Bilmes" ], "title": "On deep multi-view representation learning", "venue": "In ICML,", "year": 2015 }, { "authors": [ "Weiran Wang", "Qingming Tang", "Karen Livescu" ], "title": "Unsupervised pre-training of bidirectional speech encoders via masked reconstruction", "venue": "In ICASSP,", "year": 2020 }, { "authors": [ "S. Watanabe", "T. Hori", "S. Kim", "J. Hershey", "T. Hayashi" ], "title": "Hybrid CTC/attention architecture for end-to-end speech recognition", "venue": "IEEE Journal of Selected Topics in Signal Processing,", "year": 2017 }, { "authors": [ "Shinji Watanabe", "Takaaki Hori", "Shigeki Karita", "Tomoki Hayashi", "Jiro Nishitoba", "Yuya Unno", "Nelson Enrique Yalta Soplin", "Jahn Heymann", "Matthew Wiesner", "Nanxin Chen", "Adithya Renduchintala", "Tsubasa Ochiai" ], "title": "ESPnet: End-to-end speech processing toolkit", "venue": null, "year": 2018 }, { "authors": [ "Laurenz Wiskott", "Terrence Sejnowski" ], "title": "Slow feature analysis: Unsupervised learning of invariances", "venue": "Neural computation,", "year": 2002 }, { "authors": [ "Chan" ], "title": "ADDITIONAL RESULTS FOR AUTOMATIC SPEECH RECOGNITION (ASR) The acoustic model is trained with a multi-task objective (Watanabe et al., 2017) which combines attention Chorowski et al", "venue": null, "year": 2006 }, { "authors": [ "Hori" ], "title": "RNNLM trained on the language model training data of each corpus, with a vocabulary size of 65K for WSJ, and 200K for LibriSpeech. We use the lookahead scores derived from word RNNLM during beam search, for selecting promising character tokens", "venue": "In Table 9,", "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Self-supervised representation learning methods aim at learning useful and general representations from large amounts of unlabeled data, which can reduce sample complexity for downstream supervised learning. These methods have been widely applied to various domains such as computer vision (Oord et al., 2018; Hjelm et al., 2018; Chen et al., 2020; Grill et al., 2020), natural language processing (Peters et al., 2018; Devlin et al., 2019; Brown et al., 2020), and speech processing (Schneider et al., 2019; Pascual et al., 2019b; Chung & Glass, 2020; Wang et al., 2020; Baevski et al., 2020). In the case of sequence data, representation learning may force the model to recover the underlying dynamics from the raw data, so that the learnt representations remove irrelevant variability in the inputs, embed rich context information and become predictive of future states. The effectiveness of the representations depends on the self-supervised task which injects inductive bias into learning. The design of self-supervision has become an active research area.\nOne notable approach for self-supervised learning is based on maximizing mutual information between the learnt representations and inputs. The most commonly used estimate of mutual information is based on contrastive learning. A prominant example of this approach is CPC (Oord et al., 2018), where the representation of each time step is trained to distinguish between positive samples which are inputs from the near future, and negative samples which are inputs from distant future or other sequences. The performance of contrastive learning heavily relies on the nontrivial selection ∗Work done during an internship at Salesforce Research. †Work done while Weiran Wang was with Salesforce Research. 1Code is available at https://github.com/JunwenBai/DAPC.\nof positive and negative samples, which lacks a universal principle across different scenarios (He et al., 2020; Chen et al., 2020; Misra & Maaten, 2020). Recent works suspected that the mutual information lower bound estimate used by contrastive learning might be loose and may not be the sole reason for its success (Ozair et al., 2019; Tschannen et al., 2019).\nIn this paper, we leverage an estimate of information specific to sequence data, known as predictive information (PI, Bialek et al., 2001), which measures the mutual information between the past and future windows in the latent space. The estimate is exact if the past and future windows have a joint Gaussian distribution, and is shown by prior work to be a good proxy for the true predictive information in practice (Clark et al., 2019). We can thus compute the estimate with sample windows of the latent sequence (without sampling negative examples), and obtain a well-defined objective for learning the encoder for latent representations. However, simply using the mutual information as the learning objective may lead to degenerate representations, as PI emphasizes simple structures in the latent space and a powerful encoder could achieve this at the cost of ignoring information between latent representations and input features. To this end, we adopt a masked reconstruction task to enforce the latent representations to be informative of the observations as well. Similar to Wang et al. 
(2020), we mask input dimensions as well as time segments of the inputs, and use a decoder to reconstruct the masked portion from the learnt representations; we also propose variants of this approach to achieve superior performance.

Our method, Deep Autoencoding Predictive Components (DAPC), is designed to capture the above intuitions. From a variational inference perspective, DAPC also has a natural probabilistic interpretation. We demonstrate DAPC on both synthetic and real datasets of different sizes from various domains. Experimental results show that DAPC can recover meaningful low dimensional dynamics from high dimensional noisy and nonlinear systems, extract predictive features for forecasting tasks, and obtain state-of-the-art accuracies for Automatic Speech Recognition (ASR) with a much lower cost, by pretraining encoders that are later finetuned with a limited amount of labeled data.

2 METHOD

The main intuition behind Deep Autoencoding Predictive Components is to maximize the predictive information of the latent representation sequence. To ensure that the learning process is tractable and non-degenerate, we make a Gaussian assumption and regularize the learning with masked reconstruction. In the following subsections, we elaborate on how we estimate the predictive information and how we design the masked reconstruction task. A probabilistic interpretation of DAPC is also provided to show the connection to deep generative models." }, { "heading": "2.1 PREDICTIVE INFORMATION", "text": "Given a sequence of observations $X = \{x_1, x_2, \ldots\}$ where $x_i \in \mathbb{R}^n$, we extract the corresponding latent sequence $Z = \{z_1, z_2, \ldots\}$ where $z_i \in \mathbb{R}^d$ with an encoder function $e(X)$, e.g., recurrent neural nets or transformers (Vaswani et al., 2017).2 Let $T > 0$ be a fixed window size, and denote $Z_t^{past} = \{z_{t-T+1}, \ldots, z_t\}$, $Z_t^{future} = \{z_{t+1}, \ldots, z_{t+T}\}$ for any time step $t$. The predictive information (PI) is defined as the mutual information (MI) between $Z_t^{past}$ and $Z_t^{future}$:

$$MI(Z_t^{past}, Z_t^{future}) = H(Z_t^{past}) + H(Z_t^{future}) - H(Z_t^{past}, Z_t^{future}) \quad (1)$$

where $H$ is the entropy function. Intuitively, PI measures how much knowing $Z_t^{past}$ reduces the uncertainty about $Z_t^{future}$ (and vice versa). PI reaches its minimum value 0 if $Z_t^{past}$ and $Z_t^{future}$ are independent, and it is maximized if $Z_t^{future}$ is a deterministic function of $Z_t^{past}$. Different from the MI estimate used by contrastive learning, which measures the MI between the representation at each single time step and its future inputs, predictive information measures the MI between two windows of $T$ time steps collectively. The window size $T$ used in PI estimation reflects the time resolution for which the time series is more or less stationary.

[2: In this work, the latent sequence has the same length as the input sequence, but this is not a restriction; one can have a different time resolution for the latent sequence, using sub-sampling strategies such as that of Chan et al. (2016).]

PI was designed as a general measure of the complexity of underlying dynamics that persist for a relatively long period of time (Li & Vitányi, 2008). Furthermore, PI is aware of temporal structures: different dynamics could lead PI to converge or diverge even if they look similar. These virtues of PI contribute to the versatility of this measure. The use of PI beyond a static complexity measure (and as a learning objective) is done only recently in machine learning by Clark et al.
(2019), which proposes to learn a linear dimensionality reduction method named Dynamical Component Analysis (DCA) to maximize the PI of the projected latent sequence.

One approach for estimating PI is through estimating the joint density $P(Z_t^{past}, Z_t^{future})$, which can be done by density estimation methods such as k-NN and binning (Dayan & Abbott, 2001; Kraskov et al., 2004). However, such estimates heavily rely on hyperparameters, and it is more challenging to come up with differentiable objectives based on them that are compatible with deep learning frameworks. Our approach for estimating PI is the same as that of DCA. Assume that every $2T$ consecutive time steps $\{z_{t-T+1}, \ldots, z_t, \ldots, z_{t+T}\}$ in the latent space form a stationary, multivariate Gaussian distribution. $\Sigma_{2T}(Z)$ is used to denote the covariance of this distribution, and similarly $\Sigma_T(Z)$ the covariance of $T$ consecutive latent steps. Under the stationarity assumption, $H(Z_t^{past})$ remains the same for any $t$, so we can omit the subscript $t$, and $H(Z^{past})$ is equal to $H(Z^{future})$ as well. Using the fact that $H(Z^{past}) = \frac{1}{2}\ln\left((2\pi e)^{dT}\,|\Sigma_T(Z)|\right)$, the PI for the time series $Z$ reduces to

$$I_T = MI(Z^{past}, Z^{future}) = \ln|\Sigma_T(Z)| - \frac{1}{2}\ln|\Sigma_{2T}(Z)|. \quad (2)$$

Detailed derivations can be found in Appendix A. It is then straightforward to collect samples of the consecutive $2T$-length windows and compute the sample covariance matrix for estimating $\Sigma_{2T}(Z)$. An empirical estimate of $\Sigma_T(Z)$ corresponds to the upper-left sub-matrix of $\Sigma_{2T}(Z)$. Recall that, under the Gaussian assumption, the conditional distribution $P(Z^{future}|Z^{past})$ is again Gaussian, whose mean is a linear transformation of $Z^{past}$. Maximizing $I_T$ has the effect of minimizing the entropy of this conditional Gaussian, and thus reducing the uncertainty of the future given the past.

Though our estimation formula for PI is exact only under the Gaussian assumption, it was observed by Clark et al.
This penalty is similar to the constraint enforced by deep canonical correlation analysis (Andrew et al., 2013), which was found to be useful in representation learning (Wang et al., 2015)." }, { "heading": "2.2 MASKED RECONSTRUCTION AND ITS SHIFTED VARIATION", "text": "The PI objective alone can potentially lead to a degenerate latent space, when the mapping from input sequence to latent sequence is very powerful, as the latent representations can be organized in a way that increases our PI estimate at the cost of losing useful structure from the input. This is also observed empirically in our experiments (see Sec 4.1). To regularize PI-based learning, one simple idea is to force the learnt latent representations to be informative of the corresponding input observations. For this purpose, we augment PI-based learning with a masked reconstruction task.\nMasked reconstruction was first proposed in BERT (Devlin et al., 2019), where the input text is fed to a model with a portion of tokens masked, and the task is to reconstruct the masked portion. Wang et al. (2020) extended the idea to continuous vector sequence data (spectrograms). The authors found that randomly masking input dimensions throughout the sequence yields further performance gain, compared to masking only consecutive time steps. We adopt their formulation in DAPC to handle continuous time series data.\nGiven an input sequence X of length L and dimensionality n, we randomly generate a binary mask M ∈ Rn×L, where Mi,j = 0 indicates Xi,j is masked with value 0 and Mi,j = 1 indicates Xi,j is kept the same. We feed the masked inputs to the encoder e(·) to extract representations (in Rd) for each time step, and use a feed-forward network g(·) to reconstruct the masked input observations. e(·) and g(·) are trained jointly. The masked reconstruction objective can be defined as\nR = ||(1−M) (X − g(e(X M)))||2fro. (3) Figure 1 gives an illustration for the masked spectrogram data. We randomly generate nT time masks each with width up to wT , and similarly nF frequency masks each with width up to wF . In our experiments, we observe that input dimension masking makes the reconstruction task more challenging and yields higher representation quality. Therefore, this strategy is useful for general time series data beyond audio.\nWe introduce one more improvement to masked reconstruction. Standard masked reconstruction recovers the masked inputs for the same time step. Inspired by the success of Autoregressive Predictive Coding (Chung & Glass, 2020), we propose a shifted variation of masked reconstruction, in which the latent state zi is decoded to reconstruct a future frame xi+s (than xi). Formally, the shifted masked reconstruction loss Rs is defined as\nRs = ||(1−M→s) (X→s − g(e(X M)))||2fro (4) where → s indicates right-shifting s time frames while the input dimensions remain unchanged. When s = 0, Rs reduces to the standard masked reconstruction objective, and in the ASR experiments we find that a nonzero s value helps. We ensure no information leakage by enforcing that the portion to be reconstructed is never presented in the inputs. As indicated by Chung et al. 
(2019b), predicting a future frame encourages more global structure and avoids the simple inference from local smoothness in domains like speech, and therefore helps the representation learning.\nTo sum up, our overall loss function is defined as the combination of the losses described above:\nmin e,g Ls,T (X) =− (IT + αIT/2) + βRs + γRortho (5)\nwhere α, β, γ are tradeoff weights and Rortho = ||Σ1 − Id||2fro is the orthonormality penalty discussed in Sec. 2.1, with Σ1 ∈ Rd×d corresponding to the top left sub-matrix of Σ2T estimated from the latent sequence Z = e(X M). The whole framework of DAPC is illustrated in Figure 1." }, { "heading": "2.3 A PROBABILISTIC INTERPRETATION OF DAPC", "text": "We now discuss a probabilistic interpretation of DAPC in the Variational AutoEncoder (VAE) framework (Kingma & Welling, 2014). Let X = (Xp, Xf ) and Z = (Zp, Zf ), where the subscripts p and f denote past and future respectively. Consider a generative model, where the prior distribution is p(Zp, Zf ) ∼ N (0,Σ). One can write down explicitly p(Zf |Zp), which is a Gaussian with\nµf |p = Σf,pΣ −1 p,pZp, Σf |p = Σff − Σf,pΣ−1p,pΣp,f .\nThe linear dynamics in latent space are completely defined by the covariance matrix Σ. Large predictive information implies low conditional entropy H(Zf |Zp).\nLet (Zp, Zf ) generate (Xp, Xf ) with a stochastic decoder g(X|Z). We only observe X and would like to infer the latent Z by maximizing the marginal likelihood of X . Taking a VAE approach, we parameterize a stochastic encoder e(Z|X) for the approximate posterior, and derive a lower bound for the maximum likelihood objective. Different from standard VAE, here we would not want to parameterize the prior to be a simple Gaussian, in which case the Zp and Zf are independent and have zero mutual information. Instead we encourage the additional structure of high predictive information for the prior. This gives us an overall objective as follows:\nmin Σ,e,g\n∫ p̂(X) {∫ − e(Z|X) log g(X|Z)dz +KL(e(Z|X)||p(Z)) } dx− ηIT (Σ)\nwhere p̂(X) is the empirical distribution over training data, the first term corresponds to the reconstruction loss, the second term measures the KL divergence between approximate posterior and the prior, and the last term is the PI defined in (2).\nThe challenge is how to parameterize the covariance Σ. We find that simply parameterizing it as a positive definite matrix, e.g., Σ = AAT , does not work well in our experiments, presumably because there is too much flexibility with such a formulation. What we find to work better is the pseudo-input technique discussed in VampPrior (Tomczak & Welling, 2018): given a set of pseudo-sequences X∗ which are learnable parameters (initialized with real training sequences), we compute the sample covariance from e(Z|X∗) as Σ. This approach yields an overall objective very similar to (5), with the benefit of a well-defined generative model (and the Gaussian assumption being perfectly satisfied), which allows us to borrow learning/inference techniques developed in the VAE framework. For example, masking the input for the encoder can be seen as amortized inference regularization (Shu et al., 2018). We show experimental results on this probabilistic DAPC in Appendix B and C. In general, probabilistic DAPC performs similarly to the deterministic counterpart, though the training process is more time and memory intensive. 
On the other hand, these empirical results show that deviating from the Gaussian assumption, as is the case for deterministic DAPC, does not cause significant issues for representation learning in practice if proper regularization is applied.\nRelated to this interpretations are VAE-base sequential models (Chung et al., 2015; Hsu et al., 2017; Li & Mandt, 2018) that also use reconstruction and enforce different structures/dynamics in the latent space. Most of them are designed for the purpose of generating high quality sequence data, while the qualities of their latent representations are mostly not shown for downstream tasks." }, { "heading": "3 RELATED WORK", "text": "Mutual information (MI) maximization is a principal approach for representation learning (Bell & Sejnowski, 1995), where the objective is to maximize the MI estimate between learnt representations and inputs. The currently dominant approach for estimating MI is based on contrastive learning. For sequence data, CPC (Oord et al., 2018) uses representations at current time as a classifier to discriminate inputs of nearby frames (positive samples) from inputs of far-away steps or inputs from other sequences (negative samples) with a cross-entropy loss; this leads to the noise-contrastive estimation (NCE, Gutmann & Hyvärinen, 2010). Deep InfoMax (DIM, (Hjelm et al., 2018)) generalizes the NCE estimator with a few variants, and proposes to maximize MI between global summary features and local features from intermediate layers (rather than the inputs as in CPC). SimCLR (Chen et al., 2020) extends the contrastive loss to use a nonlinear transformation of the representation (than the representation itself) as a classifier for measuring MI. Contrastive Multiview Coding (Tian et al., 2019) generalizes the contrastive learning frame to multiple views. Momentum Contrast (He et al., 2020) saves memory with a dynamic dictionary and momentum encoder.\nMeanwhile, there have been concerns about the contrastive learning framework. One concern is that postive and negative sample selection is sometimes time and memory consuming. To address this issue, BYOL (Grill et al., 2020) proposes to get rid of negative samples by learning a target network in an online fashion and gradually bootstrapping the latent space. Another concern is regarding the MI estimation. Though contrastive learning has an MI backbone, Tschannen et al. (2019) suggests that the inductive bias of the feature extractor and parametrization of estimators might contribute more than the MI estimate itself. Ozair et al. (2019); McAllester & Stratos (2020) raise the concern\nthat the MI lower bound used by contrastive learning might be too loose, and propose to use an estimate based on Wasserstein distance.\nUnlike prior work, our principle for sequence representation learning is to maximize the MI between past and future latent representations, rather than the MI between representations and inputs (or shallow features of inputs). Partially motivated by the above concerns, our mutual information estimate requires no sampling and is exact for Gaussian random variables. 
To keep useful information from input, we use a masked reconstruction loss which has been effective for sequence data (text and speech), with an intuition resembling that of denoising autoencoders (Vincent et al., 2010).\nNote that by the data processing inequality, methods that maximize mutual information between current representation and future inputs also implicitly maximizes an upper bound of mutual information between high level representations, since MI(Zpast, Zfuture) ≤ MI(Zpast, Xfuture). Our method explicitly maximizes the mutual information between high level representations itself, while having another regularization term (masked reconstruction) that maximizes information between current input and current representations. Our results indicate that explicitly modeling the trade-off between the two can be advantageous.\nIn the audio domain where we will demonstrate the applicability of our method, there has been significant interest in representation learning for reducing the need for supervised data. Both contrastive learning based (Schneider et al., 2019; Baevski et al., 2019; Jiang et al., 2019) and reconstructionbased (Chorowski et al., 2019; Chung et al., 2019a; Song et al., 2019; Wang et al., 2020; Chung & Glass, 2020; Ling et al., 2020; Liu et al., 2020) methods have been studied, as well as methods that incorporate multiple tasks (Pascual et al., 2019a; Ravanelli et al., 2020). Our work promotes the use of a different MI estimate and combines different intuitions synergistically." }, { "heading": "4 EXPERIMENTS", "text": "" }, { "heading": "4.1 NOISY LORENZ ATTRACTOR", "text": "Lorenz attractor (“butterfly effect”, see Appendix B) is a 3D time series depicting a chaotic system (Pchelintsev, 2014), as visualized in Figure 2. We design a challenging dimension reduction task for recovering the Lorenz attractor from high dimensional noisy measurements. We first lift the 3D clean signals to 30D with a neural network of 2 hidden layers, each with 128 elu units (Clevert et al., 2015). This lifting network has weights and biases drawn randomly from N (0, 0.2). In addition, we corrupt the 30D lifted signals with white noise to obtain three different signal-to-noise ratio (SNR) levels, 0.3, 1.0, and 5.0, and use the noisy 30D measurements (see bottom left of Figure 2) as input for representation learning methods to recover the true 3D dynamics.\nWe generate a long Lorenz attractor trajectory using the governing differential equations, and chunk the trajectory into segments of 500 time steps. We use 250, 25 and 25 segments for training, validation, and test splits respectively. After the model selection on the validation split based on the R2 regression score which measures the similarity between recovered and ground truth trajectories,\nour model is applied to the test split. The optimal model uses T = 4, s = 0, α = 0, γ = 0.1, and β = 0.1 is set to balance the importance of PI and the reconstruction error.\nWe compare DAPC to representative unsupervised methods including DCA (Clark et al., 2019), CPC (Oord et al., 2018), pure PI learning which corresponds to DAPC with β = 0, and masked reconstruction (MR, Wang et al., 2020) which corresponds to DAPC without the PI term. Except for DCA which corresponds to maximizing PI with a linear feedforward network, the other methods use bidirectional GRUs (Chung et al., 2014) for mapping the inputs into feature space (although uniGRU performs similarly well). 
We show the latent representations of different methods in Figure 2 (right panel). DCA fails completely since its encoder network has limited capacity to invert the nonlinear lifting process. CPC is able to recover the 2 lobes, but the recovered trajectory is chaotic. Maximizing PI alone largely ignores the global structure of the data. MR is able to produce smoother dynamics for high SNRs, but its performance degrades quickly in the noisier scenarios. DAPC recovers a latent representation which has an overall similar shape to the ground truth 3D Lorenz attractor, and exhibits the smooth dynamics enforced by the PI term. In Appendix B, we provide the R2 scores for the different methods. These results quantitatively demonstrate the advantage of DAPC across different noise levels." }, { "heading": "4.2 FORECASTING WITH LINEAR REGRESSION", "text": "We then demonstrate the predictive power of the learnt representations in downstream forecasting tasks on 3 real-world datasets used by Clark et al. (2019), involving multi-city temperature time series data (Temp, Beniaguev (2017)), a dorsal hippocampus study (HC, Glaser et al. (2020)), and motor cortex recordings (M1, O’Doherty et al. (2018)). For each model, unsupervised representation learning is performed on the training set with a uni-directional GRU, which prevents information leakage from the future. After that, we freeze the model and use it as a feature extractor. The representations at each time step are used as inputs for predicting the target at a future time step. As an example, we can extract a representation for today's weather based on past weather only (as the encoder is uni-directional), and use it to predict a future temperature which is lag days away (a larger lag generally leads to a more difficult forecasting task). Following Clark et al. (2019), the predictor from the extracted feature space to the target is a linear mapping, trained on samples of paired current feature and future target, using a least squares loss. We use the same feature dimensionality as in their work for each dataset. These forecasting tasks are evaluated by the R2 regression score, which measures the linear predictability. More details can be found in Appendix C.\nBesides DCA, CPC and MR, we further include PCA and SFA (Wiskott & Sejnowski, 2002) (similar to DCA with T = 1 for PI estimation), which are commonly used linear dimension reduction methods in these fields. PCA serves as the baseline and we report R2 score improvements from the other methods. Figure 3 gives the performance of the different methods on the three datasets, with three different lags (the number of time steps between current and future for the forecasting task): 5, 10, and 15. DAPC consistently outperforms the other methods. In Table 1, we show how PI helps DAPC improve over either full reconstruction (e.g., a classical auto-encoder) or masked reconstruction, and how the reconstruction losses help DAPC improve over PI alone on this task. These results demonstrate that the two types of losses, or the two types of mutual information (MI between input and latent, and MI between past and future), can be complementary to each other.
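The downstream protocol of this subsection fits in a few lines; a sketch using scikit-learn, where the split ratio and names are our illustrative choices (the actual recipe follows Clark et al. (2019)):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def forecast_r2(Z, Y, lag, train_frac=0.8):
    # Z: (L, d) frozen features from the uni-directional encoder; Y: (L, o) targets.
    # Fit a least-squares map from z_t to y_{t+lag}, report R^2 on held-out pairs.
    inputs, targets = Z[:-lag], Y[lag:]
    split = int(train_frac * len(inputs))
    reg = LinearRegression().fit(inputs[:split], targets[:split])
    return r2_score(targets[split:], reg.predict(inputs[split:]))
```

Because the encoder is frozen and the predictor is linear, differences in this score isolate the quality of the learnt representations rather than the capacity of the predictor.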
" }, { "heading": "4.3 PRETRAINING FOR AUTOMATIC SPEECH RECOGNITION (ASR)", "text": "A prominent usage of representation learning in speech processing is to pretrain the acoustic model with an unsupervised objective, so that the resulting network parameters serve as a good initialization for the supervised training phase using labeled data (Hinton et al., 2012). As supervised ASR techniques have improved significantly over the years, recent works start to focus on pretraining with large amounts of unlabeled audio data, followed by finetuning on much smaller amounts of supervised data, so as to reduce the cost of human annotation.\nWe demonstrate different methods on two commonly used speech corpora for this setup: Wall Street Journal (Paul & Baker, 1992) and LibriSpeech (Panayotov et al., 2015). For WSJ, we pretrain on the si284 partition (81 hours), and finetune on the si84 partition (15 hours) or the si284 partition itself. For LibriSpeech, we pretrain on the train_960 partition (960 hours) and finetune on the train_clean_100 partition (100 hours). Standard dev and test splits for each corpus are used for validation and testing.\nIn the experiments, we largely adopt the transformer-based recipe from ESPnet (Watanabe et al., 2018), as detailed in Karita et al. (2019), for supervised finetuning. We provide the details regarding model architecture and data augmentation in Appendix D. Note that we have spent effort in building strong ASR systems, so that our baseline (without pretraining) already achieves low WERs and improving over it is non-trivial. This can be seen from the result table, where our baseline is often stronger than the best performance from other works. In the pretraining stage, we pretrain an encoder of 14 transformer layers, which is used to initialize the first 14 layers of the ASR model. For masked reconstruction, we use 2 frequency masks as in finetuning, but found that more time masks can improve pretraining performance. We set the number of time masks to 4 for WSJ, and 8 for LibriSpeech, which has longer utterances on average.\nThe hyperparameters we tune include T, s, α, β and γ from our learning objective. We select the hyperparameters which give the best dev set WER, and report the corresponding test set WER. In the end, we use T = 4 for estimating the PI term, γ = 0.05, β = 0.005, and set s = 2 for WSJ and s = 1 for LibriSpeech if we use shifted reconstruction. Since the pretraining objective is a proxy for extracting the structure of the data and is not fully aligned with supervised learning, we also tune the number of pretraining epochs, which is set to 5 for WSJ and 1 for LibriSpeech. Other parameters are shown in the ablation studies presented in Appendix D.\nWe perform an ablation study for the effect of different variants of DAPC (MR+PI) on the WSJ dataset, and give dev/test WERs in Table 2. We tune the hyperparameters for multi-scale PI and shifted reconstruction for the 15-hour finetuning setup, and observe that each technique can lead to further improvement over the basic DAPC, while combining them delivers the best performance. The same hyperparameters are used for the 81-hour finetuning setup, and we find that with more supervised training data, the baseline without pretraining obtains much lower WERs, and pure masked reconstruction only slightly improves over the baseline, while the strongest DAPC variant still achieves 7.9% and 11.7% relative improvements on dev93 and eval92, respectively.
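As a brief aside, the time/frequency masking used in the reconstruction objective amounts to sampling a few rectangular blocks over the feature axes of each utterance; a sketch (the mask counts follow the text, while the widths shown are illustrative):

```python
import numpy as np

def sample_mask(T, F, n_time=4, n_freq=2, t_width=40, f_width=30, rng=None):
    # Boolean (T, F) mask over an utterance's features; True marks the entries
    # zeroed at the encoder input and reconstructed by the decoder.
    rng = rng or np.random.default_rng()
    mask = np.zeros((T, F), dtype=bool)
    for _ in range(n_time):
        w = int(rng.integers(1, t_width + 1))
        t0 = int(rng.integers(0, max(T - w, 1)))
        mask[t0:t0 + w, :] = True
    for _ in range(n_freq):
        w = int(rng.integers(1, f_width + 1))
        f0 = int(rng.integers(0, max(F - w, 1)))
        mask[:, f0:f0 + w] = True
    return mask
```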
In Table 3, we provide a more thorough comparison with other representation learning methods on the LibriSpeech dataset.\n\nTable 3: WERs (%) obtained by ASR models pretrained with different representation learning methods on the test_clean partition of LibriSpeech. Models are pretrained on 960h unlabeled data and finetuned on 100h labeled data. Our results are averaged over 3 random seeds.\nMethods | WER (%)\nwav2vec (Schneider et al., 2019) | 6.92\ndiscrete BERT+vq-wav2vec (Baevski et al., 2019) | 4.5\nwav2vec 2.0 (Baevski et al., 2020) | 2.3\nDeCoAR (Ling et al., 2020) | 6.10\nTERA-large (Liu et al., 2020) | 5.80\nMPE (Liu & Huang, 2020) | 9.68\nBidir CPC (Kawakami et al., 2020) | 8.70\nw/o pretrain | 5.11±0.20\nMR | 5.02±0.09\nDAPC | 4.86±0.08\nDAPC+multi-scale PI+shifted recon | 4.70±0.02\n\nWe compare with CPC-type mutual information learning methods including wav2vec (Schneider et al., 2019), vq-wav2vec, which performs CPC-type learning with discrete tokens followed by BERT-style learning (Baevski et al., 2019), and wav2vec 2.0, which incorporates masking into contrastive learning (Baevski et al., 2020). We also compare with two reconstruction-type learning approaches, DeCoAR (Ling et al., 2020) and TERA (Liu et al., 2020); comparisons with more methods are given in Appendix D. Observe that DAPC and its variant achieve lower WER than MR: though our baseline is strong, DAPC still reduces WER by 8%, while MR only improves by 1.76%. This shows the benefit of PI-based learning in addition to masked reconstruction. Our method does not yet outperform vq-wav2vec and wav2vec 2.0; we suspect it is partly because our models have much smaller sizes (around 30M weight parameters) than theirs (vq-wav2vec has 150M weight parameters, and wav2vec has 300M weight parameters for the acoustic model, along with a very large neural language model), and it is future work to scale up our method. In Appendix D, we provide additional experimental results where the acoustic model targets are subwords. DAPC achieves a 15% relative improvement over the baseline (without pretraining), showing that our method is generally effective for different types of ASR systems." }, { "heading": "5 CONCLUSIONS", "text": "In this work, we have proposed a novel representation learning method, DAPC, for sequence data. Our learnt latent features capture the essential dynamics of the underlying data, contain rich information of both input observations and context states, and are shown to be useful in a variety of tasks. As future work, we may investigate other predictive information estimators that further alleviate the Gaussian assumption. On the other hand, more advanced variational inference techniques may be applied to the probabilistic version of DAPC to boost the performance. DAPC provides a general alternative for mutual information-based learning of sequence data and we may investigate its potential usage in other domains such as NLP, biology, physics, etc."
}, { "heading": "A DERIVATION OF PREDICTIVE INFORMATION", "text": "In this section, we give a self-contained derivation of the predictive information.\nA multivariate Gaussian random variable X ∈ R^N has the following PDF:\n$p(x) = \frac{1}{(\sqrt{2\pi})^N \sqrt{|\Sigma|}} \exp\left(-\frac{1}{2}(x-\mu)^\top \Sigma^{-1} (x-\mu)\right),$\nwhere µ is the mean and Σ is the covariance matrix with $\Sigma = \mathbb{E}[(X - \mathbb{E}[X])(X - \mathbb{E}[X])^\top]$. From the definition of entropy,\n$H(X) = -\int p(x) \ln p(x)\, dx,$\nwe can derive the entropy formula for a multivariate Gaussian:\n$H(X) = -\int p(x) \ln\left(\frac{1}{(\sqrt{2\pi})^N \sqrt{|\Sigma|}}\right) dx - \int p(x)\left(-\frac{1}{2}(x-\mu)^\top \Sigma^{-1}(x-\mu)\right) dx = \frac{1}{2}\ln\left((2\pi e)^N |\Sigma|\right).$ (6)\nConsider the joint Gaussian distribution\n$\begin{pmatrix} X \\ Y \end{pmatrix} \sim \mathcal{N}(\mu, \Sigma) = \mathcal{N}\left(\begin{pmatrix} \mu_X \\ \mu_Y \end{pmatrix}, \begin{pmatrix} \Sigma_{XX} & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_{YY} \end{pmatrix}\right),$\nwhere X ∈ R^p and Y ∈ R^q. We can plug in the entropy in (6) and obtain\n$\mathrm{MI}(X, Y) = H(X) + H(Y) - H(X, Y) = \frac{1}{2}\ln\left((2\pi e)^p |\Sigma_{XX}|\right) + \frac{1}{2}\ln\left((2\pi e)^q |\Sigma_{YY}|\right) - \frac{1}{2}\ln\left((2\pi e)^{p+q} |\Sigma|\right) = -\frac{1}{2}\ln\frac{|\Sigma|}{|\Sigma_{XX}||\Sigma_{YY}|}.$\nFor a latent sequence Z = {z_1, z_2, ...} where z_i ∈ R^d, we define $Z^{\mathrm{past}}_t = \{z_{t-T+1}, ..., z_t\}$ and $Z^{\mathrm{future}}_t = \{z_{t+1}, ..., z_{t+T}\}$. Based on our stationarity assumption, all the length-2T windows of states within the range are drawn from the same Gaussian distribution with covariance Σ_{2T}(Z), and similarly for all the length-T windows. As a result, under the stationarity assumption, $H(Z^{\mathrm{past}}_t) = H(Z^{\mathrm{future}}_t) = \frac{1}{2}\ln((2\pi e)^{Td}|\Sigma_T(Z)|)$, $H(Z^{\mathrm{past}}_t, Z^{\mathrm{future}}_t) = \frac{1}{2}\ln((2\pi e)^{2Td}|\Sigma_{2T}(Z)|)$, and\n$I_T = \mathrm{MI}(Z^{\mathrm{past}}_t, Z^{\mathrm{future}}_t) = H(Z^{\mathrm{past}}_t) + H(Z^{\mathrm{future}}_t) - H(Z^{\mathrm{past}}_t, Z^{\mathrm{future}}_t) = \ln|\Sigma_T(Z)| - \frac{1}{2}\ln|\Sigma_{2T}(Z)|.$\nThe predictive information I_T depends only on T, not on the specific time index t." }, { "heading": "B LORENZ ATTRACTOR", "text": "The Lorenz attractor system (also called the “butterfly effect”) is generated by the following differential equations (Strogatz, 2018; Clark et al., 2019):\n$\dot{x} = \sigma(y - x), \quad \dot{y} = x(\rho - z) - y, \quad \dot{z} = xy - \beta z,$ (7)\nwhere (x, y, z) are the 3D coordinates, and we use σ = 10, β = 8/3, ρ = 28. The integration step for solving the system of equations is 5 × 10^{-3}. We lift the 3D trajectory into 30D using a nonlinear neural network with 2 hidden layers, each with 128 neurons and the Exponential Linear Unit (Clevert et al., 2015). We further perturb the 30D trajectory with additive Gaussian noise to obtain datasets of three different Signal-to-Noise Ratios (SNRs, the ratio between the power of the signal and the power of the noise): 0.3, 1.0, and 5.0. The smaller the SNR is, the more challenging it is to recover the clean 3D trajectory (see the left panel of Figure 2 for the comparison between the noisy 30D trajectory and the clean 30D trajectory).\n\nTable 4: The R2 scores of the recovered 3D trajectory of the noisy Lorenz attractor by different methods.\nSNR | DCA | CPC | PI | MR | DAPC-det | DAPC-prob\n0.3 | 0.084 | 0.676 | 0.585 | 0.574 | 0.865 | 0.816\n1.0 | 0.153 | 0.738 | 0.597 | 0.885 | 0.937 | 0.943\n5.0 | 0.252 | 0.815 | 0.692 | 0.929 | 0.949 | 0.949\n\nFigure 4: Recovery of the 3D trajectory of the noisy Lorenz attractor by different methods (rows: SNR = 0.3, 1.0, 5.0; columns: DCA, CPC, MR, DAPC (det), DAPC (prob)).\n\nWe compare both deterministic and probabilistic DAPC against representative unsupervised methods including DCA (Clark et al., 2019), CPC (Oord et al., 2018), pure PI learning which corresponds to DAPC with β = 0, and masked reconstruction (MR) (Wang et al., 2020).
Except for DCA, which corresponds to maximizing PI with a linear orthogonal feedforward net, the other methods use a bidirectional GRU (Chung et al., 2014) for mapping the inputs into feature space (although a uni-GRU performs similarly well). A feedforward DNN is used for reconstruction in MR and DAPC.\nMore specifically, CPC, MR and DAPC all use the bidirectional GRU, where the learning rate is 0.001 and the dropout rate is 0.7. Our GRU has 4 encoding layers with hidden size 256. The batch size is (20, 500, 30). For CPC, the temporal lag is k=4. For DAPC, β = 0.1, T = 4, s = 0, α = 0, γ = 0.1. For masked reconstruction, we use at most 2 masks on the frequency axis with width up to 5, and at most 2 masks on the time axis with width up to 40. The DNN decoder has 3 hidden layers, each with size 512. DCA's setup is completely adopted from Clark et al. (2019). The same architectures are used in the forecasting tasks in Appendix C.\nFigure 4 provides qualitative results for the recovered 3D trajectories by different methods (Figure 2 in the main text contains a subset of the results shown here). Observe that DCA fails in this scenario since its feature extraction network has limited capacity to invert the nonlinear lifting process. CPC is able to recover the 2 lobes, but the recovered signals are chaotic. Masked reconstruction is able to produce smoother dynamics for high SNRs, but its performance degrades quickly in the noisier scenarios. Both deterministic and probabilistic DAPC recover a latent representation which has an overall similar shape to the ground truth 3D Lorenz attractor, and exhibits the smooth dynamics enforced by the PI term.\nWe quantitatively measure the recovery performance with the R2 score, which is defined as the coefficient of determination. The R2 score normally ranges from 0 to 1, where 1 means a perfect fit; negative scores indicate that the model fits the data worse than a horizontal hyperplane. The R2 results are given in Table 4. Our results quantitatively demonstrate the clear advantage of DAPC across different noise levels.\n\nTable 5: The R2 scores for the ablation study of (deterministic) DAPC for the Lorenz attractor.\nSNR | Full Recon | uni-GRU | Regular\n0.3 | 0.803 | 0.857 | 0.865\n1.0 | 0.812 | 0.905 | 0.937\n5.0 | 0.852 | 0.903 | 0.949\n\nTable 6: The R2 scores for full reconstruction only and full reconstruction with PI.\n\nWe also give an ablation study on several components of DAPC in Table 5, where we attempt full reconstruction without masking, masked reconstruction with the unidirectional encoder uni-GRU, and the regular setup (masked reconstruction + bi-GRU). Using full reconstruction yields worse results than using masked reconstruction at all noise levels, while the uni-GRU degrades the performance less.\nWe show in Table 6 and Figure 5 how PI can improve both full reconstruction and masked reconstruction. In Table 6, when SNR=0.3, PI can greatly boost the performance of full reconstruction. We also tuned the temporal lag parameter k w.r.t. both quantitative and qualitative results (Table 7 and Figure 6). CPC performance starts to deteriorate after k=8, while k=4, 6, 8 have similar results. Based on the R2 scores, we select k=4 as our final temporal lag. Similar tuning is also performed for CPC on the downstream forecasting experiments in Appendix C." }, { "heading": "C DOWNSTREAM FORECASTING WITH LINEAR REGRESSION", "text": "For each of the three datasets used in the downstream forecasting tasks, we divide the original dataset into training/validation/testing splits.
Unsupervised representation learning is performed on the training split, validated on the validation split, and learns an encoder e(·). Denote the test sequence X = {x_1, x_2, ..., x_L}. The learnt e(·) transforms X into a sequence Z = {z_1, z_2, ..., z_L} of the same length. Note that we use a uni-directional GRU for representation learning so that no future information is leaked. For the forecasting tasks, z_i will be used to predict a target y_i which corresponds to an event of interest. For the multi-city temperature dataset (Beniaguev, 2017), y_i represents future multi-city temperatures, i.e., y_i = x_{i+lag} with lag > 0. For the hippocampus study (Glaser et al., 2020), x_i is the multi-neuronal spiking activity of 55 single units recorded in rat hippocampal CA1 and y_i is a future location of the rat. In the motor cortex dataset (O’Doherty et al., 2018), the x_i's are collected from multi-neuronal spiking activity of 109 single units recorded in monkey primary motor cortex (M1), and the y_i's are future behavior variables such as cursor kinematics. The problems tend to be more challenging with larger lag.\nThe performance for the forecasting tasks is measured by the linear predictability from z_i to y_i. Specifically, we solve the linear regression problem with inputs being the (z_i, y_i) pairs, and measure the R2 score between the prediction and the ground-truth target. We use the R2 score from the PCA projection as a baseline, and provide the improvements over PCA obtained by different representation learning methods in Figure 7 (so PCA's ∆R2 is 0). Both deterministic and probabilistic DAPC consistently outperform the other methods across all three datasets, with the deterministic version slightly outperforming the probabilistic one. Additionally, we provide a sensitivity study of the latent dimensionality for all methods in Figure 8, and DAPC outperforms the others consistently across different dimensionalities. Table 8 shows the temporal lag tuning for CPC on the temperature dataset." }, { "heading": "D ADDITIONAL RESULTS FOR AUTOMATIC SPEECH RECOGNITION (ASR)", "text": "The acoustic model is trained with a multi-task objective (Watanabe et al., 2017) which combines attention (Chorowski et al., 2015; Chan et al., 2016) and CTC (Graves et al., 2006) losses for predicting the output character sequence. We extract 80D fbank features plus 3D pitch features from the audio, with a frame size of 25ms and a hop size of 10ms. Every 3 consecutive frames are stacked to obtain the input sequence for the acoustic model. During ASR finetuning, the encoder shared by both attention and CTC consists of 14 transformer layers for WSJ and 16 layers for LibriSpeech, while the decoder consists of 6 transformer layers. All attention operations use 4 heads of 64 dimensions each, and the output of multi-head attention goes through a one-hidden-layer position-wise feedforward network of 2048 ReLU units before it is fed into the next layer. During finetuning, we apply SpecAugment (Park et al., 2019) to reduce overfitting, with max time warp set to 5 (frames), two frequency masks of width up to 30 frequency bins, and two time masks of width up to 40 frames. We use the Adam optimizer with a warmup schedule for the learning rate. The weight parameters of the last 10 finetuning epochs are averaged to obtain the final model. For word-level decoding, we use a word RNNLM trained on the language model training data of each corpus, with a vocabulary size of 65K for WSJ, and 200K for LibriSpeech.
We use the lookahead scores derived from the word RNNLM during beam search for selecting promising character tokens at each step, as done by Hori et al. (2018). A beam size of 20 is used for decoding.\nIn Table 9, we provide a thorough comparison with other representation learning methods on the LibriSpeech dataset. We compare with CPC-type mutual information learning methods including wav2vec (Schneider et al., 2019), vq-wav2vec, which performs CPC-type learning with discrete tokens followed by BERT-style learning (Baevski et al., 2019), and a more recent extension, wav2vec 2.0 (Baevski et al., 2020). We also compare with two reconstruction-type learning approaches, DeCoAR (Ling et al., 2020) and TERA (Liu et al., 2020). Note that TERA is quite similar to MR (Wang et al., 2020) in performing masked reconstruction, although it adds recurrent layers to the transformer-learnt representations for finetuning, while our implementation of MR uses a pure transformer-based architecture throughout. We believe the advantage of MR over TERA mainly comes from the acoustic model (attention vs. CTC) and a stronger language model (RNNLM vs. n-gram). Observe that DAPC and its variant achieve lower WER than MR: though our baseline is strong, DAPC still reduces WER by 8%, while MR only improves by 1.76%. This shows the benefit of PI-based learning in addition to masked reconstruction. Compared to Table 3, Table 9 includes more recent works that are related to our DAPC. Furthermore, to show that our improvement is robust to the details of the ASR recipe, we include the comparison between the baseline (without pretraining), MR, and DAPC when the ASR recipe uses 5000 unigrams as the token set and decodes with a token-level RNNLM. These results are denoted with “(sub-word)” in Table 9. The relative merits between methods are consistent with those observed for the character recipe. DAPC obtains a relative improvement of 15% over the baseline on test_clean (6.81% → 5.79%). Furthermore, we compare DAPC with other state-of-the-art methods on WSJ in Table 10. Comparisons among different methods based on their key features are shown in Table 11." } ]
2021
null
SP:d29300f18c72041296b43246711ffdfa1dc6681d
[ "The authors proposed a novel method for regression problems with outliers. The main idea is to first formulate the regression problem as a mixed-integer optimization problem, then make the procedure of finding the solution of the problem differentiable, and also rephrase the objective function as a differentiable function. Based on this, an end-to-end learning approach can be established." ]
We consider a regression problem, where the correspondence between input and output data is not available. Such shuffled data is commonly observed in many real-world problems. Taking flow cytometry as an example, the measuring instruments are unable to preserve the correspondence between the samples and the measurements. Due to the combinatorial nature, most existing methods are only applicable when the sample size is small, and are limited to linear regression models. To overcome such bottlenecks, we propose a new computational framework – ROBOT – for the shuffled regression problem, which is applicable to large data and complex models. Specifically, we propose to formulate the regression without correspondence as a continuous optimization problem. Then by exploiting the interaction between the regression model and the data correspondence, we propose to develop a hypergradient approach based on differentiable programming techniques. Such a hypergradient approach essentially views the data correspondence as an operator of the regression, and therefore allows us to find a better descent direction for the model parameter by differentiating through the data correspondence. ROBOT is quite general, and can be further extended to the inexact correspondence setting, where the input and output data are not necessarily exactly aligned. Thorough numerical experiments show that ROBOT achieves better performance than existing methods in both linear and nonlinear regression tasks, including real-world applications such as flow cytometry and multi-object tracking.
[ { "affiliations": [], "name": "Yujia Xie" }, { "affiliations": [], "name": "Yixiu Mao" }, { "affiliations": [], "name": "Simiao Zuo" }, { "affiliations": [], "name": "Hongteng Xu" }, { "affiliations": [], "name": "Xiaojing Ye" }, { "affiliations": [], "name": "Tuo Zhao" }, { "affiliations": [], "name": "Hongyuan Zha" } ]
[ { "authors": [ "Abubakar Abid", "James Zou" ], "title": "A stochastic expectation-maximization approach to shuffled linear regression", "venue": "56th Annual Allerton Conference on Communication, Control, and Computing (Allerton),", "year": 2018 }, { "authors": [ "Abubakar Abid", "Ada Poon", "James Zou" ], "title": "Linear regression with shuffled labels", "venue": "arXiv preprint arXiv:1705.01342,", "year": 2017 }, { "authors": [ "Jean-David Benamou", "Guillaume Carlier", "Marco Cuturi", "Luca Nenna", "Gabriel Peyré" ], "title": "Iterative bregman projections for regularized transportation problems", "venue": "SIAM Journal on Scientific Computing,", "year": 2015 }, { "authors": [ "Alex Bewley", "Zongyuan Ge", "Lionel Ott", "Fabio Ramos", "Ben Upcroft" ], "title": "Simple online and realtime tracking", "venue": "In 2016 IEEE International Conference on Image Processing (ICIP),", "year": 2016 }, { "authors": [ "Garrett Birkhoff" ], "title": "Three observations on linear algebra", "venue": "Univ. Nac. Tacuman, Rev. Ser. A,", "year": 1946 }, { "authors": [ "Léon Bottou", "Frank E Curtis", "Jorge Nocedal" ], "title": "Optimization methods for large-scale machine learning", "venue": "Siam Review,", "year": 2018 }, { "authors": [ "Lenaic Chizat", "Gabriel Peyré", "Bernhard Schmitzer", "François-Xavier Vialard" ], "title": "An interpolating distance between optimal transport and fisher–rao metrics", "venue": "Foundations of Computational Mathematics,", "year": 2018 }, { "authors": [ "Lenaic Chizat", "Gabriel Peyré", "Bernhard Schmitzer", "François-Xavier Vialard" ], "title": "Scaling algorithms for unbalanced optimal transport problems", "venue": "Mathematics of Computation,", "year": 2018 }, { "authors": [ "Lénaı̈c Chizat", "Gabriel Peyré", "Bernhard Schmitzer", "François-Xavier Vialard" ], "title": "Unbalanced optimal transport: Dynamic and kantorovich formulations", "venue": "Journal of Functional Analysis,", "year": 2018 }, { "authors": [ "Marco Cuturi" ], "title": "Sinkhorn distances: Lightspeed computation of optimal transport", "venue": "In Advances in neural information processing systems,", "year": 2013 }, { "authors": [ "George Bernard Dantzig" ], "title": "Linear programming and extensions, volume 48", "venue": "Princeton university press,", "year": 1998 }, { "authors": [ "Philip David", "Daniel Dementhon", "Ramani Duraiswami", "Hanan Samet" ], "title": "Softposit: Simultaneous pose and correspondence determination", "venue": "International Journal of Computer Vision,", "year": 2004 }, { "authors": [ "P. Dendorfer", "H. Rezatofighi", "A. Milan", "J. Shi", "D. Cremers", "I. Reid", "S. Roth", "K. Schindler", "L. 
Leal-Taixé" ], "title": "Mot20: A benchmark for multi object tracking in crowded scenes", "venue": "URL http://arxiv.org/abs/1906.04567", "year": 2003 }, { "authors": [ "John Duchi", "Shai Shalev-Shwartz", "Yoram Singer", "Tushar Chandra" ], "title": "Efficient projections onto the l 1-ball for learning in high dimensions", "venue": "In Proceedings of the 25th international conference on Machine learning,", "year": 2008 }, { "authors": [ "Golnooshsadat Elhami", "Adam James Benjamin", "Bejar Haro", "Martin Vetterli" ], "title": "Unlabeled sensing: Reconstruction algorithm and theoretical guarantees", "venue": "42nd IEEE International Conference on Acoustics, Speech and Signal Processing,", "year": 2017 }, { "authors": [ "Pedro F Felzenszwalb", "Ross B Girshick", "David McAllester", "Deva Ramanan" ], "title": "Object detection with discriminatively trained part-based models", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2009 }, { "authors": [ "Tanner Fiez", "Benjamin Chasnov", "Lillian J Ratliff" ], "title": "Convergence of learning dynamics in stackelberg games", "venue": "arXiv preprint arXiv:1906.01217,", "year": 2019 }, { "authors": [ "Saeid Haghighatshoar", "Giuseppe Caire" ], "title": "Signal recovery from unlabeled samples", "venue": "IEEE Transactions on Signal Processing,", "year": 2017 }, { "authors": [ "Zhen He", "Jian Li", "Daxue Liu", "Hangen He", "David Barber" ], "title": "Tracking by animation: Unsupervised learning of multi-object attentive trackers", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2019 }, { "authors": [ "Daniel J Hsu", "Kevin Shi", "Xiaorui Sun" ], "title": "Linear regression without correspondence", "venue": "In Advances in Neural Information Processing Systems,", "year": 2017 }, { "authors": [ "Xiaoqiu Huang", "Anup Madan" ], "title": "Cap3: A dna sequence assembly program", "venue": "Genome research,", "year": 1999 }, { "authors": [ "Leonid Vitaliyevich Kantorovich" ], "title": "Mathematical methods of organizing and planning production", "venue": "Management science,", "year": 1960 }, { "authors": [ "Lorenzo Keller", "M Jafari Siavoshani", "Christina Fragouli", "Katerina Argyraki", "Suhas Diggavi" ], "title": "Identity aware sensor networks", "venue": "In IEEE INFOCOM", "year": 2009 }, { "authors": [ "Christopher G Knight", "Mark Platt", "William Rowe", "David C Wedge", "Farid Khan", "Philip JR Day", "Andy McShea", "Joshua Knowles", "Douglas B Kell" ], "title": "Array-based evolution of dna aptamers allows modelling of an explicit sequence-fitness landscape", "venue": "Nucleic acids research,", "year": 2009 }, { "authors": [ "Stanislav Kondratyev", "Léonard Monsaingeon", "Dmitry Vorotnikov" ], "title": "A new optimal transport distance on the space of finite radon measures", "venue": "Advances in Differential Equations,", "year": 2016 }, { "authors": [ "Bo Li", "Junjie Yan", "Wei Wu", "Zheng Zhu", "Xiaolin Hu" ], "title": "High performance visual tracking with siamese region proposal network", "venue": "In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,", "year": 2018 }, { "authors": [ "Matthias Liero", "Alexander Mielke", "Giuseppe Savaré" ], "title": "Optimal entropy-transport problems and a new hellinger–kantorovich distance between positive measures", "venue": "Inventiones mathematicae,", "year": 2018 }, { "authors": [ "Giulia Luise", "Alessandro Rudi", "Massimiliano Pontil", "Carlo Ciliberto" ], "title": "Differential 
properties of sinkhorn approximation for learning with wasserstein distance", "venue": "In Advances in Neural Information Processing Systems,", "year": 2018 }, { "authors": [ "Marvin Marcus", "Rimhak Ree" ], "title": "Diagonals of doubly stochastic matrices", "venue": "The Quarterly Journal of Mathematics,", "year": 1959 }, { "authors": [ "A. Milan", "L. Leal-Taixé", "I. Reid", "S. Roth", "K. Schindler" ], "title": "MOT16: A benchmark for multi-object tracking. arXiv:1603.00831 [cs], March 2016", "venue": "URL http://arxiv.org/abs/1603", "year": 2016 }, { "authors": [ "Ashwin Pananjady", "Martin J Wainwright", "Thomas A Courtade" ], "title": "Linear regression with an unknown permutation: Statistical and computational limits", "venue": "In 2016 54th Annual Allerton Conference on Communication,", "year": 2016 }, { "authors": [ "Ashwin Pananjady", "Martin J Wainwright", "Thomas A Courtade" ], "title": "Denoising linear models with permuted data", "venue": "IEEE International Symposium on Information Theory (ISIT),", "year": 2017 }, { "authors": [ "Ashwin Pananjady", "Martin J Wainwright", "Thomas A Courtade" ], "title": "Linear regression with shuffled data: Statistical and computational limits of permutation recovery", "venue": "IEEE Transactions on Information Theory,", "year": 2017 }, { "authors": [ "Liangzu Peng", "Manolis C Tsakiris" ], "title": "Linear regression without correspondences via concave minimization", "venue": "arXiv preprint arXiv:2003.07706,", "year": 2020 }, { "authors": [ "Shaoqing Ren", "Kaiming He", "Ross Girshick", "Jian Sun" ], "title": "Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing", "venue": null, "year": 2015 }, { "authors": [ "P. Rigollet", "J. Weed" ], "title": "Uncoupled isotonic regression via minimum wasserstein deconvolution", "venue": "arXiv preprint arXiv:1806.10648,", "year": 2018 }, { "authors": [ "Ergys Ristani", "Francesco Solera", "Roger Zou", "Rita Cucchiara", "Carlo Tomasi" ], "title": "Performance measures and a data set for multi-target, multi-camera tracking", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "William S Robinson" ], "title": "A method for chronologically ordering archaeological deposits", "venue": "American antiquity,", "year": 1951 }, { "authors": [ "X. Shi", "X. Lu", "T. Cai" ], "title": "Spherical regresion under mismatch corruption with application to automated knowledge translation", "venue": "arXiv preprint arXiv:1810.05679,", "year": 2018 }, { "authors": [ "Richard Sinkhorn", "Paul Knopp" ], "title": "Concerning nonnegative matrices and doubly stochastic matrices", "venue": "Pacific Journal of Mathematics,", "year": 1967 }, { "authors": [ "M. Slawski", "E. Ben-David" ], "title": "Linear regression with sparsely permuted data", "venue": "Electronic Journal of Statistics,", "year": 2019 }, { "authors": [ "M. Slawski", "E. Ben-David", "P. Li" ], "title": "A two-stage approach to multivariate linear regression with sparsely mismatched data", "venue": "arXiv preprint arXiv:1907.07148,", "year": 2019 }, { "authors": [ "Martin Slawski", "Mostafa Rahmani", "Ping Li" ], "title": "A sparse representation-based approach to linear regression with partially shuffled labels", "venue": "In 35th Conference on Uncertainty in Artificial Intelligence,", "year": 2019 }, { "authors": [ "Jeffrey M Stanton. 
Galton" ], "title": "pearson, and the peas: A brief history of linear regression for statistics instructors", "venue": "Journal of Statistics Education,", "year": 2001 }, { "authors": [ "Sebastian Thrun" ], "title": "Simultaneous localization and mapping. In Robotics and cognitive approaches to spatial mapping, pp. 13–41", "venue": null, "year": 2007 }, { "authors": [ "Manolis Tsakiris", "Liangzu Peng" ], "title": "Homomorphic sensing", "venue": "In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning,", "year": 2019 }, { "authors": [ "Manolis C Tsakiris", "Liangzu Peng", "Aldo Conca", "Laurent Kneip", "Yuanming Shi", "Hayoung Choi" ], "title": "An algebraic-geometric approach to shuffled linear regression", "venue": "arXiv preprint arXiv:1810.05440,", "year": 2018 }, { "authors": [ "Jayakrishnan Unnikrishnan", "Saeid Haghighatshoar", "Martin Vetterli" ], "title": "Unlabeled sensing with random linear measurements", "venue": "IEEE Transactions on Information Theory,", "year": 2018 }, { "authors": [ "Erdem Varol", "Amin Nejatbakhsh" ], "title": "Robust approximate linear regression without correspondence", "venue": "arXiv preprint arXiv:1906.00273,", "year": 2019 }, { "authors": [ "Titouan Vayer", "Rémi Flamary", "Romain Tavenard", "Laetitia Chapel", "Nicolas Courty" ], "title": "Sliced gromov-wasserstein", "venue": "arXiv preprint arXiv:1905.10124,", "year": 2019 }, { "authors": [ "John Von Neumann" ], "title": "A certain zero-sum two-person game equivalent to the optimal assignment problem", "venue": "Contributions to the Theory of Games,", "year": 1953 }, { "authors": [ "Mengdi Wang", "Yichen Chen", "Jialin Liu", "Yuantao Gu" ], "title": "Random multi-constraint projection: Stochastic gradient methods for convex optimization with many constraints", "venue": "arXiv preprint arXiv:1511.03760,", "year": 2015 }, { "authors": [ "Yujia Xie", "Hanjun Dai", "Minshuo Chen", "Bo Dai", "Tuo Zhao", "Hongyuan Zha", "Wei Wei", "Tomas Pfister" ], "title": "Differentiable top-k operator with optimal transport", "venue": "arXiv preprint arXiv:2002.06504,", "year": 2020 }, { "authors": [ "Hongteng Xu", "Dixin Luo", "Hongyuan Zha", "Lawrence Carin" ], "title": "Gromov-wasserstein learning for graph matching and node embedding", "venue": "arXiv preprint arXiv:1901.06003,", "year": 2019 }, { "authors": [ "Yihong Xu", "Yutong Ban", "Xavier Alameda-Pineda", "Radu Horaud" ], "title": "Deepmot: A differentiable framework for training multiple object trackers", "venue": "arXiv preprint arXiv:1906.06618,", "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Regression analysis has been widely used in various machine learning applications to infer the relationship between an explanatory random variable (i.e., the input) X ∈ R^d and a response random variable (i.e., the output) Y ∈ R^o (Stanton, 2001). In the classical setting, regression is used on labeled datasets that contain paired samples {x_i, y_i}_{i=1}^n, where x_i, y_i are realizations of X, Y, respectively.\nUnfortunately, such an input-output correspondence is not always available in some applications. One example is flow cytometry, which is a physical experiment for measuring properties of cells, e.g., affinity to a particular target (Abid & Zou, 2018). Through this process, cells are suspended in a fluid and injected into the flow cytometer, where measurements are taken using the scattering of a laser. However, the instruments are unable to differentiate the cells passing through the laser, such that the correspondence between the cell properties (i.e., the measurements) and the cells is unknown. This prevents us from analyzing the relationship between the cells and the measurements using classical regression analysis, due to the missing correspondence. Another example is multi-object tracking, where we need to infer the motion of objects given consecutive frames in a video. This requires us to find the correspondence between the objects in the current frame and those in the next frame.\nThe two examples above can be formulated as a shuffled regression problem. Specifically, we consider a multivariate regression model\n$Y = f(X, Z; w) + \varepsilon,$\nwhere X ∈ R^d, Z ∈ R^e are two input vectors, Y ∈ R^o is an output vector, f : R^{d+e} → R^o is the unknown regression model with parameters w, and ε is the random noise independent of X and Z. When we sample realizations from such a regression model, the correspondence between (X, Y) and Z is not available. Accordingly, we collect two datasets D_1 = {x_i, y_i}_{i=1}^n and D_2 = {z_j}_{j=1}^n, and there exists a permutation π* such that (x_i, z_{π*(i)}) corresponds to y_i in the regression model. Our goal is to recover the unknown model parameter w. Existing literature also refers to the shuffled regression problem as unlabeled sensing, homomorphic sensing, and regression with an unknown permutation (Unnikrishnan et al., 2018). Throughout the rest of the paper, we refer to it as Regression WithOut Correspondence (RWOC).\nA natural choice of the objective for RWOC is to minimize the sum of squared residuals with respect to the regression model parameter w up to the permutation π(·) over the training data, i.e.,\n$\min_{w,\pi} L(w, \pi) = \sum_{i=1}^n \|y_i - f(x_i, z_{\pi(i)}; w)\|_2^2.$ (1)
Existing works on RWOC mostly focus on theoretical properties of the global optima to equation 1 for estimating w and π (Pananjady et al., 2016; 2017b; Abid et al., 2017; Elhami et al., 2017; Hsu et al., 2017; Unnikrishnan et al., 2018; Tsakiris & Peng, 2019). The development of practical algorithms, however, falls far behind in the following three aspects:\n• Most of the works are only applicable to linear regression models.\n• Some of the existing algorithms are of very high computational complexity, and can only handle a small number of data points in low dimensions (Elhami et al., 2017; Pananjady et al., 2017a; Tsakiris et al., 2018; Peng & Tsakiris, 2020). For example, Abid & Zou (2018) adopt an Expectation Maximization (EM) method where Metropolis-Hastings sampling is needed, which is not scalable. Other algorithms choose to optimize with respect to w and π in an alternating manner, e.g., alternating minimization in Abid et al. (2017). However, as there exists a strong interaction between w and π, the optimization landscape of equation 1 is ill-conditioned. Therefore, these algorithms are not effective and often get stuck in local optima.\n• Most of the works only consider the case where there exists an exact one-to-one correspondence between D_1 and D_2. In many more scenarios, however, these two datasets are not necessarily well aligned. For example, consider D_1 and D_2 collected from two separate databases, where the users overlap, but are not identical. As a result, there exists only a partial one-to-one correspondence. A similar situation also happens in multiple-object tracking: some objects may leave the scene in one frame, and new objects may enter the scene in subsequent frames. Therefore, not all objects in different frames can be perfectly matched. The RWOC problem with partial correspondence is known as robust-RWOC, or rRWOC (Varol & Nejatbakhsh, 2019), and is much less studied in the existing literature.\nTo address these concerns, we propose a new computational framework – ROBOT (Regression withOut correspondence using Bilevel OptimizaTion). Specifically, we propose to formulate the regression without correspondence as a continuous optimization problem. Then by exploiting the interaction between the regression model and the data correspondence, we propose to develop a hypergradient approach based on differentiable programming techniques (Duchi et al., 2008; Luise et al., 2018). Our hypergradient approach views the data correspondence as an operator of the regression, i.e., for a given w, the optimal correspondence is\n$\hat{\pi}(w) = \arg\min_{\pi} L(w, \pi).$ (2)\nAccordingly, when applying gradient descent to (1), we need to find the gradient with respect to w by differentiating through both the objective function L and the data correspondence π̂(w). For simplicity, we refer to such a gradient as the “hypergradient”. Note that due to its discrete nature, π̂(w) is actually not continuous in w. Therefore, such a hypergradient does not exist. To address this issue, we further propose to construct a smooth approximation of π̂(w) by adding an additional regularizer to equation 2, and then we replace π̂(w) with this smooth approximation when computing the hypergradient of w.
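For intuition, the hard correspondence in equation 2 is an assignment problem; a minimal sketch of π̂(w) using SciPy (the function name is ours), which also makes the discontinuity visible, since the returned permutation is piecewise constant in w:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hard_correspondence(C):
    # C[i, j] = ||y_i - f(x_i, z_j; w)||^2 for the current w; returns the cost-
    # minimizing permutation pi with pi[i] = j. As w varies continuously, this
    # output either stays fixed or jumps, so it admits no exact hypergradient.
    _, cols = linear_sum_assignment(C)
    return cols
```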
Moreover, we also propose an efficient and scalable implementation of hypergradient computation based on simple first-order algorithms and implicit differentiation, which outperforms conventional automatic differentiation in terms of time and memory cost.\nROBOT can also be extended to the robust RWOC problem, where D_1 and D_2 are not necessarily exactly aligned, i.e., some data points in D_1 may not correspond to any data point in D_2. Specifically, we relax the constraints on the permutation π(·) (Liero et al., 2018) to automatically match related data points and ignore the unrelated ones.\nFinally, we conduct thorough numerical experiments to demonstrate the effectiveness of ROBOT. For RWOC (i.e., exact correspondence), we use several synthetic regression datasets and a real gated flow cytometry dataset, and we show that ROBOT outperforms baseline methods by significant margins. For robust RWOC (i.e., inexact correspondence), in addition to synthetic datasets, we consider a vision-based multiple-object tracking task, and then we show that ROBOT also achieves significant improvement over baseline methods.\nNotations. Let ‖·‖_2 denote the ℓ_2 norm of vectors, and ⟨·, ·⟩ the inner product of matrices, i.e., $\langle A, B \rangle = \sum_{i,j} A_{ij} B_{ij}$ for matrices A and B. a_{i:j} are the entries from index i to index j of vector a. Let 1_n denote an n-dimensional vector of all ones. Denote by d(·)/d(·) the gradient of scalars, and by ∇_{(·)}(·) the Jacobian of tensors. We denote by [v_1, v_2] the concatenation of two vectors v_1 and v_2. N(µ, σ²) is the Gaussian distribution with mean µ and variance σ²." }, { "heading": "2 ROBOT: A HYPERGRADIENT APPROACH FOR RWOC", "text": "We develop our hypergradient approach for RWOC. Specifically, we first introduce a continuous formulation equivalent to (1), and then propose a smooth bi-level relaxation with an efficient hypergradient descent algorithm." }, { "heading": "2.1 EQUIVALENT CONTINUOUS FORMULATION", "text": "We propose a continuous optimization problem equivalent to (1). Specifically, we rewrite an equivalent form of (1) as follows,\n$\min_w \min_{S \in \mathbb{R}^{n \times n}} L(w, S) = \langle C(w), S \rangle \quad \text{subject to} \quad S \in \mathcal{P},$ (3)\nwhere P denotes the set of all n × n permutation matrices, and C(w) ∈ R^{n×n} is the loss matrix with\n$C_{ij}(w) = \|y_i - f(x_i, z_j; w)\|_2^2.$\nNote that we can relax S ∈ P, which is the discrete feasible set of the inner minimization problem of (3), to a convex set without affecting the optimality, as suggested by the next proposition.\nProposition 1. Given any a ∈ R^n and b ∈ R^m, we define $\Pi(a, b) = \{A \in \mathbb{R}^{n \times m} : A 1_m = a, A^\top 1_n = b, A_{ij} \geq 0\}$. The optimal solution to the inner discrete minimization problem of (3) is also the optimal solution to the following continuous optimization problem,\n$\min_{S \in \mathbb{R}^{n \times n}} \langle C(w), S \rangle, \quad \text{s.t.} \quad S \in \Pi(1_n, 1_n).$ (4)\nThis is a direct corollary of the Birkhoff-von Neumann theorem (Birkhoff, 1946; Von Neumann, 1953); please refer to Appendix A for more details. Proposition 1 allows us to replace P in (3) with Π(1_n, 1_n), which is also known as the Birkhoff polytope¹ (Ziegler, 2012). Accordingly, we obtain the following continuous formulation,\n$\min_w \min_{S \in \mathbb{R}^{n \times n}} \langle C(w), S \rangle \quad \text{subject to} \quad S \in \Pi(1_n, 1_n).$ (5)\nRemark 1. In general, equation 3 can be solved by linear programming algorithms (Dantzig, 1998)." }, { "heading": "2.2 CONVENTIONAL WISDOM: ALTERNATING MINIMIZATION", "text": "Conventional wisdom for solving (5) suggests using alternating minimization (AM, Abid et al. (2017)).
Specifically, at the k-th iteration, we first update S by solving\n$S^{(k)} = \arg\min_{S \in \Pi(1_n, 1_n)} L(w^{(k-1)}, S),$\nand then, given S^{(k)}, we update w using gradient descent or exact minimization, i.e., $w^{(k)} = w^{(k-1)} - \eta \nabla_w L(w^{(k-1)}, S^{(k)})$. However, AM works poorly for solving (5) in practice. This is because w and S have a strong interaction throughout the iterations: a slight change to w may lead to a significant change to S. Therefore, the optimization landscape is ill-conditioned, and AM can easily get stuck at local optima.\n¹This is a common practice in integer programming (Marcus & Ree, 1959)." }, { "heading": "2.3 SMOOTH BI-LEVEL RELAXATION", "text": "To tackle the aforementioned computational challenge, we propose a hypergradient approach, which can better handle the interaction between w and S. Specifically, we first relax (5) to a smooth bi-level optimization problem, and then we solve the relaxed bi-level optimization problem using the hypergradient descent algorithm.\nWe rewrite (5) as a smoothed bi-level optimization problem,\n$\min_w F(w) = \langle C(w), S^*_\epsilon(w) \rangle, \quad \text{subject to} \quad S^*_\epsilon(w) = \arg\min_{S \in \Pi(1_n, 1_n)} \langle C(w), S \rangle + \epsilon H(S),$ (6)\nwhere $H(S) = \langle \log S, S \rangle$ is the entropy of S. The regularizer εH(S) in equation 6 alleviates the sensitivity of S*_ε(w) to w. Note that without such a regularizer, we would solve\n$S^*(w) = \arg\min_{S \in \Pi(1_n, 1_n)} \langle C(w), S \rangle.$ (7)\nThe resulting S*(w) can be discontinuous in w. This is because S*(w) is the optimal solution of a linear optimization problem, and usually lies on a vertex of Π(1_n, 1_n). This means that if we change w, S*(w) either stays the same or jumps to another vertex of Π(1_n, 1_n). The jump makes S*(w) highly sensitive to w. To alleviate this issue, we propose to smooth S*(w) by adding an entropy regularizer to the lower-level problem. The entropy regularizer enforces S*_ε(w) to stay in the interior of Π(1_n, 1_n), and S*_ε(w) changes smoothly with respect to w, as suggested by the following theorem.\nTheorem 1. For any ε > 0, S*_ε(w) is differentiable if the cost C(w) is differentiable with respect to w. Consequently, the objective F(w) = ⟨C(w), S*_ε(w)⟩ is also differentiable.\nThe proof is deferred to Appendix C. Note that (6) provides us a new perspective to interpret the relationship between w and S. As can be seen from (6), w and S have different priorities: w is the parameter of the leader problem, which is of the higher priority; S is the parameter of the follower problem, which is of the lower priority, and can also be viewed as an operator of w – denoted by S*_ε(w). Accordingly, when we minimize (6) with respect to w using gradient descent, we should also differentiate through S*_ε. We refer to such a gradient as the “hypergradient”, defined as follows,\n$\nabla_w F(w) = \frac{\partial F(w)}{\partial C(w)} \frac{\partial C(w)}{\partial w} + \frac{\partial F(w)}{\partial S^*_\epsilon(w)} \frac{\partial S^*_\epsilon(w)}{\partial w} = \nabla_w L(w, S^*_\epsilon(w)) + \frac{\partial F(w)}{\partial S^*_\epsilon(w)} \frac{\partial S^*_\epsilon(w)}{\partial w}.$\nWe further examine the alternating minimization algorithm from the bi-level optimization perspective: since ∇_w L(w^{(k−1)}, S^{(k)}) does not differentiate through S^{(k)}, AM is essentially using an inexact gradient. From a game-theoretic perspective², (6) defines a competition between the leader w and the follower S. When using AM, S only reacts to what w has responded. In contrast, when using the hypergradient approach, the leader essentially recognizes the follower's strategy and reacts to what the follower is anticipated to respond through $\frac{\partial F(w)}{\partial S^*_\epsilon(w)} \frac{\partial S^*_\epsilon(w)}{\partial w}$. In this way, we can find a better descent direction for w.\nRemark 2.
We use a simple example of quadratic minimization to illustrate why we expect the bilevel optimization formulation in (6) to enjoy a benign optimization landscape. We consider a quadratic function\n$L(a_1, a_2) = a^\top P a + b^\top a,$ (8)\nwhere $a_1 \in \mathbb{R}^{d_1}$, $a_2 \in \mathbb{R}^{d_2}$, $a = [a_1, a_2]$, $P \in \mathbb{R}^{(d_1+d_2) \times (d_1+d_2)}$, $b \in \mathbb{R}^{d_1+d_2}$. Let $P = \rho 1_{d_1+d_2} 1_{d_1+d_2}^\top + (1-\rho) I_{d_1+d_2}$, where $I_{d_1+d_2}$ is the identity matrix, and ρ is a constant. We solve the following bilevel optimization problem,\n$\min_{a_1} F(a_1) = L(a_1, a_2^*(a_1)) \quad \text{subject to} \quad a_2^*(a_1) = \arg\min_{a_2} L(a_1, a_2) + \lambda \|a_2\|_2^2,$ (9)\nwhere λ is a regularization coefficient. The next proposition shows that ∇²F(a_1) enjoys a smaller condition number than ∇²_{a_1 a_1} L(a_1, a_2), which corresponds to the problem that AM solves.\nProposition 2. Given F defined in (9), we have\n$\frac{\lambda_{\max}(\nabla^2 F(a_1))}{\lambda_{\min}(\nabla^2 F(a_1))} = 1 + \frac{1-\rho+\lambda}{d_2\rho - \rho + \lambda + 1} \cdot \frac{d_1\rho}{1-\rho} \quad \text{and} \quad \frac{\lambda_{\max}(\nabla^2_{a_1 a_1} L(a_1, a_2))}{\lambda_{\min}(\nabla^2_{a_1 a_1} L(a_1, a_2))} = 1 + \frac{d_1\rho}{1-\rho}.$\nThe proof is deferred to Appendix B.1. As suggested by Proposition 2, F(a_1) is much better-conditioned than L(a_1, a_2) in terms of a_1 for high-dimensional settings.\n²The bilevel formulation can be viewed as a Stackelberg game." }, { "heading": "2.4 SOLVING RWOC BY HYPERGRADIENT DESCENT", "text": "We present how to solve (6) using our hypergradient approach. Specifically, we compute the “hypergradient” of F(w) based on the following theorem.\nTheorem 2. The gradient of F with respect to w is\n$\nabla_w F(w) = \sum_{i,j=1}^{n,n} \Big( \big(1 - \tfrac{C_{ij}}{\epsilon}\big) S^*_{\epsilon,ij} + \sum_{h,\ell=1}^{n,n} C_{h\ell} S^*_{\epsilon,h\ell} P_{hij} + \sum_{h,\ell=1}^{n,n} C_{h\ell} S^*_{\epsilon,h\ell} Q_{\ell ij} \Big) \nabla_w C_{ij}.$ (10)\nThe definitions of P and Q and the proof are deferred to Appendix C. Theorem 2 suggests that we first solve the lower-level problem in (6),\n$S^*_\epsilon = \arg\min_{S \in \Pi(1_n, 1_n)} \langle C(w), S \rangle + \epsilon H(S),$ (11)\nand then substitute S*_ε into (10) to obtain ∇_w F(w). Note that the optimization problem in (11) can be efficiently solved by a variant of the Sinkhorn algorithm (Cuturi, 2013; Benamou et al., 2015). Specifically, (11) can be formulated as an entropic optimal transport (EOT) problem (Monge, 1781; Kantorovich, 1960), which aims to find the optimal way to transport the mass from a categorical distribution with weight $\mu = [\mu_1, ..., \mu_n]^\top$ to another categorical distribution with weight $\nu = [\nu_1, ..., \nu_m]^\top$,\n$\Gamma^* = \arg\min_{\Gamma \in \Pi(\mu, \nu)} \langle M, \Gamma \rangle + \epsilon H(\Gamma)$, with $\Pi(\mu, \nu) = \{\Gamma \in \mathbb{R}^{n \times m} : \Gamma 1_m = \mu, \Gamma^\top 1_n = \nu, \Gamma_{ij} \geq 0\},$ (12)\nwhere M ∈ R^{n×m} is the cost matrix with M_{ij} the transport cost. When we set the two categorical distributions as the empirical distributions of D_1 and D_2, respectively, M = C(w) and µ = ν = 1_n/n, one can verify that (12) is a scaled lower-level problem of (6), and their optimal solutions satisfy S*_ε = nΓ*. Therefore, we can apply the Sinkhorn algorithm to solve the EOT problem in equation 12: at the ℓ-th iteration, we take\n$p^{(\ell+1)} = \frac{\mu}{G q^{(\ell)}} \quad \text{and} \quad q^{(\ell+1)} = \frac{\nu}{G^\top p^{(\ell+1)}}, \quad \text{where} \quad q^{(0)} = \frac{1}{n} 1_n \quad \text{and} \quad G_{ij} = \exp\big(-\tfrac{C_{ij}(w)}{\epsilon}\big),$\nG ∈ R^{n×n}, and the division here is entrywise. Let p* and q* denote the stationary points. Then we obtain $S^*_{\epsilon,ij} = n p^*_i G_{ij} q^*_j$.\nRemark 3. The Sinkhorn algorithm is iterative and cannot exactly solve (11) within finite steps. As the Sinkhorn algorithm is very efficient and attains linear convergence, it suffices to approximate the gradient ∇_w F(w) well using the inexact output solution." }, { "heading": "3 ROBOT FOR ROBUST CORRESPONDENCE", "text": "We next propose a robust version of ROBOT to solve rRWOC (Varol & Nejatbakhsh, 2019). Note that in (6), the constraint S ∈ Π(1_n, 1_n) enforces a one-to-one matching between D_1 and D_2. For rRWOC, however, such an exact matching may not exist. For example, we may have n < m, where n = |D_1| and m = |D_2|.
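Before developing the relaxation, it is worth noting that the Sinkhorn solver of Section 2.4 is only a few lines; a PyTorch sketch for the uniform-marginal case (the defaults are illustrative):

```python
import torch

def sinkhorn_plan(C, eps=0.1, iters=200):
    # Entropic plan for (11)/(12) with mu = nu = 1/n; returns S*_eps = n * Gamma*.
    n = C.shape[0]
    mu = torch.full((n,), 1.0 / n, dtype=C.dtype)
    G = torch.exp(-C / eps)                  # G_ij = exp(-C_ij / eps)
    q = torch.full((n,), 1.0 / n, dtype=C.dtype)
    for _ in range(iters):
        p = mu / (G @ q)                     # row scaling: p = mu / (G q)
        q = mu / (G.t() @ p)                 # col scaling: q = nu / (G^T p), nu = mu here
    return n * p[:, None] * G * q[None, :]   # S*_eps,ij = n p_i G_ij q_j
```

Every operation here is differentiable in C, so backpropagating the upper-level loss ⟨C(w), S*_ε(w)⟩ through the unrolled iterations is the conventional automatic-differentiation route; the closed form of Theorem 2 avoids storing the iterates, which is the source of the time and memory savings mentioned earlier.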
Therefore, we need to relax the constraint on S. Motivated by the connection between (6) and (12), we propose to solve the lower-level problem³:\n$(S^*_r(w), \bar{\mu}^*, \bar{\nu}^*) = \arg\min_{S \in \Pi(\bar{\mu}, \bar{\nu}),\, \bar{\mu},\, \bar{\nu}} \langle C(w), S \rangle + \epsilon H(S),$ (13)\nsubject to $\bar{\mu}^\top 1_n = n$, $\bar{\nu}^\top 1_m = m$, $\|\bar{\mu} - 1_n\|_2^2 \leq \rho_1$, $\|\bar{\nu} - 1_m\|_2^2 \leq \rho_2$,\nwhere S*_r(w) ∈ R^{n×m} denotes an inexact correspondence between D_1 and D_2. As can be seen in (13), we relax the marginal constraint Π(1, 1) in (6) to Π(µ̄, ν̄), where µ̄ and ν̄ are picked such that they do not deviate too much from 1⁴. Illustrative examples of the exact and robust alignments are provided in Figure 1.\n³The idea is inspired by the marginal relaxation of optimal transport, first independently proposed by Kondratyev et al. (2016) and Chizat et al. (2018a), and later developed by Chizat et al. (2018c); Liero et al. (2018). Chizat et al. (2018b) share the same formulation as ours.\n⁴Here we measure the deviation using the Euclidean distance; more detailed discussions can be found in Appendix F.\nComputationally, (13) can be solved by taking the Sinkhorn iteration and the projected gradient iteration in an alternating manner (see Appendix D for more details). Given S*_r(w), we solve the upper-level optimization in (6) to obtain w*, i.e.,\n$w^* = \arg\min_w \langle C(w), S^*_r(w) \rangle.$\nSimilar to the previous section, we use a first-order algorithm to solve this problem, and we derive explicit expressions for the update rules. See Appendix E for details.\n\n4 EXPERIMENT\n\nWe evaluate ROBOT and ROBOT-robust on both synthetic and real-world datasets, including flow cytometry and multi-object tracking. We first present numerical results and then provide insights in the discussion section. Experiment details and auxiliary results can be found in Appendix G." }, { "heading": "4.1 UNLABELED SENSING", "text": "Data Generation. We follow the unlabeled sensing setting (Tsakiris & Peng, 2019) and generate n = 1000 data points {(y_i, z_i)}_{i=1}^n, where z_i ∈ R^e. Note that here we take d = 0. We first generate z_i, w ∼ N(0_e, I_e) and ε_i ∼ N(0, ρ²_noise). Then we compute $y_i = z_i^\top w + \varepsilon_i$. We randomly permute the order of 50% of the z_i so that we lose the Z-to-Y correspondence. We generate the test set in the same way, only without permutation.\nBaselines and Training. We consider the following scalable methods:\n1. Oracle: standard linear regression where no data are permuted.\n2. Least Squares (LS): standard linear regression, i.e., treating the data as if they were not permuted.\n3. Alternating Minimization (AM, Abid et al. (2017)): we iteratively solve the correspondence given w, and update w using gradient descent with the correspondence.\n4. Stochastic EM (Abid & Zou, 2018): a stochastic EM approach to recover the permutation.\n5. Robust Regression (RR, Slawski & Ben-David (2019); Slawski et al. (2019a)): a two-stage block coordinate descent approach to discard outliers and fit regression models.\n6. Random Sample (RS, Varol & Nejatbakhsh (2019)): a random sample consensus (RANSAC) approach to estimate w.\nWe initialize AM, EM and ROBOT using the output of RS with multi-start. We adopt a linear model $f(Z; w) = Z^\top w$. Models are evaluated by the relative error on the test set, i.e., $\mathrm{error} = \sum_i (\hat{y}_i - y_i)^2 / \sum_i (y_i - \bar{y})^2$, where ŷ_i is the predicted label and ȳ is the mean of {y_i}.\nResults. We visualize the results in Figure 2.
In all the experiments, ROBOT achieves better results than the baselines. Note that the relative error becomes larger for all methods except Oracle as the dimension and the noise increase. For low-dimensional data, e.g., e = 5, our model achieves even better performance than Oracle. We provide more discussion on using RS as initialization in Appendix G.5." }, { "heading": "4.2 NONLINEAR REGRESSION", "text": "Data Generation. We mimic the scenario where the dataset is collected from different platforms. Specifically, we generate n data points {(y_i, [x_i, z_i])}_{i=1}^n, where x_i ∈ R^d and z_i ∈ R^e. We first generate x_i ∼ N(0_d, I_d), z_i ∼ N(0_e, I_e), w ∼ N(0_{d+e}, I_{d+e}), and ε_i ∼ N(0, ρ^2_noise). Then we compute y_i = f([x_i, z_i]; w) + ε_i. Next, we randomly permute the order of {z_i} so that we lose the data correspondence. Here, D1 = {(x_i, y_i)} and D2 = {z_j} mimic two parts of data collected from two separate platforms. Since we are interested in the response on platform one, we treat all data from platform two, i.e., D2, as well as 80% of the data in D1, as the training data. The remaining data from D1 are the test data. Notice that we have different numbers of data points in D1 and D2, i.e., the correspondence is not exactly one-to-one.\nBaselines and Training. We consider a nonlinear function f(X, Z; w) = Σ_{k=1}^{d} sin([X, Z]_k w_k). In this case, we consider only two baselines, Oracle and LS, since the other baselines in the previous section are designed for linear models. We evaluate the regression models by the transport cost divided by Σ_i (y_i − ȳ)^2 on the test set.\nResults. As shown in Figure 3, ROBOT-robust consistently outperforms ROBOT and LS, demonstrating the effectiveness of our robust formulation. Moreover, ROBOT-robust achieves better performance than Oracle when the number of training data is large or when the noise level is high.\n4.3 FLOW CYTOMETRY\nIn flow cytometry (FC), a sample containing particles is suspended in a fluid and injected into the flow cytometer, but the measuring instruments are unable to preserve the correspondence between the particles and the measurements. Different from FC, gated flow cytometry (GFC) uses “gates” to sort the particles into one of many bins, which provides partial ordering information since the measurements are provided individually for each bin. In practice, there are usually 3 or 4 bins.\nSettings. We adopt the dataset from Knight et al. (2009). Following Abid et al. (2017), the outputs y_i are normalized, and we select the top 20 significant features by a linear regression on the top 1400 items in the dataset. We use 90% of the data as the training data, and the remaining as test data. For ordinary FC, we randomly shuffle all the labels in the training set. For GFC, the training set is first sorted by the labels, and then divided into equal-sized groups, mimicking the sorting-by-gates process. The labels in each group are then randomly shuffled. To simulate gating error, 1% of the data are shuffled across the groups. We compare ROBOT with Oracle, LS, Hard EM (a variant of Stochastic EM proposed in Abid & Zou (2018)), Stochastic EM, and AM. We use the relative error on the test set as the evaluation metric.\nResults. As shown in Figure 4, while AM achieves good performance on GFC when the number of groups is 3, it behaves poorly on the FC task. ROBOT, on the other hand, is efficient on both tasks." 
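As a concrete illustration of the data-generating processes in Sections 4.1 and 4.2, the sketch below builds a shuffled nonlinear-regression dataset; the function name and default arguments are ours for illustration, not taken from the paper's code.

```python
import numpy as np

def make_shuffled_data(n=1000, d=5, e=5, rho_noise=0.1, shuffle_frac=0.5, seed=0):
    """Generate {(y_i, [x_i, z_i])} and permute a fraction of the z_i's."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d))
    Z = rng.standard_normal((n, e))
    w = rng.standard_normal(d + e)
    noise = rho_noise * rng.standard_normal(n)
    # y_i = f([x_i, z_i]; w) + eps_i with f(X, Z; w) = sum_k sin([X, Z]_k w_k)
    y = np.sin(np.concatenate([X, Z], axis=1) * w).sum(axis=1) + noise
    # Randomly permute a fraction of Z so the Z-to-Y correspondence is lost.
    idx = rng.choice(n, size=int(shuffle_frac * n), replace=False)
    Z[idx] = Z[rng.permutation(idx)]
    return X, Z, y, w
```

Feeding (X, y) and the permuted Z to any of the methods above reproduces the lost-correspondence setting; setting d = 0 recovers the unlabeled sensing case of Section 4.1.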
}, { "heading": "4.4 MULTI-OBJECT TRACKING", "text": "In this section we extend our method to vision-based Multi-Object Tracking (MOT), a task with broad applications in mobile robotics and autonomous driving, to show the potential of applying RWOC to more real-world tasks. Given a video and the current frame, the goal of MOT is to predict the locations of the objects in the next frame. Specifically, object detectors (Felzenszwalb et al., 2009; Ren et al., 2015) first provide us the potential locations of the objects by their bounding boxes. Then, MOT aims to assign the bounding boxes to trajectories that describe the path of individual objects over time. Here, we formulate the current frame and the objects’ locations in the current frame as D2 = {zj}, while we treat the next frame and the locations in the next frame as D1 = {(xi, yi)}.\nExisting deep learning based MOT algorithms require large amounts of annotated data, i.e., the ground truth of the correspondence, during training. Different from them, our algorithm does not require the correspondence between D1 and D2, and all we need is the video. This task is referred to as unsupervised MOT (He et al., 2019).\nRelated Works. To the best of our knowledge, the only method that accomplishes unsupervised end-toend learning of MOT is He et al. (2019). However, it targets tracking with low densities, e.g., Sprites-MOT, which is different from our focus.\nSettings. We adopt the MOT17 (Milan et al., 2016) and the MOT20 (Dendorfer et al., 2020) datasets. Scene densities of the two datasets are 31.8 and 170.9, respectively, which means the scenes are pretty crowded as we illustrated in Figure 5. We adopt the DPM detector (Felzenszwalb et al., 2009) on MOT17 and the Faster-RCNN detector (Ren et al., 2015) on MOT20 to provide us the bounding boxes. Inspired by Xu et al. (2019b), the cost matrix is computed as the average of the Euclidean center-point distance and the Jaccard distance between the bounding boxes,\nCij(w) = 1\n2\n( ‖c(f(zj ;w))− c(yi)‖2√\nH2 +W 2 + J (f(zj ;w), yi)\n) ,\nwhere c(·) is the location of the box center, H and W are the height and the width of the video frame, and J (·, ·) is the Jaccard distance defined as 1-IoU (Intersection-over-Union). We utilize the single-object tracking model SiamRPN5 (Li et al., 2018) as our regression model f . We apply ROBOT-robust with ρ1 = ρ2 = 10−3. See Appendix G for more detailed settings.\nResults. We demonstrate the experiment results in Table 1, where the evaluation metrics follow Ristani et al. (2016). In the table, ↑ represents the higher the better, and ↓ represents the lower the better. ROBOT signifies the model trained by ROBOT-robust, and w/o ROBOT means the pretrained model in Li et al. (2018). The scores are improved significantly after training with ROBOT-robust.\nWe also include the scores of the SORT model (Bewley et al., 2016) obtained from the dataset platform. Different from SiamRPN and SiamRPN+ROBOT, SORT is a supervised learning model. As shown, our unsupervised training framework achieves comparable or even better performance." }, { "heading": "5 DISCUSSION", "text": "Sensitivity to initialization. As stated in Pananjady et al. (2017b), obtaining the global optima of (1) is in general an NP-hard problem. Some “global” methods use global optimization techniques and have exponential complexity, e.g., Elhami et al. (2017), which is not applicable to large data. 
In contrast to these “global” methods, the existing “local” methods only guarantee convergence to local optima, and the convergence is very sensitive to initialization. Compared with existing “local” methods, our method is computationally efficient and greatly reduces the sensitivity to initialization.\n5The initial weights of f are obtained from https://github.com/foolwood/DaSiamRPN.\n[Figure 6: Results of different initializations of AM and ROBOT. (a) Training residual (avg. residual vs. percentile); (b) Test error (rel. error vs. percentile).]\nTo demonstrate such an advantage, we run AM and ROBOT with 10 different initial solutions, and then we sort the results based on (a) the averaged residual on the training set, and (b) the relative prediction error on the test set. We plot the percentiles in Figure 6. Here we use fully shuffled data under the unlabeled sensing setting, and we set n = 1000, e = 5, ρ^2_noise = 0.1, and ε = 10^{−2}. We can see that ROBOT can find “good” solutions in 30% of the cases (the relative prediction error is smaller than 1), but AM is more sensitive to the initialization and cannot find “good” solutions.\nROBOT vs. Automatic Differentiation (AD). Our algorithm computes the Jacobian matrix directly based on the KKT conditions of the lower problem (11). An alternative approach to approximating the Jacobian is automatic differentiation through the Sinkhorn iterations for updating S when solving (11). As suggested by Figure 7 (a), running Sinkhorn iterations until convergence (200 Sinkhorn iterations) can lead to a better solution (see footnote 6). In order to apply AD, we need to store all the intermediate updates of all the Sinkhorn iterations. This requires the memory usage to be proportional to the number of iterations, which is not necessarily affordable. In contrast, applying our explicit expression for the backward pass is memory-efficient. Moreover, we also observe that AD is much more time-consuming than our method. The timing performance and memory usage are shown in Figure 7 (b)(c), where we set n = 1000.\nConnection to EM. Abid & Zou (2018) adopt an Expectation Maximization (EM) method for RWOC, where S is modeled as a latent random variable. Then, in the M-step, one maximizes the expected likelihood of the data over S. This method shares the same spirit as ours: we avoid updating w using one single permutation matrix, as AM does. However, this method is very dependent on a good initialization. Specifically, if we randomly initialize w, the posterior distribution of S in this iteration would be close to its prior, which is a uniform distribution. In this way, the follow-up update for w is not informative. Therefore, the solution of EM would quickly converge to an undesired stationary point. Figure 8 illustrates an example of converged correspondence, where we adopt n = 30, o = e = 1, d = 0. For this reason, we initialize EM with good initial points, obtained either by RS or AM, throughout all experiments.\nRelated works with additional constraints. There is another line of research that improves computational efficiency by solving variants of RWOC with additional constraints. Specifically, Haghighatshoar & Caire (2017); Rigollet & Weed (2018) assume an isotonic function (note that such an assumption may not hold in practice), and Shi et al. (2018); Slawski & Ben-David (2019); Slawski et al. (2019a;b); Varol & Nejatbakhsh (2019) assume only a small fraction of the correspondence is missing. 
Our method is also applicable to these problems, as long as the additional constraints can be adapted to the implicit differentiation.\nMore applications of RWOC. RWOC problems generally appear for two reasons. First, the measuring instruments are unable to preserve the correspondence. In addition to GFC and MOT, we list a few more examples: SLAM tracking (Thrun, 2007), archaeological measurements (Robinson, 1951), large sensor networks (Keller et al., 2009), pose and correspondence estimation (David et al., 2004), and the genome assembly problem from shotgun reads (Huang & Madan, 1999). Second, the data correspondence is masked for privacy reasons. For example, we want to build a recommender system for a new platform, borrowing user data from a mature platform.\n6We remark that running one iteration sometimes cannot converge." }, { "heading": "ACKNOWLEDGEMENT", "text": "This works is partially supported by NSF IIS-2008334. Hongteng Xu is supported in part by Beijing Outstanding Young Scientist Program (NO. BJJWZYJH012019100020098) and National Natural Science Foundation of China (No. 61832017). Xiaojing Ye is partially supported by NSF DMS1925263. Hongyuan Zha is supported in part by a grant from Shenzhen Institute of Artificial Intelligence and Robotics for Society. We also appreciate the fruitful discussions with Bo Dai and Yan Li." }, { "heading": "A CONNECTION BETWEEN OT AND RWOC", "text": "Theorem 1. Denote Π(a, b) = {S ∈ Rn×m : S1m = a, S>1n = b, Sij ≥ 0} for any a ∈ Rn and b ∈ Rm. Then at least one of the optimal solutions of the following problem lies in P .\nminS∈Rn×n〈C(w), S〉, s.t. S ∈ Π(1n,1n). (14)\nProof. Denote the optimal solution of (14) as Z∗. As we mentioned earlier, this is a direct corollary of Birkhoff–von Neumann theorem (Birkhoff, 1946; Von Neumann, 1953). Specifically, Birkhoff–von Neumann theorem claims that the polytope Π(1n,1n) is the convex hull of the set of n × n permutation matrices, and furthermore that the vertices of Π(1n,1n) are precisely the permutation matrices.\nOn the other hand, (14) is a linear optimization problem. There would be at least one optimal solutions lies at the vertices given the problem is feasible. As a result, there would be at least one Z∗ being a permutation matrix." }, { "heading": "B TWO PERSPECTIVES OF THE MOTIVATIONS OF BILEVEL OPTIMIZATION", "text": "" }, { "heading": "B.1 FASTER CONVERGENCE", "text": "The bilevel optimization formulation has a better gradient descent iteration complexity than alternating minimization. To see this, consider a quadratic function F (a1, a2) = a>Pa + b>a, where a1 ∈ Rd1 , a2 ∈ Rd2 , a = [a>1 , a>2 ]> ∈ R(d1+d2), P ∈ R(d1+d2)×(d1+d2), b ∈ R(d1+d2). To further simplify the discussion, we assume P = ρ1(d1+d2)1 > (d1+d2)\n+ (1 − ρ)Id1+d2 , where Id1+d2 is the identity matrix. Then we have the following proposition.\nProposition 1. Given F defined in (9), we have λmax(∇2F (a1)) λmin(∇2F (a1)) = 1 + 1− ρ+ λ 1− ρ d1ρ d2ρ− ρ+ λ+ 1 and λmax(∇2a1a1L(a1, a2)) λmin(∇2a1a1L(a1, a2)) = 1 + d1ρ 1− ρ .\nProof. For alternating minimization, the Hessian for a1 is a submatrix of P , i.e., HAM = ρ1d11 > d1 + (1− ρ)Id1 , whose condition number is\nCAM = 1 + d1ρ\n1− ρ .\nWe now compute the condition number for ROBOT. Denote\nP = [ P11 P12 P21 P22 ] , b = [ b1 b2 ] ,\nwhere P11 ∈ Rd1×d1 , P12 ∈ Rd1×d2 , P21 ∈ Rd2×d1 , P22 ∈ Rd2×d2 , and b1 ∈ Rd1 , b2 ∈ Rd2 . 
ROBOT first minimize over a2,\na∗2(a1) = arg min a2 F (a1, a2) = −(P22 + λId2)−1(P21a1 + b2/2).\nSubstituting a∗2(a1) into F (a1, a2), we can obtain the Hessian for a1 is HROBOT = P11 − P12(P22 + λId2)−1P21. Using Sherman–Morrison formula, we can explicitly express P−122 as\nP−122 = 1\n1− ρ+ λ Id2 −\nρ\n(1− ρ+ λ)(1− ρ+ λ+ ρd2) 1d21\n> d2 .\nSubstituting it into HROBOT, HROBOT = P11 − P12P−122 P21 = (1− ρ)Id1 + ( ρ− d2ρ 2\nd2ρ− ρ+ λ+ 1\n) 1d11 > d1 .\nTherefore, the condition number is\nCROBOT = 1 + 1− ρ+ λ 1− ρ d1ρ d2ρ− ρ+ λ+ 1 .\nNote that CAM increases linearly with respect to d1. Therefore, the optimization problem inevitably becomes ill-conditioned as dimension increase. In contrast, CROBOT can stay in the same order of magnitude when d1 and d2 increase simultaneously.\nSince the iteration complexity of gradient descent is proportional to the condition number (Bottou et al., 2018), ROBOT needs fewer iterations to converge than AM." }, { "heading": "C DIFFERENTIABILITY", "text": "Theorem 2. For any > 0, S∗ (w) is differentiable, as long as the cost C(w) is differentiable with respect to w. As a result, the objective L (w) = 〈C(w), S∗ (w)〉 is also differentiable.\nProof. The proof is analogous to Xie et al. (2020).\nWe first prove the differentiability of S∗ (w). This part of proof mirrors the proof in Luise et al. (2018). By Sinkhorn’s scaling theorem (Sinkhorn & Knopp, 1967),\nS∗ (w) = diag(e ξ∗(w) )e− C(w) diag(e ζ∗(w) ).\nTherefore, since Cij(w) is differentiable, Γ∗, is differentiable if (ξ∗(w), ζ∗(w)) is differentiable as a function of w.\nLet us set\nL(ξ, ζ;µ, ν, C) = ξTµ+ ζT ν − n,m∑ i,j=1 e− Cij−ξi−ζj .\nand recall that (ξ∗, ζ∗) = arg maxξ,ζ L(ξ, ζ;µ, ν, C). The differentiability of (ξ ∗, ζ∗) is proved using the Implicit Function theorem and follows from the differentiability and strict convexity in (ξ∗, ζ∗) of the function L.\nTheorem 3. Denoting L = 〈C(w), S∗ (w)〉. The gradient of L with respect to w is\n∇wL = 1 n,n∑ i,j=1 (1− Cij)S∗ ,ij + n,n∑ h,`=1 Ch`S ∗ ,h` dξ∗h dCij + n,n∑ h,`=1 Ch`S ∗ ,h` dζ∗` dCij ∇wCij , (15) where [ ∇Cξ∗ ∇Cζ∗ ] = [ −H−1D 0 ] with −H−1D ∈ R(2n−1)×n×n,0 ∈ R1×n×n,\nD`ij = 1\n{ δ`iS ∗ ,ij , ` = 1, · · · , n;\nδ`jS ∗ ,ij , ` = n+ 1, · · · , 2n− 1,\nH−1 = −\n[ (diag(µ))−1 + (diag(µ))−1S̄∗ K−1S̄∗ T (diag(µ))−1 −(diag(µ))−1S̄∗ K−1\n−K−1S̄∗ T (diag(µ))−1 K−1\n] ,\nand K = diag(ν̄)− S̄∗ T (diag(µ))−1S̄∗ , ν̄ = ν1:n−1, S̄∗ = S ∗ ,1:n,1:n−1.\nProof. This result is straightforward combining the Sinkhorn’s scaling theorem and Theorem 3 in Xie et al. (2020)." }, { "heading": "D ALGORITHM OF THE FORWARD PASS FOR ROBOT-ROBUST", "text": "For better numerical stability, in practice we add two more regularization terms, S∗r (w), µ̄\n∗, ν̄∗ = arg minS∈Π(µ̄,ν̄), µ̄,ν̄∈∆n〈C(w), S〉+ H(S) + 1h(µ̄) + 2h(ν̄), (16) s.t. F(µ̄, µ) ≤ ρ1, F(ν̄, ν) ≤ ρ2,\nwhere h(µ̄) = ∑ i µ̄i log µ̄i is the entropy function for vectors. This can avoid the entries of µ̄ and ν̄ shrink to zeros when updated by gradient descent. We remark that since we have entropy termH(S), the entries of S would not be exactly zeros. Furthermore, we have µ̄ = S1 and µ̄ = S1. Therefore, theoretically the entries of µ̄ and ν̄ will not be zeros. We only add the two more entropy terms for numerical consideration. The detailed algorithm is in Algorithm 1. 
Although the algorithm is not guaranteed to converge to a feasible solution, in practice it usually converges to a good solution (Wang et al., 2015).\nAlgorithm 1 Solving S∗r for robust matching\nRequire: C ∈ Rm×n, µ, ν,K, , L, η Gij = e − Cij\nµ̄ = µ, ν̄ = ν b = 1n for l = 1, · · · , L do a = µ̄/(Gb), b = ν̄/(GTa)\nµ̄ = µ̄− η(e a + 1 ∗ log µ̄), ν̄ = ν̄ − η(e b + 2 ∗ log ν̄) µ̄ = max{µ̄, 0}, ν̄ = max{ν̄, 0} µ̄ = µ̄/(µ̄>1), ν̄ = ν̄/(ν̄>1) if ‖µ̄− µ‖22 > ρ1 then µ̄ = µ+ √ ρ1\nµ̄−µ ‖µ̄−µ‖2\nend if if ‖ν̄ − ν‖22 > ρ2 then ν̄ = ν + √ ρ2\nν̄−ν ‖ν̄−ν‖2\nend if end for S = diag(a) G diag(b)" }, { "heading": "E ALGORITHM OF THE BACKWARD PASS FOR ROBOT-ROBUST", "text": "Since the derivation is tedious, we first summarize the outline of the derivation, then provide the detailed derivation." }, { "heading": "E.1 SUMMARY", "text": "Given µ̄∗, ν̄∗, S∗r (w), we compute the Jacobian matrix dS ∗ r (w)/dw using implicit differentiation and differentiable programming techinques. Specifically, the Lagrangian function of Problem (16) is\nL =〈C, S〉+ H(S) + 1h(µ̄) + 2h(ν̄)− ξ>(Γ1m − µ)− ζ>(Γ>1n − ν) + λ1(µ̄\n>1n − 1) + λ2(ν̄>1m − 1) + λ3(‖µ̄− µ‖22 − ρ1) + λ4(‖ν̄ − ν‖22 − ρ2). where ξ and ζ are dual variables. The KKT conditions (Stationarity condition) imply that the optimal solution Γ∗, can be formulated using the optimal dual variables ξ∗ and ζ∗ as,\nS∗r = diag(e ξ∗ )e− C diag(e ζ∗ ). (17)\nBy the chain rule, we have dS∗r dw = dS∗r dC dC dw = ( ∂S∗r ∂C + ∂S∗r ∂ξ∗ dξ∗ dC + ∂S∗r ∂ζ∗ dζ∗ dC ) dC dw .\nTherefore, we can compute dS∗r (w)/dw if we obtain dξ∗ dC and dζ∗ dC .\nSubstituting (17) into the Lagrangian function, at the optimal solutions we obtain L = L(ξ∗, ζ∗, µ̄∗, ν̄∗, λ∗1, λ∗2, λ∗3, λ∗4;C).\nDenote r∗ = [(ξ∗)>, (ζ∗)>, (µ̄)>, (ν̄)>, λ∗1, λ ∗ 2, λ ∗ 3, λ ∗ 4] >, and φ(r∗;C) = ∂L(r∗;C)/∂r∗. At the optimal dual variable r∗, the KKT condition immediately yields φ(r∗;C) ≡ 0. By the chain rule, we have\ndφ(r∗;C) dC = ∂φ(r∗;C) ∂C + ∂φ(r∗;C) ∂r∗ dr∗ dC = 0. (18)\nRerranging terms, we obtain\ndr∗ dC = −\n( ∂φ(r∗;C)\n∂r∗\n)−1 ∂φ(r∗;C)\n∂C . (19)\nCombining (17), (18), and (19), we can then obtain dS∗r (w)/dw." }, { "heading": "E.2 DETAILS", "text": "Now we provide the detailed derivation for computing dS∗r /dw.\nSince S∗r is the optimal solution of an optimization problem, we can follow the implicit function theorem to solve for the closed-form expression of the gradient. Specifically, we adopt F(µ̄, ν) =∑ i(µ̄i − µi)2, and rewrite the optimization problem as\nmin µ̄,ν̄,S 〈C, S〉+ ∑ ij Sij(logSij − 1) + 1 ∑ i µ̄i(log µ̄i − 1) + 2 ∑ j ν̄j(log ν̄j − 1),\ns.t., ∑ j Sij = µ̄i, ∑ i\nSij = ν̄j ,∑ i µ̄i = 1, ∑ j\nν̄j = 1,∑ i (µ̄i − µi)2 ≤ ρ1, ∑ j (ν̄j − νj)2 ≤ ρ2.\nThe Language of the above problem is L(C, S, µ̄, ν̄, ξ, ζ, λ1, λ2, λ3, λ4)\n= 〈C, S〉+ ∑ ij Sij(logSij − 1) + 1 ∑ i µ̄i(log µ̄i − 1) + 2 ∑ j ν̄j(log ν̄j − 1)\n− ξ>(S1m − µ̄)− ζ>(S>1n − ν̄) + λ1( ∑ i µ̄i − 1) + λ2( ∑ j ν̄j − 1) + λ3( ∑ i (µ̄i − µi)2 − ρ1) + λ4( ∑ j (ν̄j − νj)2 − ρ2).\nEasy to see that the Slater’s condition holds. Denote L∗ = L(C, S∗r , µ̄∗, ν̄∗, ξ∗, ζ∗, λ∗1, λ∗2, λ∗3, λ∗4).\nFollowing the KKT conditions, dL∗\ndS∗r,ij = Cij + logS\n∗ r,ij − ξ∗i − ζ∗j = 0.\nTherefore, S∗r,ij = e ξ∗i +ζ ∗ j−Cij . Then we have\ndS∗r dw = ( ∂S∗r ∂C + ∂S∗r ∂ξ∗ dξ∗ dC + ∂S∗r ∂ζ∗ dζ∗ dC ) dC dw .\nSo all we need to do is to compute dξ ∗ dC and dζ∗ dC . Denote Fij = e ξi+ζj−Cij . 
Denote\nφ = dL dξ = µ̄− F1m,\nψ = dL dζ = ν̄ − F>1n,\np = dL dµ̄ = ξ + λ11n + 2λ3(µ̄− µ) + 1 log µ̄,\nq = dL dν̄ = ζ + λ21m + 2λ4(ν̄ − ν) + 2 log ν̄, χ1 = dL dλ1 = µ̄>1n − 1, χ2 = dL dλ2 = ν̄>1m − 1, χ3 = λ3(‖µ̄− µ‖22 − ρ1), χ4 = λ4(‖ν̄ − ν‖22 − ρ2).\nDenote χ = [χ1, χ2, χ3, χ4], and λ = [λ1, λ2, λ3, λ4]. Following the KKT conditions, we have φ = 0, ψ = 0, p = 0, q = 0, χ = 0,\nat the optimal solutions. Therefore, for the optimal solutions we have dφ\ndC = ∂φ ∂C + ∂φ ∂ξ∗ dξ∗ dC + ∂φ ∂ζ∗ dζ∗ dC + ∂φ ∂µ̄∗ dµ̄∗ dC + ∂φ ∂ν̄∗ dν̄∗ dC + ∂φ ∂λ∗ dλ∗ dC = 0,\ndψ dC = ∂ψ ∂C + ∂ψ ∂ξ∗ dξ∗ dC + ∂ψ ∂ζ∗ dζ∗ dC + ∂ψ ∂µ̄∗ dµ̄∗ dC + ∂ψ ∂ν̄∗ dν̄∗ dC + ∂ψ ∂λ∗ dλ∗ dC = 0,\ndp dC = ∂p ∂C + ∂p ∂ξ∗ dξ∗ dC + ∂p ∂ζ∗ dζ∗ dC + ∂p ∂µ̄∗ dµ̄∗ dC + ∂p ∂ν̄∗ dν̄∗ dC + ∂p ∂λ∗ dλ∗ dC = 0,\ndq dC = ∂q ∂C + ∂q ∂ξ∗ dξ∗ dC + ∂q ∂ζ∗ dζ∗ dC + ∂q ∂µ̄∗ dµ̄∗ dC + ∂q ∂ν̄∗ dν̄∗ dC + ∂q ∂λ∗ dλ∗ dC = 0\ndχ dC = ∂χ ∂C + ∂χ ∂ξ∗ dξ∗ dC + ∂χ ∂ζ∗ dζ∗ dC + ∂χ ∂µ̄∗ dµ̄∗ dC + ∂χ ∂ν̄∗ dν̄∗ dC + ∂χ ∂λ∗ dλ∗ dC = 0.\nTherefore, we have dξ∗ dC dζ∗ dC dµ̄∗ dC dν̄∗\ndC dλ∗\ndC\n = − ∂φ ∂ξ∗ ∂φ ∂ζ∗ ∂φ ∂µ̄∗ ∂φ ∂ν̄∗ ∂φ ∂λ∗ ∂ψ ∂ξ∗ ∂ψ ∂ζ∗ ∂ψ ∂µ̄∗ ∂ψ ∂ν̄∗ ∂ψ ∂λ∗ ∂p ∂ξ∗ ∂p ∂ζ∗ ∂p ∂µ̄∗ ∂p ∂ν̄∗ ∂p ∂λ∗ ∂q ∂ξ∗ ∂q ∂ζ∗ ∂q ∂µ̄∗ ∂q ∂ν̄∗ ∂q\n∂λ∗ ∂χ\n∂ξ∗ ∂χ ∂ζ∗ ∂χ ∂µ̄∗ ∂χ ∂ν̄∗ ∂χ ∂λ∗\n −1 ∂φ ∂C ∂ψ ∂C ∂p ∂C ∂q ∂C ∂χ\n∂C\n .\nAfter some derivation, we have dξ∗ dC dζ∗ dC dµ̄∗ dC dν̄∗ dC dλ∗1 dC dλ∗2 dC dλ∗3 dC dλ∗4 dC = − − 1 diag(µ̄) − 1 S∗r In 0 0 0 0 0 − 1 (S∗r ) > − 1 diag(ν̄) 0 Im 0 0 0 0 In 0 2λ3In + diag( 1 µ̄ ) 0 1n 0 2(µ̄− µ) 0 0 Im 0 2λ4Im + diag( 2 ν̄ ) 0 1m 0 2(ν̄ − ν) 0 0 1>n 0 0 0 0 0 0 0 0 1>m 0 0 0 0 0 0 2λ3(µ̄− µ)> 0 0 0 ‖µ̄− µ‖22 − ρ1 0 0 0 0 2λ4(ν̄ − ν)> 0 0 0 ‖ν̄ − ν‖22 − ρ2 −1 ∂φ ∂C ∂ψ ∂C 0 0 0 0 0 0 ,\nand ∂φh ∂Cij = 1 δhiSij ,∀h = 1, · · · , n, i = 1, · · · , n, j = 1, · · · ,m\n∂ψ` ∂Cij = 1 δ`jSij ,∀` = 1, · · · ,m− 1, i = 1, · · · , n, j = 1, · · · ,m.\nTo efficiently solve for the inverse in the above equations, we denote\nA = −1 diag(µ̄) −1 S∗r In 0 −1 (S∗r ) > −1 diag(ν̄) 0 Im\nIn 0 2λ3In + diag( 1 µ̄ ) 0\n0 Im 0 2λ4Im + diag( 2 ν̄ )\n ,\nB1 = [ 1n 0 2(µ̄− µ) 0 0 1m 0 2(ν̄ − ν) ] ,\nC1 = 1>n 0 0 1>m\n2λ3(µ̄− µ)> 0 0 2λ4(ν̄ − ν)>\n ,\nD = 0 0 0 00 0 0 00 0 ‖µ̄− µ‖22 − ρ1 0 0 0 0 ‖ν̄ − ν‖22 − ρ2 . We first A−1 using the rules for inverting a block matrix,\nA−1 = [ K −KL −LK L+ LKL ] =: [ A1 A2 A3 A4 ] where\nL =\n[ 2λ3In + diag(\n1 µ̄ ) 0\n0 2λ4Im + diag( 1 ν̄ )\n]−1 , K = ( 1 [ diag(µ̄) S∗r (S∗r ) > diag(ν̄) ] + L )−1 .\nThen using the rules of inverting a block matrix again, we havedξ ∗\ndC dζ∗\ndC = (A1 +A2B1(D − C1A4B1)−1C1A3) ∂φ∂C∂ψ ∂C . Therefore, the bottleneck of computation is the inverting step in computing K. Note L is a diagonal matrix, we can further lower the computation cost by applying the rules for inverting a block matrix again. The value of λ3 and λ4 can be estimated from the fact p = 0, q = 0 . 
We detail the algorithm in Algorithm 2.\nAlgorithm 2 Computing the gradient for w\nRequire: C ∈ Rm×n, µ, ν, , dCdw Run forward pass to get S = S∗r , µ̄, ν̄, ξ, ζ x1 = ∑dn/2e i=1 (µ̄i − µi), x2 = ∑n i=dn/2e(µ̄i − µi), b1 = − ∑dn/2e i=1 ξi, b2 = − ∑n i=dn/2e ξi\n[λ1, λ3] > = [dn/2e, x1;n− dn/2e, x2]−1[b1, b2]> x1 = ∑dm/2e j=1 (ν̄j − νj), x2 = ∑m j=dm/2e(ν̄j − νj), b1 = − ∑dm/2e j=1 ζj , b2 = − ∑m j=dm/2e ζj [λ2, λ4] > = [dm/2e, x1;m− dm/2e, x2]−1[b1, b2]> µ̄ = µ̄+ (2λ31n + 1 µ̄ ) −1, ν̄ = ν̄ + (2λ41m + 2 ν̄ ) −1 ν̄′ = ν̄[: −1], S′ = S[:, : −1] K ← diag(ν̄′)− (S′)T (diag(µ̄))−1S′ H1 ← (diag(µ̄))−1 + (diag(µ̄))−1S′K−1(S′)>(diag(µ̄))−1 H2 ← −(diag(µ̄))−1S′K−1 H3 ← (H2)> H4 ← K−1 Pad H2 to be [n,m] with value 0 Pad H3 to be [m,n] with value 0 Pad H4 to be [m,m] with value 0 L = diag([ (2λ31n + 1 µ̄ ) −1, (2λ41m + 2 ν̄ ) −1]) A1 = [H1, H2;H3, H4] A2 = −A1 · L A3 = A > 2 A4 = L+ L ·A1 · L E = A1 +A2 ·B1(D − C ·A4 ·B)−1C ·A3, where B1, C1, D defined above [J1, J2; J3, J4] = E, where J1 ∈ Rn×n, J2 ∈ Rn×m, J3 ∈ Rm×n, J4 ∈ Rm×m [dξ ∗\ndC ]nij ← [J1]niSij + [J2]njSij [dζ ∗\ndC ]mij ← [J3]miSij + [J4]mjSij Pad dζ ∗\ndC to be [m,n,m] with value 0 [ dLdC ]ij ← 1 (−CijSij + ∑ n,m CnmSnm[ da∗ dC ]nij + ∑ n,m CnmSnm[ db∗ dC ]mij) + Sij return dL dC dC dw" }, { "heading": "F DIFFERENT FORMS OF MARGINAL RELAXATION", "text": "In this paper we adopt F to be the Euclidean distance. This is because this choice provides an OT plan that fits our intuition – the data points with significantly larger transportation cost should not be considered. Figure 9 shows an illustration. Here, the input distributions are the empirical distributions of the scalars on the left and the bottom. Notice that there are three support points in µ that are far away from others, i.e., 10.72, 10.89, 10.96. In Figure 9 (a), the optimal solution Γ∗r automatically ignores them, matching only the rest of the scalars. One alternative choice of F is the Kullback–Leibler (KL) divergence (Chizat et al., 2018b), whose resulted formulation possesses an efficient algorithm for the forward pass, and the differentiability for the backward pass. We do not adopt it because the OT plan generated by this choice does not fit out intuition: As shown in Figure 9 (b), the OT plan tends to ignore the points that are away from the mean, even with a very small ρ1 and ρ2. For both figures, we adopt = 10−5." }, { "heading": "G MORE ON EXPERIMENTS", "text": "" }, { "heading": "G.1 UNLABELED SENSING", "text": "We now provide more training details for experiments in Section 4.1. Here, AM and ROBOT is trained with batch size 500 and learning rate 10−4 for 2, 000 iterations. For the Sinkhorn algorithm in ROBOT we set = 10−4. We run RS for 2× 105 iterations with inlier threshold as 10−2. Other settings for the hyper-parameters in the baselines follows the default settings of their corresponding papers." }, { "heading": "G.2 NONLINEAR REGRESSION", "text": "For the nonlinear regression experiment in Section 4.2, ROBOT and ROBOT-robust is trained with learning rate 10−4 for 80 iterations. For n = 100, 200, 500, 1000, 2000, we set batch size 10, 30, 50, 100, 300, respectively.We set = 10−4 for the Sinkhorn algorithm in ROBOT. For Oracle and LS, we perform ordinary regression model and ensure convergence, i.e., learning rate 5× 10−2 for 100 iterations." }, { "heading": "G.3 FLOW CYTOMETRY", "text": "We provide more details for the Flow Cytometry experiment in Section 4.3. In the FC seting, ROBOT is trained with batch size 1260 and learning rate 10−4 for 80 iterations. 
In the GFC seting, ROBOT is trained with batch size 1260 and learning rate 6×10−4 for 60 iterations. We set = 10−4 for the Sinkhorn algorithm in ROBOT. Other settings for the hyper-parameters in the baselines follows the default settings of their corresponding papers. EM is initialized by AM." }, { "heading": "G.4 MULTI-OBJECT TRACKING", "text": "For the MOT experiments in Section 4.4, the reported results of MOT17 (train) and MOT17 (dev) is trained on MOT17 (train), and the reported results of MOT20 (train) and MOT20 (dev) is trained on MOT20 (train). Each model is trained for 1 epoch. We adopt Adam optimizer with learning rate= 10−5, = 10−4, and η = 10−3. To track the birth and death of the tracks, we adapt the inference code of Xu et al. (2019b)." }, { "heading": "G.5 COMBINATION WITH RS", "text": "As suggested in Figure 2, although RS cannot perform well itself, retraining the output of RS using our algorithms increases the performance by a large margin. To show that combining RS and ROBOT can achieve better results than RS alone, we compare the following two cases: i). Subsample 2 × 105 times using RS; ii). Subsample 105 times us-\ning RS followed by ROBOT for 50 training steps. The result is shown in Table 2. For a larger permutation proportion, RS alone cannot perform as well as RS+ROBOT combination. Here, we have 10 runs for each proportion. We adopt SNR= 100, d = 5 for data, and = 10−4, learning rate 10−4 for ROBOT training.\nG.6 THE EFFECT OF ρ1 AND ρ2\nWe visualize S∗r computed from the robust optimal transport problem in Figure 10. The two input distributions are Unif(0, 2) and Unif(0, 1). We can see that with large enough ρ1 and ρ2, Unif(0, 1) would be aligned with the first half of Unif(0, 2)." }, { "heading": "G.7 COMPARISON OF RESIDUALS IN LINEAR REGRESSION", "text": "Settings. We generate n data points {(yi, [xi, zi])}ni=1, where xi ∈ Rd and zi ∈ Re. We first generate xi ∼ N (0d, Id), zi ∼ N (0e, Ie), w ∼ N (0d+e, Id+e), and εi ∼ N (0, ρ2noise). Then we compute yi = f([xi, zi];w) + εi. Next, we randomly permute the order of {zi} so that we lose the data correspondence. Here, D1 = {(xi, yi)} and D2 = {zj} mimic two parts of data collected from two separate platforms.\nWe adopt a linear model f(x;w) = x>w. To evaluate model performance, we use error= ∑ i(ŷi −\nyi) 2/ ∑ i(yi − ȳ)2, where ŷi is the predicted label, and ȳ is the mean of {yi}.\nBaselines. We use Oracle, LS, Stochastic-EM as the baselines. Notice that without a proper initialization, Stochastic-EM performs well in partially permuted cases, but not in fully shuffled cases.\nFor better visualization, we only include this baseline in one experiment. Furthermore, we adopt two new baselines: Sliced-GW (Vayer et al., 2019) and Sinkhorn-GW (Xu et al., 2019a), which can be used to align distributions and points sets.\nResults. We visualize the fitting error of regression models in Figure 11. We can see that ROBOT outperforms all the baselines except Oracle. Also, our model can beat the Oracle model when the dimension is low or when the noise is large." } ]
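For completeness, the relative-error metric used throughout Appendix G takes only a couple of lines; `y_hat` and `y` are assumed to be NumPy arrays of predictions and ground-truth labels.

```python
import numpy as np

def relative_error(y_hat, y):
    """error = sum_i (y_hat_i - y_i)^2 / sum_i (y_i - mean(y))^2."""
    return np.sum((y_hat - y) ** 2) / np.sum((y - np.mean(y)) ** 2)
```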
2021
null
SP:51c6ba2bd4d1dafe78e6da30e18577e16ba4fec9
[ "This is yet another paper on graph convolutional networks (GCNs). The investigated SoGCN is a second-order GCN, thus a special case of high-order GCNs (namely with multi-hop graph kernels), which have has been proposed earlier by many researchers, such as by Defferrard et al. (2016), by Kipf & Welling (2017) and by Abu-El-Haija et al. (2019). " ]
We introduce a second-order graph convolution (SoGC), a maximally localized kernel that can express a polynomial spectral filter with arbitrary coefficients. We contrast our SoGC with vanilla GCN, first-order (one-hop) aggregation, and higher-order (multi-hop) aggregation by analyzing graph convolutional layers via generalized filter space. We argue that SoGC is a simple design capable of forming the basic building block of graph convolution, playing the same role as 3 × 3 kernels in CNNs. We build purely topological Second-Order Graph Convolutional Networks (SoGCN) and demonstrate that SoGCN consistently achieves state-of-the-art performance on the latest benchmark. Moreover, we introduce the Gated Recurrent Unit (GRU) to spectral GCNs. This explorative attempt further improves our experimental results.
[]
[ { "authors": [ "Sami Abu-El-Haija", "Bryan Perozzi", "Amol Kapoor", "Nazanin Alipourfard", "Kristina Lerman", "Hrayr Harutyunyan", "Greg Ver Steeg", "Aram Galstyan" ], "title": "Mixhop: Higher-order graph convolutional architectures via sparsified neighborhood mixing", "venue": null, "year": 2019 }, { "authors": [ "Xavier Bresson", "Thomas Laurent" ], "title": "Residual gated graph convnets", "venue": null, "year": 2017 }, { "authors": [ "Joan Bruna", "Wojciech Zaremba", "Arthur Szlam", "Yann LeCun" ], "title": "Spectral networks and locally connected networks on graphs", "venue": null, "year": 2014 }, { "authors": [ "Chen Cai", "Yusu Wang" ], "title": "A note on over-smoothing for graph neural networks", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Deli Chen", "Yankai Lin", "Wei Li", "Peng Li", "Jie Zhou", "Xu Sun" ], "title": "Measuring and relieving the oversmoothing problem for graph neural networks from the topological view", "venue": "In AAAI,", "year": 2020 }, { "authors": [ "Kyunghyun Cho", "Bart Van Merriënboer", "Caglar Gulcehre", "Dzmitry Bahdanau", "Fethi Bougares", "Holger Schwenk", "Yoshua Bengio" ], "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "venue": "arXiv preprint arXiv:1406.1078,", "year": 2014 }, { "authors": [ "Fan RK Chung", "Fan Chung Graham" ], "title": "Spectral graph theory", "venue": "American Mathematical Soc.,", "year": 1997 }, { "authors": [ "Gabriele Corso", "Luca Cavalleri", "Dominique Beaini", "Pietro Lio", "Petar Velickovic" ], "title": "Principal neighbourhood aggregation for graph nets", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Michaël Defferrard", "Xavier Bresson", "Pierre Vandergheynst" ], "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "venue": "In NeurIPS,", "year": 2016 }, { "authors": [ "Nima Dehmamy", "Albert-László Barabási", "Rose Yu" ], "title": "Understanding the representation power of graph neural networks in learning graph topology", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Vijay Prakash Dwivedi", "Chaitanya K Joshi", "Thomas Laurent", "Yoshua Bengio", "Xavier Bresson" ], "title": "Benchmarking graph neural networks", "venue": null, "year": 2003 }, { "authors": [ "Matthias Fey", "Jan Eric Lenssen", "Frank Weichert", "Heinrich Müller" ], "title": "Splinecnn: Fast geometric deep learning with continuous b-spline kernels", "venue": null, "year": 2018 }, { "authors": [ "Justin Gilmer", "Samuel S Schoenholz", "Patrick F Riley", "Oriol Vinyals", "George E Dahl" ], "title": "Neural message passing for quantum chemistry", "venue": null, "year": 2017 }, { "authors": [ "Will Hamilton", "Zhitao Ying", "Jure Leskovec" ], "title": "Inductive representation learning on large graphs", "venue": "In NeurIPS,", "year": 2017 }, { "authors": [ "NT Hoang", "Takanori Maehara" ], "title": "Revisiting graph neural networks: All we have is low-pass filters", "venue": null, "year": 1905 }, { "authors": [ "Diederik P Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": null, "year": 2015 }, { "authors": [ "Thomas N Kipf", "Max Welling" ], "title": "Semi-supervised classification with graph convolutional networks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Guohao Li", "Matthias Müller", "Ali Thabet", "Bernard Ghanem" ], "title": "Deepgcns: Can gcns go as deep as cnns", "venue": "In ICCV,", "year": 2019 }, { "authors": [ "Qimai Li", "Zhichao Han", "Xiao-Ming Wu" ], "title": "Deeper 
insights into graph convolutional networks for semi-supervised learning", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Wembo Li" ], "title": "Probability of all real zeros for random polynomial with the exponential ensemble", "venue": null, "year": 2011 }, { "authors": [ "Yujia Li", "Daniel Tarlow", "Marc Brockschmidt", "Richard Zemel" ], "title": "Gated graph sequence neural networks", "venue": "In ICLR,", "year": 2016 }, { "authors": [ "Renjie Liao", "Zhizhen Zhao", "Raquel Urtasun", "Richard S Zemel" ], "title": "Lanczosnet: Multi-scale deep graph convolutional networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Sitao Luan", "Mingde Zhao", "Xiao-Wen Chang", "Doina Precup" ], "title": "Break the ceiling: Stronger multiscale deep graph convolutional networks", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Haggai Maron", "Heli Ben-Hamu", "Nadav Shamir", "Yaron Lipman" ], "title": "Invariant and equivariant graph networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Haggai Maron", "Heli Ben-Hamu", "Hadar Serviansky", "Yaron Lipman" ], "title": "Provably powerful graph networks", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Zhewei Wei Ming Chen", "Bolin Ding Zengfeng Huang", "Yaliang Li" ], "title": "Simple and deep graph convolutional networks", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Federico Monti", "Davide Boscaini", "Jonathan Masci", "Emanuele Rodola", "Jan Svoboda", "Michael M Bronstein" ], "title": "Geometric deep learning on graphs and manifolds using mixture model cnns", "venue": null, "year": 2017 }, { "authors": [ "Christopher Morris", "Martin Ritzert", "Matthias Fey", "William L Hamilton", "Jan Eric Lenssen", "Gaurav Rattan", "Martin Grohe" ], "title": "Weisfeiler and leman go neural: Higher-order graph neural networks", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Kenta Oono", "Taiji Suzuki" ], "title": "Graph neural networks exponentially lose expressive power for node classification", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Antonio Ortega", "Pascal Frossard", "Jelena Kovačević", "José MF Moura", "Pierre Vandergheynst" ], "title": "Graph signal processing: Overview, challenges, and applications", "venue": null, "year": 2018 }, { "authors": [ "Hongbin Pei", "Bingzhe Wei", "Kevin Chen-Chuan Chang", "Yu Lei", "Bo Yang" ], "title": "Geom-gcn: Geometric graph convolutional networks", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Yu Rong", "Wenbing Huang", "Tingyang Xu", "Junzhou Huang" ], "title": "Dropedge: Towards deep graph convolutional networks on node classification", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Aliaksei Sandryhaila", "José MF Moura" ], "title": "Discrete signal processing on graphs", "venue": "IEEE Trans. Signal Process,", "year": 2013 }, { "authors": [ "Franco Scarselli", "Marco Gori", "Ah Chung Tsoi", "Markus Hagenbuchner", "Gabriele Monfardini" ], "title": "Computational capabilities of graph neural networks", "venue": "TNN,", "year": 2008 }, { "authors": [ "Jianbo Shi", "J. 
Malik" ], "title": "Normalized cuts and image segmentation", "venue": null, "year": 2000 }, { "authors": [ "Petar Veličković", "Guillem Cucurull", "Arantxa Casanova", "Adriana Romero", "Pietro Lio", "Yoshua Bengio" ], "title": "Graph attention networks", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Felix Wu", "Tianyi Zhang", "Amauri Holanda de Souza Jr.", "Christopher Fifty", "Tao Yu", "Kilian Q Weinberger" ], "title": "Simplifying graph convolutional networks", "venue": null, "year": 2019 }, { "authors": [ "Keyulu Xu", "Chengtao Li", "Yonglong Tian", "Tomohiro Sonobe", "Ken-ichi Kawarabayashi", "Stefanie Jegelka" ], "title": "Representation learning on graphs with jumping knowledge", "venue": null, "year": 2018 }, { "authors": [ "Keyulu Xu", "Weihua Hu", "Jure Leskovec", "Stefanie Jegelka" ], "title": "How powerful are graph neural networks", "venue": "In ICLR,", "year": 2019 }, { "authors": [ "Ding-Xuan Zhou" ], "title": "Universality of deep convolutional neural networks", "venue": "ACHA,", "year": 2020 }, { "authors": [ "Ortega" ], "title": "malized adjacency matrix A and node signal X ∈ RN×D, where N is the number of nodes and D is the number of signal channels. Since A is symmetric, we perform an eigen-decomposition on the adjacency matrix A = UΛU . Then the spectrum of X is computed by S = UX . More information about the graph spectrum and graph Fourier transformation", "venue": null, "year": 2018 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep localized convolutional filters have achieved great success in the field of deep learning. In image recognition, the effectiveness of 3 × 3 kernels as the basic building block in Convolutional Neural Networks (CNNs) is shown both experimentally and theoretically (Zhou, 2020). We are inspired to search for the maximally localized Graph Convolution (GC) kernel with full expressiveness power for Graph Convolutional Networks (GCNs).\nMost existing GCN methods utilize localized GCs based on one-hop aggregation scheme as the basic building block. Extensive works have shown performance limitations of such design due to over-smoothing (Li et al., 2018; Oono & Suzuki, 2019; Cai & Wang, 2020). In vanilla GCNs (Kipf & Welling, 2017) the root cause of its deficiency is the lumping of the graph node self-connection with pairwise neighboring connections. Recent works of Xu et al. (2019); Dehmamy et al. (2019); Ming Chen et al. (2020) disentangle the effect of self-connection by adding an identity mapping (so-called first-order GC). However, its lack of expressive power in filter representation remains (Abu-El-Haija et al., 2019). The work of (Ming Chen et al., 2020) conjectured that the ability to express a polynomial filter with arbitrary coefficients is essential for preventing over-smoothing.\nA longer propagation distance in the graph facilitates GCNs to retain its expressive power, as pointed out by (Liao et al., 2019; Luan et al., 2019; Abu-El-Haija et al., 2019). The minimum propagation distance needed to construct our building block of GCN remains the open question. We show that the minimum propagation distance is two: a two-hop graph kernel with the second-order polynomials in adjacency matrices is sufficient. We call our graph kernel Second-Order GC (SoGC).\nWe introduce a Layer Spanning Space (LSS) framework to quantify the expressive power of multilayer GCs for modeling a polynomial filter with arbitrary coefficients. By relating low-pass filtering on the graph spectrum (Hoang & Maehara, 2019) with over-smoothing, one can see the lack of filter representation power (Ming Chen et al., 2020) can lead to the performance limitation of GCN.\nUsing the LSS framework, we show that SoGCs can approximate any linear GCNs in channel-wise filtering. Furthermore, higher-order GCs do not contribute more expressiveness, and vanilla GCN or first-order GCs cannot represent all polynomial filters in general. In this sense, SoGC is the maximally localized graph kernel with the full representation power.\nTo validate our theory, we build Second-Order Graph Convolutional Networks (SoGCN) using SoGC kernels. Our SoGCN using simple graph topological features consistently achieves state-\nof-the-art performance on the GNN benchmark datasets (Dwivedi et al., 2020), including citation networks, super-pixel classification, and molecule regression.\nTo our best knowledge, this work is the first study that identifies the importance of the two-hop neighborhood in the context of GCNs’ ability to express a polynomial filter with arbitrary coefficients. Our model is a special but non-trivial case of Defferrard et al. (2016). Kipf & Welling (2017) conducted an ablation study with GC kernels of different orders but missed the effectiveness of the second-order relationships. The work of Abu-El-Haija et al. (2019) talked about muti-hop graph kernels; however, they did not identify the critical importance of the second-order form. 
In contrast, we clarify the prominence of SoGCs in theories and experiments.\nOur research on graph convolution using pure topologically relationship is orthogonal to those uses geometric relations (Monti et al., 2017; Fey et al., 2018; Pei et al., 2020), or those with expressive edge features (Li et al., 2016; Gilmer et al., 2017; Corso et al., 2020), and hyper-edges (Morris et al., 2019; Maron et al., 2018; 2019). It is also independent with graph sampling procedures (Rong et al., 2019; Hamilton et al., 2017; Li et al., 2019)." }, { "heading": "2 PRELIMINARIES", "text": "We begin by reformulating spectral GCNs and introducing our notation. We are interested in a finite graph set G = { G1, · · · , G|G| } . Assume each graph G ∈ G is simple and undirected, associated with a finite vertex set V(G), an edge set E(G) = {(u, v) : ∀u ↔ v}, and a symmetric normalized adjacency matrix A(G) (Chung & Graham, 1997; Shi & Malik, 2000). Without loss of generality and for simplicity, |V(G)| = N for every G ∈ G. Single-channel features x ∈ RN supported in graph G ∈ G is a vectorization of function V(G)→ R. Graph Convolutions (GCs) is known as Linear Shift-Invariant (LSI) operators to adjacency matrices (Sandryhaila & Moura, 2013). By this definition, GCs can extract features regardless of where local structures fall. Given parameter space Ω ⊆ R, we write a single-channel GC (Sandryhaila & Moura, 2013; Defferrard et al., 2016) as a mapping fθ : G × RN → RN such that 1:\nfθ(G,x) = K∑ k=0 θkA(G) kx, (1)\nwhere θ = [θ0 · · · θK ]T ∈ ΩK+1 parameterizes the GC. K reflects the localization of fθ: a linear combination of features aggregated by A(G)k. Moreover, we reformulate two popular models, vanilla GC (Figure 1a) and first-order GC (Figure 1b), as below:\nf0(G,x) = θ (A(G) + I)x, f1(G,x) = (θ1A(G) + θ0I)x. (2)\nThe general spectral GCNs stack L layers of GCs (Equation 1) with nonlinear activations. Let f (l) be GC layers with parameters θ(l) ∈ ΩK+1, l ∈ [L], the single-channel GCNs can be written as:\nF (G,x) = g ◦ f (L) ◦ σ ◦ f (L−1) ◦ · · · ◦ σ ◦ f (1)(G,x), (3) 1We can replace the Laplacian matrix L in Defferrard et al. (2016) with the normalized adjacency matrix A\nsince L = I −A.\nwhere σ is an element-wise activation function, the superscripts denote the corresponding layer number, g is a task-specified readout function (e.g., softmax), the inputs are graph G ∈ G and signals x ∈ RN . The compositionality principle of deep learning suggests L being large, while K being small and localized (LeCun et al., 2015)." }, { "heading": "3 OVERVIEW: SECOND-ORDER GRAPH CONVOLUTION", "text": "We are interested in the overall graph convolution network’s representation power of expressing a polynomial filter (Equation 1) with arbitrary coefficients. A multi-layer GCN approximate a Korder polynomial filter ∑K k=0 θkA(G)\nk, θk ∈ Ω, k = 0, · · ·K, by stacking basic building blocks of graph convolution (GC) layers (Wu et al., 2019; Ming Chen et al., 2020).\nWe formally define the second-order GC (SoGC) using the second-order polynomials of adjacency matrices: f2(G,x) = ( θ2A(G) 2 + θ1A(G) + θ0I ) x, (4)\nwhere θi ∈ R, i = 0, 1, 2 are trainable paremeters in the context of machine learning. Its vertexdomain interpretation is illustrated in Figure 1c. At first glance, it seems that we could stack two one-hop graph convolution (GC) kernels to approximate a SoGC. 
However, as shown in Section 4.3, that is not the case.\nThe critical insight is that graph filter approximation can be viewed as a polynomial factorization problem. It is known that any univariate polynomial can be factorized into sub-polynomials of degree two. Based on this fact, we show by stacking enough SoGCs (and varying their parameters) can achieve decomposition of any polynomial filters.\nIn contrast, first-order GCs are not universal approximators; two stacked one-hop GCs cannot model every two-hop GC. Polynomial filter completeness of SoGC leads to better performance of GCNs. As shown in Figure 2, networks built with SoGC can overcome over-smoothing and extract features on high-frequency bands. In the next section, we demonstrate our formal arguments on polynomial approximation." }, { "heading": "4 REPRESENTATION POWER ANALYSIS", "text": "" }, { "heading": "4.1 LAYER SPANNING SPACE FRAMEWORK", "text": "To illustrate the representation power of GC layers, we establish a Layer Spanning Space (LSS) framework to study the graph filter space spanned by stacking multiple graph kernels.\nFirst, we present our mathematical devices in Definition 1, 2 with Lemma 1 as below.\nDefinition 1. Suppose the parameter space Ω = R. The Linear Shift-Invariant (LSI) graph filter space of degree K > 0 with respect to a finite graph set G is defined as A ={ fθ : G × RN → RN ,∀θ ∈ RK+1 } , where fθ follows the definition in Equation 1.\nDefinition 2. Let spectrum set S(G) = {λ : λ ∈ S(A(G)),∀G ∈ G}, where S(A) denotes the eigenvalues of A. Define spectrum capacity Γ = |S(G)|. In particular, Γ = (N − 1)|G| if every graph adjacency matrix has no common eigenvalues other than 1. Lemma 1. A of degree K > 0 has dimension min{K + 1,Γ} as a vector space.\nProof of Lemma 1 follows from Theorem 3 of Sandryhaila & Moura (2013). The complete version can be found in Appendix C. Here, we induce a finite-dimension filter space by Lemma 1.\nFor simplicity, we will model the linear composition of filters to analyze its representation power. The nonlinear activation effects are beyond the scope of this work. Following Definition 1, let A be the full filter space of degree Γ− 1 and B be the low-level filter space as a set of polynomials in adjacency matrices (Equation 1). Denote the GC at l-th layer by f (l) ∈ B, then we yield the LSS of stacking L layers as below:\nBL = { f : f(G,x) = f (L) ◦ · · · ◦ f (1)(G,x) =\nL∏ l=1 p(l)(A(G))x\n} , (5)\nwhere p(l)(x) varies in a certain class of polynomials. We can assess the expressive capability of GC layers by comparing the LSS with A. Kernels in B have full representation power if A ⊆ BL. We are interested in BK , which denotes all localized filters of degree at most K. The LSS of BK is modeled as:\nBLK =\n{ f : f(G,x) =\nL∏ l=1 K∑ k=0 θ (l) k A(G) kx, θ (l) k ∈ R\n} , (6)\nwhere the number of layers is bounded by L ≤ d(Γ− 1)/Ke, according to Lemma 1." }, { "heading": "4.2 UNIVERSAL REPRESENTATION POWER OF SOGC", "text": "In this section, we present Theorem 1 to demonstrate the universal representation power of SoGCs as claimed in Section 3. Formally, we add superscripts to Equation 4 to indicate the layer number. Then we leverage a fundamental polynomial factorization theorem to conclude Theorem 1 as below.\nTheorem 1. For any f ∈ A, there exists f (l)2 ∈ B2 with coefficients θ (l) 0 , θ (l) 1 , θ (l) 2 ∈ R, l = 1, · · · , L such that f = f (L)2 ◦ · · · ◦ f (1) 2 where L ≤ d(Γ− 1)/2e.\nThe complete proof is presented Appendix D. 
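Theorem 1 rests on the fact that every real polynomial factors into real factors of degree at most two, since complex roots pair with their conjugates. The NumPy sketch below, written purely for illustration, checks this factorization numerically; each quadratic or linear factor corresponds to one SoGC layer in the decomposition f = f_2^(L) ∘ ⋯ ∘ f_2^(1).

```python
import numpy as np

def quadratic_factors(coeffs):
    """Split a real polynomial (coefficients, highest degree first) into real factors of degree <= 2."""
    roots = np.roots(coeffs)
    factors, used = [], np.zeros(len(roots), dtype=bool)
    for i, r in enumerate(roots):
        if used[i]:
            continue
        if abs(r.imag) < 1e-7:                       # real root -> linear factor (x - r)
            factors.append(np.array([1.0, -r.real]))
        else:                                        # (x - r)(x - conj(r)) has real coefficients
            factors.append(np.array([1.0, -2.0 * r.real, abs(r) ** 2]))
            j = next(k for k in range(i + 1, len(roots))
                     if not used[k] and np.isclose(roots[k], np.conj(r), atol=1e-6))
            used[j] = True
        used[i] = True
    return coeffs[0], factors

coeffs = np.array([2.0, -3.0, 0.5, 1.0, -0.2])       # an arbitrary degree-4 filter
lead, factors = quadratic_factors(coeffs)
recon = np.array([lead])
for f in factors:
    recon = np.polymul(recon, f)                     # multiply the factors back together
assert np.allclose(recon, coeffs)
```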
Theorem 1 can be regarded as the universal approximation theorem of linear GCNs, which implies multi-layer SoGCs have full filter expressiveness.\nTheorem 1 also demonstrates how GCNs with SoGCs benefit from depth, which coincides with the view of Dehmamy et al. (2019). Figure 3a verifies our SoGCN can overcome over-smoothing and successfully utilize depth to attain performance gain." }, { "heading": "4.3 REPRESENTATION POWER OF OTHER GRAPH CONVOLUTION", "text": "In this section, we show that vanilla and first-order GCs lack expressiveness, while higher-order GCs reduce compactness and increase fitting difficulty.\nVanilla vs. second-order. Extensive works have shown the performance deficiency of vanilla GCNs (Hoang & Maehara, 2019; Luan et al., 2019; Oono & Suzuki, 2019; Cai & Wang, 2020). Based on the LSS framework, we can point out a similar issue but from a novel perspective. Let us write f (l)0 (G,x) = θ\n(l)(A(G) + I)x ∈ B0 as the l-th GC layer2. Then L of them can represent a LSS as follows:\nBL0 =\n{ f : f(G,x) = θ\nL∑ l=0 ( l L ) A(G)lx } , (7)\n2Notice that we use B0 to denote the filter space with lumping of self-connection with pairwise neighbor nodes, since the zero-degree ones are too trivial. We assume the renormalization trick can be merged into θ.\nby letting θ = θ(L) · · · θ(1). No matter how large L is or how a optimizer tunes the parameters θ(l), dimBL0 = 1 which signifies BL0 degenerates to a negligible subspace of A.\nFirst-order vs. second-order. We denote first-order GCs as f (l)1 (G,x) = (θ (l) 1 A(G)+θ (l) 0 I)x ∈ B1. In the spirit of Section 4.1, write the LSS as:\nBL1 =\n{ f : f(G,x) =\nL∏ l=1 ( θ (l) 1 A(G) + θ (l) 0 I ) x, θ (l) 0 , θ (l) 1 ∈ R } , (8)\nwhich is isomorphic to a polynomial space whose elements split over the real domain. Compared with BL0 (Equation 7), BL1 represents a much larger subset of A. This highlights the importance of the first-order term or the identity mapping mentioned in (Xu et al., 2019; Dehmamy et al., 2019; Ming Chen et al., 2020).\nThe limitations also become obvious since not all polynomials can be factorized into first-order polynomials. These polynomials only occupy a small proportion in the ambient polynomial space (Li, 2011), which indicates first-order GCs are not universal approximators in general.\nHigher-order vs. second-order. GCs of degree K ≥ 2 are called higher-order GCs. They can model multi-hop GCNs such as Luan et al. (2019); Liao et al. (2019); Abu-El-Haija et al. (2019). Higher-order GCs have equivalent expressive power to SoGCs, since they can be reduced to SoGCs as long as coefficient sparsity can be achieved. But this by-product–an uncertain sparsity of coefficients–is not compatible with gradient-based optimization algorithms. Extensive experiments (Defferrard et al., 2016) have shown the ineffectiveness of learning higher-order kernels, because eigenvalues of graph adjacency matrices diminish when powered. This results in a decreasing numerical rank ofA(G)k, which prevent higher-order GCs from aggregating larger-scale information. SoGCs can alleviate this problem by preventing the loss of information due to higher-order powering operation. Finally, higher-order GC lacks nonlinearity. SoGCN can bring a better balance between the expressive power of low-level layers and nonlinearity among them." 
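Before the full architecture is spelled out in the next section, here is a minimal PyTorch sketch of a multi-channel SoGC layer realizing A²XΘ₂ + AXΘ₁ + XΘ₀; the class and variable names are ours, and the dense adjacency is used for clarity rather than efficiency.

```python
import torch
import torch.nn as nn

class SoGC(nn.Module):
    """Second-order graph convolution: X -> A^2 X W2 + A X W1 + X W0."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.theta0 = nn.Linear(in_dim, out_dim, bias=False)
        self.theta1 = nn.Linear(in_dim, out_dim, bias=False)
        self.theta2 = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, A, X):
        AX = A @ X          # one-hop aggregation
        AAX = A @ AX        # two-hop aggregation, reusing AX
        return self.theta0(X) + self.theta1(AX) + self.theta2(AAX)

# Toy usage: A should be the symmetric normalized adjacency; random here for shape-checking only.
N, D, E = 5, 8, 16
A = torch.rand(N, N); A = (A + A.T) / 2
X = torch.randn(N, D)
out = torch.relu(SoGC(D, E)(A, X))   # one block with sigma = ReLU
```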
}, { "heading": "5 SECOND-ORDER GRAPH CONVOLUTIONAL NETWORKS", "text": "In this section, we introduce other building blocks of GCNs and establish our Second-Order Graph Convolutional Networks (SoGCN) following the fashion of deep learning. First, we promote SoGC to the multi-channel version analogous to Kipf & Welling (2017). Then we cascade a feature embedding layer, multiple SoGC layers, and append a readout module. Suppose the multi-channel input is X ∈ RN×D supported in graph G ∈ G, denote the output of l-th layer as X(l) ∈ RN×E , the final node-level output as Y ∈ RN×F , or graph-level output as Y ∈ RE , we formulate our novel deep GCN based on SoGC as follows:\nX(0) = ρ (X; Φ) , (9) X(l+1) = σ ( A(G)2X(l)Θ\n(l+1) 2 +A(G)X (l)Θ (l+1) 1 +X (l)Θ (l+1) 0\n) , (10)\nY = τ ( X(L); Ψ ) , (11)\nwhere Θ(l)i ∈ RE×E , i = 0, 1, 2 are trainable weights for linear filters; ρ : RN×D → RN×E is an equivariant embedder (Maron et al., 2018) with parameters Φ; σ : RN×E → RN×E is an activation function. For node-level readout, τ : RN×E → RN×F can be a decoder (with parameters Ψ) or a nonlinear activation (e.g., softmax) in place of the prior layer. For graph-level output, τ : RN×E → RE should be an invariant readout function (Maron et al., 2018), e.g., channel-wise sum, mean or max (Hamilton et al., 2017). In practice, we adopt ReLU as nonlinear activation (i.e., σ = ReLU), a multi-layer perceptron (MLP) as the embedding function ρ, another MLP for node regression readout, and sum (Xu et al., 2019) for graph classification readout." }, { "heading": "5.1 GATED RECURRENT UNIT", "text": "Gated Recurrent Unit (GRU) has been served as a basic building block in message-passing GNN architectures (Li et al., 2016; Gilmer et al., 2017; Corso et al., 2020). In this subsection, we explore its application in spectral GCNs.\nAccording to Cho et al. (2014), GRU can utilize gate mechanism to preserve and forget information. We hypothesize that a GRU can be trained to remove redundant signals and retain lost features on the spectrum. This function can be used to alleviate the oversmoothing problem of vanilla GCNs by maintaining information from previous layer and canceling the dominance of low-frequencies. By the same means, GRU can also relieve the side-effect of ReLU, which is proved to be a special low-pass filter (Oono & Suzuki, 2019; Cai & Wang, 2020). Even though piled-up SoGCs attain full expressiveness, we show by our experiment that GRU can still facilitate SoGCN in avoiding noises and enhancing features on the spectrum (Figure 2)\nSimilar to Li et al. (2016); Gilmer et al. (2017), we appends a shared GRU module after each GC layer, which takes the signal before the GC layer as the hidden state, after the GC layer as the current input. We note that GRU can cooperate with any spectral GCs (Equation 1). When integrated with SoGCN, we call this special variant SoGCN-GRU. We formulate its implementation by replacing Equation 10 with Equation 12 as below.\nX(l+1)conv = A(G) 2X(l)Θ (l+1) 2 +A(G)X (l)Θ (l+1) 1 +X (l)Θ (l+1) 0 , X(l+1) = GRU ( ReLU ( X(l+1)conv ) ,X(l); Ω ) ,\n(12)\nwhereX(l+1)conv is the input,X(l) represents the hidden state, Ω denotes parameters of the GRU.\nFigure 2 and Figure 6 verify our conjecture. We observe more steady low-frequency component on the spectrum head and more characteristic bands on the high-frequency tail. Our empirical study in Table 3 also indicates the effectiveness of GRU for spectral GCNs in general. 
Hence, we suggest including this recurrent module as another basic building block of our SoGCNs." }, { "heading": "5.2 COMPARISON TO RELATED WORK", "text": "Spectral GCNs. Spectral GCN leverages polynomials in the adjacency matrix to represent graph convolutional layers (Bruna et al., 2014). Many works have been discussing how to design the polynomial and choose its degree to compose a localized GC layer. ChebyNet (Defferrard et al., 2016) approximates graph filters using Chebyshev polynomials. Vanilla GCN (Kipf & Welling, 2017; Wu et al., 2019) further reduces the GC layer to a degree-one polynomial with first-order and constant terms merged. However, these simplifications cause over-smoothing and performance loss. Our SoGC incorporates only one hop longer but obtains the full representation power. This design keeps each layer localized, simple, and easy to implement but makes the whole GCN much more powerful. Our work reveals the critical degree of polynomial filters where kernel size is minimized while filter representation power is maximized.\nMulti-Hop GCNs. To exploit multi-scale information, Luan et al. (2019) devises Snowball GCN and Truncated Krylov GCN to capture neighborhoods at different distances. To simulate hop delta functions, Abu-El-Haija et al. (2019) repeat mixing multi-hop features to identify more topological information. These models exhibit the strength of multi-hop GCNs over one-hop GCNs while leaving the propagation length as a hyper-parameter. Modeling those multi-hop GCNs as our higherorder models, SoGCN possesses the identical representation power but has fixed size and better localization, making SoGC more suitable to be the basic building block in GCNs. It is noteworthy that, although Abu-El-Haija et al. (2019) investigates the two-hop delta function (a Gabor-like filter), their final proposed solution is only a generic class of multi-hop GCNs. The discussion on two-hop delta functions cannot attain our theoretical results.\nExpressiveness of GCNs. Most of the works on GCN’s expressiveness are restricted to the oversmoothing problem: Li et al. (2018) first poses the over-smoothing problem; Hoang & Maehara (2019) indicates GCNs are no more than low-pass filters; Luan et al. (2019); Oono & Suzuki (2019) demonstrate the asymptotic behavior of feature activation to a subspace; Cai & Wang (2020) examines the decreasing Dirichlet energy. Unlike them, our established LSS framework can identify specific issues of GCNs with algebraic and geometric interpretations. The over-smoothing problem can be formulated as one of our sub-problems (Section 4.3). In this sense, SoGCN solves a more general expressiveness issue than those relieving over-smoothing problem only (Xu et al., 2018; Rong et al., 2019; Chen et al., 2020). Ming Chen et al. (2020) introduces identity and initial mapping to recover filter expressiveness. Their analytic framework is also similar to ours. But we\ngeneralize their filter space to a graph set, and upper bound its dimension. In the meanwhile, our SoGCN’s architecture is more lightweight. We investigate the overall expressive power of GCNs by discussing filter completeness. This direction is orthogonal to those studying message-passing GNNs (Scarselli et al., 2008) and Weisfeiler-Leman GNNs (Xu et al., 2019; Morris et al., 2019)." }, { "heading": "6 EXPERIMENTS", "text": "Experiments are conducted on the synthetic dataset in Section 6.1 and on the GNN benchmarks (Dwivedi et al., 2020) in Section 6.2." 
}, { "heading": "6.1 SYNTHETIC GRAPH SPECTRUM DATASET FOR NODE REGRESSION", "text": "To validate the expressiveness of SoGCN, and its power to preserve higher-order graph signal, we build a Synthetic Graph Spectrum (SGS) dataset for the node signal filtering regression task. We construct SGS dataset with random graphs. The learning task is to mimic three types of hand-crafted filtering functions: high-pass, low-pass, and band-pass on the graph spectral space (defined over the graph eigenvectors). There are 1k training graphs, 1k validation graphs, and 2k testing graphs for each filtering function. Each graph is undirected and comprises 80 ˜ 120 nodes. Appendix E covers more details of our SGS dataset. We choose Mean Absolute Error (MAE) as the evaluation metric.\nExperimental Setup. We compare SoGCN with vanilla GCN (Kipf & Welling, 2017), first-order GCN, and higher-order GCNs on the synthetic dataset.\nTo evaluate each model’s expressiveness purely on the graph kernel design, we remove ReLU activations for all tested models. We adopt the Adam optimizer (Kingma & Ba, 2015) in our training process, with a batch size of 128. The learning rate begins with 0.01 and decays by half once the validation loss stagnates for more than 10 training epochs.\nResults and Discussion. Table 1 summarizes the quantitative comparisons. SoGCN achieves the superior performance on all of the 3 tasks outperforming vanilla GCN and 1st-order GCN, which implies that SoGC graph convolutional kernel does benefit from explicit disentangling of the θ0I (zero-hop) and θ2A2 (second-hop) terms. Our results also show that higher-order (third-order and fourth-order) GCNs do not improve the performance further, even though they have many more parameters. SoGCN possesses a more expressive representation ability and a good trade-off between performance and model size.\nFigure 3 plots MAE results as we vary the depth and channel width of GC layers. Vanilla GCN can benefit from neither depth nor width. First-order GC, SoGC, and higher-order GC can leverage depth to span larger LSS. Figure 3a illustrates the corresponding performance for each graph kernel types. SoGC and higher-order GC both outperform first-order GC as depth increases. Figure 3b shows the\n3 This experimental group shows that ReLUs are not always such beneficial on our synthetic dataset.\nbenefits of SoGC remain as we move to multi-channel construction. Comparing Figure 3a and 3b, we find that depth has larger effect on GCNs." }, { "heading": "6.2 GNN BENCHMARKS", "text": "We follow the benchmarks outlined in Dwivedi et al. (2020) for evaluating GNNs on several datasets across a variety of artificial and real-world tasks. We choose to evaluate our SoGCN on a real-world chemistry dataset (ZINC molecules) for the graph regression task, two semi-artificial computer vision datasets (CIFAR10 and MNIST superpixels) for the graph classification task, and two artificial social network datasets (CLUSTER and PATTERN) for the node classification task.\nExperimental Setup. We compare our proposed SoGCN and SoGCN-GRU with state-of-the-art GNNs: vanilla GCN (Kipf & Welling, 2017), GAT (Veličković et al., 2018), MoNet (Monti et al., 2017), GIN (Xu et al., 2019), GraphSage (Hamilton et al., 2017), GatedGCN (Bresson & Laurent, 2017) and 3WL-GNN (Maron et al., 2019). To ensure fair comparisons, we follow the same training and evaluation pipelines (including optimizer settings) and data splits of benchmarks. Furthermore,\n4 This is the result of 3WLGNN with 100k parameters. 
we adjust our model's depth and width to ensure it satisfies the parameter budgets specified in the benchmark. (Footnote 4: the 3WL-GNN entry in Table 2 is the result with 100k parameters; the test MAE of 3WL-GNN with 500k parameters increases to 0.427±0.011.) Note that we do not use any geometric information to encode rich graph edge relationships, as in models such as GatedGCN-E-PE. We only employ graph connectivity information for all tested models.

Results and Discussion. Table 2 reports the benchmark results. Our model SoGCN makes small computational changes to GCN by adopting second-hop and zero-hop neighborhoods, yet it outperforms models with complex message-passing mechanisms. With the GRU module, SoGCN-GRU tops almost all state-of-the-art GNNs on the ZINC, MNIST and CIFAR10 datasets. However, the GRU does not lift performance on the CLUSTER and PATTERN datasets for the node classification task. As suggested by Li et al. (2018), graph node classification benefits from low-frequency features; since the GRU suppresses the low-frequency band, it results in a slight performance drop on the CLUSTER and PATTERN datasets.

Ablation Study on High-Order GCNs. To contrast the performance gains produced by different orders on the benchmarks, we evaluate 1st-order GCN, 3rd-order GCN, and 4th-order GCN as well as their GRU variants on the ZINC, MNIST and CIFAR10 datasets. Table 3 presents the results of our ablation study, which are consistent with our experiments on the synthetic datasets (Section 6.1). As shown by the ablation study, aggregating zero-hop features brings significant improvements (vanilla GCN vs. 1st-order GCN), and adopting second-hop features further promotes the performance (1st-order GCN vs. SoGCN). However, high-order GCNs are not capable of boosting the performance over SoGCN; on the contrary, high-order GCs can even lead to a performance decline (3rd-order GCN vs. 4th-order GCN vs. SoGCN). On the ZINC and MNIST datasets, we verify the GRU's effectiveness for each tested model, but the gain brought by the GRU is not as significant as aggregating the second-hop features. On the CIFAR10 dataset, the GRU fails to improve performance for 1st-order GCN and 3rd-order GCN." }, { "heading": "7 CONCLUSION", "text": "What should be the basic building blocks of GCNs? To answer this, we seek the most localized graph convolution kernel (GC) with full expressiveness. We generalize the filter space to a finite graph set and establish our LSS framework to assess GC layers functioning on different hops. We show that the second-order localized graph convolutional filter, called SoGC, possesses full representation power, unlike one-hop GCs. Thus, it becomes the most localized GC, which we adopt as the basic building block to establish our SoGCN. Both synthetic and benchmark experiments exhibit the prominence of our theoretical design. We also make an empirical study of the GRU module cooperating with spectral GCNs. Interesting directions for future work include analyzing two-hop aggregation schemes with message-passing GNNs and proving the universality of nonlinear GCNs." }, { "heading": "A REMARK ON DEFINITION 1", "text": "Let us rewrite the $\mathcal{A}$ of degree $K$ in Definition 1:

$\mathcal{A} = \big\{ f : f(G, x) = \sum_{k=0}^{K} \theta_k A(G)^k x, \ \forall \theta_k \in \mathbb{R} \big\}$, (13)

which contains all LSI functions $f : \mathcal{G} \times \mathbb{R}^N \to \mathbb{R}^N$ with the adjacency matrix as the graph shift. We show this in the following way. First, all linear transformations $H$ invariant to the shift $S$ must satisfy $HS = SH$; a quick numerical sanity check of this commutation for polynomial filters is sketched below. 
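As an illustrative aside (our own sketch, not part of the original argument), the commutation of any polynomial filter in the adjacency matrix with the shift can be checked numerically:

import numpy as np

rng = np.random.default_rng(0)
N = 8

# Random symmetric adjacency matrix used as the graph shift S = A.
A = np.triu(rng.integers(0, 2, size=(N, N)).astype(float), 1)
A = A + A.T

# Polynomial filter H = theta0*I + theta1*A + theta2*A^2 (degree K = 2).
theta = rng.normal(size=3)
H = theta[0] * np.eye(N) + theta[1] * A + theta[2] * (A @ A)

# Shift-invariance: H commutes with the shift.
assert np.allclose(H @ A, A @ H)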
Second, specifying an arbitrary graph $G \in \mathcal{G}$, any filter associated with it can be written as below:

$H(G) = \sum_{k=0}^{K} \theta_k A(G)^k = U \big( \sum_{k=0}^{K} \theta_k \Lambda^k \big) U^T$, (14)

where $A(G) = U \Lambda U^T$ is the eigendecomposition of $A(G)$. Then we can conclude the shift-invariance property by the following Lemma 2.

Lemma 2. Diagonalizable matrices $A_1$ and $A_2$ are simultaneously diagonalizable if and only if $A_1 A_2 = A_2 A_1$." }, { "heading": "B RING ISOMORPHISM: A → T", "text": "We derive an equivalent form of $\mathcal{A}$, namely, we construct a tractable space for $\mathcal{A}$. Notice that this construction is essential to the proofs of Lemma 1 and Theorem 1.

Since $\mathcal{G}$ is finite, we can construct a block-diagonal matrix $T \in \mathbb{R}^{N|\mathcal{G}| \times N|\mathcal{G}|}$, consisting of all adjacency matrices on the diagonal:

$T = \mathrm{diag}\big( A(G_1), \cdots, A(G_{|\mathcal{G}|}) \big) \in \mathbb{R}^{N|\mathcal{G}| \times N|\mathcal{G}|}$. (15)

Here, we pause to explain the big picture of Definition 2 with Equation 15. Obviously, the spectrum capacity $\Gamma$ represents the number of eigenvalues of $T$ without multiplicity. Note that eigenvalues of adjacency matrices signify graph similarity, so the spectrum capacity $\Gamma$ identifies a set of graphs by enumerating their structural patterns. Even if the graph set grows extremely large (to guarantee generalization capability), the distribution of the spectrum provides an upper bound on $\Gamma$, so our theory does not lose generality.

Now we get back to constructing a matrix space $\mathcal{T}$ from $\mathcal{A}$ via a ring homomorphism $\pi : \mathcal{A} \to \mathcal{T}$:

$\pi : \sum_{k=0}^{K} \theta_k A(G)^k \mapsto \sum_{k=0}^{K} \theta_k T^k$. (16)

Recall that a ring homomorphism preserves “summation” and “multiplication”. Concretely, we write the matrix space $\mathcal{T}$ as follows:

$\mathcal{T} = \big\{ H : H = \sum_{k=0}^{K} \theta_k T^k, \ \forall \theta_k \in \mathbb{R} \big\}$. (17)

In the following part, we prove that $\pi$ is an isomorphism.

Proof. First, the definition of $\mathcal{T}$ (Equation 17) immediately gives surjectivity. Second, for any $f_1 \neq f_2 \in \mathcal{A}$ with parameters $\alpha_k, \beta_k \in \mathbb{R}$, $k = 0, \cdots, K$, there exist $G_j \in \mathcal{G}$ and $x \in \mathbb{R}^N$ such that $f_1(G_j, x) \neq f_2(G_j, x)$. Let their images be $H_1 = \pi(f_1)$ and $H_2 = \pi(f_2)$. Pad $x$ with zeros as

$x' = \big[\, 0_{N(j-1)}^T \ \ x^T \ \ 0_{N(|\mathcal{G}|-j)}^T \,\big]^T$, (18)

where $0_N$ denotes the all-zero vector of length $N$. Applying $H_1$ and $H_2$ to $x'$:

$H_1 x' = \big[\, 0_{N(j-1)}^T \ \ \big( \sum_{k=0}^{K} \alpha_k A(G_j)^k x \big)^T \ \ 0_{N(|\mathcal{G}|-j)}^T \,\big]^T = \big[\, 0_{N(j-1)}^T \ \ f_1(G_j, x)^T \ \ 0_{N(|\mathcal{G}|-j)}^T \,\big]^T$, (19)

$H_2 x' = \big[\, 0_{N(j-1)}^T \ \ \big( \sum_{k=0}^{K} \beta_k A(G_j)^k x \big)^T \ \ 0_{N(|\mathcal{G}|-j)}^T \,\big]^T = \big[\, 0_{N(j-1)}^T \ \ f_2(G_j, x)^T \ \ 0_{N(|\mathcal{G}|-j)}^T \,\big]^T$. (20)

Hence, $H_1 \neq H_2$, which concludes the injectivity." }, { "heading": "C PROOF OF LEMMA 1", "text": "We first show that $\mathcal{A}$ is a vector space; then we leverage the isomorphism to obtain the equality $\dim \mathcal{A} = \dim \mathcal{T}$. Figuring out the dimension of $\mathcal{T}$ is much more tractable.

Proof. By verifying that linear combinations are closed (or simply as implied by the ring isomorphism $\pi$), $\mathcal{A}$ is at least a vector space. Then Lemma 1 follows from Theorem 3 of Sandryhaila & Moura (2013). We briefly conclude the proof in the following way. Let $m(x)$ denote the minimal polynomial of $T$; we have $\Gamma = \deg m(x)$. Due to the isomorphism, $\dim \mathcal{A} = \dim \mathcal{T}$. Suppose $K + 1 < \Gamma$. First, $\dim \mathcal{T}$ cannot be larger than $K + 1$, because $\{ I, T, \cdots, T^K \}$ is a spanning set. If $\dim \mathcal{T} < K + 1$, then there exists some polynomial $p(x)$ with $\deg p(x) \leq K$ such that $p(T) = 0$, which contradicts the minimality of $m(x)$. Hence $\dim \mathcal{T}$ can only be $K + 1$. Now suppose $K + 1 \geq \Gamma$, and consider any $H = h(T)$ for some polynomial $h(x)$ with $\deg h(x) \leq K$. By polynomial division, there exist unique polynomials $q(x)$ and $r(x)$ such that

$h(x) = q(x) m(x) + r(x)$, (21)

where $\deg r(x) < \deg m(x) = \Gamma$. Inserting $T$ into Equation 21:

$h(T) = q(T) m(T) + r(T) = q(T) \cdot 0 + r(T) = r(T)$. (22)
Therefore, $\{ I, T, \cdots, T^{\Gamma-1} \}$ form a basis of $\mathcal{T}$, i.e., $\dim \mathcal{T} = \Gamma$.

Remark that we assume each graph contains the same number of vertices only for the sake of simplicity. Lemma 1 still holds when the vertex numbers vary, since the construction of Equation 15 is independent of this assumption. However, we need the graph set to be finite, otherwise $\Gamma$ might be uncountable. We leave the discussion on infinite graph sets for future study." }, { "heading": "D PROOF OF THEOREM 1", "text": "First, we borrow the concept of $\mathcal{T}$ (Equation 17) in place of $\mathcal{A}$. Then we leverage the following basic yet powerful Lemma 3 to conclude the proof of Theorem 1 straightforwardly.

Lemma 3. Over the field of reals, the degree of an irreducible non-trivial univariate polynomial is either one or two.

Proof. For any $f \in \mathcal{A}$, let us map it to $H = h(T)$ through the isomorphism $\pi$ (Equation 16) for some polynomial $h(x)$ with $\deg h(x) \leq \Gamma - 1$ (by Lemma 1). By Lemma 3, we factorize $h(x)$ into a product of polynomials of degree at most two, and then merge pairs of first-order polynomials into second-order ones until no paired first-order polynomials remain. Finally, we obtain the following equation:

$h(x) = \prod_{l=1}^{\lceil D/2 \rceil} h^{(l)}(x)$, (23)

where $D = \deg h(x)$. If $D$ is even, $\deg h^{(l)} = 2$ for $l = 1, \cdots, \lceil D/2 \rceil$. Otherwise, there exists some $j \in [\lceil D/2 \rceil]$ such that $\deg h^{(j)}(x) = 1$. Notice that at most one first-order polynomial remains, which indicates that the sparsity of the coefficients is very low.

Now we obtain filters $H^{(l)} = h^{(l)}(T)$ for $l = 1, \cdots, \lceil D/2 \rceil$. The last step is applying the inverse of the isomorphism, $\pi^{-1}$, to map $H^{(l)} \in \mathcal{T}$ back to $f^{(l)} \in \mathcal{A}$ as below:

$\pi^{-1} : \sum_{k=0}^{K} \theta_k T^k \mapsto \sum_{k=0}^{K} \theta_k A(G)^k$. (24)

Recalling Section 4.1, $f^{(l)} \in \mathcal{B}_2$ for $l = 1, \cdots, \lceil D/2 \rceil$. Since $\pi^{-1}$ is also a ring isomorphism, $H = H^{(\lceil D/2 \rceil)} \circ \cdots \circ H^{(1)}$ implies $f = f^{(\lceil D/2 \rceil)} \circ \cdots \circ f^{(1)}$.

Remark that Theorem 1 can be considered as the GCN version of Theorem 3 in Zhou (2020), which plays a key step in proving the universality of nonlinear CNNs. Our Theorem 1 thus provides a strong tool for analyzing nonlinear GCNs." }, { "heading": "E SYNTHETIC GRAPH SPECTRUM (SGS) DATASET", "text": "Our SGS dataset works for node signal filtering regression tasks. We designed 3 types of graph signal filters: high-pass (HP), low-pass (LP) and band-pass (BP) filters, as given in Equation 25. For each type, we generate 1000, 1000 and 2000 undirected graphs with graph signals and groundtruth responses in the training set, validation set and test set, respectively. Each graph approximately has 80–120 nodes and 80–350 edges. Models trained on each sub-dataset are expected to learn the corresponding filter by supervising the MAE loss.

$F^*_{HP}(x) = \frac{1}{1 + \exp\{-50(x-1)\}}$, $\quad F^*_{LP}(x) = 1 - \frac{1}{1 + \exp\{-50(x-1)\}}$, $\quad F^*_{BP}(x) = \frac{-1}{1 + \exp\{-100(x-1.05)\}} + \frac{1}{1 + \exp\{-100(x-0.95)\}}$. (25)

Undirected graphs are randomly sampled through rejection sampling of edges from complete graphs. In detail, we randomly draw an integer $N$ from $[80, 120]$ as the number of nodes, and then generate an $N \times N$ random matrix $B$ with each element independently sampled from $\mathrm{Unif}(0, 1)$. We set a threshold $\epsilon$ and construct an adjacency matrix $A$ by letting $a_{i,j} = 1$ if $b_{i,j} > \epsilon$ and $a_{i,j} = 0$ otherwise, where $a_{i,j}$ is the element located at the $i$-th row and $j$-th column of $A$.

Next, we need to generate spectral signals $s$ for the graph. Independent sampling of each spectrum from a probability distribution would only generate noise. Hence, we synthesize the spectrum by summing random functions. (A short code sketch of the three target filters in Equation 25 is given below.) 
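For concreteness, here is a minimal NumPy sketch of the three hand-crafted target filters in Equation 25 (our own illustration; the function names are hypothetical). As described later in this appendix, the groundtruth responses are obtained by applying each filter to the generated spectral signals, i.e., F*_k(s):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def f_hp(x):  # high-pass: F*_HP(x) = 1 / (1 + exp(-50(x - 1)))
    return sigmoid(50.0 * (x - 1.0))

def f_lp(x):  # low-pass: complement of the high-pass response
    return 1.0 - sigmoid(50.0 * (x - 1.0))

def f_bp(x):  # band-pass: difference of two sharp sigmoids around [0.95, 1.05]
    return sigmoid(100.0 * (x - 0.95)) - sigmoid(100.0 * (x - 1.05))

# Toy spectral signal and its three groundtruth responses.
s = np.linspace(0.0, 2.0, 101)
for f in (f_hp, f_lp, f_bp):
    print(f.__name__, f(s).round(2).max())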
We notice that the pdf of the beta distribution Beta(a, b) is a powerful tool to construct diverse curves by tuning the shape parameters $a$ and $b$. Also, the Gaussian function Norm($\mu$, $\sigma$) can yield diverse bell-shaped curves by tuning $\mu$ and $\sigma$. We sum 2 discretized beta functions and 4 discretized Gaussian functions with random parameters to generate spectral signals. Equation 26 elaborates the generation process and the hyper-parameter choices in our experiments, where $g[x; a, b]$ is the pdf of the Beta(a, b) distribution and $f[x; \mu, \sigma]$ is the pdf of the Norm($\mu$, $\sigma$) distribution:

$s[x] = \sum_{i=1}^{2} g[x/N; a_i, b_i] + \sum_{j=1}^{4} c_j f[x; \mu_j, \sigma_j]$, $\ x \in [N]$, where $a_i, b_i \sim \mathrm{Unif}(0.1, 5)$, $\ \mu_j \sim \mathrm{Unif}(0, N)$, $\ \sigma_j \sim \mathrm{Unif}\big( \frac{N}{j+1}, \frac{N}{j} \big) \big/ 9$, $\ c_j \sim \mathrm{Unif}(0.5, 2) \big/ \max_{x \in [N]} f[x; \mu_j, \sigma_j]$. (26)

In most real-world cases, graph signals are represented in the vertex domain. With a generated graph and its spectral signals, we can retrieve the vertex-domain signals by the inverse graph Fourier transformation: perform an eigen-decomposition of the normalized adjacency matrix à of the graph to retrieve its graph Fourier basis Ũ; then the vertex-domain signals are given by Ũs.

For supervision purposes, we retrieve the groundtruth of each filter's response by applying each filter to the generated spectral signals, namely $F^*_k(s)$, $k \in \{HP, LP, BP\}$. Figure 4 illustrates an example of a generated spectral signal (panel: Input) and its groundtruth filtering responses under the three filters (panels: High-pass GT, Low-pass GT, Band-pass GT)." }, { "heading": "F MORE VISUALIZATIONS OF SPECTRUM", "text": "We compute the spectrum as follows: suppose we have an undirected and unweighted graph with normalized adjacency matrix $A$ and node signal $X \in \mathbb{R}^{N \times D}$, where $N$ is the number of nodes and $D$ is the number of signal channels. Since $A$ is symmetric, we perform an eigen-decomposition of the adjacency matrix, $A = U \Lambda U^T$. Then the spectrum of $X$ is computed by $S = U^T X$. More information about the graph spectrum and graph Fourier transformation can be found in Ortega et al. (2018).

Figure 5 shows the output spectrum of vanilla GCN, 1st-order GCN and SoGCN on the synthetic Band-Pass dataset. The visualizations are consistent with Table 1 and Figure 3. Vanilla GCN almost loses all of the band-pass frequencies, resulting in very poor performance. 1st-order GCN learns to pass a part of the medium-frequency band but still has an obvious distance from the groundtruth filter. SoGCN's filtering response is close to the groundtruth response, showing its strong ability to represent graph signal filters.

Figure 6 gives more spectrum visualizations on the ZINC dataset. We can observe the spectral impact of the GRU on vanilla GCN and our SoGCN. Each curve in the visualization figure represents the spectrum of one output channel, i.e., we plot each column of $S$ as a curve." } ]
2020
null
SP:15122fcea632ba9f420bd8a538f708a7621c8323
[ "The paper deals with explainable machine learning in the supervised setting and especially tackles the case where no ground truth data for evaluating the generate explanations, such as bounding boxes for objects, is available. The proposed \"Concensus\" approach retrains established architectures on the target dataset and averages their generated explanations from out-of-the-box explainers, such as LIME or SmoothGrad. The approach is evaluated for image classification on ImageNet and CUB-200-2011 by comparing the averaged explanations of the committee models when using LIME and SmoothGrad with the ground truth bounding box and segmentation, respectively. Minor evaluations are included for datasets Stanford Cars 196, Oxford Flowers 102 and Foods 101. The results show that the averaged explanations strongly correlate with the mean average precision with respect to the label distances." ]
Deep learning interpretation tools, such as (Bau et al., 2017; Ribeiro et al., 2016; Smilkov et al., 2017), have been proposed to explain and visualize the ways that deep neural network (DNN) classifiers make predictions. However, the success of these methods highly relies on human subjective interpretations, i.e., the ground truth of interpretations, such as feature importance rankings or locations of visual objects, when evaluating the interpretability of the DNN classifiers on a specific task. For tasks where the ground truth of interpretations is not available, we propose a novel framework, Consensus, incorporating an ensemble of deep models as the committee for interpretability evaluation. Given any task/dataset, Consensus first obtains the interpretation results using existing tools, e.g., LIME (Ribeiro et al., 2016), for every model in the committee, then aggregates the results from the entire committee and approximates the "ground truth" of interpretations through voting. With such a quasi-ground-truth, Consensus evaluates the interpretability of a model by matching its interpretation result against the approximated one, and ranks the matching scores together with those of the committee members, so as to pursue both absolute and relative interpretability evaluation results. We carry out extensive experiments to validate Consensus on various datasets. The results show that Consensus can precisely identify the interpretability of a wide range of models on ubiquitous datasets where the ground truth is not available. Robustness analyses further demonstrate the advantage of the proposed framework in reaching the consensus of interpretations through simple voting and evaluating the interpretability of deep models. Through the proposed Consensus framework, the interpretability evaluation has been democratized without the need for ground truth as the criterion.
[]
[ { "authors": [ "Isaac Ahern", "Adam Noack", "Luis Guzman-Nateras", "Dejing Dou", "Boyang Li", "Jun Huan" ], "title": "Normlime: A new feature importance metric for explaining deep neural networks", "venue": null, "year": 1909 }, { "authors": [ "Zeyuan Allen-Zhu", "Yuanzhi Li", "Yingyu Liang" ], "title": "Learning and generalization in overparameterized neural networks, going beyond two layers", "venue": "In Advances in neural information processing systems,", "year": 2019 }, { "authors": [ "David Bau", "Bolei Zhou", "Aditya Khosla", "Aude Oliva", "Antonio Torralba" ], "title": "Network dissection: Quantifying interpretability of deep visual representations", "venue": "In IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI),", "year": 2017 }, { "authors": [ "Lukas Bossard", "Matthieu Guillaumin", "Luc Van Gool" ], "title": "Food-101–mining discriminative components with random forests", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2014 }, { "authors": [ "Aditya Chattopadhay", "Anirban Sarkar", "Prantik Howlader", "Vineeth N Balasubramanian" ], "title": "Gradcam++: Generalized gradient-based visual explanations for deep convolutional networks", "venue": "IEEE Winter Conference on Applications of Computer Vision (WACV),", "year": 2018 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Iasonas Kokkinos", "Kevin Murphy", "Alan L Yuille" ], "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "venue": null, "year": 2017 }, { "authors": [ "Yunpeng Chen", "Jianan Li", "Huaxin Xiao", "Xiaojie Jin", "Shuicheng Yan", "Jiashi Feng" ], "title": "Dual path networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2017 }, { "authors": [ "François Chollet" ], "title": "Xception: Deep learning with depthwise separable convolutions", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Jia Deng", "Wei Dong", "Richard Socher", "Li-Jia Li", "Kai Li", "Li Fei-Fei" ], "title": "Imagenet: A large-scale hierarchical image database", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2009 }, { "authors": [ "Thomas G Dietterich" ], "title": "Ensemble methods in machine learning", "venue": "In International workshop on multiple classifier systems,", "year": 2000 }, { "authors": [ "Xiaohan Ding", "Yuchen Guo", "Guiguang Ding", "Jungong Han" ], "title": "Acnet: Strengthening the kernel skeletons for powerful cnn via asymmetric convolution blocks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Finale Doshi-Velez", "Been Kim" ], "title": "Towards a rigorous science of interpretable machine learning", "venue": "arXiv preprint arXiv:1702.08608,", "year": 2017 }, { "authors": [ "Shanghua Gao", "Ming-Ming Cheng", "Kai Zhao", "Xin-Yu Zhang", "Ming-Hsuan Yang", "Philip HS Torr" ], "title": "Res2net: A new multi-scale backbone", "venue": null, "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep residual learning for image recognition", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Kaiming He", "Georgia Gkioxari", "Piotr Dollár", "Ross Girshick" ], "title": "Mask R-CNN", "venue": "In IEEE International Conference on Computer Vision (ICCV),", "year": 2017 }, { "authors": [ 
"Geoffrey Hinton", "Oriol Vinyals", "Jeffrey Dean" ], "title": "Distilling the knowledge in a neural network", "venue": "In NeurIPS Deep Learning and Representation Learning Workshop,", "year": 2015 }, { "authors": [ "Andrew Howard", "Mark Sandler", "Grace Chu", "Liang-Chieh Chen", "Bo Chen", "Mingxing Tan", "Weijun Wang", "Yukun Zhu", "Ruoming Pang", "Vijay Vasudevan" ], "title": "Searching for mobilenetv3", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Andrew G Howard", "Menglong Zhu", "Bo Chen", "Dmitry Kalenichenko", "Weijun Wang", "Tobias Weyand", "Marco Andreetto", "Hartwig Adam" ], "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "venue": "arXiv preprint arXiv:1704.04861,", "year": 2017 }, { "authors": [ "Jie Hu", "Li Shen", "Gang Sun" ], "title": "Squeeze-and-excitation networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Gao Huang", "Zhuang Liu", "Laurens Van Der Maaten", "Kilian Q Weinberger" ], "title": "Densely connected convolutional networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2017 }, { "authors": [ "Forrest N Iandola", "Song Han", "Matthew W Moskewicz", "Khalid Ashraf", "William J Dally", "Kurt Keutzer" ], "title": "Squeezenet: Alexnet-level accuracy with 50x fewer parameters and¡ 0.5 mb model size", "venue": "arXiv preprint arXiv:1602.07360,", "year": 2016 }, { "authors": [ "Been Kim", "Rajiv Khanna", "Oluwasanmi O Koyejo" ], "title": "Examples are not enough, learn to criticize! criticism for interpretability", "venue": "In Advances in neural information processing systems,", "year": 2016 }, { "authors": [ "Pieter-Jan Kindermans", "Kristof T Schütt", "Maximilian Alber", "Klaus-Robert Müller", "Dumitru Erhan", "Been Kim", "Sven Dähne" ], "title": "Learning how to explain neural networks: Patternnet and patternattribution", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Jonathan Krause", "Michael Stark", "Jia Deng", "Li Fei-Fei" ], "title": "3D object representations for fine-grained categorization", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2013 }, { "authors": [ "Alex Krizhevsky", "Ilya Sutskever", "Geoffrey E Hinton" ], "title": "Imagenet classification with deep convolutional neural networks", "venue": "In Advances in Neural Information Processing Systems (NeurIPS),", "year": 2012 }, { "authors": [ "Tsung-Yi Lin", "Michael Maire", "Serge Belongie", "James Hays", "Pietro Perona", "Deva Ramanan", "Piotr Dollár", "C Lawrence Zitnick" ], "title": "Microsoft coco: Common objects in context", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2014 }, { "authors": [ "Hanxiao Liu", "Karen Simonyan", "Yiming Yang" ], "title": "Darts: Differentiable architecture search", "venue": "arXiv preprint arXiv:1806.09055,", "year": 2018 }, { "authors": [ "Ningning Ma", "Xiangyu Zhang", "Hai-Tao Zheng", "Jian Sun" ], "title": "Shufflenet v2: Practical guidelines for efficient cnn architecture design", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Maria-Elena Nilsback", "Andrew Zisserman" ], "title": "Automated flower classification over a large number of classes", "venue": "In Sixth Indian Conference on Computer Vision, Graphics & Image 
Processing,", "year": 2008 }, { "authors": [ "Joseph Redmon", "Ali Farhadi" ], "title": "Yolov3: An incremental improvement", "venue": "arXiv preprint arXiv:1804.02767,", "year": 2018 }, { "authors": [ "Joseph Redmon", "Santosh Divvala", "Ross Girshick", "Ali Farhadi" ], "title": "You only look once: Unified, real-time object detection", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Marco Tulio Ribeiro", "Sameer Singh", "Carlos Guestrin" ], "title": " why should i trust you?” explaining the predictions of any classifier", "venue": "In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining,", "year": 2016 }, { "authors": [ "Wojciech Samek", "Thomas Wiegand", "Klaus-Robert Müller" ], "title": "Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models", "venue": "arXiv preprint arXiv:1708.08296,", "year": 2017 }, { "authors": [ "Mark Sandler", "Andrew Howard", "Menglong Zhu", "Andrey Zhmoginov", "Liang-Chieh Chen" ], "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Ramprasaath R Selvaraju", "Michael Cogswell", "Abhishek Das", "Ramakrishna Vedantam", "Devi Parikh", "Dhruv Batra" ], "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "venue": "International Journal of Computer Vision (IJCV),", "year": 2020 }, { "authors": [ "Pierre Sermanet", "David Eigen", "Xiang Zhang", "Michaël Mathieu", "Rob Fergus", "Yann LeCun" ], "title": "Overfeat: Integrated recognition, localization and detection using convolutional networks", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Avanti Shrikumar", "Peyton Greenside", "Anshul Kundaje" ], "title": "Learning important features through propagating activation differences", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Daniel Smilkov", "Nikhil Thorat", "Been Kim", "Fernanda Viégas", "Martin Wattenberg" ], "title": "Smoothgrad: removing noise by adding noise", "venue": "In ICML Workshop on Visualization for Deep Learning,", "year": 2017 }, { "authors": [ "Mukund Sundararajan", "Ankur Taly", "Qiqi Yan" ], "title": "Axiomatic attribution for deep networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2017 }, { "authors": [ "Christian Szegedy", "Wei Liu", "Yangqing Jia", "Pierre Sermanet", "Scott Reed", "Dragomir Anguelov", "Dumitru Erhan", "Vincent Vanhoucke", "Andrew Rabinovich" ], "title": "Going deeper with convolutions", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2015 }, { "authors": [ "Mingxing Tan", "Quoc V Le" ], "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "venue": "In International Conference on Machine Learning (ICML),", "year": 2019 }, { "authors": [ "Ilse van der Linden", "Hinda Haned", "Evangelos Kanoulas" ], "title": "Global aggregations of local explanations for black box models. 
FACTS-IR: Fairness, Accountability, Confidentiality, Transparency, and Safety,", "year": 2019 }, { "authors": [ "Andrea Vedaldi", "Stefano Soatto" ], "title": "Quick shift and kernel methods for mode seeking", "venue": "In Proceedings of the European Conference on Computer Vision (ECCV),", "year": 2008 }, { "authors": [ "Haofan Wang", "Zifan Wang", "Mengnan Du", "Fan Yang", "Zijian Zhang", "Sirui Ding", "Piotr Mardziel", "Xia Hu" ], "title": "Score-cam: Score-weighted visual explanations for convolutional neural networks", "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops,", "year": 2020 }, { "authors": [ "Jingdong Wang", "Ke Sun", "Tianheng Cheng", "Borui Jiang", "Chaorui Deng", "Yang Zhao", "Dong Liu", "Yadong Mu", "Mingkui Tan", "Xinggang Wang" ], "title": "Deep high-resolution representation learning for visual recognition", "venue": null, "year": 2020 }, { "authors": [ "P. Welinder", "S. Branson", "T. Mita", "C. Wah", "F. Schroff", "S. Belongie", "P. Perona" ], "title": "Caltech-UCSD birds 200", "venue": "Technical Report CNS-TR-2010-001, California Institute of Technology,", "year": 2010 }, { "authors": [ "Saining Xie", "Ross Girshick", "Piotr Dollár", "Zhuowen Tu", "Kaiming He" ], "title": "Aggregated residual transformations for deep neural networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Quanshi Zhang", "Ruiming Cao", "Feng Shi", "Ying Nian Wu", "Song-Chun Zhu" ], "title": "Interpreting cnn knowledge via an explanatory graph", "venue": "In AAAI Conference on Artificial Intelligence,", "year": 2018 }, { "authors": [ "Quanshi Zhang", "Yu Yang", "Haotian Ma", "Ying Nian Wu" ], "title": "Interpreting cnns via decision trees", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Xiangyu Zhang", "Xinyu Zhou", "Mengxiao Lin", "Jian Sun" ], "title": "Shufflenet: An extremely efficient convolutional neural network for mobile devices", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Bolei Zhou", "Aditya Khosla", "Agata Lapedriza", "Aude Oliva", "Antonio Torralba" ], "title": "Learning deep features for discriminative localization", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 } ]
[ { "heading": "1 INTRODUCTION", "text": "Due to the over-parameterization nature (Allen-Zhu et al., 2019), deep neural networks (DNNs) (LeCun et al., 2015) have been widely used to handle machine learning and artificial intelligence tasks, however it is often difficult to understand the prediction results of DNNs despite the very good performance. To interpret the DNN classifiers’ behaviors, a number of interpretation tools (Bau et al., 2017; Ribeiro et al., 2016; Smilkov et al., 2017; Sundararajan et al., 2017; Zhang et al., 2019; Ahern et al., 2019) have been proposed to recover or visualize the ways that DNNs make decisions.\nPreliminaries. For example, Network Dissection (Bau et al., 2017) uses a large computer vision dataset with a number of visual concepts identified/localized in every image. Given a convolutional neural network (CNN) model for interpretability evaluation, it recovers the visual features used by the model for the classification of every image via intermediate-layer feature maps, then matches the visual features with the labeled visual concepts to estimate the interpretability of the model as the intersection-over-union (IoU) between the activated feature maps and labeled locations of visual objects. Related tools that interpret CNNs through locating importation subregions of visual features in the feature maps have been proposed in (Zhou et al., 2016; Selvaraju et al., 2020; Chattopadhay et al., 2018; Wang et al., 2020a).\nApart from investigating the inside of complex deep networks, (Ribeiro et al., 2016; van der Linden et al., 2019; Ahern et al., 2019) proposed to use simple linear or tree-based models to surrogate the predictions made by the DNN model over the dataset through local or global approximations, so as to capture the variation of model outputs with the interpolation of inputs in feature spaces. Then,\nwith the surrogate model, these methods interpret the DNN model as ways the model uses features for predictions, e.g., ranking of feature importance, and compare the results with the ground truth labeled by human experts to evaluate interpretability. Besides the use of linear interpolation for surrogates, many algorithms, like SmoothGrad (Smilkov et al., 2017), Integrated Gradients (Sundararajan et al., 2017), DeepLIFT (Shrikumar et al., 2017), and PatternNet (Kindermans et al., 2018) have been proposed to estimate the input feature importance as the way to interpret the models, so as to interpret the model predictions by highlighting the importation subregions in the input. In addition to the above methods, (Zhang et al., 2018a; 2019) proposed to learn a graphical model to clarify the process of making decision at a semantic level. Note that obtaining the interpretation of a model is an algorithmic procedure to explain the model (Samek et al., 2017). On the other hand, through the comparing the interpretation results with the human labeled ground truth, the interpretability evaluation aims at estimating the degree to which a human (expert) can consistently predict the model’s result (Kim et al., 2016; Doshi-Velez & Kim, 2017).\nIn summary, the ground truth of interpretation results (usually labeled by human experts) is indispensable to all above methods for interpretability evaluations and comparisons, no matter ways they interpret the models, e.g., visual concepts detecting (Bau et al., 2017), and feature importance ranking for either local (Ribeiro et al., 2016) or global (Ahern et al., 2019) interpretations. 
While datasets with visual objects labeled/localized and/or important features ranked have been offered in some specific areas, the unavailability of ground truths also limits the generalization of these methods to interpret brand-new models on new tasks/datasets ubiquitously. There is thus a need for a method able to evaluate the interpretability of models on datasets where the ground truth of interpretation results is not available.

Our contributions. In this paper, we study the problem of evaluating the interpretability of DNN classifiers on datasets without ground truth of interpretation results. The basic idea of Consensus is to leverage the interpretability of known models as a reference to predict the interpretability of new models on new tasks/datasets. In particular, for general-purpose perception tasks, we have already obtained a number of reliable models with decent interpretability, such as ResNets, DenseNets, and so on. With a new dataset, one could use interpretation tools (Ribeiro et al., 2016; Smilkov et al., 2017) to obtain the interpretation results of these models, then aggregate the interpretation results as the reference. Then, for any model, one could evaluate its interpretability by comparing its interpretation results with the reference.

Specifically, as illustrated in Figure 1, we propose a novel framework named Consensus that uses a large number of known models as a committee for interpretability evaluation. Given any task/dataset, Consensus first obtains the interpretation results for every model in the committee using existing interpretation tools, e.g., LIME (Ribeiro et al., 2016), then aggregates the results from the entire committee and reaches the consensus of interpretations through voting. With the quasi-ground-truth, Consensus evaluates the interpretability of each model by matching its interpretation result against the approximated one, and ranks the matching scores of committee members, so as to pursue both absolute and relative interpretability evaluation results without any ground truth of interpretations labeled by human experts. More specifically, we make contributions as follows. • We study the problem of interpretability evaluation on datasets without human-labeled ground truth of interpretations. To the best of our knowledge, this work is the first to study the problem of evaluating the interpretability of DNNs while the ground truths of interpretation results are not available, by addressing the technical issues of voting and committees. • We design and implement Consensus, a novel interpretability evaluation framework that incorporates a wide range of alternative interpretation tools, such as LIME (Ribeiro et al., 2016) and SmoothGrad (Smilkov et al., 2017), which interpret a model as the variation of its outputs over interpolations of inputs (in feature spaces) from multiple perspectives (e.g., local or global interpretation, tree-based or linear surrogates, and so on), and carries out the interpretability evaluation through voting based on the interpretation results of the models in the committee. • We carry out extensive experiments to validate Consensus on a wide range of models on new, ubiquitous tasks/datasets where the ground truth is not available, and exploit a quantifiable metric of model interpretability to report the overall interpretability evaluation results (Section 3). 
Case studies (Section 4) confirm the effectiveness of Consensus and show the closeness of the Consensus-based results to the ground truth of interpretations. Robustness analyses (Section 5) further demonstrate the advantage of the committee: factors including the choice of basic interpretation algorithm, the types of networks in the committee, and the size of the committee have little effect on the Consensus-based interpretability evaluation results." }, { "heading": "2 CONSENSUS: A FRAMEWORK OF INTERPRETABILITY EVALUATION", "text": "In this section, we introduce our proposed framework, namely Consensus, which incorporates existing interpretations, such as LIME (Ribeiro et al., 2016) and SmoothGrad (Smilkov et al., 2017), to enable DNN interpretability evaluation without the use of human-labeled interpretation results as reference/ground truth. Specifically, Consensus generalizes a simple electoral system and consists of three steps: (1) Committee Formation with Deep Models, (2) Committee Voting for Consensus Achievement, and (3) Consensus-based Interpretability Evaluation, as follows. (Footnote 1: we are not intending to connect our work with multi-agent research, though we use the term “consensus achievement”.)

Committee Formation with Deep Models. Given a number of deep neural networks (which are well known with decent performance on common perception tasks) and a target task with a dataset (potentially without ground truth of interpretation results), Consensus first trains the given neural networks (from scratch or fine-tuned) using the dataset. Then, Consensus forms the post-trained networks into a committee of models, denoted as M, and relies on the varied interpretability of the models in the committee to establish references for interpretability comparison and evaluation. Note that while our research assumes the human-labeled ground truth of interpretation results is not available in the given task/dataset for interpretability evaluation, the labels of samples are required when handling classification and regression tasks.

Committee Voting for Consensus Achievement. With the committee of trained models and the target task/dataset for interpretation, Consensus first leverages an existing interpretation tool A, where A can alternatively be, e.g., LIME (Ribeiro et al., 2016) or SmoothGrad (Smilkov et al., 2017), to obtain the interpretation results of every model in the committee on every sample in the dataset. Given some sample $d_i$, we denote the obtained interpretation results of all models by $L$. Then, Consensus proposes a voting procedure that aggregates $L$ to achieve the consensus $c$ as the quasi-ground-truth of the interpretation for the sample $d_i$. Specifically, $c_k = \frac{1}{m} \sum_{i=1}^{m} \frac{L_{ik}^2}{\|L_i\|}$ for LIME and $c_k = \frac{1}{m} \sum_{i=1}^{m} \frac{L_{ik} - \min(L_i)}{\max(L_i) - \min(L_i)}$ for SmoothGrad. In summary, Consensus adopts a normalization-averaging procedure to obtain the quasi-ground-truth of interpretations for the sample. In the end, the consensus is achieved by obtaining the collection of quasi-ground-truths for every sample in the target dataset based on committee voting. (A code sketch of this voting step is given below.)

Consensus-based Interpretability Evaluation. Given the quasi-ground-truths as the consensus of the whole committee, the proposed algorithm evaluates the interpretability of every model in the committee by considering the similarity between the interpretation result of each individual model and the consensus of the whole committee. 
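The following minimal NumPy sketch (our own illustration; array and function names are hypothetical) implements the two aggregation rules above, turning an m × p matrix of per-model interpretation results for one sample into a single consensus vector:

import numpy as np

def aggregate_lime(L):
    # LIME-style voting: c_k = (1/m) * sum_i L[i, k]^2 / ||L_i||.
    norms = np.linalg.norm(L, axis=1, keepdims=True)  # ||L_i|| per model
    return (L ** 2 / norms).mean(axis=0)

def aggregate_smoothgrad(L):
    # SmoothGrad-style voting: min-max normalize each row, then average.
    lo = L.min(axis=1, keepdims=True)
    hi = L.max(axis=1, keepdims=True)
    return ((L - lo) / (hi - lo)).mean(axis=0)

# Toy committee: m = 3 models, p = 5 interpretation features for one sample.
L = np.abs(np.random.default_rng(0).normal(size=(3, 5)))
c = aggregate_lime(L)   # consensus (quasi-ground-truth) for this sample
print(c.shape)          # (5,)

The similarity step described next then compares each model's row of L against c.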
Specifically, for the interpretations and consensus based on LIME, Consensus uses the cosine similarity between the flattened vectors of each model's interpretation and the consensus. Then Consensus quantifies the interpretability of the model as the mean of the similarity measures over all samples. For the results based on SmoothGrad (visual feature importance at the pixel level), Consensus follows a similar procedure, where the proposed algorithm uses the Radial Basis Function $\exp\big(-\frac{1}{2}(\|a-b\|/\sigma)^2\big)$ for the similarity measurement. We rank all models in the committee by their similarities to the consensus and consider the top/bottom models to have good/bad interpretability within the committee.

Algorithm 1: Consensus Framework Pseudocode. The functions interpret(), aggregate() and sim() are described in the main text and detailed in Algorithm 2 in Appendix E.
1 function Consensus(D, A)
Input: A dataset D containing n examples {d_i}_{i=1,···,n} and an interpretation algorithm A.
Output: s ∈ R^m, where each element s_j indicates the interpretability of each model M_j in M.
  /* Step 1: Committee Formation with Deep Models M */
2 Prepare M containing m models {M_j}_{j=1,···,m}, i.e., the committee of deep models.
3 S = zeros(n, m) // Initialize an empty n×m matrix for storing the interpretability scores of m models on n data samples.
4 for i in 1,···,n do
5   L = zeros(m, p_i)
6   for j in 1,···,m do
7     L_j = interpret(A, M_j, d_i)
8   end
    /* Step 2: Committee Voting for Consensus Achievement at d_i */
9   c = aggregate(L) // c ∈ R^{p_i}, consensus as quasi-ground-truth
    /* Step 3: Consensus-based Interpretability Evaluation at d_i */
10  for j in 1,···,m do
11    S_ij = sim(L_j, c) // the score of M_j at d_i
12  end
13 end
14 for j in 1,···,m do
15   s_j = average(S_·j) // average score of each model over n samples
16 end
17 return s

These three steps of Consensus are illustrated in Figure 1 and formalized in Algorithm 1. Note that for any new model under interpretability evaluation, Consensus includes it in the committee together with a number of known models and performs the above procedures to obtain its interpretability evaluation result (absolute evaluation) as well as its rank in the committee (relative evaluation). In this way, one can clearly position the interpretability and potential (in terms of performance) of new models on new tasks among the known models, even when the ground truth of interpretation results is not available." }, { "heading": "3 OVERALL EVALUATION AND RESULTS", "text": "In this section, we use image classification as the target task for interpretation and interpretability evaluation of deep models. We first introduce the settings of the image classification tasks in detail as the setup of our experiments. Then, we present the overall results of Consensus, where we observe its capacity for evaluating the interpretability of models, with connections to model performance, while the ground truth of interpretations is not used. Through comparisons with interpretability evaluations based on the LIME and SmoothGrad algorithms using human-labeled ground truth, the effectiveness of Consensus is evaluated." }, { "heading": "3.1 EVALUATION SETUPS", "text": "Here we present the setups of our experiments from the following perspectives.

Datasets and Models. 
For overall evaluation and comparisons, we use two image classification datasets: ImageNet (Deng et al., 2009) for ubiquitous visual objects, and CUB-200-2011 (Welinder et al., 2010) for birds. Note that ImageNet provides the class label for every image, while the CUB-200-2011 dataset includes the class label and a pixel-level segmentation of the bird in every image, where the pixel annotations of visual objects have been considered as the ground truth of interpretations (Bau et al., 2017). In this way, we evaluate the interpretability of models using Consensus and compare the results with the existing algorithms based on the ground truth.

For fair comparisons, we use more than 80 models publicly available from PaddlePaddle2, which have been trained on the ImageNet dataset. We also derive models for the CUB-200-2011 dataset through standard fine-tuning procedures. In our experiments, we include these models in two committees based on ImageNet and CUB-200-2011, respectively, for the interpretability evaluation.

Baselines. We consider two interpretation algorithms, LIME (Ribeiro et al., 2016) and SmoothGrad (Smilkov et al., 2017). Specifically, LIME surrogates the interpretation as an assignment of visual feature importance to superpixels (Vedaldi & Soatto, 2008), while SmoothGrad outputs the interpretation results as visual feature importance over pixels. In this way, we can evaluate the flexibility of the proposed Consensus framework over interpretation results from diverse sources (i.e., linear surrogates vs. input gradients) and at multiple granularities (i.e., feature importance at the superpixel/pixel level). Note that both algorithms use the mean Average Precision (mAP) between their interpretation results and the ground truth (i.e., pixel-level segmentation, if available) as the measure for interpretability evaluation.

Metrics. Given the similarity measures between each model's interpretation results and the consensus of the committee, the proposed algorithm is used to rank the interpretability of every model in the committee. In this way, we compare the ranking list of model interpretability based on the Consensus evaluation results with the ranking list based on the ground truth, so as to understand how well the proposed Consensus framework can approximate the evaluation results of model interpretability without the use of ground truth. More specifically, visual comparisons, Pearson correlation coefficients (which characterize the linearity between two variables, e.g., model performance vs. interpretability evaluation) and significance tests are used as the metrics for overall evaluation and comparisons. Note that we do not have the ground truth of interpretations for ImageNet. However, it is still possible for us to assess the interpretability evaluation by connecting the Consensus outputs with the generalization performance of the models. Figure 2 illustrates an example of correlations between interpretability evaluation results based on ground truth and the testing accuracy of 85 models over the CUB-200-2011 dataset, where we can see strong, significant, and consistent correlations between model performance and interpretability." }, { "heading": "3.2 OVERALL COMPARISONS", "text": "In Figure 3, we present the interpretability evaluation results using CUB-200-2011 and the overall comparisons between Consensus and the ground-truth-based evaluation, where we plot the scatter points (i.e., Consensus results on the x-axis vs. 
ground-truth-based results on the y-axis) of the two comparisons using LIME and SmoothGrad, respectively. In both comparisons, Consensus performs almost identically to the evaluation based on ground truth, with strong Spearman's correlation (which characterizes the consistency between two ranking lists) and significance tests passed.

To enable similar comparisons on the ImageNet dataset (where the ground truth of interpretations is not available), we connect the Consensus results with the model performance (testing accuracy) of the models, as the model performance and the ground-truth-based interpretability evaluation results are usually correlated (see also Figure 2 and Section 3.1). In Figure 3, we present the correlations between the Consensus results (without the use of ground truth) and the model performance for both LIME and SmoothGrad using the ImageNet and CUB-200-2011 datasets. Specifically, in Figure 4 (a-b) and (d-e), we present the comparisons between model performance (y-axis) and the Consensus results (x-axis) using LIME (a, d) and SmoothGrad (b, e) on ImageNet (a, b) and CUB-200-2011 (d, e), respectively. (Footnote 2: https://github.com/PaddlePaddle/models/blob/release/1.8/PaddleCV/image_classification/README_en.md#supported-models-and-performances)

All correlations here are strong with significance tests passed, though in some local areas of the correlation plots between model performance and interpretability evaluation the trends are not always consistent. It has been observed that some extremely large networks work well with the datasets while lacking interpretability (Bau et al., 2017). In this way, we can conclude that, overall, the interpretability evaluation results based on Consensus using both LIME and SmoothGrad over the two datasets are significantly correlated with model performance." }, { "heading": "3.3 COMPARISON RESULTS WITH NETWORK DISSECTION (BAU ET AL., 2017)", "text": "Here we compare the results of Consensus with the ground-truth-based interpretability evaluation solution, Network Dissection (Bau et al., 2017). With the ImageNet dataset, Network Dissection gave a ranking list of models (w.r.t. model interpretability) as follows: ResNet152 > DenseNet161 > VGG16 > GoogLeNet > AlexNet. We report two ranking lists based on Consensus with detailed numbers for every architecture above, as demonstrated in Figure 4 (a, LIME): DenseNet161 (0.849) ≈ ResNet152 (0.846) > VGG16 (0.821) > GoogLeNet (0.734) > AlexNet (0.594); and (b, SmoothGrad): DenseNet161 (0.038) ≈ ResNet152 (0.037) > VGG16 (0.030) > GoogLeNet (0.026) > AlexNet (0.021). The three ranking lists are almost identical, except for the comparison between DenseNet161 and ResNet152: in both Consensus lists, DenseNet161 is similar to ResNet152 with marginally higher interpretability, while Network Dissection considers ResNet152 more interpretable than DenseNet161.

We believe the results from Consensus and Network Dissection are close enough from the perspective of ranking lists, and the difference may be caused by the different ways that Consensus and Network Dissection evaluate interpretability. Consensus matches the interpretations with the visual objects in images (due to the results of LIME and SmoothGrad), while Network Dissection counts the number of neurons activated by the visual objects and patterns (color, materials, textures, scenes, and parts). 
Furthermore, Network Dissection evaluates the interpretability of deep models using the Broden dataset with densely labeled visual objects and patterns (Bau et al., 2017), while Consensus does not need an additional dataset or ground truths of interpretations. In this way, the results of Consensus and Network Dissection might be slightly different." }, { "heading": "4 CASE STUDIES", "text": "In this section, we discuss several technical issues of Consensus for ground-truth-free evaluation of DNN interpretability, using a set of case studies.

Qualification and Effectiveness of Committee-based Voting. In our research, Consensus proposes to replace the ground truth with a committee of networks, trusting the voting results as the ground truth of interpretations. Thus, we have to verify (1) whether the consensus achieved by the committee approximates the ground truth, and (2) whether voting is an effective way to express the interpretation results of the whole committee.

To achieve this goal, we consider the committee as an ensemble of networks that classifies data via committee-based voting, i.e., by averaging the probability outputs of the member networks. In Figure 2, we compare the committee (as an ensemble of networks, labeled “Consensus”) with other architectures for the evaluation of model performance (testing accuracy) and the interpretability evaluation against the ground truth (i.e., the mAP between the consensus achieved by the committee and the ground truth), using both LIME and SmoothGrad over the CUB-200-2011 dataset. The comparison results show that the committee achieves the best testing accuracy with significant advantages over other models, while it also attains the highest mAP between its consensus interpretations and the ground truth. In this way, we can confirm the qualification of the committee as well as the effectiveness of committee-based voting.

Closeness of Consensus to the Ground Truth. In addition to the mAP measurement (illustrated in Figure 2) between the consensus and the ground truth on the CUB-200-2011 dataset, we also visualize examples to compare the consensus achieved by the committee, the interpretation results of individual networks using both LIME and SmoothGrad, and (optionally) the labeled ground truth for both the ImageNet and CUB-200-2011 datasets in Figures 5 and 6, respectively. The comparison shows that the consensus can clearly segment the visual objects related to the classification from the background of images, and that it is closer to the ground truth (if available) than the individual networks. Both the quantitative results in Figure 2 and the visual comparisons in Figures 5 and 6 validate the closeness of the consensus to the ground truth of interpretations." }, { "heading": "5 ROBUSTNESS ANALYSES", "text": "In this section, we discuss several factors, including the choice of basic interpretation algorithms (e.g., LIME and SmoothGrad), the size of the committee, and the candidate pool of models in the committee, that could affect the proposed Consensus framework.

Consistency of LIME and SmoothGrad. While Consensus adopts LIME and SmoothGrad as the basic interpretation algorithms, the interpretation results from these two algorithms are not exactly the same. Even though the granularity of the interpretation results is different, which causes mismatches in the mAP estimation against the ground truth, the interpretability evaluation results of Consensus based on the two algorithms are generally consistent. 
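As an illustration of how such cross-algorithm consistency can be quantified (our own sketch, not the paper's code; the score vectors are hypothetical toy data), one can compute the Spearman rank correlation between the two per-model score lists:

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical Consensus interpretability scores for the same committee
# of 10 models, computed once with LIME and once with SmoothGrad.
scores_lime = rng.uniform(0.5, 0.9, size=10)
scores_smoothgrad = scores_lime * 0.05 + rng.normal(0, 0.002, size=10)

rho, p_value = spearmanr(scores_lime, scores_smoothgrad)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.4f}")  # high rho means consistent rankings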
" }, { "heading": "5 ROBUSTNESS ANALYSES", "text": "In this section, we discuss several factors that could affect the proposed Consensus framework, including the choice of basic interpretation algorithms (e.g., LIME and SmoothGrad), the size of the committee, and the candidate pool of models for the committee.
Consistency of LIME and SmoothGrad. While Consensus adopts LIME and SmoothGrad as the basic interpretation algorithms, the interpretation results from these two algorithms are not exactly the same. Even though the granularity of the interpretation results differs, which causes mismatching in the mAP estimation against the ground truth, the interpretability evaluation results of Consensus based on the two algorithms are generally consistent. The consistency is confirmed by Figure 4 (c, f), where the overall results of Consensus based on LIME are strongly correlated with those based on SmoothGrad over all models. This shows that the proposed Consensus framework can work well with a wide spectrum of basic interpretation algorithms.
Consistency of Cross-Committee Interpretability Evaluation. In real-world applications, committee-based evaluation could yield inconsistent results from one committee to another. In this work, we are interested in whether the interpretability evaluation is consistent against changes of the committee (e.g., using different sets of models). Given 16 ResNet models as the targets, we form 20 independent committees by combining the 16 models with 10-20 models randomly drawn from the networks presented in Figure 4. In each of these 20 independent committees, we use Consensus to evaluate the interpretability of the 16 ResNet models and rank them accordingly. We then estimate the Pearson correlation coefficients between each of these 20 ranking lists and the list in Figure 4 (a); the mean correlation coefficient is 0.96 with a standard deviation of 0.04. Thus, the interpretability evaluation based on randomly picked committees is consistent.
Convergence over Committee Sizes. To understand the effect of committee size on interpretability evaluation, we run Consensus using committees of various sizes formed with networks randomly picked from the pools. In Figure 7, we plot and compare the performance of the consensus with increasing committee sizes, where we estimate the mAP between the ground truth and the consensus based on random committees of different sizes; 20 random trials were run independently for every size. The mAP curve quickly converges to that of the complete committee, and the consensus based on a small proportion of the committee (e.g., 15 networks) performs well enough even compared to the complete committee with 85 networks.
Applicability with Random Committees over Other Datasets. To demonstrate the applicability of Consensus with varying committees over other datasets, we continue our experiments using networks randomly picked from the pool on other datasets, including Stanford Cars 196 (Krause et al., 2013), Oxford Flowers 102 (Nilsback & Zisserman, 2008) and Foods 101 (Bossard et al., 2014), in Figure 8, where we consider the connections between interpretability and model performance (e.g., the testing accuracy, inspired by Figure 2). The results confirm that, when the ground truth of interpretations is not available, our framework is still capable of identifying the interpretability of a wide range of models on ubiquitous datasets/tasks.
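The committee-size study above can be sketched as follows. This is a hedged, illustrative version in which `aggregate` (the voting procedure of Algorithm 2) and `mean_average_precision` are passed in as assumed helpers, not functions defined in this paper's released code:

```python
import random

def size_convergence(interpretations, ground_truth, aggregate,
                     mean_average_precision, sizes=(5, 15, 45, 85), trials=20):
    """For each committee size, repeatedly subsample a random committee,
    recompute the consensus, and measure its mAP against the ground truth."""
    names = list(interpretations)          # model_name -> per-sample attribution maps
    results = {}
    for k in sizes:
        scores = []
        for _ in range(trials):
            committee = random.sample(names, k)                              # random committee
            consensus = aggregate([interpretations[n] for n in committee])   # voting
            scores.append(mean_average_precision(consensus, ground_truth))
        results[k] = sum(scores) / trials
    return results                         # mAP vs. committee size, cf. Figure 7
```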
" }, { "heading": "6 CONCLUSION AND FUTURE WORK", "text": "We have proposed Consensus, a novel framework for evaluating the interpretability of deep models while avoiding the use of ground-truth interpretations. Specifically, Consensus forms a committee of deep models and generalizes an electoral system to reach a consensus of the interpretation results produced by basic interpretation algorithms via committee-based voting. Then, for every model in the committee, Consensus computes the similarity between its interpretation results and the aggregated consensus and ranks the models accordingly, so as to obtain both an absolute score (i.e., the similarity to the consensus) and a relative result (the rank in the committee) for interpretability evaluation.
To validate Consensus, we carry out extensive experimental studies using 85 deep models, including ResNets, DenseNets, and others, on top of 5 datasets, including ImageNet and CUB-200-2011, in comparison with interpretation algorithms of three categories: LIME (Ribeiro et al., 2016), SmoothGrad (Smilkov et al., 2017), and Network Dissection (Bau et al., 2017). The results show that (1) Consensus can evaluate the interpretability of models on these datasets even when the ground truth of interpretation results is not available, (2) the consensus of interpretation results aggregated from the committee approximates the ground truth of interpretations well, (3) the interpretability evaluation results delivered by Consensus correlate strongly and significantly with model performance (testing accuracy), and (4) factors including the choice of basic interpretation algorithms, the types of networks in the committee, and the size of the committee do not substantially affect the results of interpretability evaluation with Consensus.
Discussion. For the interpretability evaluation of any model on any dataset, ground-truth-based evaluation approaches rely on subjective human interpretations, while Consensus can automate this process by training a few more models and approximating the ground truth of interpretations. We thus conclude that interpretability evaluation can be democratized through an electoral system constructed by the DNN models themselves, rather than relying on human-labeled ground truth as the criterion. We discuss future work in three directions. (1) In terms of methodology, the proposed Consensus framework considers the segmentation of visual features as interpretations for vision tasks and adopts a simple voting mechanism to aggregate results from the committee of models. We believe that the contributions made in this work are complementary to visual object segmentation and voting techniques: the use of advanced segmentation (Chen et al., 2017a; He et al., 2017) and ensemble learning methods (Dietterich, 2000; Hinton et al., 2015) would further improve the proposed framework. (2) In terms of applications, following the steps of Consensus, in medical or financial domains where interpretations of black-box models are urgently needed, the quasi-ground-truth of interpretations and the interpretability evaluation of models could be easily obtained. (3) Instead of using trained models of different architectures as committee members, models of common or even the same architecture trained with various training strategies would also form an interesting committee for analyzing the interpretability of models under different training strategies." }, { "heading": "A MORE VISUALIZATION RESULTS", "text": "We present more visualization results of Consensus, where the samples are from ImageNet and CUB-200-2011." }, { "heading": "B EXPERIMENTAL DETAILS", "text": "We present the technical details of the experiments in the main text.
B.1 COMMITTEE FORMATIONS
There are around 100 publicly available deep models trained on ImageNet from PaddlePaddle3. We first exclude some very large models that require much more computational resources. Then, for consistency when computing superpixels, we include only the models that take images of size 224×224 as input, resulting in 81 models for the committee based on ImageNet.
Since there is already a large number of available models, we choose not to include more models by aligning the superpixels across different image sizes.
As for CUB-200-2011 (Welinder et al., 2010), we similarly first exclude the very large models. Then we follow the standard procedures (Sermanet et al., 2014; Simonyan & Zisserman, 2015) for fine-tuning ImageNet-pretrained models on CUB-200-2011. For simplicity, we use the same training setup for all pre-trained models (learning rate 0.01, batch size 64, SGD optimizer with momentum 0.9, resizing so that the short edge is 256, and randomly cropping images to the size of 224×224), and obtain 85 well-trained models. Different hyper-parameters might improve the performance of some specific networks, but given the large number of available models, we choose not to search for better hyper-parameter settings.
Given the convergence over committee sizes (Figure 7), which shows that a committee of more than 15 models performs well enough, we randomly choose around 20 models for Stanford Cars 196 (Krause et al., 2013), Oxford Flowers 102 (Nilsback & Zisserman, 2008) and Foods 101 (Bossard et al., 2014), following the same training procedure as for CUB-200-2011.
3 https://github.com/PaddlePaddle/models/blob/release/1.8/PaddleCV/image_classification/README_en.md#supported-models-and-performances
B.2 INTERPRETATION ALGORITHMS
To interpret a model, LIME (Ribeiro et al., 2016) on vision tasks first performs a superpixel segmentation (Vedaldi & Soatto, 2008) of an image, then generates samples by randomly masking some superpixels and computing the outputs of the model, and finally fits the model outputs, with the set of superpixels as input, using a linear regression model. The linear weights then represent the feature importance at the superpixel level as the interpretation result.
The gradients of the model output w.r.t. the input can partly identify influential pixels, but due to the saturation of activation functions in deep networks, the vanilla gradient is usually noisy. SmoothGrad (Smilkov et al., 2017) reduces the visual noise by repeatedly adding small random noise to the input to obtain a list of corresponding gradients, which are averaged for the final interpretation result." }, { "heading": "C RESNET FAMILY", "text": "We show the zoomed plot of the ResNet family (whose names contain the “ResNet” keyword) in the ImageNet-LIME committee of 81 models in Figure 13 (a). Meanwhile, we also present the results using the ResNet family as the committee in Figure 13 (b). The two subfigures show no large difference, which further confirms the consistency of ranking models across different committees."
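The SmoothGrad procedure described in B.2 above can be sketched in a few lines of PyTorch. The noise scale `sigma` and the number of samples are illustrative defaults, not the settings used in the paper, and the input is assumed to be a single CHW image tensor:

```python
import torch

def smoothgrad(model, image, label, n_samples=25, sigma=0.1):
    """SmoothGrad (Smilkov et al., 2017): average the input gradients of the
    target-class logit over several noisy copies of the input."""
    grads = torch.zeros_like(image)
    for _ in range(n_samples):
        noisy = (image + sigma * torch.randn_like(image)).requires_grad_(True)
        score = model(noisy.unsqueeze(0))[0, label]   # target-class logit
        score.backward()
        grads += noisy.grad
    return grads / n_samples                          # averaged saliency map
```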
}, { "heading": "D REFERENCES OF NETWORK STRUCTURES", "text": "Many structures of deep neural networks have been evaluated in this paper, including AlexNet (Krizhevsky et al., 2012), ResNet (He et al., 2016), ResNeXt (Xie et al., 2017), SEResNet (Hu et al., 2018), ShuffleNet (Zhang et al., 2018b; Ma et al., 2018), MobileNet (Howard et al., 2017; Sandler et al., 2018; Howard et al., 2019), VGG (Simonyan & Zisserman, 2015), GoogleNet (Szegedy et al., 2015), Inception (Szegedy et al., 2015), Xception (Chollet, 2017), DarkNet (Redmon et al., 2016; Redmon & Farhadi, 2018), DenseNet (Huang et al., 2017), DPN (Chen et al., 2017b), SqueezeNet\n3https://github.com/PaddlePaddle/models/blob/release/1.8/PaddleCV/ image_classification/README_en.md#supported-models-and-performances\n(Iandola et al., 2016), EfficientNet (Tan & Le, 2019), Res2Net (Gao et al., 2019), HRNet (Wang et al., 2020b), Darts (Liu et al., 2018), AcNet (Ding et al., 2019) and so on.\nE COMPLETE PSEUDOCODE OF Consensus\nFigure 1 shows an illustrative pipeline of our framework Consensus of evaluating the interpretability of models without the need of the ground truth of interpretation, and Algorithm 2 gives a complete process of Consensus in pseudocode." }, { "heading": "F NUMERICAL REPORT OF MAIN PLOTS", "text": "Due to the large number of deep models evaluated, Figure 2, 3 and 4 grouped those that are of the same architecture. Here, we report all of the corresponding numerical results in Table 1.\nG VISUALIZATION OF COCO IMAGES\nFor further showing the effectiveness of Consensus, we visualize several random images from MSCOCO (Lin et al., 2014), shown in Figure 14.\nAlgorithm 2: Consensus Framework Pseudocode 1 function sim(a, b)\nInput : Two vectors or tensors. Output: a scalar as the similarity between a and b with appropriate normalization approaches. /* This function uses cosine similarity for LIME interpretations and\nRadial Basis Function, exp(− 1 2 (||a− b||/σ)2). */\n2 3 function aggregate(L) Input : L, a collection of interpretations of m models for one given data sample. Output: c, the consensus among m model for the interpretation of the given data sample. /* This function returns the quasi-ground-truth of the\ninterpretation for the given sample, basically it is equivalent to a normalization-averaging procedure in this paper.\nSpecifically, ck = 1 m\n∑m i=1 L 2 ik\n‖Li‖ for LIME, and ck = 1 m ∑m i=1 Lik−min(Li) max(Li)−min(Li)\nfor SmoothGrad. */ 4 5 function interpret(A,M , d) Input : An interpretation algorithm A, a trained modelM and a data sample d ∈ Rp, where Rp is the\nfeature domain and the dimension p may vary with the sample d. Output: α ∈ Rp, where the elements indicate the importances of input features. /* This function is an implementation of a typical interpretation\nalgorithm like LIME (number of superpixels as p), SmoothGrad (number of pixels as p) or others. */\n6 7 function Consensus(D, A) Input : A dataset D containing n examples {di}i=1,··· ,n and an interpretation algorithm A. Output: s ∈ Rm, where each element sj indicates the interpretability of each modelMj inM. /* Step 1: Committee Formation with Deep Models M */ 8 PrepareM containing m models {Mj}j=1,··· ,m, i.e., the committee of deep models. 9 S = zeros(n,m) // Initialize an empty n×m matrix for storing the\ninterpretability scores of m models on n data sample. 
10 for i in 1, · · · , n do 11 L = zeros(m, pi) 12 for j in 1, · · · ,m do 13 Lj = interpret(A,Mj , di) 14 end /* Step 2: Committee Voting for Consensus Achievement at di */ 15 c = aggregate(L) // c ∈ Rpi, consensus as quasi-ground-truth /* Step 3: Consensus-based Interpretability Evaluation at di */ 16 for j in 1, · · · ,m do 17 Sij = sim(Lj , c) // the score of Mj at di 18 end" }, { "heading": "19 end", "text": "" }, { "heading": "20 for j in 1, · · · ,m do", "text": "" }, { "heading": "21 sj = average(S·j) // average score for each model over n samples", "text": "" }, { "heading": "22 end", "text": "23 return s" } ]
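A direct Python rendering of Algorithm 2 might look as follows. This is a hedged sketch that takes `interpret`, `aggregate`, and `sim` as injected helpers matching the pseudocode signatures above, rather than a transcription of the authors' released code:

```python
import numpy as np

def consensus(dataset, committee, interpret, aggregate, sim):
    """Algorithm 2 in Python: score each model in the committee by the average
    similarity of its interpretations to the committee consensus."""
    n, m = len(dataset), len(committee)
    scores = np.zeros((n, m))
    for i, sample in enumerate(dataset):
        # Steps 1-2: interpret the sample with every member, then vote for a consensus.
        maps = [interpret(model, sample) for model in committee]
        quasi_gt = aggregate(maps)                  # quasi-ground-truth interpretation
        # Step 3: similarity of each member's interpretation to the consensus.
        scores[i] = [sim(a, quasi_gt) for a in maps]
    return scores.mean(axis=0)                      # one interpretability score per model
```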
2020
null
SP:203a33205b1cacb84b4d31c5b1b3a5cbb4d93742
[ "The authors propose a new method to improve robustness to adversarial examples under various norms (L1, L2 and LInf). Their method combines adversarial training with an adversarial noise generator. They improve upon adversarial training in a multi norm setting by choosing one norm at random for each sample, instead of computing an adversarial for all norms, thus significantly reducing the training time. They additionally improve robustness by regularizing model features between the standard image, the adversarially perturbed image and a perturbation of the image created with an adversarial noise generator." ]
Adversarial learning has emerged as one of the successful techniques to circumvent the susceptibility of existing methods against adversarial perturbations. However, the majority of existing defense methods are tailored to defend against a single category of adversarial perturbation (e.g., the ℓ∞-attack). In safety-critical applications, this makes these methods inadequate, as the attacker can adopt diverse adversaries to deceive the system. Moreover, training on multiple perturbations simultaneously significantly increases the computational overhead during training. To address these challenges, we propose a novel meta-learning framework that explicitly learns to generate noise to improve the model's robustness against multiple types of attacks. Its key component is the Meta Noise Generator (MNG), which outputs optimal noise to stochastically perturb a given sample, such that it helps lower the error on diverse adversarial perturbations. By utilizing samples generated by MNG, we train a model by enforcing the label consistency across multiple perturbations. We validate the robustness of models trained by our scheme on various datasets and against a wide variety of perturbations, demonstrating that it significantly outperforms the baselines across multiple perturbations with a marginal computational cost.
[]
[ { "authors": [ "Dario Amodei", "Sundaram Ananthanarayanan", "Rishita Anubhai", "Jingliang Bai", "Eric Battenberg", "Carl Case", "Jared Casper", "Bryan Catanzaro", "Qiang Cheng", "Guoliang Chen" ], "title": "Deep speech 2: End-to-end speech recognition in english and mandarin", "venue": null, "year": 2016 }, { "authors": [ "Anish Athalye", "Nicholas Carlini", "David A. Wagner" ], "title": "Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples", "venue": null, "year": 2018 }, { "authors": [ "Wieland Brendel", "Jonas Rauber", "Matthias Bethge" ], "title": "Decision-based adversarial attacks: Reliable attacks against black-box machine learning models", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Wieland Brendel", "Jonas Rauber", "Matthias Kümmerer", "Ivan Ustyuzhaninov", "Matthias Bethge" ], "title": "Accurate, reliable and fast robustness evaluation", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Nicholas Carlini", "David Wagner" ], "title": "Towards evaluating the robustness of neural networks", "venue": "In 2017 ieee symposium on security and privacy (sp),", "year": 2017 }, { "authors": [ "Yair Carmon", "Aditi Raghunathan", "Ludwig Schmidt", "John C Duchi", "Percy S Liang" ], "title": "Unlabeled data improves adversarial robustness", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Chenyi Chen", "Ari Seff", "Alain Kornhauser", "Jianxiong Xiao" ], "title": "Deepdriving: Learning affordance for direct perception in autonomous driving", "venue": "In ICCV,", "year": 2015 }, { "authors": [ "Pin-Yu Chen", "Yash Sharma", "Huan Zhang", "Jinfeng Yi", "Cho-Jui Hsieh" ], "title": "Ead: Elastic-net attacks to deep neural networks via adversarial examples", "venue": "In AAAI,", "year": 2018 }, { "authors": [ "Francesco Croce", "Matthias Hein" ], "title": "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Tristan Deleu", "Tobias Würfl", "Mandana Samiei", "Joseph Paul Cohen", "Yoshua Bengio" ], "title": "Torchmeta: A meta-learning library for pytorch", "venue": null, "year": 1909 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "arXiv preprint arXiv:1810.04805,", "year": 2018 }, { "authors": [ "Chelsea Finn", "Pieter Abbeel", "Sergey Levine" ], "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "venue": "In ICML,", "year": 2017 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Identity mappings in deep residual networks", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Gao Huang", "Yu Sun", "Zhuang Liu", "Daniel Sedra", "Kilian Q Weinberger" ], "title": "Deep networks with stochastic depth", "venue": "In ECCV,", "year": 2016 }, { "authors": [ "Ajil Jalal", "Andrew Ilyas", "Constantinos Daskalakis", "Alexandros G Dimakis" ], "title": "The robust manifold defense: Adversarial training using generative models", "venue": "arXiv preprint arXiv:1712.09196,", "year": 2017 }, { "authors": [ "Yunhun Jang", "Hankook Lee", "Sung Ju Hwang", "Jinwoo Shin" ], "title": "Learning what and where to transfer", "venue": "In ICML,", "year": 2019 }, { "authors": [ "Daniel Kang", "Yi Sun", "Dan Hendrycks", "Tom Brown", "Jacob Steinhardt" ], "title": "Testing robustness against unforeseen adversaries", "venue": "arXiv preprint arXiv:1908.08016,", "year": 2019 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "University of Toronto,", "year": 2012 }, { "authors": [ "Alexey Kurakin", "Ian Goodfellow", "Samy Bengio" ], "title": "Adversarial examples in the physical world", "venue": "arXiv preprint arXiv:1607.02533,", "year": 2016 }, { "authors": [ "Cassidy Laidlaw", "Soheil Feizi" ], "title": "Functional adversarial attacks", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Hae Beom Lee", "Taewook Nam", "Eunho Yang", "Sung Ju Hwang" ], "title": "Meta dropout: Learning to perturb latent features for generalization", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Aleksander Madry", "Aleksandar Makelov", "Ludwig Schmidt", "Dimitris Tsipras", "Adrian Vladu" ], "title": "Towards deep learning models resistant to adversarial attacks", "venue": "In ICLR,", "year": 2017 }, { "authors": [ "Pratyush Maini", "Eric Wong", "J Zico Kolter" ], "title": "Adversarial robustness against the union of multiple perturbation models", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Chengzhi Mao", "Ziyuan Zhong", "Junfeng Yang", "Carl Vondrick", "Baishakhi Ray" ], "title": "Metric learning for adversarial robustness", "venue": "In AAAI,", "year": 2019 }, { "authors": [ "Yuval Netzer", "Tao Wang", "Adam Coates", "Alessandro Bissacco", "Bo Wu", "Andrew Y Ng" ], "title": "Reading digits in natural images with unsupervised feature learning", "venue": "In Workshop on Deep Learning and Unsupervised Feature Learning, NeurIPS,", "year": 2011 }, { "authors": [ "Hyeonwoo Noh", "Tackgeun You", "Jonghwan Mun", "Bohyung Han" ], "title": "Regularizing deep neural networks by noise: Its interpretation and optimization", "venue": "NeurIPS,", "year": 2017 }, { "authors": [ "Adam Paszke", "Sam Gross", "Francisco Massa", "Adam Lerer", "James Bradbury", "Gregory Chanan", "Trevor Killeen", "Zeming Lin", "Natalia Gimelshein", "Luca Antiga" ], "title": "Pytorch: An imperative style, high-performance deep learning library", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Jonas Rauber", "Wieland Brendel", "Matthias Bethge" ], "title": "Foolbox: A python toolbox to benchmark the robustness of machine learning models", "venue": 
"In Reliable Machine Learning in the Wild Workshop,", "year": 2017 }, { "authors": [ "Mengye Ren", "Wenyuan Zeng", "Bin Yang", "Raquel Urtasun" ], "title": "Learning to reweight examples for robust deep learning", "venue": "In ICML,", "year": 2018 }, { "authors": [ "Evgenia Rusak", "Lukas Schott", "Roland Zimmermann", "Julian Bitterwolf", "Oliver Bringmann", "Matthias Bethge", "Wieland Brendel" ], "title": "A simple way to make neural networks robust against diverse image corruptions", "venue": "In ECCV,", "year": 2020 }, { "authors": [ "Olga Russakovsky", "Jia Deng", "Hao Su", "Jonathan Krause", "Sanjeev Satheesh", "Sean Ma", "Zhiheng Huang", "Andrej Karpathy", "Aditya Khosla", "Michael Bernstein" ], "title": "Imagenet large scale visual recognition challenge", "venue": "International journal of computer vision,", "year": 2015 }, { "authors": [ "Pouya Samangouei", "Maya Kabkab", "Rama Chellappa" ], "title": "Defense-GAN: Protecting classifiers against adversarial attacks using generative models", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Lukas Schott", "Jonas Rauber", "Matthias Bethge", "Wieland Brendel" ], "title": "Towards the first adversarially robust neural network model on mnist", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Ali Shafahi", "Mahyar Najibi", "Mohammad Amin Ghiasi", "Zheng Xu", "John Dickerson", "Christoph Studer", "Larry S Davis", "Gavin Taylor", "Tom Goldstein" ], "title": "Adversarial training for free", "venue": "NeurIPS,", "year": 2019 }, { "authors": [ "Dinggang Shen", "Guorong Wu", "Heung-Il Suk" ], "title": "Deep learning in medical image analysis", "venue": "Annual review of biomedical engineering,", "year": 2017 }, { "authors": [ "Yang Song", "Taesup Kim", "Sebastian Nowozin", "Stefano Ermon", "Nate Kushman" ], "title": "Pixeldefend: Leveraging generative models to understand and defend against adversarial examples", "venue": "In ICLR,", "year": 2018 }, { "authors": [ "Nitish Srivastava", "Geoffrey Hinton", "Alex Krizhevsky", "Ilya Sutskever", "Ruslan Salakhutdinov" ], "title": "Dropout: a simple way to prevent neural networks from overfitting", "venue": null, "year": 2014 }, { "authors": [ "Sebastian Thrun", "Lorien Pratt (eds" ], "title": "Learning to Learn", "venue": "Kluwer Academic Publishers,", "year": 1998 }, { "authors": [ "Florian Tramèr", "Dan Boneh" ], "title": "Adversarial training and robustness for multiple perturbations", "venue": "In NeurIPS,", "year": 2019 }, { "authors": [ "Florian Tramer", "Nicholas Carlini", "Wieland Brendel", "Aleksander Madry" ], "title": "On adaptive attacks to adversarial example defenses", "venue": "In NeurIPS,", "year": 2020 }, { "authors": [ "Jonathan Uesato", "Brendan O’Donoghue", "Aaron van den Oord", "Pushmeet Kohli" ], "title": "Adversarial risk and the dangers of evaluating against weak attacks", "venue": "arXiv preprint arXiv:1802.05666,", "year": 2018 }, { "authors": [ "Eric Wong", "Leslie Rice", "J. 
Zico Kolter" ], "title": "Fast is better than free: Revisiting adversarial training", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Chang Xiao", "Peilin Zhong", "Changxi Zheng" ], "title": "Enhancing adversarial defense by k-winners-take-all", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Xuwang Yin", "Soheil Kolouri", "Gustavo K Rohde" ], "title": "Gat: Generative adversarial training for adversarial example detection and robust classification", "venue": "In ICLR,", "year": 2020 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "In British Machine Vision Conference,", "year": 2016 }, { "authors": [ "Hongyang Zhang", "Yaodong Yu", "Jiantao Jiao", "Eric P. Xing", "Laurent El Ghaoui", "Michael I. Jordan" ], "title": "Theoretically principled trade-off between robustness and accuracy", "venue": null, "year": 2019 } ]
[ { "heading": "1 INTRODUCTION", "text": "Deep neural networks have demonstrated enormous success on multiple benchmark applications (Amodei et al., 2016; Devlin et al., 2018), by achieving super-human performance on certain tasks. However, to deploy them to safety-critical applications (Shen et al., 2017; Chen et al., 2015; Mao et al., 2019), we need to ensure that the model is robust as well as accurate, since incorrect predictions may lead to severe consequences. Notably, it is well-known that the existing neural networks are highly susceptible to carefully crafted image perturbations which are imperceptible to humans but derail the predictions of these otherwise accurate networks.\nThe emergence of adversarial examples has received significant attention in the research community, and several defense mechanisms have been proposed (Madry et al., 2017; Zhang et al., 2019; Carmon et al., 2019). However, despite a large literature to improve upon the robustness of neural networks, most of the existing defenses leverage the knowledge of the adversaries and are based on the assumption of only a single type of perturbation. Consequently, many of the proposed defenses were circumvented by stronger attacks (Athalye et al., 2018; Uesato et al., 2018; Tramer et al., 2020).\nMeanwhile, several recent works have (Schott et al., 2018; Tramèr & Boneh, 2019) demonstrated the vulnerability of existing defense methods against multiple perturbations. For the desired multi-attack robustness, Tramèr & Boneh (2019); Maini et al. (2020) proposed various strategies to aggregate multiple perturbations during training. However, training with multiple perturbations comes at an additional cost; it increases the training cost by a factor of four over adversarial training, which is already an order of magnitude more costly than standard training. This slowdown factor hinders the research progress of robustness against multiple perturbations due to the large computation overhead incurred during training. Some recent works reduce this cost by reducing the complexity of generating adversarial examples (Shafahi et al., 2019; Wong et al., 2020), however, they are limited to `∞ adversarial training.\nTo address the drawbacks of existing methods, we propose a novel training scheme, Meta Noise Generator with Adversarial Consistency (MNG-AC), which learns instance-dependent noise to\nminimize the adversarial loss across multiple perturbations while enforcing label consistency between them, as illustrated in Figure 1 and explained in details below.\nFirst, we tackle the heavy computational overhead incurred by multi-perturbation training by proposing Stochastic Adversarial Training (SAT), that samples from a distribution of perturbations during training, which significantly accelerates training for multiple perturbations1. Then, based on the assumption that the model should output the same predictions for different perturbations of the same image, we introduce Adversarial Consistency (AC) loss that enforces label consistency across multiple perturbations. Finally, motivated by the noise regularization techniques (Huang et al., 2016; Srivastava et al., 2014; Noh et al., 2017; Lee et al., 2020) which target generalization, we formulate a Meta Noise Generator (MNG) that learns to stochastically perturb a given sample in a meta-learning framework to explicitly improve the generalization and label consistency across multiple attacks. 
In particular, MNG-AC utilizes the generated samples to enforce label consistency across the samples generated by MNG, the adversarial samples, and the clean samples. Consequently, it pushes the decision boundary (see Figure 4) and enforces a smooth and robust network across multiple perturbations.
We validate the efficacy and efficiency of our proposed method by comparing it against existing state-of-the-art methods on the CIFAR-10, SVHN, and Tiny-ImageNet datasets. The experimental results show that our method obtains significantly superior performance over all the baseline methods trained with multiple perturbations, generalizes to diverse perturbations, and substantially reduces the computational cost incurred by training with multiple perturbations. In summary, the major contributions of this paper are as follows:
• We introduce the Adversarial Consistency (AC) loss, which enforces label consistency across multiple perturbations to obtain smooth and robust networks.
• We formulate the Meta Noise Generator, which explicitly meta-learns an input-dependent noise generator, such that it outputs a stochastic noise distribution to improve the model's robustness and adversarial consistency across multiple types of adversarial perturbations.
• We validate our proposed method on various datasets against diverse benchmark adversarial attacks, on which it achieves state-of-the-art performance, highlighting its practical impact.
1 By a factor of four on a single machine with four GeForce RTX 2080Ti GPUs on the CIFAR-10 and SVHN datasets using the Wide ResNet 28-10 (Zagoruyko & Komodakis, 2016) architecture." }, { "heading": "2 RELATED WORK", "text": "Robustness against a single adversarial perturbation. In the past few years, multiple defenses have been proposed to defend against a single type of attack (Madry et al., 2017; Xiao et al., 2020; Zhang et al., 2019; Carmon et al., 2019) and have consequently been circumvented by subsequent attacks (Athalye et al., 2018; Brendel et al., 2018; Tramer et al., 2020). Adversarial-training-based defenses (Madry et al., 2017; Zhang et al., 2019; Carmon et al., 2019) have been the only exceptions that have withstood intense scrutiny and provided empirical gains in adversarial robustness.
Generative models for adversarial robustness. There have been various attempts to leverage the representative power of generative models to improve model robustness. Samangouei et al. (2018); Jalal et al. (2017) project an image onto the generator manifold, which is then classified by the discriminator. Song et al. (2018) uses the sensitivity of generative models to defend against a single perturbation. Yin et al. (2020) proposed a detection method based on input space partitioning. However, Samangouei et al. (2018); Jalal et al. (2017); Song et al. (2018) were shown to be ineffective by stronger attacks (Carlini & Wagner, 2017; Athalye et al., 2018). In contrast to learning a generative model of the adversarial examples, we meta-learn the generator to explicitly learn an input-dependent optimal noise distribution that lowers the adversarial error across multiple perturbations and does not necessarily correspond to any of the attack perturbations.
Robustness against multiple adversarial perturbations. Schott et al. (2018) demonstrated that ℓ∞ adversarial training is highly susceptible to ℓ0/ℓ2-norm adversarial perturbations and used multiple VAEs to defend against multiple perturbations on the MNIST dataset. However, it was not scalable and was limited to the MNIST dataset.
Tramèr & Boneh (2019) investigated the theoretical/empirical trade-offs between multiple perturbations and introduced adversarial training with worst/average perturbations to defend against multiple perturbations. Maini et al. (2020) incorporated multiple perturbations into a single adversary to maximize the adversarial loss. However, computing all the perturbations is impractical for multiple perturbations and large scale datasets. On the other hand, our proposed framework overcomes this limitation, with improved performance over these methods and has a negligible increase in training cost over multi-perturbation adversarial training." }, { "heading": "3 ROBUSTNESS AGAINST MULTIPLE PERTURBATIONS", "text": "We first briefly review single/multi-perturbation adversarial training and introduce Stochastic Adversarial Training (SAT) to reduce the computational cost incurred by training with multiple perturbations. We consider a dataset D over observations x ∈ Rd and labels y ∈ RC with C classes. Let fθ : Rd → RC be a L-layered classifier with parameters θ and classification loss Lcls. Given an attack procedure A(x) with norm-ball BA(x, ε) around x with radius ε for each example, which introduces a perturbation δ, we let xadv = x+ δ denote the corresponding adversarial examples. We consider the `p norm distance under the additive threat model (Laidlaw & Feizi, 2019) and adopt the projected-gradient descent (PGD) (Madry et al., 2017) for crafting the `p perturbations:\nxadv(t+1) = proj BA(x,ε)\n( xadv(t) + argmax\n||v||A≤αA vTAOxadv (t) Lcls\n( fθ ( xadv(t) ) , y )) , (1)\nwhere xadv0 is chosen at random within BA(x, ε), αA is the step size, proj is the projection operator projecting the input onto the norm ball BA(x, ε), and xadv(t+1) denotes the adversarial example at the t-th PGD step. We will refer the approximation of the maximum loss by an attack procedure A(x), such that maxδ∈BA(x,ε) Lcls (fθ (x+ δ) , y) ≈ Lcls (fθ (A (x)) , y) for the rest of our paper. Single-perturbation adversarial training. In the standard single-perturbation adversarial training (Kurakin et al., 2016; Madry et al., 2017), the model optimizes the network using a min-max formulation. More formally, the inner maximization generates the adversarial perturbation by maximizing the loss, while the outer minimization minimizes the loss on the generated examples.\nmin θ\nE(x,y)∼D Lcls (fθ (A (x)) , y) . (2)\nThe majority of existing single-perturbation defenses are primarily able to defend against a single category of adversarial perturbation. However, this limits the generalization of these methods to perturbations that are unseen during training (Schott et al., 2018; Tramèr & Boneh, 2019), which has been referred to as overfitting on the particular type of training perturbation.\nMulti-perturbation adversarial training. Tramèr & Boneh (2019) extended the adversarial training to multiple perturbations by optimizing the outer objective in Eq. (2) on the strongest/union of adversarial perturbations for each input example. Their proposed strategies can more formally be defined as follows:\n1. The maximum over all perturbations: It optimizes the outer objective in Eq. (2) on the strongest adversarial perturbation from the whole set of additive adversarial perturbations\nmin θ\nE(x,y)∼D [ argmaxk Lcls (fθ (Ak (x)) , y) ] . (3)\n2. The average over all perturbations: It optimizes the outer objective in Eq. (2) on the whole set of n additive perturbations.\nmin θ\nE(x,y)∼D 1\nn k=n∑ k=1 Lcls (fθ (Ak (x) , y)) . 
Recently, Maini et al. (2020) proposed “Multi Steepest Descent” (MSD), incorporating the different perturbations into the direction of steepest descent. However, the practicality of all these methods is limited due to the increased computational overhead during training.
Stochastic Adversarial Training (SAT). To overcome this limitation, we propose Stochastic Adversarial Training to defend against multiple adversarial perturbations. Specifically, we conjecture that it is essential to cover the threat model during training, rather than utilizing all the perturbations simultaneously. We formulate the threat model as a random attack $\mathcal{A}(x)$ sampled uniformly from a perturbation set $S$ during each episode (or batch) of training, which prevents overfitting on a particular adversarial perturbation. In this work, we consider the $\ell_p$-bounded perturbation set, and we sample the attack procedure $\mathcal{A}(x)$ with its corresponding norm-ball $B_{\mathcal{A}}(x, \varepsilon)$ from the perturbation set $S$ as follows:

$$S = \{\mathcal{A}_{1}(x), \ldots, \mathcal{A}_{n}(x)\}, \quad k \sim \mathrm{Cat}\big((1/n, \ldots, 1/n)\big), \quad \mathcal{A}(x) = S_{k}(x), \qquad (5)$$

where $\mathrm{Cat}$ is the categorical distribution and $n$ is the number of attacks in the perturbation set $S$. Our proposed SAT optimizes the outer objective in Eq. (2) using the sampled attack procedure $\mathcal{A}(x)$ and is a drastic simplification of the average strategy in Eq. (4), which makes it highly efficient for multiple perturbations. It is important to note that, unlike the average and max strategies, SAT can be applied to any perturbation set at a constant cost, and it promotes generalization and convergence (due to its stochasticity) by preventing overfitting on a single type of perturbation." }, { "heading": "4 LEARNING TO GENERATE NOISE FOR MULTI-ATTACK ROBUSTNESS", "text": "In this section, we introduce our framework MNG-AC, which leverages an adversarial consistency (AC) loss and a meta-noise generator (MNG) to help the model generalize to multiple perturbations. Let $g_\phi : \mathbb{R}^d \to \mathbb{R}^d$ denote the generator with parameters $\phi$, and let $x^{adv}_\theta$ be the adversarial examples generated by SAT for an attack $\mathcal{A}(x)$ sampled uniformly from a perturbation set $S$ with norm-ball $B_{\mathcal{A}}(x, \varepsilon)$. We sample $z \sim \mathcal{N}(0, I)$ as input to our generator, jointly with the clean examples $x$, to generate the noise-augmented samples $x^{aug}_\phi$ projected onto the norm-ball $B_{\mathcal{A}}(x, \varepsilon)$. Note that, since MNG learns the noise to minimize the adversarial loss, it is essential to project the generated noise onto the norm-ball $B_{\mathcal{A}}(x, \varepsilon)$ corresponding to the sampled attack procedure $\mathcal{A}(x)$. The total loss function $\mathcal{L}_{total}$ for the classifier consists of two terms, the SAT classification loss and the adversarial consistency loss:

$$\mathcal{L}_{total} = \underbrace{\frac{1}{B} \sum_{i=1}^{B} \mathcal{L}_{cls}\big(\theta \mid x^{adv(i)}_{\theta}, y^{(i)}\big)}_{\text{SAT classification loss}} + \beta \cdot \underbrace{\mathcal{L}_{ac}\big(p^{clean(i)}; p^{adv(i)}; p^{aug(i)}\big)}_{\text{adversarial consistency loss}}, \qquad (6)$$

where $B$ is the batch size, $\beta$ is a hyper-parameter determining the strength of the AC loss $\mathcal{L}_{ac}$, and $p^{clean}, p^{adv}, p^{aug}$ denote the posterior distributions $p(y \mid x^{clean})$, $p(y \mid x^{adv}_\theta)$, $p(y \mid x^{aug}_\phi)$, computed by applying the softmax function to the logits for $x^{clean}$, $x^{adv}$, and $x^{aug}$, respectively. Specifically, $\mathcal{L}_{ac}$ is the Jensen-Shannon divergence (JSD) among the posterior distributions:

$$\mathcal{L}_{ac} = \frac{1}{3}\Big(D_{KL}\big(p^{clean} \,\|\, M\big) + D_{KL}\big(p^{adv} \,\|\, M\big) + D_{KL}\big(p^{aug} \,\|\, M\big)\Big), \qquad (7)$$

where $M = (p^{clean} + p^{adv} + p^{aug})/3$. Consequently, $\mathcal{L}_{ac}$ enforces stability and insensitivity across a diverse range of inputs, based on the assumption that the classifier should output similar predictions when fed perturbed versions of the same image.
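A minimal PyTorch sketch of the adversarial consistency loss in Eq. (7), computed from the three sets of logits; the clamp constant guards against log(0) and is our own numerical choice:

```python
import torch
import torch.nn.functional as F

def adversarial_consistency(logits_clean, logits_adv, logits_aug):
    """Jensen-Shannon divergence among the clean, adversarial, and augmented
    posteriors (Eq. (7)): average KL of each posterior to their mixture M."""
    p_clean = F.softmax(logits_clean, dim=1)
    p_adv = F.softmax(logits_adv, dim=1)
    p_aug = F.softmax(logits_aug, dim=1)
    m = (p_clean + p_adv + p_aug) / 3.0                         # mixture distribution M
    kl = lambda p: F.kl_div(m.clamp(min=1e-7).log(), p, reduction="batchmean")
    return (kl(p_clean) + kl(p_adv) + kl(p_aug)) / 3.0
```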
Algorithm 1: Learning to generate noise for multi-attack robustness
Input: dataset D, T inner gradient steps, batch size B, perturbation set S. Output: final model parameters θ.
1: for n = {1, . . . , N} do
2:   Sample a mini-batch of size B.
3:   Sample an attack procedure A(x) with its corresponding norm-ball B_A(x, ε) using Eq. (5).
4:   Generate the adversarial examples for A(x) using Eq. (1).
5:   Sample z ~ N(0, I) and generate x^aug_φ = proj_{B_A(x,ε)}(x + g_φ(z, x)) using MNG, where B_A(x, ε) is the norm-ball corresponding to the attack procedure sampled in Step 3.
6:   Update θ to minimize Eq. (6).
7:   Initialize θ^(0) = θ.
8:   for t = {1, . . . , T} do
9:     Update θ^(t) using Eq. (8).
10:   end for
11:   Descend a single step to update θ^(T) to θ^(T+1) by Eq. (9).
12:   Update the parameters φ of the generator by Eq. (10).
13: end for

Recently, Rusak et al. (2020) formulated an adversarial noise generator that learns adversarial noise to improve robustness to common corruptions. However, our goal is different: robustness against multiple adversarial attacks is a much more challenging task than robustness against common corruptions. To generate the augmented samples for our purpose, we explicitly perturb the input examples for generalization across multiple perturbations. In particular, MNG meta-learns (Thrun & Pratt, 1998; Finn et al., 2017) the parameters $\phi$ of a noise generator $g_\phi$ to produce an input-dependent noise distribution that alleviates the issue of generalization across multiple adversaries. The standard approach to train our adversarial classifier jointly with MNG is bi-level optimization (Finn et al., 2017). However, bi-level optimization for adversarial training would be computationally expensive.
To tackle this challenge, we adopt an online approximation (Ren et al., 2018; Jang et al., 2019) to update $\theta$ and $\phi$ in a single optimization loop. We alternately update the parameters $\theta$ of the classifier and the parameters $\phi$ of MNG. In particular, we first update the parameters $\theta$ using Eq. (6) (Step 6 in Algorithm 1). Then, given the current parameters $\theta$ of our adversarial classifier, we update the MNG parameters $\phi$ using the following training scheme:
1. Update model parameters for T steps. First, we update $\theta$ to minimize $\mathcal{L}_{cls}(\theta \mid x^{aug}_\phi, y)$ for $T$ steps, which ensures that the classifier learns from the generated samples constructed by MNG. It explicitly increases the influence of the noise-augmented samples on the classifier in the inner loop. More specifically, for a learning rate $\alpha$ and projection operator $\mathrm{proj}$, $\theta^{(t)}$ moves along the following descent direction on a mini-batch of training data:

$$\theta^{(t+1)} = \theta^{(t)} - \alpha \cdot \frac{1}{B} \sum_{i=1}^{B} \nabla_{\theta} \mathcal{L}_{cls}\big(\theta^{(t)} \mid x^{aug(i)}_{\phi}, y^{(i)}\big), \quad \text{where } x^{aug}_{\phi} = \mathrm{proj}_{B_{\mathcal{A}}(x,\varepsilon)}\big(x + g_{\phi}(z, x)\big). \qquad (8)$$

2. Adapt model parameters in a single step. Second, we perform a one-step update from $\theta^{(T)}$ to $\theta^{(T+1)}$ to minimize the SAT loss from Eq. (6). This step explicitly models the adaptation of the adversarial model parameters in the presence of the noise-augmented data using a single update step:

$$\theta^{(T+1)} = \theta^{(T)} - \alpha \cdot \frac{1}{B} \sum_{i=1}^{B} \nabla_{\theta} \mathcal{L}_{cls}\big(\theta^{(T)} \mid x^{adv(i)}_{\theta}, y^{(i)}\big). \qquad (9)$$

3. Update generator parameters. In the last step, after receiving feedback from the classifier, we measure the SAT loss from Eq. (6) and adapt $\phi$ to minimize this loss. In particular, $\phi$ performs the following update step to facilitate the classifier parameters $\theta$ in the next step:

$$\phi = \phi - \alpha \cdot \frac{1}{B} \sum_{i=1}^{B} \nabla_{\phi} \mathcal{L}_{cls}\big(\theta^{(T+1)} \mid x^{adv(i)}_{\theta}, y^{(i)}\big). \qquad (10)$$
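A simplified, hedged sketch of this three-step meta-update (Eqs. (8)-(10)) is given below. It uses `torch.func.functional_call` (PyTorch 2.x) to differentiate through the inner updates, whereas the paper reports using TorchMeta; the variable names, the `proj` argument, and the SGD-style inner steps are illustrative assumptions:

```python
import torch
from torch.func import functional_call

def mng_update(model, generator, gen_opt, loss_fn, x, y, x_adv, z, proj, alpha, T=1):
    """Meta-update for the generator: adapt the classifier on generated samples
    (Eq. (8)), take one step on the SAT loss (Eq. (9)), then backprop the
    resulting SAT loss into the generator (Eq. (10))."""
    params = {k: v.detach().requires_grad_(True) for k, v in model.named_parameters()}
    x_aug = proj(x + generator(z, x))                         # noise-augmented inputs
    for _ in range(T):                                        # inner steps on x_aug
        loss = loss_fn(functional_call(model, params, (x_aug,)), y)
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        params = {k: v - alpha * g for (k, v), g in zip(params.items(), grads)}
    loss = loss_fn(functional_call(model, params, (x_adv,)), y)      # Eq. (9)
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    params = {k: v - alpha * g for (k, v), g in zip(params.items(), grads)}
    meta_loss = loss_fn(functional_call(model, params, (x_adv,)), y)  # Eq. (10)
    gen_opt.zero_grad(); meta_loss.backward(); gen_opt.step()
```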
Overall, the generator minimizes the loss on the adversarial examples sampled from the perturbation set $S$. Consequently, $\phi$ in Eq. (10) depends on $\theta^{(T+1)}$, which depends on $\theta^{(T)}$ (see Eq. (9)), which in turn depends on $x^{aug}_\phi$ (see Eq. (8)) and acts as a path for the flow of gradients. Similarly, the gradients for $\theta^{(T+1)}$ are chained through the $T$ inner steps, since $\theta^{(T+1)}$ depends on $\theta^{(T)}$, which depends on $\theta^{(0)}$; we use TorchMeta (Deleu et al., 2019) for the double backpropagation. We list the detailed algorithm in Algorithm 1. Formally, the overall objective can be summarized as:

$$\min_{\phi}\; \frac{1}{B} \sum_{i=1}^{B} \mathcal{L}_{cls}\big(\theta^{(T+1)} \mid x^{adv(i)}_{\theta}, y^{(i)}\big)$$
$$\text{subject to}\quad \theta^{(T+1)} = \theta^{(T)} - \alpha \cdot \frac{1}{B} \sum_{i=1}^{B} \nabla_{\theta} \mathcal{L}_{cls}\big(\theta^{(T)} \mid x^{adv(i)}_{\theta}, y^{(i)}\big),$$
$$\theta^{(t+1)} = \theta^{(t)} - \alpha \cdot \frac{1}{B} \sum_{i=1}^{B} \nabla_{\theta} \mathcal{L}_{cls}\big(\theta^{(t)} \mid x^{aug(i)}_{\phi}, y^{(i)}\big), \quad t = 0, \ldots, T-1. \qquad (11)$$

To summarize, MNG-AC consists of perturbation sampling to generate adversarial examples. Then, it perturbs the clean examples in a meta-learning framework to explicitly lower the adversarial classification loss on the sampled perturbation. Lastly, the adversarial classifier utilizes the generated samples, adversarial samples, and clean samples to optimize the classification and adversarial consistency losses.
Intuition behind our framework. Unlike existing adversarial defenses that aim for robustness against a single perturbation, our proposed approach targets the realistic scenario of robustness against multiple perturbations. Our motivation is that meta-learning the noise distribution to minimize the stochastic adversarial classification loss allows learning the optimal noise to improve multi-perturbation generalization. Based on the assumption that the model should output similar predictions for perturbed versions of the same image, we enforce the adversarial consistency loss, which enforces label consistency across multiple perturbations. We empirically illustrate that our proposed training scheme increases the smoothness of the model (see Figure 3) and pushes the decision boundary (see Figure 4), which confirms our hypothesis for multi-perturbation generalization." }, { "heading": "5 EXPERIMENTS", "text": "" }, { "heading": "5.1 EXPERIMENTAL SETUP", "text": "Baselines and our model. We compare our method MNG-AC with the standard network (Nat) and state-of-the-art single-perturbation baselines, including Madry et al. (2017) (Adv_p) for the ℓ∞, ℓ1, and ℓ2 norms, and Zhang et al. (2019) (TRADES∞) and Carmon et al. (2019) (RST∞) for the ℓ∞ norm. We also consider state-of-the-art multi-perturbation baselines: adversarial training with the maximum (see Eq. (3)) (Adv_max) and the average (see Eq. (4)) (Adv_avg) over all perturbations (Tramèr & Boneh, 2019), and Multi Steepest Descent (MSD) (Maini et al., 2020).
Datasets. We evaluate our method on multiple benchmark datasets, including CIFAR-10 (Krizhevsky, 2012) and SVHN (Netzer et al., 2011) on the Wide ResNet 28-10 (Zagoruyko & Komodakis, 2016) architecture, and Tiny-ImageNet2 on the ResNet-50 (He et al., 2016) architecture.
2 https://tiny-imagenet.herokuapp.com/
Evaluation setup. We evaluate the proposed defense scheme and the baselines against perturbations generated by state-of-the-art attack methods, using the same attack parameters as Tramèr & Boneh (2019) for training and evaluation.
We validate the clean accuracy (Acc_clean) and the worst-case (Acc_union) and average (Acc_avg) adversarial accuracy across all the perturbation sets for all the models. For ℓ∞ attacks, we use PGD (Madry et al., 2017), Brendel and Bethge (Brendel et al., 2019), and AutoAttack (Croce & Hein, 2020).

Table 1: Comparison of robustness against multiple types of perturbations. All values report the mean and standard deviation over three trials with randomly chosen seeds; the best and second-best results are highlighted in bold and underlined, respectively, in the original table. Time denotes the training time in hours. For CIFAR-10 and SVHN, we use ε = {8/255, 2000/255, 80/255} and α = {0.004, 1.0, 0.1} for ℓ∞, ℓ1, and ℓ2 attacks, respectively. For Tiny-ImageNet, we use ε = {4/255, 2000/255, 80/255} and α = {0.004, 1.0, 0.1} for ℓ∞, ℓ1, and ℓ2 attacks, respectively. We report the worst-case accuracy for all the attacks and defer the breakdown of all attacks to Appendix B.

CIFAR-10:
| Model | Acc_clean | ℓ∞ | ℓ1 | ℓ2 | Acc_union | Acc_avg | Time (h) |
|---|---|---|---|---|---|---|---|
| Nat (Zagoruyko & Komodakis, 2016) | 94.7 ± 0.1 | 0.0 ± 0.0 | 4.4 ± 0.8 | 19.4 ± 1.4 | 0.0 ± 0.0 | 7.9 ± 0.3 | 0.4 |
| Adv∞ (Madry et al., 2017) | 86.8 ± 0.1 | 44.9 ± 0.7 | 12.8 ± 0.6 | 69.3 ± 0.4 | 12.9 ± 0.5 | 42.6 ± 0.4 | 4.5 |
| Adv1 | 93.3 ± 0.4 | 0.0 ± 0.0 | 78.1 ± 1.8 | 0.0 ± 0.0 | 0.0 ± 0.0 | 25.1 ± 1.6 | 8.1 |
| Adv2 | 91.7 ± 0.2 | 20.7 ± 0.3 | 27.7 ± 0.7 | 76.8 ± 0.4 | 17.9 ± 0.8 | 47.6 ± 0.4 | 3.7 |
| TRADES∞ (Zhang et al., 2019) | 84.7 ± 0.3 | 48.9 ± 0.7 | 17.9 ± 0.6 | 69.4 ± 0.3 | 17.2 ± 0.6 | 45.4 ± 0.3 | 5.2 |
| RST∞ (Carmon et al., 2019) | 88.9 ± 0.2 | 54.9 ± 1.8 | 22.0 ± 0.5 | 73.6 ± 0.1 | 21.1 ± 1.0 | 50.2 ± 0.5 | 58.8 |
| Adv_avg (Tramèr & Boneh, 2019) | 87.1 ± 0.2 | 33.8 ± 0.7 | 49.0 ± 0.3 | 74.9 ± 0.4 | 31.0 ± 1.4 | 52.6 ± 0.5 | 16.9 |
| Adv_max (Tramèr & Boneh, 2019) | 85.4 ± 0.3 | 39.9 ± 0.9 | 44.6 ± 0.2 | 73.2 ± 0.2 | 35.7 ± 0.3 | 52.5 ± 0.3 | 16.3 |
| MSD (Maini et al., 2020) | 82.6 ± 0.0 | 43.7 ± 0.2 | 41.6 ± 0.2 | 70.6 ± 1.1 | 35.8 ± 0.1 | 52.0 ± 0.4 | 16.7 |
| MNG-AC (Ours) | 81.5 ± 0.3 | 42.2 ± 0.9 | 55.0 ± 1.2 | 71.5 ± 0.1 | 41.6 ± 0.8 | 56.2 ± 0.2 | 11.2 |

SVHN:
| Model | Acc_clean | ℓ∞ | ℓ1 | ℓ2 | Acc_union | Acc_avg | Time (h) |
|---|---|---|---|---|---|---|---|
| Nat (Zagoruyko & Komodakis, 2016) | 96.8 ± 0.1 | 0.0 ± 0.0 | 4.4 ± 0.8 | 19.4 ± 1.4 | 0.0 ± 0.0 | 7.9 ± 0.3 | 0.6 |
| Adv∞ (Madry et al., 2017) | 92.8 ± 0.2 | 46.2 ± 0.6 | 3.0 ± 0.3 | 59.2 ± 0.7 | 3.0 ± 0.3 | 36.2 ± 0.3 | 6.2 |
| Adv1 | 92.4 ± 0.9 | 0.0 ± 0.0 | 77.9 ± 6.3 | 0.0 ± 0.0 | 0.0 ± 0.0 | 23.9 ± 2.1 | 11.8 |
| Adv2 | 94.9 ± 0.1 | 18.7 ± 0.6 | 30.3 ± 0.3 | 79.3 ± 0.1 | 16.4 ± 0.7 | 42.8 ± 0.2 | 6.1 |
| TRADES∞ (Zhang et al., 2019) | 93.9 ± 0.1 | 49.9 ± 1.7 | 1.6 ± 0.3 | 56.0 ± 1.4 | 1.6 ± 0.3 | 35.8 ± 0.6 | 7.9 |
| RST∞ (Carmon et al., 2019) | 95.6 ± 0.0 | 60.9 ± 2.0 | 0.7 ± 0.6 | 60.6 ± 0.6 | 0.7 ± 0.6 | 40.7 ± 0.8 | 112.5 |
| Adv_avg (Tramèr & Boneh, 2019) | 92.6 ± 0.3 | 17.4 ± 2.3 | 54.2 ± 2.9 | 74.7 ± 0.1 | 16.6 ± 1.3 | 43.0 ± 1.0 | 24.1 |
| Adv_max (Tramèr & Boneh, 2019) | 88.2 ± 1.3 | 5.9 ± 1.2 | 48.3 ± 4.1 | 31.0 ± 5.0 | 5.8 ± 1.7 | 26.7 ± 2.5 | 22.7 |
| MNG-AC (Ours) | 93.7 ± 0.1 | 33.7 ± 1.9 | 47.4 ± 2.2 | 77.6 ± 1.0 | 30.3 ± 1.8 | 52.6 ± 0.5 | 11.9 |

Tiny-ImageNet:
| Model | Acc_clean | ℓ∞ | ℓ1 | ℓ2 | Acc_union | Acc_avg | Time (h) |
|---|---|---|---|---|---|---|---|
| Nat (He et al., 2016) | 62.8 ± 0.4 | 0.0 ± 0.0 | 2.7 ± 0.3 | 12.6 ± 0.8 | 0.0 ± 0.0 | 5.1 ± 0.4 | 0.9 |
| Adv∞ (Madry et al., 2017) | 54.2 ± 0.4 | 29.6 ± 0.1 | 31.8 ± 1.0 | 42.5 ± 0.6 | 19.8 ± 1.1 | 33.8 ± 0.1 | 4.3 |
| Adv1 | 57.8 ± 0.2 | 10.5 ± 0.7 | 39.3 ± 1.0 | 41.9 ± 0.0 | 10.1 ± 0.7 | 30.4 ± 0.1 | 12.9 |
| Adv2 | 59.5 ± 0.1 | 5.2 ± 0.6 | 37.2 ± 0.4 | 44.9 ± 0.1 | 5.2 ± 0.6 | 29.1 ± 0.0 | 3.7 |
| TRADES∞ (Zhang et al., 2019) | 48.2 ± 0.2 | 28.7 ± 0.9 | 30.9 ± 0.2 | 35.8 ± 0.7 | 26.1 ± 0.9 | 32.8 ± 0.1 | 5.8 |
| Adv_avg (Tramèr & Boneh, 2019) | 56.0 ± 0.0 | 23.7 ± 0.2 | 38.0 ± 0.2 | 44.6 ± 1.8 | 23.6 ± 0.3 | 35.4 ± 0.7 | 26.8 |
| Adv_max (Tramèr & Boneh, 2019) | 53.5 ± 0.0 | 29.8 ± 0.1 | 33.4 ± 0.3 | 42.4 ± 1.0 | 29.0 ± 0.3 | 35.3 ± 0.4 | 20.8 |
| MNG-AC (Ours) | 53.1 ± 0.3 | 27.4 ± 0.7 | 39.6 ± 0.7 | 44.8 ± 0.1 | 27.4 ± 0.8 | 37.2 ± 0.6 | 10.4 |
For `2 attacks, we use CarliniWagner (Carlini & Wagner, 2017), PGD (Madry et al., 2017), Brendel and Bethge (Brendel et al., 2019), and AutoAttack (Croce & Hein, 2020). For `1 attacks, we use SLIDE (Tramèr & Boneh, 2019), Salt and pepper (Rauber et al., 2017), and EAD attack (Chen et al., 2018). We provide a detailed description of the experimental setup in Appendix A." }, { "heading": "5.2 COMPARISON OF ROBUSTNESS AGAINST MULTIPLE PERTURBATIONS", "text": "Results with CIFAR-10 dataset. Table 1 shows the experimental results for the CIFAR-10 dataset. It is evident from the results that MNG-AC achieves a relative improvement of ∼ 6% and ∼ 4% on the Accunionadv and Acc avg adv metric over the state-of-the-art methods trained on multiple perturbations. Moreover, MNG-AC achieves ∼ 33% reduction in training time compared to the multi-perturbations training baselines. It is also worth mentioning that, MNG-AC also shows an improvement over Advmax, which is fundamentally designed to address the worst perturbation.\nResults with SVHN dataset. The results for the SVHN dataset are shown in Table 1. We make the following observations from the results: (1) Firstly, MNG-AC significantly outperforms Advavg, Advmax by ∼ 14% and ∼ 25% on Accunionadv metric. Furthermore, it achieves an improvement of ∼ 7.2% and ∼ 26% on Accavgadv metric over Advavg, Advmax respectively. (2) Secondly, MNG-AC leads to a ∼ 50% reduction in training time compared to the multi-perturbation training baselines.\nInterestingly, MNG-AC achieves significant better performance over `1 adversarial training with comparable training time which illustrates the utility of our method over standard adversarial training.\nResults with Tiny-ImageNet. We also evaluate our method on Tiny-ImageNet to verify that it performs well on complex datasets. In Table 1 we observe that MNG-AC outperforms the multiperturbation training baselines and achieves comparable performance to the single-perturbation baselines. Only against `∞ perturbations, we notice that Advmax achieves better performance. We believe this is an artefact of the inherent trade-off across multiple perturbations (Tramèr & Boneh, 2019; Schott et al., 2018). Interestingly, MNG-AC even achieves comparable performance to the single perturbation baselines trained on `1 and `2 norm. This demonstrates the effectiveness of MNG in preventing overfitting over a single attack, and it’s generalization ability to diverse types of attacks." }, { "heading": "5.3 ABLATION STUDIES", "text": "Component analysis. To further investigate our training scheme, we dissect the effectiveness of various components in Table 2. First, we examine that SAT leads to a ∼ 68% and ∼ 30% reduction in training time over multiple perturbations baselines and MNG-AC for both the datasets, however, it does not improve the adversarial robustness. Then, we analyze the impact of our meta-noise generator by injecting random noise z ∼ N (0, I) to the inputs for the generation of augmented samples. We observe that it significantly improves the performance over the SAT with a marginal increase in the training time. Furthermore, leveraging MNG our combined framework MNG-AC achieves consistent improvements, outperforming all the baselines, demonstrating the efficacy of our meta-learning scheme to defend against multiple perturbations.\nEffect of hyperparameters. We further analyze the impact of β in our augmentation loss (see Eq. (6)) in Figure 2. We evaluate the worst-attack performance across all `p norm adversarial attacks. 
Our results show that as the value of β increases the performance on `∞ and `1 attacks improves significantly. In particular, the performance with `∞ and `1 attack improve by 4% with an increase in the weight of adversarial consistency loss. However, an increase in β also leads to a reduction of ∼ 3% in the robustness against the `2 attacks, which is in line with the previous works that have showcased an inherent trade-off between various attacks theoretically and empirically (Tramèr & Boneh, 2019; Schott et al., 2018)." }, { "heading": "5.4 FURTHER ANALYSIS OF OUR DEFENSE", "text": "Results on unforseen adversaries. We further evaluate our model on various unforeseen perturbations (Kang et al., 2019) namely we evaluate on the Elastic, `∞-JPEG, `1-JPEG and ` − 2-JPEG attacks. Note that, even though adversarial training methods do not generalize beyond the threat model, we observe that MNG-VS improves the performance on these adversaries. We compare MNGSAT to the baselines trained with multiple perturbations on the SVHN dataset in Table 3. We notice that even though, Advmax achieves better performance on `p-JPEG attacks, it obtains the minimum robustness across the Accunionadv metric. In contrast, MNG-AC generalizes better over both the baselines for the worst-attack in the set of unforeseen perturbations.\nVisualization of loss landscape. As further qualitative analysis of the effect of MNG-AC, we compare the loss surface of various methods against `∞, `1, and `2 norm attack in Figure 3. We can observe that in most of the instances when trained with a single adversary, the adversary can find a direction orthogonal to that explored during training; for example, `1 attack results in a non-smooth\nTable 3: Performance of MNG-AC against unforseen adversaries on SVHN dataset.\nloss surface for both `∞ and `2 adversarial training. On the contrary, MNG-AC achieves smoother loss surface across all types of attacks which suggests that the gradients modelled by our model are closer to the optimum global landscape. See Appendix B for the loss landscape on CIFAR-10.\nVisualization of decision boundary. Finally, we visualize the learned decision boundary on binaryclassification task across multiple attacks in Figure 4. We can observe that MNG-AC obtains the least error against all the attacks compared to the baselines trained on multiple perturbations. Furthermore, the consistency regularization embeds multiple perturbations onto the same latent space, which pushes them away from the decision boundary that in turn improves the overall robustness. See Appendix B for visualization of the examples generated by our proposed meta-noise generator." }, { "heading": "6 CONCLUSION", "text": "We tackled the problem of robustness against multiple adversarial perturbations. Existing defense methods are tailored to defend against single adversarial perturbation which is an artificial setting to evaluate in real-life scenarios where the adversary will attack the system in any way possible. To this end, we propose a novel Meta-Noise Generator (MNG) that learns to stochastically perturb adversarial examples by generating output noise across diverse perturbations. Then we train the model using Adversarial Consistency loss that accounts for label consistency across clean, adversarial, and augmented samples. 
Additionally, to resolve the problem of computation overhead with conventional adversarial training methods for multiple perturbations, we introduce a Stochastic Adversarial Training (SAT) which samples a perturbation from the distribution of perturbations. We believe that our method can be a strong guideline when other researchers pursue similar tasks in the future." }, { "heading": "A EXPERIMENTAL SETUP", "text": "A.1 DATASETS\n1. CIFAR-10. This dataset (Krizhevsky, 2012) contains 60,000 images with 5,000 images for training and 1,000 images for test for each class. Each image is sized 32× 32, we use the Wide ResNet 28-10 architecture (Zagoruyko & Komodakis, 2016) as a base network for this dataset.\n2. SVHN. This dataset (Netzer et al., 2011) contains 73257 training and 26032 testing images of digits and numbers in natural scene images containing ten-digit classes. Each image is sized 32× 32, we use the Wide ResNet 28-10 architecture similar to the CIFAR-10 dataset as the base network. 3. Tiny-ImageNet. This dataset 3 is a subset of ImageNet (Russakovsky et al., 2015) dataset, consisting of 500, 50, and 50 images for training, validation, and test dataset, respectively. This dataset contains 64 × 64 size images from 200 classes, we use ResNet50 (He et al., 2016) as a base network for this dataset.\nA.2 TRAINING SETUP\nWe use the SGD optimizer with momentum 0.9 and weight decay 5 ·10−4 to train all our models with cyclic learning rate with a maximum learning rate λ that increases linearly from 0 to λ over first N/2 epochs and then decreases linearly fromN/2 to 0 in the remainder epochs, as recommended by Wong et al. (2020) for fast convergence of adversarial training. We train all the models for 30 epochs on a single machine with four GeForce RTX 2080Ti using WideResNet 28-10 architecture (Zagoruyko & Komodakis, 2016). We use the maximum learning rate of λ = 0.21 for all our experiments. We use β = 16 for all the experiments with our meta noise generator. The generator is formulated as a convolutional network with four 3×3 convolutional layers with LeakyReLU activations and one residual connection from input to output. We use T = 2 for all our experiments and all our algorithms are implemented using Pytorch (Paszke et al., 2019) and TorchMeta (Deleu et al., 2019). We use the weight for the KL divergence (β = 6.0) for TRADES and RST in all our experiments. We replicate all the baselines on SVHN and TinyImageNet since most of the baseline methods have reported their results on MNIST and CIFAR-10. Unfortunately, we found that MSD Maini et al. (2020) did not converge for larger datasets even after our extensive hyperparameter-search. We believe that this is due to the the change in formulation of the inner optimization which leads to a difficulty in convergence for larger datasets. Since the authors also report their results on CIFAR-10, we do not use it as a baseline for other datasets.\nA.3 EVALUATION SETUP\nFor `∞ perturbations, we use PGD (Madry et al., 2017), Brendel and Bethge attack (Brendel et al., 2019), and AutoAttack (Croce & Hein, 2020). For `2 perturbations, we use CarliniWagner attack (Carlini & Wagner, 2017), PGD (Madry et al., 2017), Brendel and Bethge attack (Brendel et al., 2019), and AutoAttack (Croce & Hein, 2020). For `1 perturbations, we use SLIDE (Tramèr & Boneh, 2019), Salt and pepper (Rauber et al., 2017), and EAD attack (Chen et al., 2018). 
For CIFAR-10 and SVHN, we use ε = {8/255, 2000/255, 80/255} and α = {0.004, 1.0, 0.1} for the ℓ∞, ℓ1, and ℓ2 attacks, respectively. For Tiny-ImageNet, we use ε = {4/255, 2000/255, 80/255} and α = {0.004, 1.0, 0.1} for the ℓ∞, ℓ1, and ℓ2 attacks, respectively. We use 10 steps of the PGD attack for ℓ∞ and ℓ2 during training. For ℓ1 adversarial training, we use 20 steps during training and 100 steps during evaluation. We use the code provided by the authors for evaluation against AutoAttack (Croce & Hein, 2020), and the Foolbox (Rauber et al., 2017) library for all the other attacks." }, { "heading": "B MORE EXPERIMENTAL RESULTS", "text": "Due to the length limit of our paper, we provide a breakdown of all the attacks for CIFAR-10 in Table 4, for SVHN on Wide ResNet 28-10 in Table 5, and for Tiny-ImageNet on ResNet50 in Table 6. In addition, we analyze the noise learned by our meta-learning framework on multiple datasets and the loss landscape on the CIFAR-10 dataset.

3https://tiny-imagenet.herokuapp.com/

PGD-ℓ∞ 46.9± 0.5 0.40± 0.7 23.6± 0.2 52.0± 0.6 56.9± 0.1 35.2± 0.8 42.2± 1.1 45.4± 0.4 44.5± 1.1
PGD-Foolbox 54.7± 0.4 0.33± 0.6 35.3± 0.4 57.8± 0.5 62.9± 0.3 45.0± 0.4 50.4± 0.4 51.7± 0.8 50.8± 0.8
AutoAttack 44.9± 0.7 0.0± 0.0 20.7± 0.4 48.8± 1.1 53.9± 0.3 33.8± 0.7 39.9± 0.9 42.7± 0.2 42.8± 0.8
Brendel & Bethge 49.9± 1.1 0.0± 0.0 26.8± 0.3 52.1± 0.7 56.5± 1.8 39.6± 0.7 45.8± 0.9 48.3± 0.4 46.8± 0.9

All ℓ∞ attacks 44.9± 0.7 0.0± 0.0 20.7± 0.3 48.9± 0.7 54.9± 1.8 33.8± 0.7 39.9± 0.9 43.7± 0.2 42.2± 0.9

PGD-ℓ1 12.8± 0.6 91.6± 1.4 27.7± 0.7 17.9± 0.6 22.0± 0.5 49.0± 0.3 44.6± 0.2 46.8± 1.4 55.0± 1.2
PGD-Foolbox 35.2± 0.7 92.3± 1.3 53.1± 0.5 40.3± 0.7 44.6± 0.3 64.5± 0.2 60.7± 0.5 60.3± 0.4 65.5± 0.1
EAD 72.9± 1.0 87.1± 3.3 75.9± 1.9 80.2± 0.7 84.5± 0.2 85.7± 0.2 83.3± 0.5 80.8± 0.1 79.3± 0.6
SAPA 71.5± 0.2 80.2± 1.8 81.9± 0.5 71.4± 0.7 76.0± 0.5 82.7± 0.1 80.0± 0.1 76.9± 0.5 76.7± 0.4

All ℓ1 attacks 12.8± 0.6 78.1± 1.8 27.7± 0.7 17.9± 0.6 22.0± 0.5 49.0± 0.3 44.6± 0.2 43.7± 0.2 55.0± 1.2

PGD-ℓ2 78.7± 0.3 47.6± 1.6 84.6± 0.2 77.0± 0.9 82.2± 0.2 81.5± 0.2 79.1± 0.3 76.5± 0.1 75.6± 0.4
PGD-Foolbox 74.6± 0.2 5.1± 2.1 79.8± 0.2 73.3± 0.6 78.3± 0.2 77.6± 0.2 75.8± 0.3 73.6± 0.5 73.4± 0.1
Gaussian Noise 85.2± 0.4 88.5± 1.8 90.5± 1.1 83.2± 0.3 87.8± 0.2 86.2± 0.5 83.3± 0.3 70.9± 1.1 79.3± 0.1
AutoAttack 69.9± 0.4 0.0± 0.0 76.8± 0.4 69.4± 0.3 73.7± 0.1 74.9± 0.4 73.2± 0.2 71.9± 0.4 71.5± 0.1
Brendel & Bethge 71.8± 0.9 0.0± 0.0 78.1± 0.6 70.2± 0.1 75.0± 0.3 75.9± 0.3 74.1± 0.4 80.4± 0.4 72.3± 0.1
CWL2 70.5± 0.2 0.1± 0.0 77.2± 0.5 69.7± 0.3 74.2± 0.1 74.6± 1.2 73.5± 0.2 71.1± 1.1 71.0± 0.1

All ℓ2 attacks 69.3± 0.4 0.0± 0.0 76.8± 0.4 69.4± 0.3 73.6± 0.1 74.9± 0.4 73.2± 0.2 70.6± 1.1 71.5± 0.1

Acc^union_adv 12.9± 0.5 0.0± 0.0 17.9± 0.8 17.2± 0.6 21.1± 1.0 31.0± 1.4 35.7± 0.3 35.8± 0.1 41.6± 0.8
Acc^avg_adv 42.6± 0.4 25.1± 1.6 47.6± 0.4 45.4± 0.3 50.2± 0.5 52.6± 0.5 52.5± 0.3 52.0± 0.4 56.2± 0.2

Visualization of learned noise. To demonstrate the learning ability of our meta-noise generator, we visualize the noise learned by our generator during training. We present representative samples projected onto various ℓp norms and datasets in Figure 5, where each sample is projected onto its respective norm-ball B(x, ε) around x with radius ε. From the figure, we can observe that our meta-noise generator incorporates the features introduced by different attacks and learns diverse input-dependent noise distributions across multiple adversarial perturbations by explicitly minimizing the adversarial loss across multiple perturbations during meta-training.
Overall, it combines two approaches that are complementary to each other, leading to a novel input-dependent learner that generalizes across diverse attacks.

Visualization of loss landscape on CIFAR-10. Figure 6 shows the loss landscapes of various methods against the ℓ∞, ℓ1, and ℓ2 norm attacks for the CIFAR-10 dataset on the Wide ResNet 28-10 architecture. We vary the input along a linear space defined by the norm of the gradient, where the x- and y-axes represent the perturbation added in each direction and the z-axis represents the loss. Similar to the SVHN dataset, we observe that the loss is highly curved for multiple perturbations in the vicinity of the data point x for adversarial training with a single perturbation, which reflects that the gradient poorly models the global landscape. In contrast, MNG-AC achieves a smoother loss surface across all types of ℓp norm attacks.

Table 6: Summary of adversarial accuracy results for Tiny-ImageNet on ResNet50 architecture.

Adv∞ Adv1 Adv2 Trades∞ Advavg Advmax MNG-AC

Clean Accuracy 54.2± 0.1 57.8± 0.2 59.8± 0.1 48.2± 0.2 56.0± 0.2 53.5± 0.0 53.1± 0.1

PGD-ℓ∞ 32.1± 0.0 11.5± 1.2 17.9± 1.1 32.2± 0.4 25.0± 0.6 32.0± 0.6 29.3± 0.3
PGD-Foolbox 34.6± 0.4 17.2± 0.1 5.2± 0.6 34.1± 0.2 34.0± 0.2 28.3± 0.1 32.3± 0.3
AutoAttack 29.6± 0.1 10.1± 0.7 16.3± 0.3 28.7± 0.9 23.7± 0.2 30.0± 0.1 27.7± 0.4
Brendel & Bethge 32.7± 0.1 14.6± 0.8 20.8± 0.6 31.0± 0.9 28.1± 0.2 33.2± 0.5 31.5± 0.6

All ℓ∞ attacks 29.6± 0.1 10.5± 0.7 5.2± 0.6 28.7± 0.9 23.7± 0.2 29.8± 0.1 27.4± 0.7

PGD-ℓ1 32.0± 1.1 39.3± 0.9 37.2± 0.2 31.1± 0.3 38.0± 0.1 33.6± 0.4 39.0± 0.9
PGD-Foolbox 40.0± 0.8 44.8± 0.2 45.2± 0.2 37.6± 0.9 44.7± 1.5 40.6± 0.1 45.0± 0.2
EAD 52.3± 1.5 56.3± 0.6 57.3± 0.0 46.7± 0.9 54.6± 0.9 51.2± 0.2 52.7± 0.3
SAPA 46.5± 0.9 52.9± 0.7 53.5± 1.2 40.8± 0.1 50.3± 1.1 46.6± 0.1 49.3± 0.4

All ℓ1 attacks 31.8± 1.0 39.3± 1.0 37.2± 0.4 30.9± 0.2 38.0± 0.2 33.4± 0.3 39.6± 0.7

PGD-ℓ2 48.5± 1.1 49.1± 0.1 51.8± 1.8 42.6± 0.7 49.9± 1.7 47.0± 0.3 49.1± 0.4
PGD-Foolbox 45.6± 0.4 45.2± 0.4 47.7± 0.7 41.0± 0.3 47.0± 1.3 44.9± 0.4 47.0± 0.2
Gaussian Noise 52.5± 1.3 56.1± 0.6 57.6± 0.3 46.4± 0.9 54.4± 0.8 51.1± 0.0 52.1± 0.5
AutoAttack 42.4± 0.8 41.9± 0.0 44.6± 0.6 38.9± 0.8 44.4± 1.3 42.4± 0.9 44.6± 0.4
Brendel & Bethge 43.7± 0.4 44.4± 0.1 46.6± 1.1 39.2± 0.7 45.1± 1.6 43.6± 0.4 45.4± 0.1
CWL2 43.5± 1.3 44.8± 1.1 47.5± 0.7 39.5± 0.4 46.8± 1.9 43.4± 0.1 46.0± 0.4

All ℓ2 attacks 42.5± 0.6 41.9± 0.0 44.9± 0.1 35.8± 0.7 44.6± 0.1 42.4± 1.0 44.8± 0.1

Acc^union_adv 19.8± 1.1 10.1± 0.7 5.2± 0.6 26.1± 0.9 23.6± 0.3 29.0± 0.3 27.4± 0.8
Acc^avg_adv 33.8± 0.1 30.4± 0.1 29.1± 0.0 32.8± 0.1 35.4± 0.7 35.3± 0.4 37.2± 0.6

Figure 6: Visualization of the loss landscapes for the ℓ1, ℓ2, and ℓ∞-norm attacks on the CIFAR-10 dataset. The rows represent the attacks and the columns represent different defenses. We can observe that MNG-AC obtains a smooth loss surface across all ℓp-norm attacks." } ]
2020
null
SP:a097bea86250950d5b3c5be7676c2b390663098e
[ "The paper proposes to define the uncertainty set in the DRO problem as a family of parametric generative models, which is to allow more flexibility in the choice of the uncertainty set architecture. To realize this idea, the paper first proposes a new relaxation of the DRO game's inner maximization problem (with KL constraints) so as to improve the training stability. It then develops a principled approach to select the hyper-parameters of the proposed method." ]
Distributionally robust optimization (DRO) provides a framework for training machine learning models that are able to perform well on a collection of related data distributions (the “uncertainty set”). This is done by solving a min-max game: the model is trained to minimize its maximum expected loss among all distributions in the uncertainty set. While careful design of the uncertainty set is critical to the success of the DRO procedure, previous work has been limited to relatively simple alternatives that keep the min-max optimization problem exactly tractable, such as f-divergence balls. In this paper, we argue instead for the use of neural generative models to characterize the worst-case distribution, allowing for more flexible and problem-specific selection of the uncertainty set. However, while simple conceptually, this approach poses a number of implementation and optimization challenges. To circumvent these issues, we propose a relaxation of the KL-constrained inner maximization objective that makes the DRO problem more amenable to gradient-based optimization of large-scale generative models, and develop model selection heuristics to guide hyper-parameter search. On both toy settings and realistic NLP tasks, we find that the proposed approach yields models that are more robust than comparable baselines1.
[ { "affiliations": [], "name": "Paul Michel" }, { "affiliations": [], "name": "Tatsunori Hashimoto" }, { "affiliations": [], "name": "Graham Neubig" } ]
[ { "authors": [ "Zhenyao Zhu" ], "title": "Deep speech 2 : End-to-end speech recognition in english and mandarin", "venue": "In Proceedings of the 33rd International Conference on Machine Learning (ICML),", "year": 2016 }, { "authors": [ "David Balduzzi", "Sebastien Racaniere", "James Martens", "Jakob Foerster", "Karl Tuyls", "Thore Graepel" ], "title": "The mechanics of n-player differentiable games", "venue": "In Proceedings of the 35th International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Aharon Ben-Tal", "Dick Den Hertog", "Anja De Waegenaere", "Bertrand Melenberg", "Gijs Rennen" ], "title": "Robust solutions of optimization problems affected by uncertain probabilities", "venue": "Management Science,", "year": 2013 }, { "authors": [ "Aharon Ben-Tal", "Dick Den Hertog", "Anja De Waegenaere", "Bertrand Melenberg", "Gijs Rennen" ], "title": "Robust solutions of optimization problems affected by uncertain probabilities", "venue": "Management Science,", "year": 2013 }, { "authors": [ "Su Lin Blodgett", "Lisa Green", "Brendan O’Connor" ], "title": "Demographic dialectal variation in social media: A case study of African-American English", "venue": "In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2016 }, { "authors": [ "Thomas Davidson", "Dana Warmsley", "Michael Macy", "Ingmar Weber" ], "title": "Automated hate speech detection and the problem of offensive language", "venue": "In Proceedings of the 11th International AAAI Conference on Weblogs and Social Media (ICWSM),", "year": 2017 }, { "authors": [ "Erick Delage", "Yinyu Ye" ], "title": "Distributionally robust optimization under moment uncertainty with application to data-driven problems", "venue": "Operations research,", "year": 2010 }, { "authors": [ "Jacob Devlin", "Ming-Wei Chang", "Kenton Lee", "Kristina Toutanova. 
Bert" ], "title": "Pre-training of deep bidirectional transformers for language understanding", "venue": "In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT),", "year": 2018 }, { "authors": [ "Lucas Dixon", "John Li", "Jeffrey Sorensen", "Nithum Thain", "Lucy Vasserman" ], "title": "Measuring and mitigating unintended bias in text classification", "venue": "In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society,", "year": 2018 }, { "authors": [ "John Duchi", "Hongseok Namkoong" ], "title": "Learning models with uniform performance via distributionally robust optimization", "venue": "arXiv preprint arXiv:1810.08750,", "year": 2018 }, { "authors": [ "John Duchi", "Peter Glynn", "Hongseok Namkoong" ], "title": "Statistics of robust optimization: A generalized empirical likelihood approach", "venue": "arXiv preprint arXiv:1610.03425,", "year": 2016 }, { "authors": [ "Louis Faury", "Ugo Tanielian", "Elvis Dohmatob", "Elena Smirnova", "Flavian Vasile" ], "title": "Distributionally robust counterfactual risk minimization", "venue": "In Proceedings of the 34th Meeting of the Association for Advancement of Artificial Intelligence (AAAI),", "year": 2020 }, { "authors": [ "Paula Fortuna", "Sérgio Nunes" ], "title": "A survey on automatic detection of hate speech in text", "venue": "ACM Computing Surveys (CSUR),", "year": 2018 }, { "authors": [ "Antigoni-Maria Founta", "Constantinos Djouvas", "Despoina Chatzakou", "Ilias Leontiadis", "Jeremy Blackburn", "Gianluca Stringhini", "Athena Vakali", "Michael Sirivianos", "Nicolas Kourtellis" ], "title": "Large scale crowdsourcing and characterization of twitter abusive behavior", "venue": "In Proceedings of the 12th International AAAI Conference on Weblogs and Social Media (ICWSM),", "year": 2018 }, { "authors": [ "Andreas Fuster", "Paul Goldsmith-Pinkham", "Tarun Ramadorai", "Ansgar Walther" ], "title": "Predictably unequal? the effects of machine learning on credit markets", "venue": "The Effects of Machine Learning on Credit Markets (November", "year": 2018 }, { "authors": [ "Athinodoros S. Georghiades", "Peter N. Belhumeur", "David J. Kriegman" ], "title": "From few to many: Illumination cone models for face recognition under variable lighting and pose", "venue": "IEEE transactions on pattern analysis and machine intelligence,", "year": 2001 }, { "authors": [ "Ian Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron Courville", "Yoshua Bengio" ], "title": "Generative adversarial nets", "venue": "In Proceedings of the 28th Annual Conference on Neural Information Processing Systems (NIPS),", "year": 2014 }, { "authors": [ "Evan Greensmith", "Peter L Bartlett", "Jonathan Baxter" ], "title": "Variance reduction techniques for gradient estimates in reinforcement learning", "venue": "Journal of Machine Learning Research,", "year": 2004 }, { "authors": [ "Patrick Grother", "Mei Ngan", "Kayee Hanaoka" ], "title": "Face Recognition Vendor Test (FVRT): Part 3, Demographic Effects", "venue": "National Institute of Standards and Technology,", "year": 2019 }, { "authors": [ "Suchin Gururangan", "Ana Marasović", "Swabha Swayamdipta", "Kyle Lo", "Iz Beltagy", "Doug Downey", "Noah A. Smith" ], "title": "Don’t stop pretraining: Adapt language models to domains and tasks", "venue": "In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 
8342–8360,", "year": 2020 }, { "authors": [ "J. Hershey", "P. Olsen" ], "title": "Approximating the kullback leibler divergence between gaussian mixture models", "venue": "Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP),", "year": 2007 }, { "authors": [ "Sepp Hochreiter", "Jürgen Schmidhuber" ], "title": "Long short-term memory", "venue": "Neural computation,", "year": 1997 }, { "authors": [ "Dirk Hovy", "Anders Søgaard" ], "title": "Tagging performance correlates with author age", "venue": "In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL), pp", "year": 2015 }, { "authors": [ "Weihua Hu", "Gang Niu", "Issei Sato", "Masashi Sugiyama" ], "title": "Does distributionally robust supervised learning give robust classifiers", "venue": "In Proceedings of the 35th International Conference on Machine Learning (ICML),", "year": 2018 }, { "authors": [ "Zhaolin Hu", "L Jeff Hong" ], "title": "Kullback-leibler divergence constrained distributionally robust optimization", "venue": "Available at Optimization Online,", "year": 2013 }, { "authors": [ "Hisham Husain" ], "title": "Distributional robustness with ipms and links to regularization and gans", "venue": "In Proceedings of the 34th Annual Conference on Neural Information Processing Systems (NIPS),", "year": 2020 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2014 }, { "authors": [ "Solomon Kullback", "Richard A Leibler" ], "title": "On information and sufficiency", "venue": "The annals of mathematical statistics,", "year": 1951 }, { "authors": [ "Stephen Merity", "Caiming Xiong", "James Bradbury", "Richard Socher" ], "title": "Pointer sentinel mixture models", "venue": "Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2017 }, { "authors": [ "Viet Anh Nguyen", "Nian Si", "Jose Blanchet" ], "title": "Robust bayesian classification using an optimistic score ratio", "venue": "In Proceedings of the 37th International Conference on Machine Learning (ICML),", "year": 2020 }, { "authors": [ "Mohammad Norouzi", "Samy Bengio", "zhifeng Chen", "Navdeep Jaitly", "Mike Schuster", "Yonghui Wu", "Dale Schuurmans" ], "title": "Reward augmented maximum likelihood for neural structured prediction", "venue": "In Proceedings of the 30th Annual Conference on Neural Information Processing Systems (NIPS),", "year": 2016 }, { "authors": [ "Yonatan Oren", "Shiori Sagawa", "Tatsunori Hashimoto", "Percy Liang" ], "title": "Distributionally robust language modeling", "venue": "In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2019 }, { "authors": [ "Martin J. 
Osborne", "Ariel Rubinstein" ], "title": "A Course in Game Theory", "venue": null, "year": 1994 }, { "authors": [ "Alec Radford", "Karthik Narasimhan", "Time Salimans", "Ilya Sutskever" ], "title": "Improving language understanding with unsupervised learning", "venue": "Technical report, Technical report,", "year": 2018 }, { "authors": [ "Alec Radford", "Jeffrey Wu", "Rewon Child", "David Luan", "Dario Amodei", "Ilya Sutskever" ], "title": "Language models are unsupervised multitask learners", "venue": "OpenAI Blog,", "year": 2019 }, { "authors": [ "Hamed Rahimian", "Sanjay Mehrotra" ], "title": "Distributionally Robust Optimization: A Review", "venue": "arXiv preprint arXiv:1908.05659,", "year": 2019 }, { "authors": [ "Shiori Sagawa", "Pang Wei Koh", "Tatsunori B Hashimoto", "Percy Liang" ], "title": "Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2020 }, { "authors": [ "Maarten Sap", "Dallas Card", "Saadia Gabriel", "Yejin Choi", "Noah A. Smith" ], "title": "The risk of racial bias in hate speech detection", "venue": "In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL),", "year": 2019 }, { "authors": [ "Anna Schmidt", "Michael Wiegand" ], "title": "A survey on hate speech detection using natural language processing", "venue": "In Proceedings of the 5th International Workshop on Natural Language Processing for Social Media (SocialNLP),", "year": 2017 }, { "authors": [ "Satinder P. Singh", "Michael J. Kearns", "Yishay Mansour" ], "title": "Nash convergence of gradient dynamics in general-sum games", "venue": "In Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence,", "year": 2000 }, { "authors": [ "Aman Sinha", "Hongseok Namkoong", "John Duchi" ], "title": "Certifying some distributional robustness with principled adversarial training", "venue": "In Proceedings of the International Conference on Learning Representations (ICLR),", "year": 2018 }, { "authors": [ "Richard Socher", "Alex Perelygin", "Jean Wu", "Jason Chuang", "Christopher D. Manning", "Andrew Ng", "Christopher Potts" ], "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "venue": "In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP),", "year": 2013 }, { "authors": [ "Martin Sundermeyer", "Ralf Schlüter", "Hermann Ney" ], "title": "Lstm neural networks for language modeling", "venue": "In Proceedings of the 13th Annual Conference of the International Speech Communication Association (InterSpeech),", "year": 2012 }, { "authors": [ "Prasetya Ajie Utama", "N. Moosavi", "Iryna Gurevych" ], "title": "Towards debiasing nlu models from unknown biases", "venue": "arXiv preprint arXiv:2009.12303,", "year": 2020 }, { "authors": [ "Zhengwei Wang", "Qi She", "Tomas E Ward" ], "title": "Generative adversarial networks in computer vision: A survey and taxonomy", "venue": null, "year": 1906 }, { "authors": [ "Mengzhou Xia", "Anjalie Field", "Yulia Tsvetkov" ], "title": "Demoting racial bias in hate speech detection", "venue": "In Proceedings of the 9th International Workshop on Natural Language Processing for Social Media (SocialNLP),", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Machine learning models trained with empirical risk minimization (ERM) are able to achieve high aggregate performance on data sampled from their training distribution. However, they often exhibit drops in accuracy when confronted with data from domains that are under-represented in their training data, such as those of different topic (Gururangan et al., 2020), sociolect (Blodgett et al., 2016), accent (Amodei et al., 2016) or writer age (Hovy & Søgaard, 2015) in language processing tasks, or skin color (Grother et al., 2019) or lighting (Georghiades et al., 2001) in image processing tasks. This is a particularly egregious issue in applications where higher error rates can have far reaching negative implications, such as the silencing of underrepresented minorities in toxicity detection systems (Dixon et al., 2018) or disparity amplifying feedback loops in credit rating models (Fuster et al., 2018).\nThis behaviour often arises from the objective function of ERM, where the parameters θ of the model are learned by minimizing the expectation of a loss function ` under a data distribution p (or, specifically in practice, an associated empirical data distribution p̂)\nLERM(θ) = E(x,y)∼p̂`(x, y, θ). (1)\nWhen the model encounters data sampled from a different distribution qtest 6= p, performance can suffer significantly. Distributionally robust optimization (DRO) (Ben-Tal et al., 2013b) provides a natural solution to this issue by replacing the expected risk under a single distribution p with the worst expected risk over a pre-determined family of distributions Q (the “uncertainty set”)\nLDRO(θ) = max q∈Q E(x,y)∼q `(x, y, θ). (2)\n1Code to reproduce our experiments can be found at https://github.com/pmichel31415/P-DRO\nIf Q contains qtest, the DRO objective upper bounds the expected risk under qtest. However, a priori knowledge of possible test distributions is not always available or easy to acquire. For example, training a model to be robust to some demographic attributes (Q = {qdemographic 1, qdemographic 2, . . .}) requires collecting and annotating data with the necessary information, an expensive and ethically fraught endeavour. In the absence of such information, one has to resort to defining the uncertainty set analytically, drawing on one’s intuition of what constitutes a possible test distribution given the observed training distribution, such as using moment constraints (Delage & Ye, 2010; Nguyen et al., 2020), f -divergence (Ben-Tal et al., 2013a; Hu & Hong, 2013; Faury et al., 2020), Wasserstein/IPM (Sinha et al., 2018; Husain, 2020) balls, or coarse-grained mixture models (Oren et al., 2019; Hu et al., 2018). However, the need for keeping the inner supremum in Eq. (2) tractable limits the possible choices.\nIn this paper, we propose that the uncertainty set be instead defined as a family of parametric generative models. The resulting DRO objective (§2) is a differentiable game with two players: the original model `(x, y; θ) and a model of its worst-case distribution qψ(x, y), the titular “second player” which we hereafter refer to as the adversary. Using this formulation — which we call Parametric DRO (P-DRO) — allows for more flexibility in the choice of the adversary’s architecture (and so the uncertainty set). Unfortunately, finding a solution of this game via direct application of simultaneous gradient descent (Singh et al., 2000) is difficult (Balduzzi et al., 2018). 
In particular, direct gradient descent on the uncertainty set suffers from instability due to the large variance of the gradients (Greensmith et al., 2004), and hyper-parameter selection is not straightforward.

To address these challenges, we make two main contributions (§3): first, we propose a new relaxation of the DRO game's inner maximization problem (with KL constraints). The resulting objective is more amenable to simultaneous gradient updates than the original zero-sum game and significantly improves training stability, while still yielding useful adversaries. Second, we develop a principled approach for selecting hyper-parameters: we leverage the learned adversaries to decide which of any two given models trained with P-DRO is more robust than the other.

We conduct an in-depth set of experiments analyzing the effect of our proposed changes on both a toy task and a more realistic, yet still synthetic, sentiment classification task (§4). Finally, we show that in the more realistic setting of toxicity detection, P-DRO yields models that are more robust to changes in demographic groups, even though these groups are unknown at training time, opening up applications in combatting dataset bias (§5)." }, { "heading": "2 PARAMETERIZING THE UNCERTAINTY SET", "text": "Consider a model parameterized by θ ∈ R^{d_model}. Minimizing the DRO objective described in Eq. (2) over the uncertainty set Q turns the optimization problem into the min-max (or zero-sum) game

min_{θ∈R^d} max_{q∈Q} E_{(x,y)∼q} ℓ(x, y, θ).   (3)

The first player controls the parameters θ, whilst the second player controls the worst-case distribution q. In the absence of explicit information on groups of interest (such as demographics, domain, etc.), an adequate choice of the uncertainty set Q is critical to the success of DRO. This is in fact very much an active area of research (Sinha et al. (2018); Duchi & Namkoong (2018); Oren et al. (2019); see Rahimian & Mehrotra (2019) for a survey). Q must be sufficiently large to contain test distributions of interest, but if it is too large it may contain “adversarial” distributions on which no model can perform well. Moreover, the design of Q is also circumscribed by the necessity of keeping the min-max problem tractable, particularly in the context of stochastic optimization. In Hu & Hong (2013) and Duchi et al. (2016), for example, the choice of f-divergence balls allows the use of duality arguments to reformulate (3) as a more manageable min-min problem. Others, like Hu et al. (2018) or Oren et al. (2019), propose using mixture models, the simplicity of which enables them to solve the inner maximization problem efficiently.

Instead, we propose to explicitly model the second player in the DRO game as a parametric model qψ of the data. Of course, not all parameterizations ψ ∈ R^{d_adv} of a given generative model represent useful distributions, and we require that the adversary stay “close” to the underlying true data distribution p. As a measure of distance between qψ and p, we choose the KL (Kullback & Leibler, 1951) divergence, due to its wide acceptance in the machine learning community as well as its appealing properties in the context of DRO.2 The KL upper bound, κ, is left as a parameter to be decided by the experimenter. We refer to the resulting DRO formulation as Parametric DRO:

min_θ max_{ψ: KL(qψ‖p)≤κ} L_P-DRO(θ, ψ), where L_P-DRO(θ, ψ) := E_{(x,y)∼qψ} ℓ(x, y, θ).   (4)" }, { "heading": "3 OPTIMIZING P-DRO", "text": "The min-max problem in Eq.
(4) belongs to a class of games called “differentiable games” (another famous representative being generative adversarial networks (Goodfellow et al., 2014)). We can search for a solution of this game with simultaneous gradient descent (Singh et al., 2000), i.e. by simultaneously updating θ and ψ with −∇_θ L_P-DRO and ∇_ψ L_P-DRO, respectively. Unfortunately, in general, there is no theoretical guarantee that simultaneous gradient descent will converge to a Nash equilibrium3 (Balduzzi et al., 2018), nor that any such equilibrium even exists if the objective is non-convex in θ (or non-concave in ψ). The success of GANs and the follow-up literature (Wang et al., 2019) serves as an encouraging example that gradient-based methods can yield useful solutions despite the pessimistic theoretical results. In this section, we discuss difficulties that arise when optimizing θ and ψ jointly, and propose modifications of the objective to address them.

3.1 TRAINING THE MODEL θ

We could train the model θ by taking negative gradient steps on E_{(x,y)∼qψ} ℓ(x, y; θ). This gradient can be estimated by sampling examples from qψ and averaging the gradients of their losses. Unfortunately, this objective requires that qψ be well-behaved at all iterations, as it is the only source of supervision for θ. If qψ is initialized incorrectly or begins producing unrealistic (x, y), the quality of θ degrades as it begins to learn a predictor on invalid training examples from qψ. As an alternative, we opt to compute the gradients for θ with importance sampling, i.e. rewriting L_P-DRO as E_{(x,y)∼p} [(qψ(x, y)/p(x, y)) ℓ(x, y; θ)], which ensures that all (x, y) samples will be derived from the training set itself. Unfortunately, the true density p is unknown to us. As an approximation, we replace qψ(x, y)/p(x, y) with the likelihood ratio between qψ and the maximum likelihood estimate of p, qψ0 := argmax_{qψ} E_{(x,y)∼p} log qψ(x, y). This changes the min-max problem to

min_θ max_{ψ: KL(qψ‖p)≤κ} L_model(θ, ψ), where L_model(θ, ψ) := E_{(x,y)∼p} [(qψ(x, y)/qψ0(x, y)) ℓ(x, y, θ)].   (5)

This becomes a simple expected loss objective, which we can estimate by sampling from the empirical distribution p̂. In experiments, we find that with this formulation we are able to train robust θ even when qψ is only a mediocre generative model (see Appendix C.2). To further stabilize training at the beginning of the optimization process, we initialize ψ with ψ0, making the objective exactly the same as ERM for the first gradient step.

2For instance, KL(q‖p) < +∞ implies that q stays within the support of p.
3Nash equilibria (Osborne & Rubinstein, 1994) can be thought of as the game-theoretic analog of global minima in optimization.

3.2 TRAINING THE ADVERSARY ψ

According to Eq. (5), the adversary ψ must maximize E_{(x,y)∼qψ} [(p(x, y)/qψ0(x, y)) ℓ(x, y, θ)] within a KL ball of fixed radius. This is challenging for several reasons: first, enforcing the bound is intractable for complex families of adversaries, where e.g. projecting onto the KL ball is another difficult optimization problem of its own. Second, maximizing the expectation with respect to the parameters of the distribution qψ is prone to instability due to large gradient variance (Greensmith et al., 2004).

Lagrangian Relaxation. To address the first difficulty, we loosen the strict KL constraint and instead consider the Lagrangian relaxation L:

L(ψ, τ) = E_{(x,y)∼qψ} [(p(x, y)/qψ0(x, y)) ℓ(x, y, θ)] − τ (KL(qψ‖p) − κ).   (6)

We fix the Lagrangian multiplier τ > 0 and treat it as a “temperature” hyper-parameter.
With some reorganization (which we develop in Appendix A.1), we can show that

L(ψ, τ) = −τ KL(qψ‖q*_{τ,θ}) + C,   (7)

where q*_{τ,θ}(x, y) ∝ p(x, y) e^{(p(x,y)/qψ0(x,y)) ℓ(x,y;θ)/τ} and C is a constant in ψ. In other words, maximizing L in ψ is equivalent to minimizing the KL divergence between qψ and q*_{τ,θ}. One difficulty with this objective is that q*_{τ,θ} depends upon the unknown probability density p(x, y). We avoid this problem by treating the density ratio p(x,y)/qψ0(x,y) as a constant, which is closely related to assumptions that have been used successfully in past formulations of DRO (Oren et al., 2019). Empirically, we find that incorporating qψ0 as a surrogate for p is a serviceable approximation, as demonstrated in Section 4.

Reversing the KL. Minimizing the KL divergence in this direction is difficult for several reasons. First, it entails optimizing an expectation under qψ over ψ, which is difficult due to the large variance of the gradients (Greensmith et al., 2004). Second, computing this KL necessitates access to the true theoretical density p(x, y) in order to compute q*_{τ,θ}(x, y) in the argument of the expectation, but this quantity is unknown in practice.4 To sidestep these issues, we elect to minimize the reverse direction KL(q*_{τ,θ}‖qψ) instead. Due to the KL divergence being non-symmetric, this is a rather crude approximation5, the implications of which are discussed in Norouzi et al. (2016). However, we find that this approach dramatically stabilizes the gradient dynamics while still yielding good adversaries, as observed empirically in Section 4.4. Discarding the entropy term (constant in ψ), the resulting problem is equivalent to minimizing

L_adv(ψ, τ) := −(1/Z_{τ,θ}) E_p [e^{ℓ(x,y;θ)/τ} log qψ(x, y)]   (8)

in ψ, where Z_{τ,θ} = E_p e^{ℓ(x,y;θ)/τ} is the normalizer of q*. In this case, we can estimate this expectation by substituting the empirical distribution p̂ for p in the expectation.

Computing the Normalizer. Approximating the inverse normalizer 1/Z_{τ,θ} in a minibatch yields a biased estimator. On the other hand, computing Z_{τ,θ} over the entire training data at each step is prohibitive, since it requires computing the loss of every single example. As a middle ground, we keep a running normalizer Z̃_k computed from the average of the normalizers over a fixed number k of consecutive minibatches. In other words, if B_i and θ_i denote the minibatch and model parameters at step i respectively, the normalizer at step t will be

Z̃_k = (1 / Σ_{i=t−k}^{t} |B_i|) Σ_{i=t−k}^{t} Σ_{(x,y)∈B_i} e^{ℓ(x,y;θ_i)/τ}.   (9)

If k is too low, there is a risk of under-estimating the normalizer, especially if the distribution of weights contains infrequent high-weight samples. On the other hand, if k is too high there is a risk of using “stale” weights in the normalizer. In experiments, we treat k as a hyper-parameter.

4Note that substituting the empirical distribution p̂ for p poses issues here, because qψ is not absolutely continuous with respect to p̂.
5For instance, the optimum of the reverse KL doesn't necessarily match that of the forward KL within the parametric confusion set Q." }, { "heading": "3.3 OPTIMAL STOPPING", "text": "When should one stop training a model with P-DRO? In ERM, it is customary to stop training after the empirical risk — periodically evaluated on a held-out validation dataset — stops decreasing. This is particularly important to prevent over-fitting to the training data. However, it is not an appropriate criterion for P-DRO, since the model is not trained to minimize empirical risk in the first place.
A more pertinent choice is to compare the robust validation losses

L_robust,valid(θ) = max_{qψ∈Q} L_valid(θ, ψ), where L_valid(θ, ψ) := (1/|D_valid|) Σ_{(x,y)∈D_valid} (qψ(x, y)/qψ0(x, y)) ℓ(x, y; θ).   (10)

However, finding the inner supremum for each of the T evaluation checkpoints θ_1 . . . θ_T is expensive, as it requires solving T independent optimization problems. Instead, we leverage the existence of the adversaries ψ_t associated with each model θ_t, as well as the initial adversary ψ_0, and take the maximum over the T + 1 adversaries {ψ_0, . . . , ψ_T}. Since our relaxation of the P-DRO objective loosens the KL constraint, we need to weed out adversaries which might violate it. Specifically, we estimate KL(qψ‖p) = E_p [(qψ/p) log(qψ/p)] on the validation set, using qψ/qψ0 as a stand-in for qψ/p, and reject all adversaries for which the result is greater than a threshold, which we set to log 10 based on preliminary experiments detailed in Appendix C.1.6 We refer to this stopping criterion as Minmax.

Computing the full min-max necessitates keeping track of T models and T + 1 adversaries, which is ponderous when the model is large. As a solution, we propose an approximation, Greedy-Minmax, in which we only keep one best model θ*. At each evaluation step T, we compare θ_T to θ*, and update θ* to whichever achieves the lower robust validation loss over the T + 1 adversaries ψ_0, . . . , ψ_T.

By keeping track of only one additional model, and using the weights qψ_t(x_i, y_i)/qψ0(x_i, y_i) of individual examples in D_valid as sufficient statistics for computing the loss against each adversary, Greedy-Minmax can be achieved with space complexity 2 d_model + T |D_valid|, which is much more efficient than the T (d_model + d_adv) of Minmax." }, { "heading": "3.4 HYPER-PARAMETER SELECTION", "text": "Our proposed P-DRO method relies on 3 different hyper-parameters (in addition to the model's hyper-parameters): the adversary learning rate λ, the temperature τ, and the size of the renormalizing window k. As a consequence, we need a reliable criterion for deciding which of two configurations is better. This model comparison bears many similarities with the stopping problem described above. Therefore, we resort to a similar solution: given two models θ^1, θ^2 trained with P-DRO, and their respective adversaries {ψ^1_0, . . . , ψ^1_T}, {ψ^2_0, . . . , ψ^2_T} (for instance, the adversaries associated with θ^1 and θ^2 at periodic checkpoints during training), we select the best model following

θ* = argmin_{θ∈{θ^1,θ^2}} max_{ψ∈{ψ^1_0,...,ψ^1_T, ψ^2_0,...,ψ^2_T}} L_valid(θ, ψ).   (11)

6To simplify notation, this additional constraint is implicit in the rest of this section." }, { "heading": "4 EXPERIMENTAL ANALYSIS OF P-DRO", "text": "Before moving on to a real-world scenario in Section 5, we first demonstrate that P-DRO is able to learn robust models in a synthetic Natural Language Processing (NLP) task, and perform ablation studies to examine the importance of the various modifications described in Section 3." }, { "heading": "4.1 EXPERIMENTAL SETTING", "text": "For analysis purposes, we design a simple NLP task amenable to DRO. We specifically choose NLP as a domain due to the striking success of language models as generative models of textual data (Sundermeyer et al., 2012; Radford et al., 2018), which can be used to model the uncertainty set. We base our task off of the binary version of the Stanford Sentiment Treebank dataset (SST-2; Socher et al. (2013)), which we modify to introduce spurious correlation. Specifically, we introduce a distractor token to some sentences.
The distractor we use consists of prepending “so , ” to the sentence (“i hated this movie” → “so , i hated this movie”), which doesn't change the underlying sentiment. The resulting samples can be categorized into 4 “groups” depending on their label (positive or negative) and the presence or absence of the distractor. In particular, we add this distractor to 95% of the negative reviews and 5% of the positive reviews in the training and validation set, so that the presence of the distractor strongly correlates with negative sentiment (a similar construction is proposed in (Utama et al., 2020)). In the test data, we modify 50% of all sentences for each class equitably to ensure that there is enough data in each group, but we report “average” test accuracy by re-weighting the group accuracies to mimic the training distribution. We call this modified task BiasedSST.

For the classifier, we train a simple one-layer BiLSTM model with embedding/hidden dimension 300. For the adversary, we adopt an auto-regressive Transformer model based on the successful GPT-2 language model architecture, but with 6 layers, a dimension of 512 and 8 attention heads (we experiment with a smaller, LSTM-based adversary in Appendix C.2). In order to model the input-output pair (x, y), we prepend a special label-specific token to sentences before running them through the language model. We train the model with Adam (Kingma & Ba, 2014) and the adversary with vanilla stochastic gradient descent (which we found more stable in experiments). We refer to Appendix B for specific details of the experimental setting.

4.2 P-DRO CAN LEARN ROBUST MODELS

Table 1: Average and robust accuracies on BiasedSST. Underlining indicates a statistically significant difference compared to ERM (p < 0.05).

            Robust         Average
ERM         2.15 ± 0.97    95.09 ± 0.16
Topic CVaR  5.18 ± 1.46    95.00 ± 0.10
NonParam    28.11 ± 2.16   92.45 ± 1.55
P-DRO       34.98 ± 9.39   84.21 ± 2.11
Oracle DRO  67.71 ± 3.03   77.91 ± 4.49

We train 7 models with P-DRO on BiasedSST using different hyper-parameters for the adversary. We start from the configuration λ = 10^−4, τ = 0.01, k = 5, and for each hyper-parameter we run a configuration with a smaller and a higher value, keeping all other hyper-parameters the same. We train for 50 epochs and select the best model using the strategies described in Section 3.

We also compare three other approaches. First, to appreciate how well the model could perform if the groups were known at training time, we train with Group-DRO on the oracle groups using an exponentiated-gradients based online algorithm (Oracle DRO; Sagawa et al. (2020)). Second, we implement Topic CVaR (Oren et al., 2019), a method for DRO on NLP where the uncertainty set is determined by mixtures of a topic model. Finally, we compare to non-parametric DRO with a Kullback-Leibler (KL) constrained uncertainty set (Hu & Hong, 2013; Hu et al., 2018), which we adapt to fit our online mini-batch training setting (NonParam). We refer to Appendix B.3 for details and hyper-parameters of the baselines.

We report the worst-case (“robust”) accuracy over all groups on the test set, as well as the average accuracy, in Table 1 (we report the mean and standard deviation over 5 runs). We find that Topic CVaR, NonParam and P-DRO are all more robust than ERM, but the latter outperforms the former two by close to 30 and 7 points respectively, achieving 52% of Oracle DRO's robust accuracy, while not leveraging any information on the oracle groups.
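To make the training procedure used in these experiments concrete, below is a minimal sketch of a single P-DRO update, combining the importance-weighted model loss of Eq. (5) with the relaxed adversary loss of Eqs. (8) and (9). It assumes PyTorch, a classifier exposing per-example losses, and a generative adversary exposing log-densities; the names (nll, log_prob, z_window, etc.) are illustrative assumptions, not the released implementation.

```python
from collections import deque

def pdro_step(x, y, model, adv, logq0, opt_model, opt_adv, tau, z_window):
    """One simultaneous-gradient P-DRO update (sketch).

    model.nll(x, y)    -> per-example losses l(x, y; theta), shape [B]
    adv.log_prob(x, y) -> log q_psi(x, y), shape [B]
    logq0              -> log q_psi0(x, y) from the frozen MLE adversary, shape [B]
    z_window           -> deque(maxlen=k) of recent batch-mean weights,
                          approximating Eq. (9) assuming roughly equal batch sizes
    """
    losses = model.nll(x, y)                   # l(x, y; theta)
    logq = adv.log_prob(x, y)                  # log q_psi(x, y)

    # Model player (Eq. 5): importance weights are constants w.r.t. theta.
    w = (logq - logq0).detach().exp()
    model_loss = (w * losses).mean()

    # Adversary player (Eq. 8): weighted NLL with running normalizer (Eq. 9).
    e_w = (losses.detach() / tau).exp()
    z_window.append(e_w.mean().item())
    z_tilde = sum(z_window) / len(z_window)
    adv_loss = -((e_w / z_tilde) * logq).mean()

    opt_model.zero_grad()
    opt_adv.zero_grad()
    (model_loss + adv_loss).backward()         # disjoint parameter sets
    opt_model.step()
    opt_adv.step()
    return model_loss.item(), adv_loss.item()

# usage (illustrative): maintain z_window = deque(maxlen=k) across steps
```

Since ψ is initialized at ψ0, the weights w start out equal to 1, so the first model updates coincide with ERM, matching the initialization described in Section 3.1."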
}, { "heading": "4.3 OPTIMAL STOPPING AND HYPER-PARAMETER SELECTION ABLATION", "text": "To understand the importance of the optimal stopping and hyper-parameter selection strategy described in Section 3.3, we perform an ablation on the BiasedSST dataset comparing 4 strategies:\n• Average: models are selected based on their average zero-one loss (i.e. error rate) on the unmodified validation set. This is the baseline stopping criterion.\n• Minmax: selection based on the adversaries (as described in Section 3.3), with and without the KL constraint, as well as its variant Greedy-Minmax for stopping.\n• Oracle: in this setting the groups are known (in the validation set), and models are selected based on their error rate on the worst performing group. This is the optimal criterion for the group-DRO setting we are considering.\nTo compare stopping criterions experiments, we only consider one set of hyper-parameters: λ = 10−4, k = 5 and τ = 0.01. From the robust validation accuracies reported in Table 2a, we first observe that Average stopping results in a robust accuracy of 0, highlighting the necessity for a suitable stopping criterion. We find that Minmax, especially with a KL constraint, is a much better strategy, recovering ≈ 60% of the performance achievable with Oracle stopping. Notably, the Greedy-Minmax variant which we use in practice reaches very close results (< 1 point difference) despite its requiring to keep track of only 2 out of the 50 model checkpoints at any time.\nTo understand the effectiveness of the Minmax strategy for selecting hyper-parameters. We take the models trained in Section 4.1, but select the best hyper-parameters using the different strategies described above. Results, shown in Table 2b, confirm that Minmax (with the KL constraint) is a better choice than Average for selecting hyper-parameters, even though the improvement is not as striking as for stopping.\n4.4 IMPORTANCE OF LADV\nFinally, we investigate the importance of modifying the adversary’s objective as described in Section 3.2. For this experiment, we devise a simpler toy task on which directly training the constrained DRO objective is possible. Specifically, we consider the two-dimensional binary classification problem pictured in Figure 2. The training data consists of 10,000 points partitioned in two normally distributed “domains” with a 1:50 sampling ratio and different classification boundaries. We train a logistic regression model, which cannot perfectly fit the training data and must trade-off between accuracy on each domain. For the sake of simplicity, we only model the input variables x7 as isotropic normal distributions with fixed variance: the adversaries’ parameter ψ ∈ R2 represents the\nlocation of the Gaussian (we fix the variance to the empirical variance of the data).\n7In other words, we set qψ(x, y) = p(y | x)qψ(x), where p(y | x), is the true conditional which will be canceled out in the ratio qψ(x,y)\nqψ0 (x,y) .\nWe compare 3 different versions of P-DRO: first, naive simultaneous gradient descent on the zero-sum game, without any constraint on the adversary (bare P-DRO), then the same, but with an approximation of the explicit KL constraint between qψ and qψ0 (+KL constraint; see Appendix A.2 for more details). Finally we report results using our relaxation and the KL reversal described in Section 3.2 (+Ladv). For each setting, we report the average and robust accuracy with mean and standard deviation over 10 runs. 
For the KL constraint and the relaxation, we report the best results among 4 values of the KL bound κ and the temperature τ, respectively.

In Table 3, we observe that bare P-DRO is too unstable and systematically diverges. The addition of a KL constraint mitigates this behaviour, but the zero-sum objective is still unstable, as evidenced by the high standard deviations. Finally, we find that the addition of L_adv stabilizes the training process greatly, leading to consistently high robust accuracy." }, { "heading": "5 P-DRO IN PRACTICE: CASE STUDY OF TOXICITY DETECTION", "text": "In this section, we demonstrate the effectiveness of P-DRO in the more realistic setting of toxicity detection, the task of recognizing various forms of toxic language (e.g. hate speech or offensive language). Identifying online abuse on the internet is a crucial challenge, and has garnered much interest in the NLP community (Schmidt & Wiegand, 2017; Fortuna & Nunes, 2018). However, recent work (Sap et al., 2019) has shown that there is a strong correlation between toxic labels and the presence of certain markers of dialects of English spoken by minority groups. This correlation is in turn amplified by hate speech classifiers trained on such data, leading to biased predictions.

Our results on BiasedSST suggest that P-DRO can provide one solution to preventing models from absorbing spurious correlations present in their training data, even in the absence of protected attributes (such as language variety here)." }, { "heading": "5.1 EXPERIMENTAL SETTING", "text": "Following Sap et al. (2019) and Xia et al. (2020), we perform experiments on two datasets: DWMW17 (Davidson et al., 2017), a corpus of 25K tweets classified into three categories: hate speech (6%), offensive (76%) and neither (18%); and FDCL18 (Founta et al., 2018), a 100K-sized dataset, also collected from Twitter and annotated with an additional spam label, with the following breakdown by category: hateful (5%), abusive (27%), normal (54%) and spam (14%).

The released versions of these datasets do not contain information on the dialect of each user. In order to be able to evaluate our models, and to train an Oracle DRO baseline, we follow Sap et al. (2019) and use annotations provided by the dialect classifier described in Blodgett et al. (2016) to label each example as one of four English varieties: White-aligned, African American, Hispanic, and Other. Note that, as these are automatically obtained labels, the groups may not exactly correspond to the actual racial sociolects; however, Sap et al. (2019) do report that they correlate highly with self-reported race, and they serve as a useful proxy in the absence of manual annotation.

We formulate the group-DRO problem by separating each dataset into independent groups identified by both language variety and label, for a total of 12 and 16 groups for DWMW17 and FDCL18, respectively. Some of these groups are severely under-represented in the test set. In order to make our robust accuracy results reliable yet still representative of the under-represented groups, we combine groups that contain fewer than 100 samples into a single group to compute robust test accuracies.

On DWMW17, we train the same BiLSTM model as described in Section 4.3. To illustrate the applicability of P-DRO to other model architectures, we pick BERT (Devlin et al., 2018), a large-scale pre-trained model, as the classifier on FDCL18. In both cases, we adopt the Transformer architecture described in Section 4.3 as the adversary.
We train the adversary with a temperature of τ = 0.01 and a normalizing window k = 10. To demonstrate the efficacy of automatic hyper-parameter selection in the P-DRO setting, we delegate the choice of the adversary's learning rate λ to grid-search, training 3 models with λ ∈ {10^−5, 10^−4, 10^−3} and selecting the best using the Minmax criterion described in Section 3.4. We also report numbers for Oracle DRO and Topic CVaR. Results are averaged over 5 runs, each with a different random seed." }, { "heading": "5.2 CAN P-DRO PRODUCE MORE ROBUST MODELS?", "text": "Table 4a reports the robust test accuracies of all models on both tasks. Importantly, except for Oracle DRO, none of the methods compared here necessitates any knowledge of the groups, neither in the training nor the validation data. We observe that in both settings P-DRO is able to achieve higher robust accuracy than ERM, Topic CVaR and NonParam.

This suggests P-DRO as a useful option in case no group information whatsoever is available. However, in practice, it may be feasible to annotate at least a small amount of data with group information. To emulate this scenario, we perform the same experiment, but assume that group annotations are available on the validation data, which we use to determine optimal stopping and hyper-parameters. Results for this setting are reported in Table 4b. We find that, while the use of robust validation accuracy yields more robust models even for ERM (especially on FDCL18), P-DRO is still the best alternative that doesn't require group annotation on the training data." }, { "heading": "6 IMPLICATIONS AND OUTLOOK", "text": "We have shown that there is promise in using parametric families of neural generative models for defining the uncertainty set in distributionally robust optimization. While we only perform experiments on NLP tasks, this approach can, in theory, be applied to any modality, and in future work we hope to pursue this direction. In cases where good-quality generative models are unavailable, or where such models cannot produce densities efficiently, an interesting direction would be to model the likelihood ratio qψ/p directly. This alternative formulation poses different implementation challenges, and we leave it as a promising avenue for future research." }, { "heading": "ACKNOWLEDGEMENTS", "text": "The authors would like to thank the anonymous reviewers for their insightful feedback which helped improve the paper to its current version. In addition, this paper greatly benefited from discussion and feedback from various colleagues at CMU, in particular Chunting Zhou, Haohan Wang, Zachary Lipton and Zico Kolter. This work was supported by a Facebook Sponsored Research Award and by the DARPA GAILA project (award HR00111990063). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the sponsors." }, { "heading": "A DERIVATIONS", "text": "A.1 REORGANIZING THE LAGRANGIAN L(ψ, τ)

Let us write the Lagrangian L explicitly:

L(ψ, τ) = E_{(x,y)∼qψ} [(p(x, y)/qψ0(x, y)) ℓ(x, y, θ)] − τ (KL(qψ‖p) − κ)   (12)
= E_{(x,y)∼qψ} [(p(x, y)/qψ0(x, y)) ℓ(x, y, θ)] − τ E_{(x,y)∼qψ} [log (qψ(x, y)/p(x, y))] + τκ   (13)
= τ E_{(x,y)∼qψ} [log (p(x, y) e^{(p(x,y)/qψ0(x,y)) ℓ(x,y,θ)/τ} / qψ(x, y))] + τκ   (14)
= τ (κ − KL(qψ‖q*_{τ,θ})) + τ log (E_{(x,y)∼p} e^{(p(x,y)/qψ0(x,y)) ℓ(x,y,θ)/τ})   (15)

This last step requires that the log moment generating function of ℓ under p exist for τ.
In most scenarios we consider, ℓ is typically the negative log-likelihood of a neural network model, which is generally bounded. Therefore, the moment generating function is defined everywhere.

Note that the KL term is the only one dependent on ψ; therefore, maximizing L in ψ is equivalent to maximizing −KL(qψ‖q*_{τ,θ}), in other words minimizing KL(qψ‖q*_{τ,θ})." }, { "heading": "A.2 ENFORCING THE KL CONSTRAINT IN THE TOY SETTING", "text": "Even in this simplest setting, the exact KL between qψ (a Gaussian) and p (a mixture of Gaussians) does not have an analytical expression (Hershey & Olsen, 2007). Instead, we fall back on enforcing the KL constraint between qψ and qψ0, both isotropic Gaussians with the same standard deviation. Let µ and µ0 ∈ R^2 denote their respective means, and σ > 0 their standard deviation. In this context, their KL divergence reduces to:

KL(qψ‖qψ0) = KL(qψ0‖qψ) = (1/(2σ^2)) ‖µ − µ0‖^2.

In other words, the KL divergence is equivalent to the Euclidean distance between the distributions' means. We use this fact to project ψ (in the KL sense) onto Bκ = {ψ̂ | KL(qψ̂‖qψ0) < κ}:

proj_Bκ(ψ) := argmin_{ψ̂∈Bκ} KL(qψ‖qψ̂) = ψ0 + (√(2κ) σ / ‖ψ − ψ0‖) (ψ − ψ0)." }, { "heading": "B EXPERIMENTAL DETAILS", "text": "We describe in more detail some of the experimental settings for our NLP experiments. More details can be found in our code release: https://github.com/pmichel31415/P-DRO." }, { "heading": "B.1 MODEL SETTINGS", "text": "In all experiments, we split the text into sub-word tokens using the tokenizer described in Devlin et al. (2018). During training, we sample minibatches that contain at most 64 sentences or 2,500 tokens, whichever is greater, in order to prevent GPU memory overflow in the case of long sentences.

We train all models with Adam (Kingma & Ba, 2014) with an initial learning rate of 2×10^−5, which we decay linearly at each step until the end of training. We validate the models every epoch. For BERT, we start from the bert-base-uncased checkpoint." }, { "heading": "B.2 ADVERSARY SETTINGS", "text": "In all experiments, we use a Transformer model based on the GPT-2 architecture (Radford et al., 2019) to serve as the adversary. In order to initialize the adversary (to obtain ψ0), we first pre-train the model on a generic, relatively large language modeling dataset, WikiText-103 (Merity et al., 2017). We also use a batch size of 64 samples or 2,500 tokens, and train with Adam for 10 epochs with a fixed learning rate of 3×10^−4. Then, we fine-tune this model on each dataset, this time minimizing the negative log-likelihood of the (x, y) pair (by introducing the special “[label]” token as described in Section B), using the same hyper-parameters but a smaller learning rate (10^−5). We find that, due to the small-to-medium size of the datasets under consideration, this LM pre-training step helped achieve lower error on the generative modeling task." }, { "heading": "B.3 BASELINE SETTINGS", "text": "" }, { "heading": "B.3.1 TOPIC CVAR", "text": "To train the topic model for Topic CVaR, we first pre-process the text by removing all punctuation, URLs and user mentions (for Twitter data). Importantly, we remove stop-words for our toxicity experiments but not for our BiasedSST experiment. This is because the distractor token we use (“so”) belongs to most English stop-word lists, and removing it would completely prevent the topic model from picking up on the groups of interest. We then estimate the parameters of the model with Gensim8 and use similar settings as Oren et al.
(2019) (α = 0.1, β = 1.0), setting the number of topics to 10.

For both Oracle DRO and Topic CVaR, we use the algorithm proposed in Sagawa et al. (2020) to estimate the worst-case group (either the oracle group, or the topic in Topic CVaR) online during training. We perform grid-search over {1, 0.1, 0.01} to find the best learning rate for the group-weights update. For Oracle DRO, the best model is simply selected by robust validation accuracy. For Topic CVaR, unless specified otherwise, we select the model with the lowest worst-case error over all topics." }, { "heading": "B.3.2 NONPARAM", "text": "In the KL-constrained non-parametric setting, the min-max problem reads

min_θ max_{q: KL(q‖p)≤κ} E_{(x,y)∼q} ℓ(x, y, θ).   (16)

Here, κ is the desired radius of the KL ball, and is treated as a hyper-parameter. The inner maximum has an analytical solution of the form q*_θ = (1/Z_{θ,τ*}) p(x, y) e^{ℓ(x,y;θ)/τ*} (see Hu & Hong (2013); Hu et al. (2018) for details), with Z_{θ,τ*} = E_p e^{ℓ(x,y;θ)/τ*} and τ* such that

KL(q*‖p) = E_p [(e^{ℓ(x,y;θ)/τ*} / Z_{θ,τ*}) (ℓ(x, y; θ)/τ* − log Z_{θ,τ*})] = κ.

Note that both computing Z_{θ,τ*} and KL(q*‖p) require taking expectations over p. In our setting, where ℓ(x, y; θ) is the output of a large neural network, we cannot afford to take this expectation over the entire training data at each step. Instead, we fall back to taking the average over each minibatch. We find τ* with binary search in log10 space within the [10^−10, 10^10] interval, and clip to the lowest or highest value should the result lie outside the search interval.

8https://radimrehurek.com/gensim/

In all experiments, we try 4 different values for κ: 0.01, 0.1, 1 and 10. Unless indicated otherwise, we perform early stopping and hyper-parameter selection using our Minmax criterion, using the non-parametric weights as adversaries on the validation data." }, { "heading": "C ADDITIONAL EXPERIMENTS", "text": "" }, { "heading": "C.1 MINMAX VALIDATION KL THRESHOLD", "text": "The Monte-Carlo estimate of KL(qψ‖p) on the validation set is (1/|D_valid|) Σ_{(x,y)∈D_valid} (qψ(x,y)/p(x,y)) log (qψ(x,y)/p(x,y)). Similarly to Section 3, we approximate the (unknown) likelihood ratio qψ(x,y)/p(x,y) with qψ(x,y)/qψ0(x,y).

We want to reject all adversaries for which this approximated KL is greater than some threshold, κ_valid, but how do we choose a good value for κ_valid? Consider an adversary which selects a fraction of the validation data of size α|D_valid| for some α ∈ (0, 1]. In such a case, the likelihood ratio is 1/α on this subset and 0 everywhere else, and the resulting KL estimate will be log(1/α). In other words, choosing a threshold of κ_valid means allowing the adversary to potentially select any subset of size at least e^{−κ_valid} of the original data. Our heuristic choice, log 10, corresponds to allowing subsets of size at least 10% of |D_valid|. Of course, this is only a heuristic, because the adversary can reweight the validation set non-uniformly. To assess the effect of κ_valid on Greedy-Minmax, we compute the average robust validation error of the selected model across 5 runs for 3 different values of the adversary's learning rate. Results on BiasedSST, depicted in Figure 3, show that adversaries with a higher learning rate are more sensitive to the choice of threshold, but all values of κ_valid between log 5 and log 20 seem to work for these settings.
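As an illustration of this rejection-plus-selection rule, the sketch below scores one checkpoint θ by its worst-case weighted validation loss (Eq. 10) over the saved adversaries, discarding those whose estimated KL exceeds κ_valid = log 10. The array names are illustrative assumptions; logq_list is assumed to contain the log-densities of every saved adversary, including the initial ψ0 (whose estimated KL is 0, so it is never rejected and the maximum is always well-defined).

```python
import numpy as np

def robust_valid_loss(losses, logq_list, logq0, kl_threshold=np.log(10)):
    """losses: [N] per-example validation losses of checkpoint theta.
    logq_list: list of [N] arrays, log q_psi_t(x, y) for each saved adversary.
    logq0: [N] array, log q_psi0(x, y) of the initial MLE adversary."""
    worst = -np.inf
    for logq in logq_list:
        log_ratio = logq - logq0              # stand-in for log(q_psi / p)
        ratio = np.exp(log_ratio)
        kl_hat = np.mean(ratio * log_ratio)   # Monte-Carlo KL estimate above
        if kl_hat > kl_threshold:
            continue                          # reject: outside the KL ball
        worst = max(worst, np.mean(ratio * losses))   # L_valid(theta, psi)
    return worst
```

Greedy-Minmax then keeps whichever of the incumbent and the newly evaluated checkpoint attains the smaller value of this score."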
}, { "heading": "C.2 P-DRO EXPERIMENTS WITH AN LSTM ADVERSARY", "text": "We replicate the experiments BiasedSST experiments in Section 4, but this time using a smaller generative model, which is unlikely to generate good samples. Specifically, we use a one layer LSTM model (Hochreiter & Schmidhuber, 1997) with embedding and hidden dimension 256. We only perform grid-search over λ ∈ [10−5, 10−4, 10−3] and select the best with Minmax. Once pre-trained on the BiasedSST dataset, this model achieves a perplexity of 227.0, more than 4 times worse than the transformer model we use in other experiments (49.8). However, as evidenced by its robust accuracy displayed in Table 5, P-DRO is still able to learn a robust model. We take this as evidence that the re-weighting introduced in Section 3 helps stabilize training even when qψ is not a perfect model of the data.\nC.3 INFLUENCE OF HYPER-PARAMETERS ON P-DRO\nWe study the influence of the 3 hyper-parameters τ (temperature), k (size of the renormalization window) and λ (learning rate of the adversary) on the performance of P-DRO. All experiments are run on the BiasedSST dataset, and the analysis proceeds as follows: starting from configuration τ = 0.01, k = 5 and λ = 10−4 and vary each of the hyper-parameters independently. We report two numbers for each configuration: robust accuracy of the best model using Greedy-Minmax stopping and using Oracle stopping. The latter is useful to disentangle the effect of the stopping criterion.\nAs seen in the results shown in Table 6, we find that τ has the least effect on robust accuracies. While the renormalization window parameter k has some effect on optimal stopping, the best robust accuracy achieved by the model (with oracle stopping) varies little. We observe the adversary’s learning rate λ to be the most sensitive hyper-parameter, which is why we restrict our grid-search to λ in Section 5." } ]
2021
MODELING THE SECOND PLAYER IN DISTRIBUTIONALLY ROBUST OPTIMIZATION
SP:5ac6f67060ad4fa1470c5f87e8329ef293f2025c
[ "The paper presents a joint learning strategy for simultaneous semantic segmentation and monocular depth estimation. The main idea is to exploit stereo pairs in training and introduce pseudo-depth label estimated from pre-trained stereo-matching networks. Given the pseudo-depth with confidence estimation, the method proposes a cross-view consistency loss for both depth and semantic predictions, which augments the standard segmentation loss. The proposed method is evaluated on KITTI and Cityscapes datasets with comparisons to prior work and ablative study. " ]
Multi-task learning (MTL) for scene understanding has been actively studied by exploiting the correlation among multiple tasks. This work focuses on improving the performance of an MTL network that infers depth and semantic segmentation maps from a single image. Specifically, we propose a novel MTL architecture, called Pseudo-MTL, that introduces pseudo labels for the joint learning of monocular depth estimation and semantic segmentation tasks. The pseudo ground truth depth maps, generated from pretrained stereo matching methods, are leveraged to supervise the monocular depth estimation. More importantly, the pseudo depth labels serve to impose a cross-view consistency on the estimated monocular depth and segmentation maps of the two views. This makes it possible to mitigate the mismatch problem incurred by inconsistent predictions across the two views. A thorough ablation study validates that the cross-view consistency leads to a substantial performance gain by ensuring inference-view invariance for the two tasks.
[]
[ { "authors": [ "V. Badrinarayanan", "A. Kendall", "R. Cipolla" ], "title": "SegNet: A deep convolutional encoder-decoder architecture for image segmentation", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2017 }, { "authors": [ "Jia-Ren Chang", "Yong-Sheng Chen" ], "title": "Pyramid stereo matching network", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Liang-Chieh Chen", "George Papandreou", "Florian Schroff", "Hartwig Adam" ], "title": "Rethinking atrous convolution for semantic image segmentation", "venue": null, "year": 2017 }, { "authors": [ "Liang-Chieh Chen", "Yukun Zhu", "George Papandreou", "Florian Schroff", "Hartwig Adam" ], "title": "Encoderdecoder with atrous separable convolution for semantic image segmentation", "venue": null, "year": 2018 }, { "authors": [ "Po-Yi Chen", "Alexander H. Liu", "Yen-Cheng Liu", "Yu-Chiang Frank Wang" ], "title": "Towards scene understanding: Unsupervised monocular depth estimation with semantic-aware representation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Bowen Cheng", "Maxwell D. Collins", "Yukun Zhu", "Ting Liu", "Thomas S. Huang", "Hartwig Adam", "Liang-Chieh Chen" ], "title": "Panoptic-deeplab: A simple, strong, and fast baseline for bottom-up panoptic segmentation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Jaehoon Cho", "Dongbo Min", "Youngjung Kim", "Kwanghoon Sohn" ], "title": "A large RGB-D dataset for semi-supervised monocular depth estimation", "venue": null, "year": 1904 }, { "authors": [ "Hyesong Choi", "Hunsang Lee", "Sunkyung Kim", "Sunok Kim", "Seungryong Kim", "Dongbo Min" ], "title": "Adaptive confidence thresholding for semi-supervised monocular depth estimation", "venue": null, "year": 2009 }, { "authors": [ "Marius Cordts", "Mohamed Omran", "Sebastian Ramos", "Timo Rehfeld", "Markus Enzweiler", "Rodrigo Benenson", "Uwe Franke", "Stefan Roth", "Bernt Schiele" ], "title": "The cityscapes dataset for semantic urban scene understanding", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "David Eigen", "Christian Puhrsch", "Rob Fergus" ], "title": "Depth map prediction from a single image using a multi-scale deep network", "venue": "In Advances in Neural Information Processing Systems (NIPS)", "year": 2014 }, { "authors": [ "Ravi Garg", "BG Vijay Kumar", "Gustavo Carneiro", "Ian Reid" ], "title": "Unsupervised CNN for single view depth estimation: Geometry to the rescue", "venue": "In European Conference on Computer Vision,", "year": 2016 }, { "authors": [ "A. Geiger", "P. Lenz", "R. Urtasun" ], "title": "Are we ready for autonomous driving? the kitti vision benchmark suite", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2012 }, { "authors": [ "Clément Godard", "Oisin Mac Aodha", "Gabriel J. 
Brostow" ], "title": "Unsupervised monocular depth estimation with left-right consistency", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Clément Godard", "Oisin Mac Aodha", "Michael Firman", "Gabriel J Brostow" ], "title": "Digging into selfsupervised monocular depth estimation", "venue": "In IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Vitor Guizilini", "Rares Ambrus", "Sudeep Pillai", "Allan Raventos", "Adrien Gaidon" ], "title": "3D packing for self-supervised monocular depth estimation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Vitor Guizilini", "Rui Hou", "Jie Li", "Rares Ambrus", "Adrien Gaidon" ], "title": "Semantically-guided representation learning for self-supervised monocular depth", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Caner Hazirbas", "Lingni Ma", "Csaba Domokos", "Daniel Cremers" ], "title": "Fusenet: Incorporating depth into semantic segmentation via fusion-based CNN architecture", "venue": "In Asian Conference on Computer Vision (ACCV),", "year": 2016 }, { "authors": [ "Heiko Hirschmüller" ], "title": "Stereo processing by semiglobal matching and mutual information", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2008 }, { "authors": [ "Ankit Jha", "Awanish Kumar", "Biplab Banerjee", "Subhasis Chaudhuri" ], "title": "AdaMT-Net: An adaptive weight learning based multi-task learning model for scene understanding", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops,", "year": 2020 }, { "authors": [ "Diederik P. Kingma", "Jimmy Ba" ], "title": "Adam: A method for stochastic optimization", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Varun Ravi Kumar", "Senthil Kumar Yogamani", "Markus Bach", "Christian Witt", "Stefan Milz", "Patrick Mäder" ], "title": "UnRectDepthNet: Self-supervised monocular depth estimation using a generic framework for handling common camera distortion models", "venue": null, "year": 2007 }, { "authors": [ "Shikun Liu", "Edward Johns", "Andrew J Davison" ], "title": "End-to-end multi-task learning with attention", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Chenxu Luo", "Zhenheng Yang", "P. Wang", "Y. Wang", "W. Xu", "R. Nevatia", "A. 
Yuille" ], "title": "Every pixel counts ++: Joint learning of geometry and motion with 3D holistic understanding", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence,", "year": 2020 }, { "authors": [ "Ishan Misra", "Abhinav Shrivastava", "Abhinav Gupta", "Martial Hebert" ], "title": "Cross-stitch networks for multi-task learning", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2016 }, { "authors": [ "Jiahao Pang", "Wenxiu Sun", "Jimmy SJ Ren", "Chengxi Yang", "Qiong Yan" ], "title": "Cascade residual learning: A two-stage convolutional neural network for stereo matching", "venue": "In ICCV Workshop on Geometry Meets Deep Learning,", "year": 2017 }, { "authors": [ "Matteo Poggi", "Stefano Mattoccia" ], "title": "Learning from scratch a confidence measure", "venue": "In British Machine Vision Conference (BMVC),", "year": 2016 }, { "authors": [ "Matteo Poggi", "Filippo Aleotti", "Fabio Tosi", "Stefano Mattoccia" ], "title": "On the uncertainty of selfsupervised monocular depth estimation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In International Conference on Learning Representations (ICLR),", "year": 2015 }, { "authors": [ "Hang Su", "Varun Jampani", "Deqing Sun", "Orazio Gallo", "Erik G. Learned-Miller", "Jan Kautz" ], "title": "Pixeladaptive convolutional neural networks", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2019 }, { "authors": [ "Towaki Takikawa", "David Acuna", "Varun Jampani", "Sanja Fidler" ], "title": "Gated-SCNN: Gated shape cnns for semantic segmentation", "venue": "IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Alessio Tonioni", "Matteo Poggi", "Stefano Mattoccia", "Luigi di Stefano" ], "title": "Unsupervised domain adaptation for depth prediction from images", "venue": "IEEE Trans. Pattern Anal. Mach. Intell.,", "year": 2020 }, { "authors": [ "J. Uhrig", "N. Schneider", "L. Schneider", "U. Franke", "T. Brox", "A. Geiger" ], "title": "Sparsity invariant CNNs", "venue": "In International Conference on 3D Vision (3DV),", "year": 2017 }, { "authors": [ "Lijun Wang", "Jianming Zhang", "Oliver Wang", "Zhe Lin", "Huchuan Lu" ], "title": "SDC-Depth: Semantic divide-and-conquer network for monocular depth estimation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Jamie Watson", "Michael Firman", "Gabriel J. Brostow", "Daniyar Turmukhambetov" ], "title": "Self-supervised monocular depth hints", "venue": "In IEEE International Conference on Computer Vision (ICCV),", "year": 2019 }, { "authors": [ "Yuhui Yuan", "Xilin Chen", "Jingdong Wang" ], "title": "Object-contextual representations for semantic segmentation", "venue": "CoRR, abs/1909.11065,", "year": 2019 }, { "authors": [ "Amir R. Zamir", "Alexander Sax", "William B. Shen", "Leonidas J. Guibas", "Jitendra Malik", "Silvio Savarese" ], "title": "Taskonomy: Disentangling task transfer learning", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2018 }, { "authors": [ "Amir Roshan Zamir", "Alexander Sax", "Nikhil Cheerla", "Rohan Suri", "Zhangjie Cao", "Jitendra Malik", "Leonidas J. 
Guibas" ], "title": "Robust learning through cross-task consistency", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2020 }, { "authors": [ "Huangying Zhan", "Ravi Garg", "Chamara Saroj Weerasekera", "Kejie Li", "Harsh Agarwal", "Ian D. Reid" ], "title": "Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2018 }, { "authors": [ "Zhenyu Zhang", "Zhen Cui", "Chunyan Xu", "Zequn Jie", "Xiang Li", "Jian Yang" ], "title": "Joint task-recursive learning for semantic segmentation and depth estimation", "venue": "In European Conference on Computer Vision (ECCV),", "year": 2018 }, { "authors": [ "Hengshuang Zhao", "Jianping Shi", "Xiaojuan Qi", "Xiaogang Wang", "Jiaya Jia" ], "title": "Pyramid scene parsing network", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Tinghui Zhou", "Matthew Brown", "Noah Snavely", "David G. Lowe" ], "title": "Unsupervised learning of depth and ego-motion from video", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Yi Zhu", "Karan Sapra", "Fitsum A. Reda", "Kevin J. Shih", "Shawn Newsam", "Andrew Tao", "Bryan Catanzaro" ], "title": "Improving semantic segmentation via video propagation and label relaxation", "venue": "In IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2019 }, { "authors": [ "Uhrig" ], "title": "2017) for the KITTI dataset. Numbers in bold and underlined represent 1 and 2 ranking, respectively. The methods used in evaluation are EPC++ (Luo et al., 2020), Monodepth2 (Godard et al., 2019), Uncertainty (Poggi et al., 2020), Packnet-SfM (Guizilini et al., 2020a)", "venue": "UnRectDepthNet (Kumar et al.,", "year": 2020 } ]
[ { "heading": "1 INTRODUCTION", "text": "Scene understanding has become increasingly popular in both academia and industry as an essential technology for realizing a variety of vision-based applications such as robotics and autonomous driving. 3D geometric and semantic information of a scene often serve as a basic building block for high-level scene understanding tasks. Numerous approaches have been proposed for inferring a depth map (Garg et al., 2016; Godard et al., 2019) or grouping semantically similar parts (Chen et al., 2017; Yuan et al., 2019) from a single image. In parallel with such a rapid evolution for individual tasks, several approaches (Chen et al., 2019; Zhang et al., 2018; Guizilini et al., 2020b; Liu et al., 2019) have focused on boosting the performance through joint learning of the semantic segmentation and monocular depth estimation tasks by considering that the two tasks are highly correlated. For instance, pixels with the same semantic segmentation labels within an object are likely to have similar (or smoothly-varying) depth values. An abrupt change of depth values often implies the boundary of two objects containing different semantic segmentation labels. These properties have been applied to deep networks to enhance the semantic segmentation and monocular depth estimation tasks in a synergetic manner.\nIn (Chen et al., 2019), they proposed a joint learning model that learns semantic-aware representation to advance the monocular depth estimation with the aid of semantic segmentation. A depth map is advanced by making use of loss functions designed for the purpose of bonding geometric and semantic understanding. The method in (Guizilini et al., 2020b) proposed a new architecture that improves the accuracy of monocular depth estimation through the pixel-adaptive convolution (Su et al., 2019) using semantic feature maps computed from pre-trained semantic segmentation networks. Despite the improved monocular depth accuracy over a single monocular depth network, the performance improvement of the semantic segmentation task by the aid of geometrical representation has not been verified (Chen et al., 2019), or even the semantic segmentation network was fixed with pretrained parameters (Guizilini et al., 2020b).\nA generic computational approach for multi-task learning (MTL) was proposed in (Zamir et al., 2018), which models the structure across twenty six tasks, including 2D, 2.5D, 3D, and semantic tasks, by finding first and higher-order transfer learning dependencies across them in a latent space to seamlessly reuse supervision among related tasks and/or solve them in a single network without increasing the complexity significantly. This was further extended by imposing a cross-task consis-\ntency based on inference-path invariance on a graph of multiple tasks (Zamir et al., 2020). Though these approaches provide a generic and principled way for leveraging redundancies across multiple tasks, there may be limitations to improving the performance of individual tasks in that it is difficult to consider task-specific architectures and loss functions in such unified frameworks. With the same objective yet with a different methodology, the method in (Liu et al., 2019) proposes a novel MTL architecture consisting of task-shared and task-specific networks based on task-attention modules, aiming to learn both generalizable features for multiple tasks and features tailored to each task. 
They validated the performance in the joint learning of monocular depth and semantic segmentation.\nIn this paper, we propose a novel MTL architecture for monocular depth estimation and semantic segmentation tasks, called pseudo label-guided multi-task learning (Pseudo-MTL). The proposed architecture leverages geometrically- and semantically-guided representations by introducing pseudo ground truth labels. When a pair of stereo images is given as input, our method first generates pseudo ground truth left and right depth maps by using existing pre-trained stereo matching networks (Pang et al., 2017; Chang & Chen, 2018). To prevent inaccurate depth values from being used, a stereo confidence map (Poggi & Mattoccia, 2016) is used together as auxiliary data that measures the reliability of the pseudo depth labels. These are leveraged for supervising the monocular depth network, obtaining a substantial performance gain over recent self-supervised monocular depth estimation approaches (Godard et al., 2017; 2019). More importantly, the pseudo depth labels are particularly useful when imposing a cross-view consistency across the left and right images. The estimated monocular depth and segmentation maps of the two views are tied from a geometric perspective by minimizing the cross-view consistency loss, significantly alleviating the mismatch problem incurred by inconsistent predictions across the two views. We will verify through an intensive ablation study that the proposed cross-view consistency loss leads to a substantial improvement on both tasks. Experimental results also show that our approach achieves outstanding performance over the state-of-the-art. In short, our novel contributions can be summarized as follows.\n\n• We propose a novel MTL approach that jointly performs monocular depth estimation and semantic segmentation through pseudo depth labels.\n\n• The cross-view consistency loss based on the pseudo depth labels and associated confidence maps is proposed to enable consistent predictions across two views.\n\n• An intensive ablation study is provided to quantify the contribution of the proposed items to performance improvement." }, { "heading": "2 RELATED WORK", "text": "Monocular Depth Estimation While early works on monocular depth estimation are based on supervised learning, self-supervised learning has attracted increasing interest in recent approaches (Godard et al., 2017; 2019; Watson et al., 2019) to overcome the lack of ground truth depth labels. Here, we review the works most relevant to our method. Godard et al. (2017; 2019) proposed deep networks that infer a disparity map using an image reconstruction loss and a left-right consistency loss from a pair of stereo images or monocular videos. Chen et al. (2019) infer both disparity and semantic segmentation maps by enforcing cross-view consistency across stereo images to address the mismatch problem of (Godard et al., 2017). Several approaches have focused on improving monocular depth estimation with the aid of segmentation networks, e.g., by stitching local depth segments from instance segmentation with respect to scale and shift (Wang et al., 2020) or leveraging pretrained semantic segmentation networks to guide the monocular depth estimation (Guizilini et al., 2020b).\nSemantic Segmentation A deep convolutional encoder-decoder architecture for semantic segmentation proposed in (Badrinarayanan et al., 2017) has been widely used as a backbone.
The pyramid pooling module was proposed for leveraging global context through the aggregation of different region-based contexts (Zhao et al., 2017). Some segmentation works attempted to combine different tasks to improve segmentation performance. Gated-SCNN (Takikawa et al., 2019) refines segmentation results by fusing semantic-region features and boundary features. FuseNet (Hazirbas et al., 2016) proposed to fuse features from color and depth images for improving the segmentation performance.\nMulti-task learning The methods in (Chen et al., 2019; Takikawa et al., 2019; Zhang et al., 2018) leverage task-specific loss functions to tie up two (or more) tasks within the MTL architecture. For instance, Chen et al. (2019) attempted to improve monocular depth accuracy by using loss functions that measure the consistency between geometric and semantic predictions. A generic computational approach for MTL was proposed by leveraging redundancies across multiple tasks in a latent space in (Zamir et al., 2018; 2020). Task-attention modules were introduced to extract features for individual tasks in (Misra et al., 2016; Liu et al., 2019; Jha et al., 2020). In this work, we focus on improving the performance of the MTL architecture for monocular depth estimation and semantic segmentation tasks by using a cross-view consistency loss based on pseudo labels." }, { "heading": "3 PROPOSED METHOD", "text": "" }, { "heading": "3.1 OVERVIEW AND ARCHITECTURE DESIGN", "text": "Our Pseudo-MTL approach focuses on improving the performance of the monocular depth estimation and semantic segmentation tasks through task-specific losses defined based on the pseudo depth labels generated by pre-trained stereo matching networks (Pang et al., 2017). The stereo confidence maps are used together as auxiliary data to compensate for estimation errors in the pseudo depth labels. These are effective in mitigating the undesired artifacts of errors that may exist in the pseudo depth labels. In our work, we chose the CCNN (Poggi & Mattoccia, 2016) for calculating the confidence map, but more advanced confidence estimation approaches can also be used.\nAs shown in Figure 1, the proposed Pseudo-MTL network is based on an encoder-decoder architecture, in which a single encoder takes an image and two decoders predict the monocular depth and semantic segmentation maps. The encoder network E consists of the convolutional layers of the VGG network (Simonyan & Zisserman, 2015). Two decoders, Dd for monocular depth estimation and Ds for semantic segmentation, are designed symmetrically with the encoder. While the two tasks share the encoder, a task-specific decoder branch is used for each task.\nThe pseudo depth labels and the segmentation label maps of the stereo images are used for supervising the proposed architecture. The monocular depth and segmentation maps of the left and right images are estimated by passing each image to the proposed architecture, as shown in Figure 1. The cross-view consistency loss is then imposed on the prediction results of the two views. To be specific, the estimated monocular depth maps of the left and right images are warped and tested using the pseudo depth labels for ensuring inference-view invariance on the monocular depth estimation, and a similar procedure is also applied to the semantic segmentation.\nUsing the pseudo depth labels for training the proposed model is advantageous in several respects.
The pseudo depth labels of the stereo images, filtered by their confidence maps, provide better supervision (Choi et al., 2020) than recent self-supervised monocular depth estimation approaches. More importantly, the cross-view consistency based on the pseudo depth labels mitigates the mismatch problem caused by inconsistent predictions of the two views, leading to a substantial performance gain. Our method aims at advancing the two tasks via task-specific losses based on pseudo ground truth labels, and existing MTL architectures, e.g. based on task-specific attention modules and adaptive balancing (Liu et al., 2019; Jha et al., 2020), can be used complementarily with our loss functions." }, { "heading": "3.2 LOSS FUNCTIONS", "text": "The loss functions are divided into two parts: 1) a supervised loss for the depth and segmentation networks, and 2) a pseudo depth-guided reconstruction loss for cross-view consistency. Note that the supervised loss used for monocular depth estimation relies on the pseudo depth labels generated from a pair of stereo images." }, { "heading": "3.2.1 LOSS FOR MONOCULAR DEPTH AND SEMANTIC SEGMENTATION", "text": "Depth maps d_i for i ∈ {l, r}, predicted by the decoder D_d for monocular depth estimation, are used for measuring the depth regression loss L_d as follows:\nL_d = \sum_{i \in \{l,r\}} L_{reg}(c_i, d_i, d_i^{pgt}), \quad \text{where} \quad L_{reg}(c_i, d_i, d_i^{pgt}) = \frac{1}{Z_i} \sum_{p \in \Phi} c_i(p) \cdot |d_i(p) - d_i^{pgt}(p)|, \quad (1)\nwhere c_i and d_i^{pgt} indicate the confidence map and pseudo ground truth depth map of the left (i = l) or right (i = r) image, respectively. The loss is normalized with Z_i = \sum_{p} c_i(p), and Φ represents the set of all pixels. The confidence map serves to exclude inaccurate depth values of d_i^{pgt} when calculating the depth regression loss L_d. It can be used in various ways, including hard thresholding (Cho et al., 2019; Tonioni et al., 2020) and soft thresholding (Choi et al., 2020). Among them, the soft-thresholded confidence map (Choi et al., 2020) has been shown to be effective for monocular depth estimation. Our work chose to threshold the confidence map through the soft-thresholding of (Choi et al., 2020). We found that the pretrained threshold network already provides satisfactory results, and thus it was fixed during our network training.\nThe supervised loss for semantic segmentation is defined with the standard cross-entropy H:\nL_s = \sum_{i \in \{l,r\}} H(s_i, s_i^{gt}), \quad (2)\nwhere s_i and s_i^{gt} denote the segmentation map, predicted by the decoder D_s for semantic segmentation, and the ground truth segmentation map, respectively. The supervised loss for both tasks is defined as L_S = α_d L_d + α_s L_s with loss weights α_d and α_s."
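A minimal PyTorch-style sketch of the per-view supervised losses in Eqs. (1)-(2); the tensor shapes and function names are our assumptions, and summing the returned value over the left and right views gives L_S.

```python
import torch
import torch.nn.functional as F

def supervised_losses(d, d_pgt, c, s_logits, s_gt, alpha_d, alpha_s):
    """Supervised loss alpha_d * L_d + alpha_s * L_s for one view.

    d, d_pgt, c: predicted depth, pseudo ground truth depth, and
    confidence maps of shape (B, 1, H, W); s_logits: (B, C, H, W)
    segmentation logits; s_gt: (B, H, W) integer labels.
    """
    z = c.sum().clamp(min=1.0)                 # normalization Z_i (guarded)
    l_d = (c * (d - d_pgt).abs()).sum() / z    # confidence-weighted L1, Eq. (1)
    l_s = F.cross_entropy(s_logits, s_gt)      # standard cross-entropy H, Eq. (2)
    return alpha_d * l_d + alpha_s * l_s
```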
}, { "heading": "3.2.2 CROSS-VIEW CONSISTENCY LOSS", "text": "Minimizing the supervised loss L_S for an individual view may often lead to the mismatch problem in the predicted depth and segmentation maps due to the lack of consistency constraints across the two views. We address this issue by imposing cross-view consistency across the left and right images with the pseudo depth labels. Figure 2 shows the procedure of computing the cross-view consistency losses with pseudo depth labels. The cross-view consistency loss for monocular depth estimation is defined as follows:\nL_{d,c} = \alpha_{d,lr} L_{d,lr} + \alpha_{d,l} L_{d,l} + \alpha_{d,r} L_{d,r}, \quad (3)\nL_{d,lr} = L_{reg}(c_l, d_l, G(d_r; d_l^{pgt})) + L_{reg}(c_r, G(d_l; d_r^{pgt}), d_r), \quad (4)\nL_{d,l} = L_{reg}(c_l, d_l^{pgt}, G(d_r; d_l^{pgt})) + L_{reg}(c_l, d_l, G(d_r^{pgt}; d_l^{pgt})), \quad (5)\nL_{d,r} = L_{reg}(c_r, G(d_l; d_r^{pgt}), d_r^{pgt}) + L_{reg}(c_r, G(d_l^{pgt}; d_r^{pgt}), d_r), \quad (6)\nwhere α_{d,lr}, α_{d,l}, and α_{d,r} denote the weights for each loss. G(a; b) indicates the result of warping a with a depth map b into the other view. For instance, G(d_r; d_l^{pgt}) returns the depth map warped onto the left image using d_l^{pgt}. L_{d,lr} measures the cross-view consistency between the two predicted depth maps d_l and d_r. Note that the warping function G is applied to d_r and d_l, respectively. Similar to the depth regression loss L_d, the confidence map is used together to prevent inaccurate values in the pseudo depth labels from being used. L_{d,l} denotes the cross-view consistency for (d_l^{pgt}, d_r) and (d_l, d_r^{pgt}) using the left pseudo label d_l^{pgt}. This implies that when warping d_r (or d_r^{pgt}) into the left image, the warped result should be similar to d_l^{pgt} (or d_l). L_{d,r} is defined in a similar manner.\nThe cross-view consistency can also be applied to semantic segmentation as follows:\nL_{s,c} = \alpha_{s,lr} L_{s,lr} + \alpha_{s,l} L_{s,l} + \alpha_{s,r} L_{s,r}, \quad (7)\nL_{s,lr} = c_l \cdot H(s_l, G(s_r; d_l^{pgt})) + c_r \cdot H(G(s_l; d_r^{pgt}), s_r), \quad (8)\nL_{s,l} = c_l \cdot H(s_l^{gt}, G(s_r; d_l^{pgt})) + c_l \cdot H(s_l, G(s_r^{gt}; d_l^{pgt})), \quad (9)\nL_{s,r} = c_r \cdot H(G(s_l; d_r^{pgt}), s_r^{gt}) + c_r \cdot H(G(s_l^{gt}; d_r^{pgt}), s_r), \quad (10)\nwhere '·' indicates an element-wise multiplication. The confidence maps c_l and c_r are also used to compensate for errors in the pseudo depth labels d_l^{pgt} and d_r^{pgt}. Note that for some training datasets that provide no ground truth segmentation maps, we generate pseudo ground truth segmentation maps. More details are provided in Section 3.3.\nNote that in (Chen et al., 2019), the consistency of the left and right segmentation maps is considered, e.g., by minimizing H(s_l, G(s_r; d_l)). The two segmentation maps s_l and s_r are aligned with the estimated monocular depth map d_l. However, d_l is continuously updated during the network training, and thus this may result in inaccurate alignments at an early stage, often leading to loss divergence. For these reasons, minimizing the loss H with respect to both the monocular depth and segmentation maps often becomes very challenging, and the performance gain from the consistency loss is relatively marginal. In contrast, our approach is more effective in imposing the cross-view consistency in that 1) more accurate pseudo depth labels, obtained from stereo matching networks, are used, and 2) the confidence map helps to filter out inaccurate depth values in the pseudo ground truth depth maps. Furthermore, we extend the cross-view consistency to monocular depth estimation, which is infeasible in the recent self-supervised monocular depth estimation approaches (Godard et al., 2017; 2019; Watson et al., 2019) that rely on the reconstruction loss only. A detailed ablation study will be provided to validate the effectiveness of the proposed cross-view consistency loss. The total loss is defined as\nL = L_S + L_{d,c} + L_{s,c}. \quad (11)
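To make the warping operator G and a confidence-weighted consistency term concrete, here is a PyTorch-style sketch of one term of Eq. (4). It assumes rectified stereo with disparity-valued pseudo labels; the sign convention and function names are illustrative assumptions rather than the authors' released code.

```python
import torch
import torch.nn.functional as F

def warp_to_left(right_map, disp_left):
    """G(a; b): sample a right-view map at left pixels shifted by the left
    disparity (rectified stereo), producing a map aligned with the left view.
    right_map: (B, C, H, W); disp_left: (B, 1, H, W) in pixels.
    """
    b, _, h, w = disp_left.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    xs = xs.to(disp_left) - disp_left[:, 0]       # shift by disparity
    ys = ys.to(disp_left).expand(b, -1, -1)
    # Normalize sampling coordinates to [-1, 1] for grid_sample.
    grid = torch.stack((2 * xs / (w - 1) - 1, 2 * ys / (h - 1) - 1), dim=-1)
    return F.grid_sample(right_map, grid, align_corners=True)

def consistency_term(c_left, d_left, d_right, d_left_pgt):
    """One term of Eq. (4): L_reg(c_l, d_l, G(d_r; d_l^pgt))."""
    warped = warp_to_left(d_right, d_left_pgt)
    z = c_left.sum().clamp(min=1.0)               # normalization Z_l
    return (c_left * (d_left - warped).abs()).sum() / z
```

The segmentation terms in Eqs. (8)-(10) reuse the same warp on the soft segmentation maps before the confidence-weighted cross-entropy.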
3.3 TRAINING DETAILS\n\nWhile the pseudo depth labels d_l^{pgt} and d_r^{pgt}, generated using pretrained stereo matching networks, are used to supervise the monocular depth estimation task, the semantic segmentation task requires ground truth segmentation maps. The Cityscapes dataset provides only the left ground truth segmentation map s_l^{gt}, and the KITTI dataset does not provide them. In our work, we hence generated pseudo segmentation labels for these images by using semantic segmentation methods (Cheng et al., 2020; Zhu et al., 2019). Table 1 summarizes the supervisions used for the two tasks." }, { "heading": "4 EXPERIMENTAL RESULTS", "text": "" }, { "heading": "4.1 DATASETS", "text": "We evaluated the performance on two popular datasets, KITTI (Geiger et al., 2012) and Cityscapes (Cordts et al., 2016). On KITTI, for a fair comparison, we followed the common setup of using 22,600 images for training and the rest for evaluation. The Eigen split data (697 images) (Eigen et al., 2014) was used for evaluating the monocular depth accuracy. Following existing MTL methods (Chen et al., 2019), the semantic segmentation accuracy was evaluated with the 200 annotated images provided by the KITTI benchmark. Cityscapes provides high-resolution images of urban street scenes used for segmentation and depth estimation; 2,975 and 500 images were used for training and evaluation, respectively." }, { "heading": "4.2 IMPLEMENTATION DETAILS AND EVALUATION METRIC", "text": "We first pretrained the monocular depth network E+D_d and the semantic segmentation network E+D_s independently for 30 epochs using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 10−4 and a momentum of 0.9. We then finetuned the whole network E+D_d+D_s for 20 epochs using the Adam optimizer with a learning rate of 10−5, reduced to 1/10 every 10 epochs, and a momentum of 0.9, after initializing it with the pretrained weight parameters of the monocular depth network E+D_d and the semantic segmentation network D_s. During training, we resized KITTI images to a resolution of [480, 192], and cropped Cityscapes images [2048, 768] to exclude the front part of the car and resized them to a resolution of [256, 96]. The weights for the objective function are set to α_d = 850, α_s = 2.5, α_{d,lr} = 0.5, α_{d,l} = 1, α_{d,r} = 1, α_{s,lr} = 0.5, α_{s,l} = 1.5, α_{s,r} = 1.5. The performance evaluation was conducted following common practice: 1) mean absolute relative error (Abs Rel), mean relative squared error (Sq Rel), root mean square error (RMSE), root mean square error log (RMSE log) and accuracy under threshold δ for monocular depth estimation, and 2) intersection over union (IoU) and mean intersection over union (mIoU) for semantic segmentation. Due to page limits, some results are provided in the appendix. Our code will be made publicly available later." }, { "heading": "4.3 PERFORMANCE EVALUATION", "text": "KITTI In Table 2, we provide objective evaluation results on the KITTI Eigen split (Eigen et al., 2014). The proposed method produces results that are very competitive with state-of-the-art monocular depth estimation approaches. The qualitative evaluation in Figure 3 verifies that our method yields results with sharper boundaries and better object delineation. These validate the effectiveness of the cross-view consistency based on the pseudo depth labels. In Figure 4, the proposed method produces satisfactory semantic segmentation results for the KITTI dataset, achieving mIoU = 59.93. Note that the mIoU of the MTL approach of (Chen et al., 2019) is 39.13.
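For reference, a sketch of the standard depth metrics listed in Section 4.2 (Abs Rel, Sq Rel, RMSE, RMSE log, and the δ accuracies); this is a common formulation rather than the authors' exact evaluation script, and the names are ours.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular depth metrics over valid ground truth pixels."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    mask = gt > 0                          # keep pixels with valid ground truth
    pred, gt = pred[mask], gt[mask]
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    sq_rel = np.mean((pred - gt) ** 2 / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    rmse_log = np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)
    deltas = [np.mean(ratio < 1.25 ** k) for k in (1, 2, 3)]
    return abs_rel, sq_rel, rmse, rmse_log, deltas
```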
Cityscapes In Table 3, we compare results on the Cityscapes dataset with recent multi-task learning approaches for monocular depth estimation and semantic segmentation: ‘Cross-stitch’ (Misra et al., 2016) and ‘MTAN’ (Liu et al., 2019). ‘Split (deep)’, ‘Split (wide)’, and ‘Dense’ were reproduced by using the author-provided code of ‘MTAN’ (Liu et al., 2019). Our method achieves improved quantitative results on both tasks. Figure 5 exhibits qualitative results on the Cityscapes dataset. As expected, the depth and segmentation maps generated by our method are capable of preserving object boundaries and recovering details better than the latest MTL methods (Misra et al., 2016; Liu et al., 2019)." }, { "heading": "4.4 ABLATION STUDY", "text": "We conducted ablation experiments to validate the effectiveness of the confidence map and the cross-view consistency for the KITTI dataset in Table 4 and the Cityscapes dataset in Table 5. We first compared the performance with the method (b = d_i) based on the cross-view consistency using the estimated monocular depth map, e.g., H(s_l, G(s_r; d_l)), similar to (Chen et al., 2019). Under the same setting, our method (b = d_i^{pgt}) tends to achieve higher mIoU than the method (b = d_i). Additionally, while the method (b = d_i) often degrades the monocular depth accuracy, our method (b = d_i^{pgt}) does not suffer from such an issue, achieving improved monocular depth accuracy. Such a performance gain becomes even more apparent for both tasks when using the confidence map. Note that it is infeasible to leverage the confidence map for the method (b = d_i), in which the estimated monocular depth map is constantly updated during the network training. When including the cross-view consistency loss L_{d,c} for monocular depth estimation, an additional performance gain was observed, validating its effectiveness on the monocular depth estimation. Though the segmentation accuracy (mIoU) slightly worsened in some cases, the drop is relatively marginal. This may be due to our architecture, where the two tasks share the encoder, and a more advanced MTL architecture, e.g. using task-attention modules (Liu et al., 2019), would lead to a performance improvement. We reserve this as future work." }, { "heading": "5 CONCLUSION", "text": "This paper has presented a new MTL architecture designed for monocular depth estimation and semantic segmentation tasks. The cross-view consistency loss based on the pseudo depth labels, generated using pretrained stereo matching methods, was imposed on the prediction results of the two views for resolving the mismatch problem. An intensive ablation study showed that it leads to a substantial performance gain in both tasks, especially achieving the best accuracy in monocular depth estimation. Our task-specific losses can be used complementarily with existing MTL architectures, e.g. based on task-specific attention modules (Liu et al., 2019). An intelligent combination with these approaches is expected to further improve the performance. Additionally, how to integrate recent architectures (Chen et al., 2018; Takikawa et al., 2019) designed for semantic segmentation into the MTL network would be an interesting research direction." }, { "heading": "A APPENDIX", "text": "" }, { "heading": "A.1 MORE COMPREHENSIVE EVALUATION RESULTS FOR KITTI", "text": "We provide more comprehensive results for the KITTI dataset. Figure 6 shows the qualitative evaluation with existing monocular depth estimation methods on the Eigen split of the KITTI dataset. Figure 7 shows the semantic segmentation prediction results on the KITTI dataset. We also evaluated the performance with the improved ground truth depth maps made available by (Uhrig et al., 2017) for the KITTI dataset in Table 6.
Compared to existing methods, our approach achieves state-of-the-art monocular depth estimation.\nA.2 MORE COMPREHENSIVE EVALUATION RESULTS FOR CITYSCAPES\nWe provide more comprehensive results for the Cityscapes dataset. Figures 8 and 9 show the qualitative evaluation results with existing MTL methods (Misra et al., 2016; Liu et al., 2019) for monocular depth estimation and semantic segmentation on the Cityscapes dataset. These results also support the effectiveness of our method.\nA.3 QUALITATIVE EVALUATION FOR ABLATION STUDY\nFigures 10 and 11 show the qualitative evaluation for the ablation study on the KITTI and Cityscapes datasets, respectively. It was found that our final results are much improved compared to those of the ‘Baseline’ model, validating the effectiveness of the confidence map and cross-view consistency loss." } ]
2020
null
SP:8fb2da71029fc4096f279c5873a2c55e8afaa947
[ "This paper empirically investigates the effect of the trace of the Fisher Information Matrix (FIM) early in training has on the generalization of SGD. Authors demonstrate that the effect of optimally chosen learning rate and batch size for SGD can be modeled as an implicit penalty on the trace of FIM. They argue that explicitly penalizing the trace of FIM discourages memorizing noisy labels, thus leading to better generalization. Furthermore, they experimentally show that the early low value of the trace of FIM may bias the optimization towards a flat optimum which has been observed to correlate well with good generalization." ]
The early phase of training has been shown to be important in two ways for deep neural networks. First, the degree of regularization in this phase significantly impacts the final generalization. Second, it is accompanied by a rapid change in the local loss curvature influenced by regularization choices. Connecting these two findings, we show that stochastic gradient descent (SGD) implicitly penalizes the trace of the Fisher Information Matrix (FIM) from the beginning of training. We argue it is an implicit regularizer in SGD by showing that explicitly penalizing the trace of the FIM can significantly improve generalization. We further show that the early value of the trace of the FIM correlates strongly with the final generalization. We highlight that in the absence of implicit or explicit regularization, the trace of the FIM can increase to a large value early in training, to which we refer as catastrophic Fisher explosion. Finally, to gain insight into the regularization effect of penalizing the trace of the FIM, we show that 1) it limits memorization by reducing the learning speed of examples with noisy labels more than that of clean examples, and 2) trajectories with a low initial trace of the FIM end in flat minima, which are commonly associated with good generalization.
[]
[ { "authors": [ "Alessandro Achille", "Matteo Rovere", "Stefano Soatto" ], "title": "Critical learning periods in deep networks", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Milad Alizadeh", "Arash Behboodi", "Mart van Baalen", "Christos Louizos", "Tijmen Blankevoort", "Max Welling" ], "title": "Gradient `1 regularization for quantization robustness", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Devansh Arpit", "Víctor Campos", "Yoshua Bengio" ], "title": "How to initialize your network? robust initialization for weightnorm & resnets", "venue": "In Advances in Neural Information Processing Systems,", "year": 2019 }, { "authors": [ "Satrajit Chatterjee" ], "title": "Coherent gradients: An approach to understanding generalization in gradient descent-based optimization", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Pratik Chaudhari", "Anna Choromanska", "Stefano Soatto", "Yann LeCun", "Carlo Baldassi", "Christian Borgs", "Jennifer Chayes", "Levent Sagun", "Riccardo Zecchina" ], "title": "Entropy-sgd: Biasing gradient descent into wide valleys", "venue": "Journal of Statistical Mechanics: Theory and Experiment,", "year": 2019 }, { "authors": [ "Soham De", "Samuel L. Smith" ], "title": "Batch normalization biases deep residual networks towards shallow paths, 2020", "venue": null, "year": 2020 }, { "authors": [ "J. Deng", "W. Dong", "R. Socher", "L.-J. Li", "K. Li", "L. Fei-Fei" ], "title": "ImageNet: A Large-Scale Hierarchical Image Database", "venue": "In CVPR09,", "year": 2009 }, { "authors": [ "Laurent Dinh", "Razvan Pascanu", "Samy Bengio", "Yoshua Bengio" ], "title": "Sharp Minima Can Generalize For Deep Nets", "venue": "In Proceedings of the 34th International Conference on Machine Learning,", "year": 2017 }, { "authors": [ "H. Drucker", "Y. Le Cun" ], "title": "Improving generalization performance using double backpropagation", "venue": "IEEE Transactionsf on Neural Networks,", "year": 1992 }, { "authors": [ "Stanislav Fort", "Paweł Krzysztof Nowak", "Stanislaw Jastrzebski", "Srini Narayanan" ], "title": "Stiffness: A new perspective on generalization in neural networks, 2020", "venue": null, "year": 2020 }, { "authors": [ "Jonathan Frankle", "David J. Schwab", "Ari S. Morcos" ], "title": "The early phase of neural network training", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Aditya Sharad Golatkar", "Alessandro Achille", "Stefano Soatto" ], "title": "Time matters in regularizing deep networks: Weight decay and data augmentation affect early learning dynamics, matter little near convergence", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Ishaan Gulrajani", "Faruk Ahmed", "Martin Arjovsky", "Vincent Dumoulin", "Aaron C Courville" ], "title": "Improved training of wasserstein gans", "venue": "In Advances in Neural Information Processing Systems", "year": 2017 }, { "authors": [ "Guy Gur-Ari", "Daniel A. 
Roberts", "Ethan Dyer" ], "title": "Gradient descent happens in a tiny subspace, 2018", "venue": null, "year": 2018 }, { "authors": [ "Haowei He", "Gao Huang", "Yang Yuan" ], "title": "Asymmetric valleys: Beyond sharp and flat local minima", "venue": "In Advances in Neural Information Processing Systems", "year": 2019 }, { "authors": [ "Kaiming He", "Xiangyu Zhang", "Shaoqing Ren", "Jian Sun" ], "title": "Deep Residual Learning for Image Recognition", "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR),", "year": 2016 }, { "authors": [ "Wei Hu", "Lechao Xiao", "Ben Adlam", "Jeffrey Pennington" ], "title": "The surprising simplicity of the early-time learning dynamics of neural networks, 2020", "venue": null, "year": 2020 }, { "authors": [ "G. Huang", "Z. Liu", "L. Van Der Maaten", "K.Q. Weinberger" ], "title": "Densely connected convolutional networks", "venue": "IEEE Conference on Computer Vision and Pattern Recognition", "year": 2017 }, { "authors": [ "Michael F Hutchinson" ], "title": "A stochastic estimator of the trace of the influence matrix for laplacian smoothing splines", "venue": "Communications in Statistics-Simulation and Computation,", "year": 1990 }, { "authors": [ "Arthur Jacot", "Franck Gabriel", "Clement Hongler" ], "title": "Neural tangent kernel: Convergence and generalization in neural networks", "venue": "Advances in Neural Information Processing Systems", "year": 2018 }, { "authors": [ "Stanislaw Jastrzebski", "Zachary Kenton", "Devansh Arpit", "Nicolas Ballas", "Asja Fischer", "Yoshua Bengio", "Amos J. Storkey" ], "title": "Three Factors Influencing Minima in SGD", "venue": null, "year": 2017 }, { "authors": [ "Stanislaw Jastrzebski", "Maciej Szymczak", "Stanislav Fort", "Devansh Arpit", "Jacek Tabor", "Kyunghyun Cho", "Krzysztof Geras" ], "title": "The break-even point on optimization trajectories of deep neural networks", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Stanisław Jastrzębski", "Zachary Kenton", "Nicolas Ballas", "Asja Fischer", "Yoshua Bengio", "Amost Storkey" ], "title": "On the relation between the sharpest directions of DNN loss and the SGD step length", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Lu Jiang", "Di Huang", "Mason Liu", "Weilong Yang" ], "title": "Beyond synthetic noise: Deep learning on controlled noisy labels", "venue": "In ICML,", "year": 2020 }, { "authors": [ "Yiding Jiang", "Behnam Neyshabur", "Dilip Krishnan", "Hossein Mobahi", "Samy Bengio" ], "title": "Fantastic Generalization Measures and Where to Find Them", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Nitish Shirish Keskar", "Dheevatsa Mudigere", "Jorge Nocedal", "Mikhail Smelyanskiy", "Ping Tak Peter Tang" ], "title": "On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima", "venue": "In 5th International Conference on Learning Representations,", "year": 2017 }, { "authors": [ "Alex Krizhevsky" ], "title": "Learning multiple layers of features from tiny images", "venue": "Technical report,", "year": 2009 }, { "authors": [ "Y. Le", "X. 
Yang" ], "title": "Tiny imagenet visual recognition challenge", "venue": null, "year": 2015 }, { "authors": [ "Guillaume Leclerc", "Aleksander Madry" ], "title": "The two regimes of deep network training, 2020", "venue": null, "year": 2020 }, { "authors": [ "Aitor Lewkowycz", "Yasaman Bahri", "Ethan Dyer", "Jascha Sohl-Dickstein", "Guy Gur-Ari" ], "title": "The large learning rate phase of deep learning: the catapult mechanism, 2020", "venue": null, "year": 2020 }, { "authors": [ "Junnan Li", "Richard Socher", "Steven C.H. Hoi" ], "title": "Dividemix: Learning with noisy labels as semisupervised learning", "venue": "In International Conference on Learning Representations,", "year": 2020 }, { "authors": [ "Wesley J. Maddox", "Gregory Benton", "Andrew Gordon Wilson" ], "title": "Rethinking parameter counting in deep models: Effective dimensionality revisited, 2020", "venue": null, "year": 2020 }, { "authors": [ "James Martens" ], "title": "New insights and perspectives on the natural gradient method, 2020", "venue": null, "year": 2020 }, { "authors": [ "Behnam Neyshabur" ], "title": "Implicit regularization in deep learning", "venue": null, "year": 2017 }, { "authors": [ "Tomaso Poggio", "Kenji Kawaguchi", "Qianli Liao", "Brando Miranda", "Lorenzo Rosasco", "Xavier Boix", "Jack Hidary", "Hrushikesh Mhaskar" ], "title": "Theory of deep learning iii: explaining the non-overfitting puzzle, 2018", "venue": null, "year": 2018 }, { "authors": [ "Salah Rifai", "Pascal Vincent", "Xavier Muller", "Xavier Glorot", "Yoshua Bengio" ], "title": "Contractive autoencoders: Explicit invariance during feature extraction", "venue": "In ICML,", "year": 2011 }, { "authors": [ "Levent Sagun", "Utku Evci", "V. Ugur Guney", "Yann Dauphin", "Leon Bottou" ], "title": "Empirical analysis of the hessian of over-parametrized neural networks, 2018", "venue": null, "year": 2018 }, { "authors": [ "Karen Simonyan", "Andrew Zisserman" ], "title": "Very deep convolutional networks for large-scale image recognition", "venue": "In International Conference on Learning Representations,", "year": 2015 }, { "authors": [ "Samuel L. Smith", "Quoc V. Le" ], "title": "A bayesian perspective on generalization and stochastic gradient descent", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Jiaming Song", "Lunjia Hu", "Michael Auli", "Yann Dauphin", "Tengyu Ma" ], "title": "Robust and on-the-fly dataset denoising for image classification, 2020", "venue": null, "year": 2020 }, { "authors": [ "Daniel Soudry", "Elad Hoffer", "Mor Shpigel Nacson", "Suriya Gunasekar", "Nathan Srebro" ], "title": "The implicit bias of gradient descent on separable data", "venue": "The Journal of Machine Learning Research,", "year": 2018 }, { "authors": [ "Valentin Thomas", "Fabian Pedregosa", "Bart Merriënboer", "Pierre-Antoine Manzagol", "Yoshua Bengio", "Nicolas Le Roux" ], "title": "On the interplay between noise and curvature and its effect on optimization and generalization", "venue": "In International Conference on Artificial Intelligence and Statistics. 
PMLR,", "year": 2020 }, { "authors": [ "Yusuke Tsuzuku", "Issei Sato", "Masashi Sugiyama" ], "title": "Normalized flat minima: Exploring scale invariant definition of flat minima for neural networks using pac-bayesian analysis, 2019", "venue": null, "year": 2019 }, { "authors": [ "Dániel Varga", "Adrián Csiszárik", "Zsolt Zombori" ], "title": "Gradient regularization improves accuracy of discriminative models, 2018", "venue": null, "year": 2018 }, { "authors": [ "Wei Wen", "Yandan Wang", "Feng Yan", "Cong Xu", "Chunpeng Wu", "Yiran Chen", "Hai Li" ], "title": "Smoothout: Smoothing out sharp minima to improve generalization in deep learning, 2018", "venue": null, "year": 2018 }, { "authors": [ "Zhiqin John Xu" ], "title": "Understanding training and generalization in deep learning by fourier analysis, 2018", "venue": null, "year": 2018 }, { "authors": [ "Yuichi Yoshida", "Takeru Miyato" ], "title": "Spectral norm regularization for improving the generalizability of deep learning, 2017", "venue": null, "year": 2017 }, { "authors": [ "Sergey Zagoruyko", "Nikos Komodakis" ], "title": "Wide residual networks", "venue": "In Proceedings of the British Machine Vision Conference (BMVC),", "year": 2016 }, { "authors": [ "Chiyuan Zhang", "Samy Bengio", "Moritz Hardt", "Benjamin Recht", "Oriol Vinyals" ], "title": "Understanding deep learning requires rethinking generalization", "venue": "In International Conference on Learning Representations,", "year": 2016 }, { "authors": [ "Hongyi Zhang", "Moustapha Cisse", "Yann N. Dauphin", "David Lopez-Paz" ], "title": "mixup: Beyond empirical risk minimization", "venue": "In International Conference on Learning Representations,", "year": 2018 }, { "authors": [ "Hongyi Zhang", "Yann N. Dauphin", "Tengyu Ma" ], "title": "Residual learning without normalization via better initialization", "venue": "In International Conference on Learning Representations,", "year": 2019 }, { "authors": [ "Zhang" ], "title": "For each hyperparameter setting, we run two experiments with different random seeds due to the computational overhead. We compute Tr(F) using 2500 samples (similarly to ?). CIFAR-10: We used random flipping as data augmentation. In the experiments with variation in learning rates, we use a batch size of 256", "venue": null, "year": 2019 }, { "authors": [ "He" ], "title": "2015), we train for 300 epochs and decay the learning rate by a factor of 0.1 after epochs 150 and 225. We remove Batch Normalization layers. To ensure stable training we use the SkipInit initialization (De & Smith, 2020). VGG-11 on the CIFAR-100 dataset We adapt the VGG-11 model (Simonyan", "venue": null, "year": 2015 } ]
[ { "heading": "1 INTRODUCTION", "text": "Implicit regularization in gradient-based training of deep neural networks (DNNs) remains relatively poorly understood, despite being considered a critical component in their empirical success (Neyshabur, 2017; Zhang et al., 2016; Jiang et al., 2020b). Recent work suggests that the early phase of training of DNNs might hold the key to understanding these implicit regularization effects. Golatkar et al. (2019); Keskar et al. (2017); Sagun et al. (2018); Achille et al. (2019) show that by introducing regularization later, a drop in performance due to lack of regularization in this phase is hard to recover from, while on the other hand, removing regularization after the early phase has a relatively small effect on the final performance.\nOther works show that the early phase of training also has a dramatic effect on the trajectory in terms of properties such as the local curvature of the loss surface or the gradient norm (Jastrzebski et al., 2020; Frankle et al., 2020). In particular, Achille et al. (2019); Jastrzębski et al. (2019); Golatkar et al. (2019); Lewkowycz et al. (2020); Leclerc & Madry (2020) independently suggest that rapid changes in the local curvature of the loss surface in the early phase critically affects the final generalization. Closely related to our work, Lewkowycz et al. (2020); Jastrzębski et al. (2019) show that using a large learning rate has a dramatic effect on the early optimization trajectory in terms of the loss curvature. These observations lead to a question: what is the mechanism by which regularization in the early phase impacts the optimization trajectory and generalization? We investigate this question mainly through the lens of the Fisher Information Matrix (FIM), a matrix that can be seen as approximating the local curvature of the loss surface in DNNs (Martens, 2020; Thomas et al., 2020).\nOur main contribution is to show that the implicit regularization effect of using a large learning rate or a small batch size can be modeled as an implicit penalization of the trace of the FIM (Tr(F)) from the very beginning of training. We demonstrate on image classification tasks that the value of Tr(F) early in training correlates with the final generalization performance across settings with different learning rates or batch sizes. We then show evidence that explicitly regularizing Tr(F) (which we call Fisher penalty) significantly improves generalization in training with a sub-optimal learning rate. On the other hand, growth of Tr(F) early in training, which may occur in practice when using\na relatively small learning rate, coincides with poor generalization. We call this phenomenon the catastrophic Fisher explosion. Figure 1 illustrates this effect on the TinyImageNet dataset (Le & Yang, 2015).\nOur second contribution is an analysis of why implicitly or explicitly regularizing Tr(F) impacts generalization. We reveal two effects of implicit or explicit regularization of Tr(F): (1) penalizing Tr(F) discourages memorizing noisy labels, (2) small Tr(F) in the early phase of training biases optimization towards a flat minimum, as characterized by the trace of the Hessian." }, { "heading": "2 IMPLICIT AND EXPLICIT REGULARIZATION OF THE FIM", "text": "Fisher Information Matrix Consider a probabilistic classification model pθ(y|x), where θ denotes its parameters. Let `(x, y;θ) be the cross-entropy loss function calculated for input x and label y. 
Let g(x, y; θ) = ∂ℓ(x, y; θ)/∂θ denote the gradient computed for an example (x, y). The central object that we study is the Fisher Information Matrix F defined as
F(θ) = E_{x∼X, ŷ∼pθ(y|x)}[ g(x, ŷ) g(x, ŷ)^T ], (1)
where the expectation is often approximated using the empirical distribution X̂ induced by the training set. We denote its trace by Tr(F). Later, we also look into the Hessian H(θ) = ∂²ℓ(x, y; θ)/∂θ². We denote its trace by Tr(H).
The FIM can be seen as an approximation to the Hessian (Martens, 2020). In particular, as p(y|x; θ) → p̂(y|x), where p̂(y|x) is the empirical label distribution, the FIM converges to the Hessian. Thomas et al. (2020) showed on image classification tasks that Tr(H) ≈ Tr(F) along the optimization trajectory, which we also evidence in Appendix F.
Fisher Penalty Several studies have presented evidence that the early phase has a drastic effect on the trajectory in terms of the local curvature of the loss surface (Achille et al., 2019; Jastrzębski et al., 2019; Gur-Ari et al., 2018; Lewkowycz et al., 2020; Leclerc & Madry, 2020). In particular, Lewkowycz et al. (2020); Jastrzębski et al. (2019) show that using a large learning rate in stochastic gradient descent biases training towards low curvature regions of the loss surface very early in training. For example, using a large learning rate in SGD was shown to result in a rapid decay of Tr(H) along the optimization trajectory (Jastrzębski et al., 2019).
Our main contribution is to propose and investigate a specific mechanism by which using a large learning rate or a small batch size implicitly influences final generalization. Our first insight is to shift the focus from studying the Hessian to studying properties of the FIM. Concretely, we hypothesize that using a large learning rate or a small batch size improves generalization by implicitly penalizing Tr(F) from the very beginning of training.
The benefit of studying the FIM is that it can be directly and efficiently manipulated during training. In order to study the effect of implicit regularization of Tr(F), we introduce a regularizer, which we refer to as Fisher penalty, explicitly penalizing Tr(F). We derive this regularizer in the following way. First, we note that Tr(F) can be written as Tr(F) = E_{x∼X, ŷ∼pθ(y|x)}[ ‖∂ℓ(x, ŷ)/∂θ‖₂² ].
To regularize Tr(F), we add the following term to the loss function:
ℓ′(x_{1:B}, y_{1:B}; θ) = (1/B) Σ_{i=1}^{B} ℓ(x_i, y_i; θ) + α ‖ (1/B) Σ_{i=1}^{B} g(x_i, ŷ_i) ‖₂², (2)
where (x_{1:B}, y_{1:B}) is a mini-batch, ŷ_i is sampled from pθ(y|x_i), and α is a hyperparameter. We refer to this regularizer as Fisher penalty. The formulation is based on the empirical observation that ‖(1/B) Σ_{i=1}^{B} g(x_i, ŷ_i)‖₂² and Tr(F) correlate well during training. Crucially, this allows us to reduce the added computational cost of Fisher penalty to that of a single additional backpropagation call (Drucker & Le Cun, 1992). Finally, we compute the gradient of the second term only every 10 optimization steps, and in a given iteration use the most recently computed gradient. We discuss these approximations in detail in Appendix C.
Catastrophic Fisher Explosion To illustrate the concepts mentioned in this section, we train a Wide ResNet model (depth 44, width 3) (Zagoruyko & Komodakis, 2016) on the TinyImageNet dataset with SGD and two different learning rates. We illustrate in Figure 1 that the small learning rate leads to dramatic overfitting, which coincides with a sharp increase in Tr(F) in the early phase of training.
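To make Equation 2 concrete, the following is a minimal PyTorch-style sketch of a single training step with the Fisher penalty. It is an illustration under our own naming conventions (fisher_penalty_step, alpha, and the double forward pass are ours), not the exact implementation used in the experiments; for clarity it also recomputes the penalty at every step instead of every 10 steps.

import torch
import torch.nn.functional as F

def fisher_penalty_step(model, optimizer, x, y, alpha):
    # Cross-entropy term of Equation 2 (reduction defaults to the batch mean).
    logits = model(x)
    loss = F.cross_entropy(logits, y)

    # Sample labels from the model's own predictive distribution p_theta(y|x).
    with torch.no_grad():
        y_hat = torch.distributions.Categorical(logits=logits).sample()

    # Mean mini-batch gradient under the sampled labels; create_graph=True makes
    # the squared norm of this gradient itself differentiable (double backprop).
    params = [p for p in model.parameters() if p.requires_grad]
    sampled_loss = F.cross_entropy(model(x), y_hat)
    grads = torch.autograd.grad(sampled_loss, params, create_graph=True)
    penalty = sum((g ** 2).sum() for g in grads)  # ||(1/B) sum_i g(x_i, y_hat_i)||^2

    optimizer.zero_grad()
    (loss + alpha * penalty).backward()
    optimizer.step()
    return loss.item(), penalty.item()

Because F.cross_entropy averages over the batch by default, grads is exactly the mean gradient inside the norm in Equation 2, and backpropagating through penalty is the single additional backpropagation call mentioned above.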
We also show in Appendix D that the effects in Figure 1 cannot be explained by the difference in learning speed between runs with smaller and larger learning rates. We call this phenomenon the catastrophic Fisher explosion.
3 EARLY-PHASE Tr(F) CORRELATES WITH FINAL GENERALIZATION
Using a large learning rate (η) or a small batch size (S) in SGD steers optimization to a lower curvature region of the loss surface. However, it remains a hotly debated topic whether this explains their strong regularization effect (Dinh et al., 2017; He et al., 2019; Maddox et al., 2020; Tsuzuku et al., 2019; Yoshida & Miyato, 2017). We begin by studying the connection between Tr(F) and generalization in experiments across which we vary η or S in SGD.
Experimental setup We run our experiments in two settings: (1) ResNet-18 (He et al., 2015) with Fixup initialization (Zhang et al., 2019) trained on the ImageNet dataset (Deng et al., 2009), and (2) ResNet-26 initialized as in Arpit et al. (2019) trained on the CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009). We train each architecture using SGD, with various values of η, S, and the random seed.
We define Tr(Fi) as Tr(F) during the initial phase of training. The early-phase Tr(F) is measured when the training loss crosses a task-specific threshold ε. For ImageNet, we use learning rates 0.001, 0.01, 0.1, and ε = 3.5. For CIFAR-10, we use learning rates 0.007, 0.01, 0.05, and ε = 1.2. For CIFAR-100, we use learning rates 0.001, 0.005, 0.01, and ε = 3.5. In all cases, the training loss reaches ε between 2 and 7 epochs across different hyper-parameter settings. We repeat similar experiments for different batch sizes in Appendix A.1. The remaining training details can be found in Appendix G.1.
Results Figure 2 shows the correlation between Tr(Fi) and test accuracy across runs with different learning rates. We show results for CIFAR-10 and CIFAR-100 when varying the batch size in Figure 7 in the Appendix. We find that Tr(Fi) correlates well with the final generalization in our setting, which provides initial evidence for the importance of Tr(F). It also serves as a stepping stone towards developing a more granular understanding of the role of implicit regularization of Tr(F) in the following sections." }, { "heading": "4 FISHER PENALTY", "text": "To better understand the significance of the identified correlation between Tr(Fi) and generalization, we now run experiments in which we directly penalize Tr(F). We focus our attention on the identified effect of a high learning rate on Tr(F).
Experimental setting We use a similar setting as in the previous section, but we include larger models. We run experiments using Wide ResNet (Zagoruyko & Komodakis, 2016) (depth 44 and width 3, with or without BN layers), SimpleCNN (without BN layers), DenseNet (L=40, K=12) (Huang et al., 2017) and VGG-11 (Simonyan & Zisserman, 2015). We train these models on either the CIFAR-10 or the CIFAR-100 dataset. Due to the larger computational cost, we replace ImageNet with the TinyImageNet dataset (Le & Yang, 2015) in these experiments.
To investigate if the correlation of Tr(Fi) and final generalization holds more generally, we apply Fisher penalty in two settings. First, we use a learning rate 10-30x smaller than the optimal one, which both incurs up to a 9% degradation in test accuracy and results in a large value of Tr(Fi). We also remove data augmentation from the CIFAR-10 and the CIFAR-100 datasets to ensure that training with a small learning rate does not result in underfitting.
In the second setting, we add Fisher penalty to training with a learning rate optimized using grid search (η∗) and train with data augmentation.
Fisher penalty penalizes the gradient norm computed using labels sampled from pθ(y|x). We hypothesize that a similar, but weaker, effect can be introduced by other gradient norm regularizers. First, we compare FP to penalizing the input gradient norm ‖g_x‖, where g_x = ∂ℓ(x, y)/∂x, which we denote by GPx (Varga et al., 2018; Rifai et al., 2011; Drucker & Le Cun, 1992). We also experiment with penalizing the vanilla mini-batch gradient (Gulrajani et al., 2017), which we denote by GP. Finally, we experiment with penalizing the mini-batch gradient computed with random labels, ‖g_r‖, where g_r = ∂ℓ(x, ŷ)/∂θ and ŷ is sampled from a uniform distribution over the label set (GPr). We are not aware of any prior work using GP or GPr in supervised training, with the exception of Alizadeh et al. (2020), where the authors penalized the ℓ1 norm of gradients to compress the network towards the end of training.
We tune the hyperparameters on the validation set. More specifically, for α we test 10 different values spaced uniformly between 10^{-1}×v and 10^{1}×v on a logarithmic scale, with v ∈ R+. For TinyImageNet we test 5 alternatives instead. To pick the optimal learning rate, we evaluate 5 values spaced equally on a logarithmic scale. We include the remaining experimental details in Appendix G.2.
Fisher Penalty improves generalization Table 1 summarizes the results of the main experiment. First, we observe that a suboptimal learning rate (10-30x lower than the optimal) leads to dramatic overfitting. We observe a degradation of up to 9% in test accuracy, while achieving perfect training accuracy (see Table 6 in the Appendix).
Fisher penalty closes the gap in test accuracy between the small and the optimal learning rate, and even achieves better performance than the optimal learning rate. Similar performance was observed when minimizing ‖g_r‖. We will come back to this observation in the next section. GP and GPx reduce the early value of Tr(F) (see Table 4 in the Appendix). They, however, generally perform worse than FP or GPr and do not fully close the gap between the small and the optimal learning rate. We hypothesize that they improve generalization by a similar but less direct mechanism than FP and GPr.
In the second experimental setting, we apply FP to a network trained with the optimal learning rate η∗. According to Table 2, Fisher Penalty improves generalization in 4 out of 5 settings. The gap between the baseline and FP is small in 3 out of 5 settings (below 1%), which is natural given that we already regularize training implicitly by using the optimal η and data augmentation.
Geometry and generalization in the early phase of training Here, we investigate the temporal aspect of Fisher Penalty on CIFAR-10 and CIFAR-100. In particular, we study whether early penalization of Tr(F) matters for final generalization.
First, we observe that all gradient-norm regularizers reduce the early value of Tr(F) closer to the Tr(F) achieved when training with the optimal learning rate η∗. We show this effect with Wide ResNet and VGG-11 on CIFAR-100 in Figure 3, and for other experimental settings in the Appendix.
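For concreteness, the baselines above differ only in which gradient is penalized. Below is a minimal sketch in the same PyTorch style as before; the function name and the use of squared norms throughout are our conventions, not necessarily those of the original implementation.

import torch
import torch.nn.functional as F

def baseline_penalties(model, x, y, num_classes):
    params = [p for p in model.parameters() if p.requires_grad]

    # GP: squared norm of the mini-batch gradient under the true labels y.
    loss = F.cross_entropy(model(x), y)
    gp = sum((g ** 2).sum() for g in torch.autograd.grad(loss, params, create_graph=True))

    # GPr: as GP, but labels are drawn uniformly at random over the label set.
    y_rand = torch.randint(0, num_classes, y.shape, device=y.device)
    loss_r = F.cross_entropy(model(x), y_rand)
    gpr = sum((g ** 2).sum() for g in torch.autograd.grad(loss_r, params, create_graph=True))

    # GPx: squared norm of the input gradient of the mini-batch loss.
    x_in = x.detach().requires_grad_(True)
    loss_x = F.cross_entropy(model(x_in), y)
    (g_x,) = torch.autograd.grad(loss_x, x_in, create_graph=True)
    gpx = (g_x ** 2).sum()

    return gp, gpr, gpx

Each returned term would be scaled by its own coefficient α and added to the training loss, exactly as the Fisher penalty term in the earlier sketch; only FP samples ŷ from pθ(y|x) itself.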
We also tabulate the maximum achieved values of Tr(F) over the optimization trajectory in Appendix A.2.
[Figure 4 appears here: four panels plotting final validation accuracy (%) against the Fisher Penalty onset epoch, for (a) Wide ResNet on CIFAR-100 (w/o aug.), (b) VGG-11 on CIFAR-100 (w/o aug.), (c) SimpleCNN on CIFAR-10 (w/o aug.), and (d) DenseNet on CIFAR-100 (w/o aug.).]
Figure 4: Each subplot summarizes an experiment in which we apply Fisher Penalty starting from a certain epoch (x axis) and measure the final test accuracy (y axis). Fisher Penalty has to be applied from the beginning of training to close the generalization gap to the optimal learning rate (c.f. the red horizontal line to the blue horizontal line).
To test the importance of explicitly penalizing Tr(F) early in training, we start applying it only after a certain number of epochs E ∈ {1, 2, 4, 8, 16, 32, 64, 128}. We use the best hyperparameter set from the previous experiments. Figure 4 summarizes the results. For both datasets, we observe a consistent pattern. When FP is applied starting from a later epoch, final generalization is significantly worse, and the generalization gap arising from a suboptimal learning rate is not closed." }, { "heading": "4.1 FISHER PENALTY REDUCES MEMORIZATION", "text": "It is not self-evident how regularizing Tr(F) influences generalization. In this section, we provide evidence that regularizing Tr(F) slows down learning on data with noisy labels. To study this, we replace the labels of a subset of the examples in the CIFAR-100 dataset (25% or 50% of the training set) with labels sampled uniformly. While label noise in real datasets is not uniform, methods that perform well with uniform label noise are generally more robust to label noise in real datasets (Jiang et al., 2020a). We also know that datasets such as CIFAR-100 contain many labeling errors (Song et al., 2020). As such, examining how Tr(F) reduces memorization of synthetic label noise provides an insight into how it improves generalization in our prior experiments.
We expect FP to reduce memorization. When the predictive distribution pθ(y|x) and the true label distribution p∗(y|x) are both uniform, Tr(F) of a specific example x is equivalent to the squared loss gradient norm of the same example. The proposed Fisher penalty thus minimizes the contribution of the loss gradient from the training examples whose labels were sampled uniformly. In other words, the Fisher penalty implicitly suppresses learning noisy examples, under the assumption that the label distributions of clean examples are not uniform.
To study whether the above happens in practice, we compare FP to GPx, GPr, and mixup (Zhang et al., 2018). While mixup is not the state-of-the-art approach to learning with noisy labels, it is competitive among approaches that require neither additional data nor multiple stages of training. In particular, it is a component in several state-of-the-art approaches (Li et al., 2020; Song et al., 2020). For gradient norm based regularizers, we evaluate 6 different hyperparameter values spaced uniformly on a logarithmic scale, and for mixup we evaluate β ∈ {0.2, 0.4, 0.8, 1.6, 3.2, 6.4}. We experiment with the Wide ResNet and VGG-11 models.
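The corruption protocol is simple enough to state in code. Below is a sketch of how we read it; the function and variable names are ours, and the detail of whether a corrupted example may keep its original label by chance is our assumption, since sampling uniformly over the label set permits it.

import numpy as np

def corrupt_labels(labels, noise_fraction, num_classes, seed=0):
    # Replace a random fraction of labels with labels sampled uniformly over
    # the label set (a corrupted example may keep its label by chance).
    rng = np.random.RandomState(seed)
    labels = np.asarray(labels).copy()
    n_noisy = int(noise_fraction * len(labels))
    noisy_idx = rng.choice(len(labels), size=n_noisy, replace=False)
    labels[noisy_idx] = rng.randint(0, num_classes, size=n_noisy)
    return labels, noisy_idx

# E.g. the 25% CIFAR-100 setting; keeping noisy_idx is what later allows
# comparing gradient norms on noisy vs. clean examples (Figure 5):
# train_labels, noisy_idx = corrupt_labels(train_labels, 0.25, num_classes=100)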
We describe the remaining experimental details in Appendix G.3.
Results We begin by studying the learning dynamics on data with noisy labels through the lens of training accuracy and the mini-batch gradient norm. We show the results for VGG-11 and ResNet-50 in Figure 5 and Figure 9 in the Appendix. We observe that FP limits the ability of the model to memorize data more strongly than it limits its ability to learn from clean data. We can further confirm our interpretation of the effect Tr(F) has on training by studying the gradient norms. As visible in Figure 5, the gradient norm on examples with noisy labels is larger than on clean examples, and the ratio is closer to 1 when large regularization is applied.
We report test accuracy (at the best validation point) in Table 3. We observe that FP reduces memorization competitively with mixup. Furthermore, FP performs similarly to GPr, which agrees with our interpretation of why FP limits learning on examples with noisy labels.
5 EARLY Tr(F) INFLUENCES FINAL CURVATURE
To provide further insight into why it is important to regularize Tr(F) during the early phase of training, we establish a connection between the early phase of training and the wide minima hypothesis (Hochreiter & Schmidhuber, 1997; Keskar et al., 2017), which states that flat minima typically correspond to better generalization. Here, we use Tr(H) as a measure of flatness.
Experimental setting We investigate how likely it is for an optimization trajectory to end up in a wide minimum in two scenarios: 1) when optimization exhibits small Tr(F) early on, and 2) when optimization exhibits large Tr(F) early on. Specifically, we train two separate ResNet-26 models for 20 epochs using high and low regularization configurations. At epoch 20 we record Tr(F) for each model. We then use these two models as initialization for 8 separate models each, and continue training using the low regularization configuration with different random seeds. The motivation behind this experiment is to show that the degree of regularization in the early phase biases the model towards minima with certain flatness (Tr(H)) even though no further high regularization configurations are used during the rest of the training. For all these runs, we record the best test accuracy along the optimization trajectory, along with Tr(H) at the point corresponding to the best test accuracy. We describe the remaining experimental details in Appendix G.4.
Results We present the results in Figure 6 for the CIFAR-100 dataset, and for CIFAR-10 in Appendix A.4. A training run that shows a lower Tr(F) during the early phase is more likely to end up in a wider minimum, as opposed to one that reaches a large Tr(F) during the early phase. This happens despite the fact that the late phases of both sets of models use the low regularization configuration. The latter runs have a high variance in the best test accuracy and always end up in sharper minima. In Appendix G.4 we also show the evolution of Tr(H) throughout training, which suggests that this behavior can be attributed to curvature stabilization happening early during training." }, { "heading": "6 RELATED WORK", "text": "SGD’s implicit regularization effect has been argued to be a critical component of the empirical success of DNNs (Neyshabur, 2017; Zhang et al., 2016).
Much of it is attributed to the choice of hyperparameters (Keskar et al., 2017; Smith & Le, 2018; Jastrzebski et al., 2017), the low complexity bias induced by gradient descent (Xu, 2018; Jacot et al., 2018; Hu et al., 2020) or the cross-entropy loss function (Poggio et al., 2018; Soudry et al., 2018). However, a more mechanistic understanding of how SGD implicitly regularizes DNNs remains a largely unsolved problem.
Prior work on replicating SGD's implicit regularization focused mainly on the loss curvature at the final minimum (Hochreiter & Schmidhuber, 1997). Chaudhari et al. (2019) propose a Langevin dynamics based algorithm for finding update directions that point towards wide minima. Wen et al. (2018) propose to find wide minima by averaging gradients in the neighborhood of the current parameter state. In contrast, we shift the focus to the FIM and the early phase of training. This new perspective allows us to more directly test our theory by explicitly penalizing Tr(F).
Penalizing Tr(F) is related to regularizing the input gradient norm, which was shown to be an effective regularizer for deep neural networks (Drucker & Le Cun, 1992; Varga et al., 2018). Chatterjee (2020); Fort et al. (2020) show that SGD avoids memorization by extracting commonalities between examples due to following gradient descent directions shared between examples. Our work is complementary. We argue that SGD implicitly penalizes Tr(F), which also reduces memorization. Concurrently, Barrett & Dherin (2020) show that SGD implicitly penalizes the gradient norm for large learning rates and propose GP as an explicit regularizer. Similarly, we found that SGD implicitly regularizes Tr(F), which is the expected squared gradient norm under labels sampled from pθ(y|x). In contrast to them, we connected the implicit regularization effect of SGD to large curvature in the early phase. We also found GP to be a generally less effective regularizer than FP." }, { "heading": "7 CONCLUSION", "text": "Inspired by recent findings of rapid changes to the local curvature of the loss surface that happen in the early phase of training (Achille et al., 2019; Jastrzębski et al., 2019; Lewkowycz et al., 2020), we investigated more closely the connection between the loss geometry in the early phase of training of neural networks and generalization.
We proposed and investigated a hypothesis that SGD influences generalization by implicitly penalizing the trace of the Fisher Information Matrix (Tr(F)) from the very beginning of training. We show that (1) the value of early Tr(F) correlates with final generalization, and (2) explicitly regularizing Tr(F) can substantially improve generalization.
To gain further insight into the mechanism by which penalizing Tr(F) improves generalization, we investigated training on noisy data. We found that penalizing Tr(F) reduces memorization by penalizing examples with noisy labels more strongly than clean ones, which seems to happen because it penalizes their gradient norm more strongly. This sheds new light on implicit regularization effects in SGD, and suggests the utility of penalizing Tr(F) as an explicit regularizer.
An interesting topic for the future is to put our findings in the context of transfer and continual learning. We hypothesize that catastrophic Fisher explosion (the initial growth of Tr(F) to a large value) can negatively impact not only generalization, but also transferability of the model."
}, { "heading": "A ADDITIONAL RESULTS", "text": "A.1 EARLY PHASE Tr(F) CORRELATES WITH FINAL GENERALIZATION\nIn this section, we present the additional experimental results for Section 3. The experiments with varying batch size for CIFAR-100 and CIFAR-10 are shown in Figure 7. The conclusions are the same as discussed in the main text in Section 3.\nA.2 FISHER PENALTY\nWe first show additional metrics for experiments summarized in Table 1. In Table 6 we show the final training accuracy. Table 4 confirms that generally all gradient norm regularizers reduce the maximum value of Tr(F) (we measure Tr(F) starting from after one epoch of training because Tr(F) explodes in networks with batch normalization layers at initialization). Finally, Table 5 confirms that the regularizers incurred a relatively small additional computational cost.\nFigure 8 is a counterpart of Figure 3 for the other two models on the CIFAR-10 and the CIFAR-100 datasets.\nA.3 FISHER PENALTY REDUCES MEMORIZATION\nIn this section, we describe additional experimental results for Section 4.1. Figure 9 is the same as Figure 5, but for ResNet-50.\nA.4 EARLY Tr(F) INFLUENCES FINAL CURVATURE\nIn this section, we present additional experimental results for Section 5. The experiment on CIFAR-10 is shown in Figure 10. The conclusions are the same as discussed in the main text in Section 5.\nNext, to understand why smaller Tr(F) during early phase is more likely to end up in a wider final minimum, we track Tr(H) during the entire coarse of training and show that it stabilizes early during training. In this experiment, we create two sets of hyper-parameters: coarse-grained and fine-grained. For CIFAR-10, we use batch size S ∈ A ∪B, where A = {480, 500, 520} and B = {80, 100, 120}. For all batch size configurations, a learning rate of 0.02 is used. Overloading the symbols A and B for CIFAR-100, we use learning rate η ∈ A ∪ B, where A = {0.0008, 0.001, 0.0012} and B = {0.008, 0.01, 0.012}. For all learning rate configurations, a batch size of 100 is used. In both cases, the elements within each set (A and B) vary on a fine-grained scale, while the elements across the two sets vary on a coarse-grained scale. The remaining details and additional experiments can be found in Appendix G.4. The experiments are shown in Figure 11. Notice that after initialization (index 0 on x-axis), the first value is computed at epoch 10 (at which point previous experiments show that entanglement starts to hold with late phase).\nWe make three observations in this experiment. First, the relative ordering of Tr(H) values for runs between sets A vs B stay the same after the first 10 epochs. Second, the degree of entanglement is higher between any two epochs when looking at runs across sets A and B, while it is weaker when looking at runs within any one the sets. Finally, test accuracies for set B runs are always higher than those of set A runs, but this trend is not strong for runs within any one set. Note that the minimum loss values are roughly at a similar scale for each dataset and they are all at or below 10−2.\nB COMPUTATION OF Tr(H)\nWe computed Tr(H) in our experiments using the Hutchinson’s estimator Hutchinson (1990),\nTr(H) = Tr(H · I) = Tr(H · E[zzT ]) = E[Tr(H · zzT )] = E[zTH · z]\n≈ 1 M M∑ i=1 zTi H · zi\n= 1\nM M∑ i=1 zTi ∂ ∂θ ( ∂` ∂θT ) · zi\n= 1\nM M∑ i=1 zTi ∂ ∂θ\n( ∂`\n∂θ\nT\nzi\n) ,\nwhere I is the identity matrix, z is a multi-variate standard Gaussian random variable, and zi’s are i.i.d. instances of z. 
The larger the value of M, the more accurate the approximation is. We used M = 30. To make the above computation efficient, note that the gradient ∂ℓ/∂θ only needs to be computed once, and it can be re-used in the summation over the M samples." }, { "heading": "C APPROXIMATIONS IN FISHER PENALTY", "text": "In this section, we describe the approximations made in Fisher Penalty in detail. Recall that Tr(F) can be expressed as
Tr(F) = E_{x∼X, ŷ∼pθ(y|x)}[ ‖∂ℓ(x, ŷ)/∂θ‖₂² ]. (3)
In preliminary experiments, we found empirically that we can use the norm of the expected gradient rather than the expected norm of the gradient, which is a more direct expression of Tr(F):
∇ E_{x∼X, ŷ∼pθ(y|x)}[ ‖∂ℓ(x, ŷ)/∂θ‖₂² ] ≈ (1/N) Σ_{n=1}^{N} (1/M) Σ_{m=1}^{M} ∇ ‖∂ℓ(x_n, ŷ_nm)/∂θ‖₂² ≥ ∇ ‖ (1/(NM)) Σ_{n=1}^{N} Σ_{m=1}^{M} ∂ℓ(x_n, ŷ_nm)/∂θ ‖₂²,
where N and M are the minibatch size and the number of samples from pθ(y|x_n), respectively. This greatly improves the computational efficiency. With N = B and M = 1, we end up with the following learning objective function:
ℓ′(x_{1:B}, y_{1:B}; θ) = (1/B) Σ_{i=1}^{B} ℓ(x_i, y_i; θ) + α ‖ (1/B) Σ_{i=1}^{B} g(x_i, ŷ_i) ‖₂². (4)
We found empirically that ‖(1/B) Σ_{i=1}^{B} g(x_i, ŷ_i)‖₂², which we denote by Tr(F_B), and Tr(F) correlate well during training. To demonstrate this, we train SimpleCNN on the CIFAR-10 dataset with 5 different learning rates (from 10^{-3} to 10^{-1}). The outcome is shown in Figure 12. We see that for most of the training, with the exception of the final phase, Tr(F_B) and Tr(F) correlate extremely well. Equally importantly, we find that using a large learning rate affects both Tr(F_B) and Tr(F), which further suggests the two are closely connected.
We also update the gradient of Tr(F_B) only every 10 optimization steps. We found empirically that this does not affect generalization performance nor the ability to regularize Tr(F) in our setting. However, we acknowledge that it is plausible that this choice would have to be reconsidered in training with very large learning rates or with larger models.
Figure 13 compares learning curves of training with FP recomputed every optimization step, or every 10 optimization steps. For each, we tune the hyperparameter α, checking 10 values equally spaced between 10^{-2} and 10^{0} on a logarithmic scale. We observe that for the optimal value of α, both validation accuracy and Tr(F) are similar between the two runs. Both experiments achieve approximately 80% test accuracy.
Finally, to ensure that using the approximation in Equation 2 does not negatively affect how Fisher Penalty improves generalization or reduces the value of Tr(F), we experiment with a variant of Fisher Penalty without the approximation. Please recall that we always measure Tr(F) exactly (i.e., we do not use approximations in computing the Tr(F) that is reported in the plots), regardless of what variant of the penalty is used in regularizing the training.
Specifically, we augment the loss function with the norm of the gradient computed on the first example in the mini-batch as follows:
ℓ′(x_{1:B}, y_{1:B}; θ) = (1/B) Σ_{i=1}^{B} ℓ(x_i, y_i; θ) + α ‖g(x_1, ŷ_1)‖₂². (5)
We apply this penalty in each optimization step. We tune the hyperparameter α, checking 10 values equally spaced between 10^{-4} and 10^{-2} on a logarithmic scale.
Figure 14 summarizes the results. We observe that the best value of α yields 79.7% test accuracy, compared to 80.02% test accuracy yielded by the Fisher Penalty. The effect on Tr(F) is also very similar.
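To make the distinction between the two quantities concrete, the sketch below computes both the per-example Monte Carlo estimate of Tr(F) (Equation 3) and the mini-batch proxy Tr(F_B) on a single batch. The naming and structure are ours, and the per-example loop is deliberately simple rather than fast; for networks with Batch Normalization layers, the model should be put in evaluation mode first (cf. 'Measuring Tr(F)' in Appendix G.2).

import torch
import torch.nn.functional as F

def tr_f_and_tr_fb(model, x):
    # Per-example Monte Carlo estimate of Tr(F) (Equation 3) and the
    # mini-batch proxy Tr(F_B) on one batch of inputs x.
    params = [p for p in model.parameters() if p.requires_grad]
    sq_norms, grad_sum = [], None
    for i in range(x.shape[0]):
        logits = model(x[i:i + 1])
        y_hat = torch.distributions.Categorical(logits=logits).sample()
        g = torch.autograd.grad(F.cross_entropy(logits, y_hat), params)
        flat = torch.cat([gi.flatten() for gi in g])
        sq_norms.append((flat ** 2).sum().item())  # ||g(x_i, y_hat_i)||^2
        grad_sum = flat if grad_sum is None else grad_sum + flat
    tr_f = sum(sq_norms) / x.shape[0]                    # Tr(F), Equation 3
    tr_fb = ((grad_sum / x.shape[0]) ** 2).sum().item()  # Tr(F_B)
    return tr_f, tr_fb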
We observe that the best run corresponds to a maximum value of Tr(F) of 24.16, compared to 21.38 achieved by Fisher Penalty. These results suggest that the approximation used in Fisher Penalty does not hurt, and if anything slightly improves, the generalization and flattening effects of Fisher Penalty." }, { "heading": "D A CLOSER LOOK AT THE SURPRISING EFFECT OF LEARNING RATE ON THE LOSS GEOMETRY IN THE EARLY PHASE OF TRAINING", "text": "It is intuitive to hypothesize that the catastrophic Fisher explosion (the initial growth of the value of Tr(F)) also occurs during training with a large learning rate, but is overlooked due to an insufficiently fine-grained computation of Tr(F). In this section we show evidence against this hypothesis based on the literature mentioned in the main text. We also run additional experiments in which we compute the value of Tr(F) at each iteration.
The surprising effect of the learning rate on the geometry of the loss surface (e.g. the value of Tr(F)) was demonstrated in prior works (Jastrzębski et al., 2019; Golatkar et al., 2019; Lewkowycz et al., 2020; Leclerc & Madry, 2020). In particular, Jastrzebski et al. (2020); Lewkowycz et al. (2020) show that training with a large learning rate rapidly escapes regions of high curvature, where curvature is understood as the spectral norm of the Hessian evaluated at the current point of the loss surface. Perhaps the most direct experimental data against this hypothesis can be found in Figure 1 of Anonymous (2021), where training with gradient descent and a small learning rate rapidly finds regions of the loss surface with large curvature in the early phase of training.
We also run the following experiment to provide further evidence against the hypothesis. We train SimpleCNN on the CIFAR-10 dataset using two different learning rates, while computing the value of Tr(F) for every mini-batch. We use 128 random samples in each iteration to estimate Tr(F).
We find that training with a large learning rate never (even for a single optimization step) enters a region with a value of Tr(F) as large as that reached during training with a small learning rate. Figure 15 shows the experimental data.
We also found a similar effect when varying the batch size (see Appendix E), which further shows that the observed effects cannot be explained by the difference in learning speed incurred by using a small learning rate.
To summarize, both the published evidence of Jastrzebski et al. (2020); Lewkowycz et al. (2020); Anonymous (2021) and our additional experiments are inconsistent with the hypothesis that the results in this paper can be explained by differences in training speed between experiments using large and small learning rates." }, { "heading": "E CATASTROPHIC FISHER EXPLOSION HOLDS IN TRAINING WITH LARGE BATCH-SIZE", "text": "In this section, we show evidence that the conclusions transfer to large batch size training. Namely, we show that (1) catastrophic Fisher explosion also occurs in large batch size training, and (2) Fisher Penalty can improve generalization and close the generalization gap due to using a large batch size (Keskar et al., 2017).
We first train SimpleCNN on the CIFAR-10 dataset using three different batch sizes, while computing the value of Tr(F) for every mini-batch. We use 128 random samples in each iteration to estimate Tr(F). Figure 16 summarizes the experiment.
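The per-iteration monitoring in this appendix and the previous one can be a thin loop around an estimator like tr_f_and_tr_fb from the sketch in Appendix C above. Everything named here (train_loader, train_step, the probe batch) is a placeholder of ours, and for simplicity we reuse one fixed probe batch where the text draws 128 random samples in each iteration.

# Sketch: log Tr(F) at every optimization step on a probe of 128 inputs.
probe_x, _ = next(iter(train_loader))
probe_x = probe_x[:128]

tr_f_log = []
for step, (x, y) in enumerate(train_loader):
    train_step(model, optimizer, x, y)  # placeholder for one SGD update
    model.eval()                        # BN in eval mode, cf. Appendix G.2
    tr_f, _ = tr_f_and_tr_fb(model, probe_x)
    tr_f_log.append((step, tr_f))
    model.train()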
We observe that training with a large batch size enters a region of the loss surface with a substantially larger value of Tr(F) than training with the small batch size.
Next, we run a variant of one of the experiments in Table 1. Instead of using a suboptimal (smaller) learning rate, we use a suboptimal (larger) batch size. Specifically, we train SimpleCNN on the CIFAR-10 dataset (without augmentation) with a 10x larger batch size while keeping the learning rate the same. Using the larger batch size results in 3.24% lower test accuracy (73.7% compared to 76.94% test accuracy, c.f. Table 1).
We next experiment with Fisher Penalty. We apply the penalty in each optimization step and use the first 128 examples when computing the penalty. We also use a 2x lower learning rate, which stabilizes training but does not improve generalization on its own (training with this learning rate reaches 73.59% test accuracy). Figure 17 shows Tr(F) and validation accuracy during training for different values of the penalty. We observe that Fisher Penalty improves test accuracy from 73.59% to 78.7%. Applying Fisher Penalty also effectively reduces the peak value of Tr(F).
Taken together, the results suggest that catastrophic Fisher explosion holds in large batch size training; using a small batch size improves generalization by a similar mechanism as using a large learning rate, and this mechanism can be introduced explicitly in the form of Fisher Penalty.
F Tr(H) AND Tr(F) CORRELATE STRONGLY
We demonstrate a strong correlation between Tr(H) and Tr(F) for DenseNet, ResNet-56 and SimpleCNN in Figure 18. We calculate Tr(F) using a mini-batch. We see that Tr(F) has a smaller magnitude (because we use the mini-batch gradient, which has lower variance), but correlates extremely well with Tr(H)." }, { "heading": "G ADDITIONAL EXPERIMENTAL DETAILS", "text": "G.1 EARLY PHASE Tr(F) CORRELATES WITH FINAL GENERALIZATION
Here, we describe additional details for the experiments in Section 3.
In the experiments with batch size, for CIFAR-10, we use batch sizes 100, 500 and 700, and ε = 1.2. For CIFAR-100, we use batch sizes 100, 300 and 700, and ε = 3.5. These thresholds are crossed between 2 and 7 epochs across different hyperparameter settings. The remaining details for CIFAR-100 and CIFAR-10 are the same as described in the main text. The optimization details for these datasets are as follows.
ImageNet: No data augmentation was used in order to allow the training loss to converge to small values. We use a batch size of 256. Training is done using SGD with momentum set to 0.9, weight decay set to 1e-4, and with the base learning rate as per the aforementioned details. The learning rate is dropped by a factor of 0.1 after 29 epochs and training is ended at around 50 epochs, at which point most runs converge to small loss values. No batch normalization is used and weights are initialized using Fixup (Zhang et al., 2019). For each hyperparameter setting, we run two experiments with different random seeds due to the computational overhead. We compute Tr(F) using 2500 samples.
CIFAR-10: We used random flipping as data augmentation. In the experiments with variation in learning rates, we use a batch size of 256. In the experiments with variation in batch size, we use a learning rate of 0.02. Training is done using SGD with momentum set to 0.9, weight decay set to 1e-5, and with the base learning rate as per the aforementioned details.
The learning rate is dropped by a factor of 0.5 at epochs 60, 120, and 170, and training is ended at 200 epochs, at which point most runs converge to small loss values. No batch normalization is used and weights are initialized as in Arpit et al. (2019). For each hyperparameter setting, we run 32 experiments with different random seeds. We compute Tr(F) using 5000 samples.
CIFAR-100: No data augmentation was used for CIFAR-100 to allow the training loss to converge to small values. We used random flipping as data augmentation for CIFAR-10. In the experiments with variation in learning rates, we use a batch size of 100. In the experiments with variation in batch size, we use a learning rate of 0.02. Training is done using SGD with momentum set to 0.9, weight decay set to 1e-5, and with the base learning rate as per the aforementioned details. The learning rate is dropped by a factor of 0.5 at epochs 60, 120, and 170, and training is ended at 200 epochs, at which point most runs converge to small loss values. No batch normalization is used and weights are initialized as in Arpit et al. (2019). For each hyperparameter setting, we run 32 experiments with different random seeds. We compute Tr(F) using 5000 samples.
G.2 FISHER PENALTY
Here, we describe the remaining details for the experiments in Section 4. We first describe how we tune hyperparameters in these experiments. In the remainder of this section, we describe each setting used in detail.
Tuning hyperparameters In all experiments, we refer to the optimal learning rate η∗ as the learning rate optimized using grid search. In most experiments we check 5 different learning rate values uniformly spaced on a logarithmic scale, usually between 10^{-2} and 10^{0}. In some experiments we adapt the range to ensure that it includes the optimal learning rate. We tune the learning rate only once for each configuration (i.e. we do not repeat it for different random seeds).
In the first setting, for most experiments involving gradient norm regularizers, we use a 10× smaller learning rate than η∗. For TinyImageNet, we use a 30× smaller learning rate than η∗. To pick the regularization coefficient α, we evaluate 10 different values uniformly spaced on a logarithmic scale between 10^{-1}×v and 10^{1}×v, with v ∈ R+. We choose the best performing α according to the best validation accuracy. We pick the value of v manually with the aim that the optimal α is included in this range. We generally found that v = 0.01 works well for GP, GPr, and FP. For GPx we found in some experiments that it is necessary to pick larger values of v.
Measuring Tr(F) We measure Tr(F) using a number of examples equal to the batch size used in training. For experiments with Batch Normalization layers, we use Batch Normalization in evaluation mode, for the practical reason that computing Tr(F) uses a batch size of 1, and hence Tr(F) is not defined for a network with Batch Normalization layers in training mode.
DenseNet on the CIFAR-100 dataset We use the DenseNet (L=40, k=12) configuration from Huang et al. (2017) and largely follow their experimental setting. We use the standard data augmentation (where noted) and data normalization for CIFAR-100. We hold out 5000 random examples as the validation set. We train the model using SGD with momentum of 0.9, a batch size of 128, and weight decay of 0.0001. Following Huang et al. (2017), we train for 300 epochs and decay the learning rate by a factor of 0.1 after epochs 150 and 225.
To reduce variance, in testing we update Batch Normalization statistics using 100 batches from the training set.
Wide ResNet on the CIFAR-100 dataset We train Wide ResNet (depth 44 and width 3, without Batch Normalization layers). We largely follow the experimental setting in He et al. (2015). We use the standard data augmentation and data normalization for CIFAR-100. We hold out 5000 random examples as the validation set. We train the model using SGD with momentum of 0.9, a batch size of 128, and weight decay of 0.001. Following He et al. (2015), we train for 300 epochs and decay the learning rate by a factor of 0.1 after epochs 150 and 225. We remove Batch Normalization layers. To ensure stable training we use the SkipInit initialization (De & Smith, 2020).
VGG-11 on the CIFAR-100 dataset We adapt the VGG-11 model (Simonyan & Zisserman, 2015) to CIFAR-100. We use neither dropout nor Batch Normalization layers. We hold out 5000 random examples as the validation set. We use the standard data augmentation (where noted) and data normalization for CIFAR-100. We train the model using SGD with momentum of 0.9, a batch size of 128, and weight decay of 0.0001. We train the model for 300 epochs, and decay the learning rate by a factor of 0.1 after every 40 epochs starting from epoch 80.
SimpleCNN on the CIFAR-10 dataset We also run experiments on the CNN example architecture from the Keras example repository (Chollet & others, 2015)[1], which we change slightly. Specifically, we remove dropout and reduce the size of the final fully-connected layer to 128. We train it for 300 epochs and decay the learning rate by a factor of 0.1 after epochs 150 and 225. We train the model using SGD with momentum of 0.9 and a batch size of 128.
Wide ResNet on the TinyImageNet dataset We train Wide ResNet (depth 44 and width 3, with Batch Normalization layers) on TinyImageNet (Le & Yang, 2015). TinyImageNet consists of a subset of 100,000 examples from ImageNet that we downsized to 32×32 pixels. We train the model using SGD with momentum of 0.9, a batch size of 128, and weight decay of 0.0001. We train for 300 epochs and decay the learning rate by a factor of 0.1 after epochs 150 and 225. We do not use a validation set for TinyImageNet due to its larger size. To reduce variance, in testing we update Batch Normalization statistics using 100 batches from the training set.
G.3 FISHER PENALTY REDUCES MEMORIZATION
Here, we describe additional experimental details for Section 4.1. We use two configurations described in Section G.2: VGG-11 trained on the CIFAR-100 dataset, and Wide ResNet trained on the CIFAR-100 dataset. We tune the regularization coefficient α in the range {0.01, 0.1, 0.31, 10}, with the exception of GPx, for which we use the range {10, 30, 100, 300, 1000}. We tuned the mixup coefficient in the range {0.4, 0.8, 1.6, 3.2, 6.4}. We removed weight decay in these experiments.
G.4 EARLY Tr(F) INFLUENCES FINAL CURVATURE
CIFAR-10: We used random flipping as data augmentation for CIFAR-10. We use a learning rate of 0.02 for all experiments. Training is done using SGD with momentum 0.9, weight decay 1e-5, and with the batch size as shown in the figures. The learning rate is dropped by a factor of 0.5 at epochs 80, 150, and 200, and training is ended at 250 epochs. No batch normalization is used and weights are initialized as in Arpit et al. (2019). For each batch size, we run 32 experiments with different random seeds.
We compute Tr(F) using 5000 samples.
CIFAR-100: No data augmentation is used. We use a batch size of 100 for all experiments. Training is done using SGD with momentum 0.9, weight decay 1e-5, and with the base learning rate as shown in the figures. The learning rate is dropped by a factor of 0.5 at epochs 80, 150, and 200, and training is ended at 250 epochs. No batch normalization is used and weights are initialized as in Arpit et al. (2019). For each learning rate, we run 32 experiments with different random seeds. We compute Tr(F) using 5000 samples.
[1] Accessible at https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py." } ]
2020
CATASTROPHIC FISHER EXPLOSION: EARLY PHASE FISHER MATRIX IMPACTS GENERALIZATION